Diffstat (limited to 'core')
-rw-r--r--  core/.dockerignore                                  0
-rw-r--r--  core/.gitattributes                                 1
-rw-r--r--  core/.gitignore                                     12
-rw-r--r--  core/CONTRIBUTING.md                                33
-rw-r--r--  core/Dockerfile                                     35
-rw-r--r--  core/LICENSE.md                                     21
-rw-r--r--  core/README.md                                      97
-rwxr-xr-x  core/build/configure.sh                             43
-rw-r--r--  core/build/supervisord.conf                         9
-rw-r--r--  core/database/Dockerfile                            8
-rw-r--r--  core/database/README.md                             13
-rw-r--r--  core/database/gwf_converter/gwf_converter.py        115
-rw-r--r--  core/database/gwf_converter/requirements.txt        1
-rw-r--r--  core/database/gwf_converter/traces/default.gwf      6
-rw-r--r--  core/database/rebuild-database.py                   32
-rw-r--r--  core/database/rebuild.bat                           3
-rw-r--r--  core/database/schema.sql                            818
-rw-r--r--  core/database/test.sql                              381
-rw-r--r--  core/database/view-table.py                         17
-rw-r--r--  core/docker-compose.yml                             84
-rw-r--r--  core/images/logo.png                                bin 0 -> 2825 bytes
-rw-r--r--  core/images/opendc-component-diagram.png            bin 0 -> 11875 bytes
-rw-r--r--  core/images/opendc-frontend-construction.PNG        bin 0 -> 76461 bytes
-rw-r--r--  core/images/opendc-frontend-simulation-zoom.PNG     bin 0 -> 100583 bytes
-rw-r--r--  core/images/opendc-frontend-simulation.PNG          bin 0 -> 96351 bytes
-rw-r--r--  core/mongodb/Dockerfile                             5
-rw-r--r--  core/mongodb/docker-compose.yml                     30
-rw-r--r--  core/mongodb/mongo-init-opendc-db.sh                122
-rwxr-xr-x  core/mongodb/prefab.py                              112
-rwxr-xr-x  core/mongodb/prefabs.py                             124
-rw-r--r--  core/opendc-api-spec.yml                            999
31 files changed, 3121 insertions, 0 deletions
diff --git a/core/.dockerignore b/core/.dockerignore
new file mode 100644
index 00000000..e69de29b
--- /dev/null
+++ b/core/.dockerignore
diff --git a/core/.gitattributes b/core/.gitattributes
new file mode 100644
index 00000000..526c8a38
--- /dev/null
+++ b/core/.gitattributes
@@ -0,0 +1 @@
+*.sh text eol=lf
\ No newline at end of file
diff --git a/core/.gitignore b/core/.gitignore
new file mode 100644
index 00000000..e31adcb6
--- /dev/null
+++ b/core/.gitignore
@@ -0,0 +1,12 @@
+# JetBrains platform
+.idea/
+
+# Credential setup file
+keys.json
+
+# pyenv version files
+.python-version
+mongodb/opendc_testing/*
+
+# macOS-specific files
+.DS_Store
diff --git a/core/CONTRIBUTING.md b/core/CONTRIBUTING.md
new file mode 100644
index 00000000..a90c1fb5
--- /dev/null
+++ b/core/CONTRIBUTING.md
@@ -0,0 +1,33 @@
+# Contributing to the OpenDC Frontend
+
+First of all, thanks for wanting to contribute! 🎉
+
+
+## 💬 Have a question or general feedback relating to OpenDC?
+
+Contact us at 📧[opendc@atlarge-research.com](mailto:opendc@atlarge-research.com)!
+
+
+## 🐞 Want to report a bug or suggest a feature?
+
+Please file an issue in one of our GitHub repos! OpenDC is a stack of several components, so here is a list of the components with their respective domains. Please go to the one that best fits your bug or feature:
+
+* Docker or database setup? Go to [the main OpenDC repo](https://github.com/atlarge-research/opendc/issues).
+* The web application? Go to [the frontend repo](https://github.com/atlarge-research/opendc-frontend/issues).
+* The web server? Go to [the web server repo](https://github.com/atlarge-research/opendc-web-server/issues).
+* The simulator? Go to [the simulator repo](https://github.com/atlarge-research/opendc-simulator/issues).
+
+Once you are on the appropriate page for your issue report, have a look if someone has already filed an issue addressing your concern.
+
+If there already is such an issue, feel free to comment on the issue to show your support for it, or to add additional information that might be helpful. You can also just react with a thumbs-up 👍 to the issue, to indicate that you'd be interested in its resolution. This can help us prioritize what we spend our development time on.
+
+If you can't find an issue that fits your problem or feature request, open a new one. Describe actual and expected behavior, and be as detailed as you can. We'll get back to you asap.
+
+
+## 💻 Want to contribute code?
+
+Great! If your contribution concerns overall stack setup (relating to Docker or the database), this repo is the right place to be! However, if you would rather contribute to one of the components (the frontend, the web server, or the simulator), go to their respective repositories and look for documentation and contribution guidelines there.
+
+If you want to contribute to the main repository, [fork it](https://github.com/atlarge-research/opendc/new/master) and submit a PR here when you're ready! Be sure to describe *what* you changed and *why* you changed it, to help us understand what your contribution is about.
+
+A quick note on commit messages: Please follow common Git standards when writing commit messages, see [this post](https://chris.beams.io/posts/git-commit/) for details.
diff --git a/core/Dockerfile b/core/Dockerfile
new file mode 100644
index 00000000..f7f36d87
--- /dev/null
+++ b/core/Dockerfile
@@ -0,0 +1,35 @@
+FROM node:14.2.0
+MAINTAINER Sacheendra Talluri <sacheendra.t@gmail.com>
+
+# Adding the mongodb repo and installing the client
+RUN wget -qO - https://www.mongodb.org/static/pgp/server-4.2.asc | apt-key add - \
+ && echo "deb http://repo.mongodb.org/apt/debian stretch/mongodb-org/4.2 main" | tee /etc/apt/sources.list.d/mongodb-org-4.2.list \
+ && apt-get update \
+ && apt-get install -y mongodb-org
+
+# Installing python and web-server dependencies
+RUN echo "deb http://ftp.debian.org/debian stretch main" >> /etc/apt/sources.list \
+ && apt-get update \
+ && apt-get install -y python3 python3-pip yarn git sed mysql-client pymongo \
+ && pip3 install oauth2client eventlet flask-socketio flask-compress mysql-connector-python-rf \
+ && pip3 install --upgrade pyasn1-modules \
+ && rm -rf /var/lib/apt/lists/*
+
+# Copy OpenDC directory
+COPY ./ /opendc
+
+# Setting up simulator
+RUN pip install -e /opendc/opendc-web-server \
+ && python /opendc/opendc-web-server/setup.py install \
+ && chmod 555 /opendc/build/configure.sh \
+ && cd /opendc/opendc-frontend \
+ && rm -rf ./build \
+ && rm -rf ./node_modules \
+ && yarn \
+ && export REACT_APP_OAUTH_CLIENT_ID=$(cat ../keys.json | python -c "import sys, json; print json.load(sys.stdin)['OAUTH_CLIENT_ID']") \
+ && yarn build
+
+# Set working directory
+WORKDIR /opendc
+
+CMD ["sh", "-c", "./build/configure.sh && python3 opendc-web-server/main.py keys.json"]
diff --git a/core/LICENSE.md b/core/LICENSE.md
new file mode 100644
index 00000000..57288ae2
--- /dev/null
+++ b/core/LICENSE.md
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2017 atlarge-research
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/core/README.md b/core/README.md
new file mode 100644
index 00000000..2bb734e8
--- /dev/null
+++ b/core/README.md
@@ -0,0 +1,97 @@
+<h1 align="center">
+ <img src="images/logo.png" width="100" alt="OpenDC">
+ <br>
+ OpenDC
+</h1>
+<p align="center">
+ Collaborative Datacenter Simulation and Exploration for Everybody
+</p>
+
+<br>
+
+OpenDC is an open-source simulator for datacenters aimed at both research and education.
+
+![opendc-frontend-construction](https://raw.githubusercontent.com/tudelft-atlarge/opendc/master/images/opendc-frontend-construction.PNG)
+
+Users can construct datacenters (see above) and define experiments to see how these datacenters perform under different workloads and schedulers (see below).
+
+![opendc-frontend-simulation](https://raw.githubusercontent.com/tudelft-atlarge/opendc/master/images/opendc-frontend-simulation.PNG)
+
+The simulator is accessible both as a ready-to-use website hosted by Delft University of Technology at [opendc.org](http://opendc.org), and as source code that users can run locally on their own machine.
+
+OpenDC is a project by the [@Large Research Group](http://atlarge-research.com).
+
+## Architecture
+
+OpenDC consists of four components: a Kotlin simulator, a MariaDB database, a Python Flask web server, and a React.js frontend.
+
+<p align="center">
+ <img src="https://raw.githubusercontent.com/tudelft-atlarge/opendc/master/images/opendc-component-diagram.png" alt="OpenDC Component Diagram">
+</p>
+
+On the frontend, users can construct a topology by specifying a datacenter's rooms, racks, and machines, and create experiments to see how a workload trace runs on that topology. The frontend communicates with the web server over SocketIO, through a custom REST request/response layer. For example, the frontend might make a `GET` request to `/api/v1/users/{userId}`, but this request is carried over SocketIO rather than as a plain HTTP request.
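As a rough sketch of what such a tunneled request could look like from a client's point of view, the snippet below uses the `python-socketio` package; the `request`/`response` event names and the payload shape are illustrative assumptions, not the actual protocol spoken by the OpenDC frontend.

```python
# Hypothetical sketch only: the event names and payload shape are assumptions
# for illustration, not the real OpenDC SocketIO protocol.
import socketio

sio = socketio.Client()
sio.connect("http://localhost:8081")

@sio.on("response")
def on_response(message):
    # The web server would answer with a REST-like response object.
    print(message)

# REST-style GET /api/v1/users/{userId}, carried over SocketIO instead of plain HTTP.
sio.emit("request", {
    "method": "GET",
    "path": "/api/v1/users/1",
})
sio.sleep(1)
sio.disconnect()
```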
+
+The (Swagger/OpenAPI compliant) API spec defines which requests the frontend can make to the web server. To view this specification, go to the [Swagger UI](http://petstore.swagger.io/) and "Explore" [opendc-api-spec.json](https://raw.githubusercontent.com/tudelft-atlarge/opendc/master/opendc-api-spec.json).
+
+The web server receives API requests and processes them in the database. When the frontend requests a new experiment run, the web server adds it to the `experiments` table in the database and sets its `state` to `QUEUED`.
+
+The simulator monitors the database for `QUEUED` experiments, and simulates them as they are submitted. It writes the resulting `machine_states` and `task_states` to the database, which the frontend can then again retrieve via the web server.
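Concretely, using only the columns defined in `schema.sql`, that handshake could be sketched as follows (the IDs and values are illustrative, and the exact queries the simulator issues may differ):

```sql
-- Pick up experiments that are waiting to be simulated.
SELECT id, path_id, trace_id, scheduler_name
FROM experiments
WHERE state = 'QUEUED';

-- After simulating up to tick 50 of experiment 1, write results and record progress.
INSERT INTO machine_states (machine_id, experiment_id, tick, load_fraction)
VALUES (1, 1, 50, 0.75);
UPDATE experiments SET last_simulated_tick = 50 WHERE id = 1;
```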
+
+## Setup
+
+### Preamble
+
+The official way to run OpenDC is with Docker. Other options include building and running locally, or building and deploying on a server.
+
+For all of these options, you have to create a Google API Console project and client ID, which the OpenDC frontend and web server will use to authenticate users and requests. Follow [these steps](https://developers.google.com/identity/sign-in/web/devconsole-project) to make such a project. In the 'Authorized JavaScript origins' field, be sure to add `http://localhost:8081` as origin. Download the JSON of the OAuth 2.0 client ID you created from the Credentials tab, and specifically note the `client_id`, which you'll need to build OpenDC.
+
+### Installing Docker
+
+GNU/Linux, Mac OS X and Windows 10 Professional users can install Docker by following the instructions [here](https://www.docker.com/products/docker).
+
+Users of Windows 10 Home and previous editions of Windows can use [Docker Toolbox](https://www.docker.com/products/docker-toolbox). If you're using the toolbox, don't forget to set up port forwarding (see the following subsection if you haven't done that yet).
+
+#### Port Forwarding
+
+Open VirtualBox, navigate to the settings of your default docker VM, and go to the 'Network' tab. There, hidden in the 'Advanced' panel, is the 'Port forwarding' feature, where you can set a rule for exposing a port of the VM to the host OS. Add one from guest IP `10.0.2.15` to host IP `127.0.0.1`, both on port `8081`. This enables you to open a browser on your host OS and navigate to `http://localhost:8081`, once the server is running.
+
+### Running OpenDC
+
+To build and run the full OpenDC stack locally on Linux or Mac, you first need to clone the project:
+
+```bash
+# Clone the repo and its submodules
+git clone --recursive https://github.com/atlarge-research/opendc.git
+
+# Enter the directory
+cd opendc/
+
+# If you're on Windows:
+# Turn off automatic line-ending conversion in the simulator sub-repository
+cd opendc-simulator/
+git config core.autocrlf false
+cd ..
+```
+
+In the directory you just entered, you need to set up a small configuration file. To do this, create a file called `keys.json` in the `opendc` folder. In this file, simply replace `your-google-oauth-client-id` with your `client_id` from the OAuth client ID you created. For a standard setup, you can leave the other settings as-is.
+
+```json
+{
+ "FLASK_SECRET": "This is a super duper secret flask key",
+ "OAUTH_CLIENT_ID": "your-google-oauth-client-id",
+ "ROOT_DIR": "/opendc",
+ "SERVER_BASE_URL": "http://localhost:8081"
+}
+```
+
+Now, start the server:
+
+```bash
+# Build the Docker image
+docker-compose build
+
+# Start the OpenDC container and the database container
+docker-compose up
+```
+
+Wait a few seconds and open `http://localhost:8081` in your browser to use OpenDC.
diff --git a/core/build/configure.sh b/core/build/configure.sh
new file mode 100755
index 00000000..ceb1e616
--- /dev/null
+++ b/core/build/configure.sh
@@ -0,0 +1,43 @@
+if [ -z "$MONGO_DB" ]; then
+ echo "MONGO_DB environment variable not specified"
+ exit 1
+fi
+
+if [ -z "$MONGO_DB_USER" ]; then
+ echo "MONGO_DB_USER environment variable not specified"
+ exit 1
+fi
+
+if [ -z "$MONGO_DB_PASSWORD" ]; then
+ echo "MONGO_DB_PASSWORD environment variable not specified"
+ exit 1
+fi
+
+#MYSQL_COMMAND="mysql -h mariadb -u $MYSQL_USER --password=$MYSQL_PASSWORD"
+
+MONGO_COMMAND="mongo $MONGO_DB -h $MONGO_DB_HOST --port $MONGO_DB_PORT -u $MONGO_DB_USER -p $MONGO_DB_PASSWORD --authenticationDatabase $MONGO_DB"
+
+until eval $MONGO_COMMAND --eval 'db.getCollectionNames();' ; do
+ echo "MongoDB is unavailable - sleeping"
+ sleep 1
+done
+
+echo "MongoDB available"
+
+#NUM_TABLES=$(eval "$MYSQL_COMMAND -B --disable-column-names -e \"SELECT count(*) FROM information_schema.tables WHERE table_schema='$MYSQL_DATABASE';\"")
+
+# Check if database is empty
+#if [ "$NUM_TABLES" -eq 0 ]; then
+# eval $MYSQL_COMMAND "$MYSQL_DATABASE" < ./database/schema.sql
+# eval $MYSQL_COMMAND "$MYSQL_DATABASE" < ./database/test.sql
+#fi
+
+# Write database config values to keys.json
+cat keys.json | python -c "import os, sys, json; ks = json.load(sys.stdin); \
+ ks['MONGODB_HOST'] = os.environ['MONGO_DB_HOST']; \
+ ks['MONGODB_PORT'] = os.environ['MONGO_DB_PORT']; \
+ ks['MONGODB_DATABASE'] = os.environ['MONGO_DB']; \
+ ks['MYSQL_USER'] = os.environ['MONGO_DB_USER']; \
+ ks['MYSQL_PASSWORD'] = os.environ['MONGO_DB_PASSWORD']; \
+ print json.dumps(ks, indent=4)" > new_keys.json
+mv new_keys.json keys.json
diff --git a/core/build/supervisord.conf b/core/build/supervisord.conf
new file mode 100644
index 00000000..37b5cc16
--- /dev/null
+++ b/core/build/supervisord.conf
@@ -0,0 +1,9 @@
+[supervisord]
+nodaemon=true
+
+[program:web-server]
+command=/usr/bin/python2.7 /opendc/opendc-web-server/main.py /opendc/keys.json
+stdout_logfile=/dev/stdout
+stdout_logfile_maxbytes=0
+stderr_logfile=/dev/stderr
+stderr_logfile_maxbytes=0
diff --git a/core/database/Dockerfile b/core/database/Dockerfile
new file mode 100644
index 00000000..e30aed51
--- /dev/null
+++ b/core/database/Dockerfile
@@ -0,0 +1,8 @@
+FROM mariadb:10.1
+MAINTAINER Fabian Mastenbroek <f.s.mastenbroek@student.tudelft.nl>
+
+# Import schema into database
+ADD schema.sql /docker-entrypoint-initdb.d
+
+# Add test data into database
+#ADD test.sql /docker-entrypoint-initdb.d
diff --git a/core/database/README.md b/core/database/README.md
new file mode 100644
index 00000000..9fba2d5c
--- /dev/null
+++ b/core/database/README.md
@@ -0,0 +1,13 @@
+# OpenDC Database
+
+To rebuild the database at a location (or in this directory if none is specified):
+
+```bash
+python rebuild-database.py "path/to/database/directory"
+```
+
+To view a table in the database:
+
+```bash
+python view-table.py "path/to/database/directory" table_name
+```
diff --git a/core/database/gwf_converter/gwf_converter.py b/core/database/gwf_converter/gwf_converter.py
new file mode 100644
index 00000000..902bd93f
--- /dev/null
+++ b/core/database/gwf_converter/gwf_converter.py
@@ -0,0 +1,115 @@
+import os
+import sys
+
+import mysql.connector as mariadb
+
+
+class Job:
+ def __init__(self, gwf_id):
+ self.gwf_id = gwf_id
+ self.db_id = -1
+ self.tasks = []
+
+
+class Task:
+ def __init__(self, gwf_id, job, submit_time, run_time, num_processors, dependency_gwf_ids):
+ self.gwf_id = gwf_id
+ self.job = job
+ self.submit_time = submit_time
+ self.run_time = run_time
+ self.cores = num_processors
+ self.flops = 4000 * run_time * num_processors
+ self.dependency_gwf_ids = dependency_gwf_ids
+ self.db_id = -1
+ self.dependencies = []
+
+
+def get_jobs_from_gwf_file(file_name):
+ jobs = {}
+ tasks = {}
+
+ with open(file_name, "r") as f:
+ # Skip first CSV header line
+ f.readline()
+
+ for line in f:
+ if line.startswith("#") or len(line.strip()) == 0:
+ continue
+
+ values = [col.strip() for col in line.split(",")]
+ cast_values = [int(values[i]) for i in range(len(values) - 1)]
+ job_id, task_id, submit_time, run_time, num_processors, req_num_processors = cast_values
+ dependency_gwf_ids = [int(val) for val in values[-1].split(" ") if val != ""]
+
+ if job_id not in jobs:
+ jobs[job_id] = Job(job_id)
+
+ new_task = Task(task_id, jobs[job_id], submit_time, run_time, num_processors, dependency_gwf_ids)
+ tasks[task_id] = new_task
+ jobs[job_id].tasks.append(new_task)
+
+ for task in tasks.values():
+ for dependency_gwf_id in task.dependency_gwf_ids:
+ if dependency_gwf_id in tasks:
+ task.dependencies.append(tasks[dependency_gwf_id])
+
+ return jobs.values()
+
+
+def write_to_db(conn, trace_name, jobs):
+ cursor = conn.cursor()
+
+ trace_id = execute_insert_query(conn, cursor, "INSERT INTO traces (name) VALUES ('%s')" % trace_name)
+
+ for job in jobs:
+ job.db_id = execute_insert_query(conn, cursor, "INSERT INTO jobs (name, trace_id) VALUES ('%s',%d)"
+ % ("Job %d" % job.gwf_id, trace_id))
+
+ for task in job.tasks:
+ task.db_id = execute_insert_query(conn, cursor,
+ "INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) "
+ "VALUES (%d,%d,%d,%d)"
+ % (task.submit_time, task.flops, task.cores, job.db_id))
+
+ for job in jobs:
+ for task in job.tasks:
+ for dependency in task.dependencies:
+ execute_insert_query(conn, cursor, "INSERT INTO task_dependencies (first_task_id, second_task_id) "
+ "VALUES (%d,%d)"
+ % (dependency.db_id, task.db_id))
+
+def execute_insert_query(conn, cursor, sql):
+ try:
+ cursor.execute(sql)
+ except mariadb.Error as error:
+ print("SQL Error: {}".format(error))
+
+ conn.commit()
+ return cursor.lastrowid
+
+
+def main(trace_path):
+ trace_name = sys.argv[2] if (len(sys.argv) > 2) else \
+ os.path.splitext(os.path.basename(trace_path))[0]
+ gwf_jobs = get_jobs_from_gwf_file(trace_path)
+
+ host = os.environ.get('PERSISTENCE_HOST','localhost')
+ user = os.environ.get('PERSISTENCE_USER','opendc')
+ password = os.environ.get('PERSISTENCE_PASSWORD','opendcpassword')
+ database = os.environ.get('PERSISTENCE_DATABASE','opendc')
+ conn = mariadb.connect(host=host, user=user, password=password, database=database)
+ write_to_db(conn, trace_name, gwf_jobs)
+ conn.close()
+
+
+if __name__ == "__main__":
+ if len(sys.argv) < 2:
+ sys.exit("Usage: %s file [name]" % sys.argv[0])
+
+ if sys.argv[1] in ("-a", "--all"):
+ for f in os.listdir("traces"):
+ if f.endswith(".gwf"):
+ print("Converting {}".format(f))
+ main(os.path.join("traces", f))
+ else:
+ main(sys.argv[1])
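For reference, the converter is typically invoked as follows; the database connection settings come from the `PERSISTENCE_*` environment variables (with the defaults shown in `main`), and the optional second argument overrides the trace name:

```bash
# Convert a single GWF trace (the trace name defaults to the file's base name)
python gwf_converter.py traces/default.gwf "My trace"

# Convert every .gwf file in the traces/ directory
python gwf_converter.py --all

# Point the converter at a different database
PERSISTENCE_HOST=mariadb PERSISTENCE_USER=opendc \
PERSISTENCE_PASSWORD=opendcpassword PERSISTENCE_DATABASE=opendc \
python gwf_converter.py traces/default.gwf
```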
diff --git a/core/database/gwf_converter/requirements.txt b/core/database/gwf_converter/requirements.txt
new file mode 100644
index 00000000..0eaebf12
--- /dev/null
+++ b/core/database/gwf_converter/requirements.txt
@@ -0,0 +1 @@
+mysql
diff --git a/core/database/gwf_converter/traces/default.gwf b/core/database/gwf_converter/traces/default.gwf
new file mode 100644
index 00000000..b1c55a17
--- /dev/null
+++ b/core/database/gwf_converter/traces/default.gwf
@@ -0,0 +1,6 @@
+WorkflowID, JobID , SubmitTime , RunTime , NProcs , ReqNProcs , Dependencies
+0 , 1 , 1 , 1 , 1 , 1, 5 4 3
+0 , 2 , 2 , 2 , 2 , 2, 3
+0 , 3 , 3 , 3 , 3 , 3, 5
+0 , 4 , 4 , 4 , 4 , 4,
+0 , 5 , 5 , 5 , 5 , 5,
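To make the mapping concrete: for the first data row above (task 1 of workflow 0, `RunTime` 1, `NProcs` 1, dependencies `5 4 3`), `gwf_converter.py` computes `total_flop_count = 4000 * RunTime * NProcs` and emits rows roughly like the following; the numeric IDs are illustrative, since the real ones are assigned by `AUTO_INCREMENT`:

```sql
-- Illustrative sketch of the converter's output for default.gwf's first row.
INSERT INTO traces (name) VALUES ('default');
INSERT INTO jobs (name, trace_id) VALUES ('Job 0', 1);
INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (1, 4000, 1, 1);
-- One task_dependencies row per listed dependency (prerequisite first, dependent second),
-- e.g. task 1 depending on task 3:
INSERT INTO task_dependencies (first_task_id, second_task_id) VALUES (3, 1);
```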
diff --git a/core/database/rebuild-database.py b/core/database/rebuild-database.py
new file mode 100644
index 00000000..0cbeb27a
--- /dev/null
+++ b/core/database/rebuild-database.py
@@ -0,0 +1,32 @@
+import os
+import sqlite3
+import sys
+
+sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
+
+try:
+    BASE_DIR = sys.argv[1]
+except IndexError:
+ BASE_DIR = os.path.dirname(os.path.abspath(__file__))
+db_location = os.path.join(BASE_DIR, 'opendc.db')
+
+if os.path.exists(db_location):
+ print "Removing old database..."
+ os.remove(db_location)
+
+print "Connecting to new database..."
+conn = sqlite3.connect(db_location)
+c = conn.cursor()
+
+print "Importing schema..."
+with open('schema.sql') as schema:
+ c.executescript(schema.read())
+
+print "Importing test data..."
+with open('test.sql') as test:
+ c.executescript(test.read())
+
+conn.commit()
+conn.close()
+
+print "Done."
diff --git a/core/database/rebuild.bat b/core/database/rebuild.bat
new file mode 100644
index 00000000..c0f38da1
--- /dev/null
+++ b/core/database/rebuild.bat
@@ -0,0 +1,3 @@
+del database.db
+sqlite3 database.db < schema.sql
+sqlite3 database.db < test.sql
\ No newline at end of file
diff --git a/core/database/schema.sql b/core/database/schema.sql
new file mode 100644
index 00000000..f6286260
--- /dev/null
+++ b/core/database/schema.sql
@@ -0,0 +1,818 @@
+-- Some tables referenced by foreign key constraints are defined after the constraints that refer to them
+SET FOREIGN_KEY_CHECKS = 0;
+
+/*
+* A user is identified by their google_id, which the server gets by authenticating with Google.
+*/
+
+-- Users
+DROP TABLE IF EXISTS users;
+CREATE TABLE users (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ google_id TEXT NOT NULL,
+ email TEXT,
+ given_name TEXT,
+ family_name TEXT
+);
+
+/*
+* The authorizations table defines which users are authorized to "OWN", "EDIT", or "VIEW" a simulation. The
+* authorization_level table defines the permission levels.
+*/
+
+-- User authorizations
+DROP TABLE IF EXISTS authorizations;
+CREATE TABLE authorizations (
+ user_id INTEGER NOT NULL,
+ simulation_id INTEGER NOT NULL,
+ authorization_level VARCHAR(50) NOT NULL,
+
+ FOREIGN KEY (user_id) REFERENCES users (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE,
+ FOREIGN KEY (simulation_id) REFERENCES simulations (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE,
+ FOREIGN KEY (authorization_level) REFERENCES authorization_levels (level)
+);
+
+CREATE UNIQUE INDEX authorizations_index
+ ON authorizations (
+ user_id,
+ simulation_id
+ );
+
+-- Authorization levels
+DROP TABLE IF EXISTS authorization_levels;
+CREATE TABLE authorization_levels (
+ level VARCHAR(50) PRIMARY KEY NOT NULL
+);
+INSERT INTO authorization_levels (level) VALUES ('OWN');
+INSERT INTO authorization_levels (level) VALUES ('EDIT');
+INSERT INTO authorization_levels (level) VALUES ('VIEW');
+
+/*
+* A Simulation has several Paths, which define the topology of the datacenter at different times. A Simulation also
+* has several Experiments, which can be run on a combination of Paths, Schedulers and Traces. Simulations also serve
+* as the scope to which different Users can be Authorized.
+*
+* The datetime_created and datetime_last_edited columns are in a subset of ISO-8601 (second fractions are omitted):
+* YYYY-MM-DDTHH:MM:SS, where...
+* - YYYY is the four-digit year,
+* - MM is the two-digit month (1-12)
+* - DD is the two-digit day of the month (1-31)
+* - HH is the two-digit hours part (0-23)
+* - MM is the two-digit minutes part (0-59)
+* - SS is the two-digit seconds part (0-59)
+*/
+
+-- Simulation
+DROP TABLE IF EXISTS simulations;
+CREATE TABLE simulations (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ datetime_created VARCHAR(50) NOT NULL CHECK (datetime_created LIKE '____-__-__T__:__:__'),
+ datetime_last_edited VARCHAR(50) NOT NULL CHECK (datetime_last_edited LIKE '____-__-__T__:__:__'),
+ name VARCHAR(50) NOT NULL
+);
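For example, a row satisfying both `LIKE '____-__-__T__:__:__'` checks could look like this (illustrative values):

```sql
INSERT INTO simulations (name, datetime_created, datetime_last_edited)
VALUES ('Example Simulation', '2017-01-01T09:30:00', '2017-01-01T09:30:00');
```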
+
+/*
+* An Experiment consists of a Path, a Scheduler, and a Trace. The Path defines the topology of the datacenter at
+* different times in the simulation. The Scheduler defines which scheduler to use to simulate this experiment. The
+* Trace defines which tasks have to be run in the simulation.
+*/
+
+DROP TABLE IF EXISTS experiments;
+CREATE TABLE experiments (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ simulation_id INTEGER NOT NULL,
+ path_id INTEGER NOT NULL,
+ trace_id INTEGER NOT NULL,
+ scheduler_name VARCHAR(50) NOT NULL,
+ name TEXT NOT NULL,
+ state TEXT NOT NULL,
+ last_simulated_tick INTEGER NOT NULL DEFAULT 0 CHECK (last_simulated_tick >= 0),
+
+ FOREIGN KEY (simulation_id) REFERENCES simulations (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE,
+ FOREIGN KEY (path_id) REFERENCES paths (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE,
+ FOREIGN KEY (trace_id) REFERENCES traces (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE,
+ FOREIGN KEY (scheduler_name) REFERENCES schedulers (name)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE
+);
+
+/*
+* A Simulation has several Paths, which each contain Sections. A Section details which Datacenter topology to use
+* starting at which point in time (known internally as a "tick"). So, combining the several Sections in a Path
+* tells us which Datacenter topology to use at each tick.
+*/
+
+-- Path
+DROP TABLE IF EXISTS paths;
+CREATE TABLE paths (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ simulation_id INTEGER NOT NULL,
+ name TEXT,
+ datetime_created VARCHAR(50) NOT NULL CHECK (datetime_created LIKE '____-__-__T__:__:__'),
+
+ FOREIGN KEY (simulation_id) REFERENCES simulations (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE
+);
+
+-- Sections
+DROP TABLE IF EXISTS sections;
+CREATE TABLE sections (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ path_id INTEGER NOT NULL,
+ datacenter_id INTEGER NOT NULL,
+ start_tick INTEGER NOT NULL CHECK (start_tick >= 0),
+
+ FOREIGN KEY (path_id) REFERENCES paths (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE,
+ FOREIGN KEY (datacenter_id) REFERENCES datacenters (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE
+);
+
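To illustrate how Sections combine into a Path, a query along these lines (not part of the schema itself) resolves which datacenter topology is active for a given path at a given tick, here path 1 at tick 75:

```sql
-- The active section is the one with the largest start_tick not after the requested tick.
SELECT datacenter_id
FROM sections
WHERE path_id = 1 AND start_tick <= 75
ORDER BY start_tick DESC
LIMIT 1;
```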
+-- Scheduler names
+DROP TABLE IF EXISTS schedulers;
+CREATE TABLE schedulers (
+ name VARCHAR(50) PRIMARY KEY NOT NULL
+);
+INSERT INTO schedulers (name) VALUES ('FIFO-FIRSTFIT');
+INSERT INTO schedulers (name) VALUES ('FIFO-BESTFIT');
+INSERT INTO schedulers (name) VALUES ('FIFO-WORSTFIT');
+INSERT INTO schedulers (name) VALUES ('FIFO-RANDOM');
+INSERT INTO schedulers (name) VALUES ('SRTF-FIRSTFIT');
+INSERT INTO schedulers (name) VALUES ('SRTF-BESTFIT');
+INSERT INTO schedulers (name) VALUES ('SRTF-WORSTFIT');
+INSERT INTO schedulers (name) VALUES ('SRTF-RANDOM');
+INSERT INTO schedulers (name) VALUES ('RANDOM-FIRSTFIT');
+INSERT INTO schedulers (name) VALUES ('RANDOM-BESTFIT');
+INSERT INTO schedulers (name) VALUES ('RANDOM-WORSTFIT');
+INSERT INTO schedulers (name) VALUES ('RANDOM-RANDOM');
+
+/*
+* Each simulation has a single trace. A trace contains tasks and their start times.
+*/
+
+-- A trace describes when tasks arrive in a datacenter
+DROP TABLE IF EXISTS traces;
+CREATE TABLE traces (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ name TEXT NOT NULL
+);
+
+-- A job
+DROP TABLE IF EXISTS jobs;
+CREATE TABLE jobs (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ name TEXT NOT NULL,
+ trace_id INTEGER NOT NULL,
+
+ FOREIGN KEY (trace_id) REFERENCES traces (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE
+);
+
+-- A task that's defined in terms of how many flops (floating point operations) it takes to complete
+DROP TABLE IF EXISTS tasks;
+CREATE TABLE tasks (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ start_tick INTEGER NOT NULL CHECK (start_tick >= 0),
+ total_flop_count BIGINT NOT NULL CHECK (total_flop_count >= 0),
+ core_count INTEGER NOT NULL CHECK (core_count >= 0),
+ job_id INTEGER NOT NULL,
+
+ FOREIGN KEY (job_id) REFERENCES jobs (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE
+);
+
+-- A dependency between two tasks.
+DROP TABLE IF EXISTS task_dependencies;
+CREATE TABLE task_dependencies (
+ first_task_id INTEGER NOT NULL,
+ second_task_id INTEGER NOT NULL,
+
+ PRIMARY KEY (first_task_id, second_task_id),
+ FOREIGN KEY (first_task_id) REFERENCES tasks (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE,
+ FOREIGN KEY (second_task_id) REFERENCES tasks (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE
+);
+
+/*
+* A task_state describes how much of a task has already been completed at the time of the current tick. Several
+* machine_states show which machines worked on the task.
+*/
+
+-- A state for a task
+DROP TABLE IF EXISTS task_states;
+CREATE TABLE task_states (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ task_id INTEGER NOT NULL,
+ experiment_id INTEGER NOT NULL,
+ tick INTEGER NOT NULL CHECK (tick >= 0),
+ flops_left INTEGER NOT NULL CHECK (flops_left >= 0),
+ cores_used INTEGER NOT NULL CHECK (cores_used >= 0),
+
+ FOREIGN KEY (task_id) REFERENCES tasks (id),
+ FOREIGN KEY (experiment_id) REFERENCES experiments (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE
+);
+
+-- The measurements of a single stage
+DROP TABLE IF EXISTS stage_measurements;
+CREATE TABLE stage_measurements (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ experiment_id INTEGER NOT NULL,
+ tick INTEGER NOT NULL CHECK (tick >= 0),
+ stage INTEGER NOT NULL CHECK (stage >= 0),
+ cpu BIGINT NOT NULL CHECK (cpu >= 0),
+ wall BIGINT NOT NULL CHECK (wall >= 0),
+ size INTEGER NOT NULL CHECK (size >= 0),
+ iterations INTEGER NOT NULL CHECK (iterations >= 0),
+
+ FOREIGN KEY (experiment_id) REFERENCES experiments (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE
+);
+
+-- Metrics of a job task
+DROP TABLE IF EXISTS job_metrics;
+CREATE TABLE job_metrics (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ experiment_id INTEGER NOT NULL,
+ job_id INTEGER NOT NULL,
+ critical_path INTEGER NOT NULL CHECK (critical_path >= 0),
+ critical_path_length INTEGER NOT NULL CHECK (critical_path_length >= 0),
+ waiting_time INTEGER NOT NULL CHECK (waiting_time >= 0),
+ makespan INTEGER NOT NULL CHECK (makespan >= 0),
+ nsl INTEGER NOT NULL CHECK (nsl >= 0),
+
+ FOREIGN KEY (experiment_id) REFERENCES experiments (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE,
+ FOREIGN KEY (job_id) REFERENCES jobs (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE
+);
+
+-- Metrics of a single task
+DROP TABLE IF EXISTS task_metrics;
+CREATE TABLE task_metrics (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ experiment_id INTEGER NOT NULL,
+ task_id INTEGER NOT NULL,
+ job_id INTEGER NOT NULL,
+ waiting INTEGER NOT NULL CHECK (waiting >= 0),
+ execution INTEGER NOT NULL CHECK (execution >= 0),
+ turnaround INTEGER NOT NULL CHECK (turnaround >= 0),
+
+ FOREIGN KEY (experiment_id) REFERENCES experiments (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE,
+ FOREIGN KEY (task_id) REFERENCES tasks (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE,
+ FOREIGN KEY (job_id) REFERENCES jobs (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE
+);
+
+-- A machine state
+DROP TABLE IF EXISTS machine_states;
+CREATE TABLE machine_states (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ machine_id INTEGER NOT NULL,
+ experiment_id INTEGER NOT NULL,
+ tick INTEGER NOT NULL,
+ temperature_c REAL,
+ in_use_memory_mb INTEGER,
+ load_fraction REAL CHECK (load_fraction >= 0 AND load_fraction <= 1),
+
+ FOREIGN KEY (machine_id) REFERENCES machines (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE,
+ FOREIGN KEY (experiment_id) REFERENCES experiments (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE
+);
+
+/*
+* A Section references a Datacenter topology, which can be used by multiple Sections to create Paths that go back and
+* forth between different topologies.
+*/
+
+-- Datacenters
+DROP TABLE IF EXISTS datacenters;
+CREATE TABLE datacenters (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ simulation_id INTEGER NOT NULL,
+ starred INTEGER CHECK (starred = 0 OR starred = 1),
+
+ FOREIGN KEY (simulation_id) REFERENCES simulations (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE
+);
+
+/*
+* A datacenter consists of several rooms. A room has a type that specifies what kind of objects can be in it.
+*/
+
+-- Rooms in a datacenter
+DROP TABLE IF EXISTS rooms;
+CREATE TABLE rooms (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ name TEXT NOT NULL,
+ datacenter_id INTEGER NOT NULL,
+ type VARCHAR(50) NOT NULL,
+ topology_id INTEGER,
+
+ FOREIGN KEY (datacenter_id) REFERENCES datacenters (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE,
+ FOREIGN KEY (type) REFERENCES room_types (name)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE,
+ FOREIGN KEY (topology_id) REFERENCES rooms (id)
+ ON DELETE NO ACTION
+ ON UPDATE CASCADE
+);
+
+DROP TABLE IF EXISTS room_types;
+CREATE TABLE room_types (
+ name VARCHAR(50) PRIMARY KEY NOT NULL
+);
+INSERT INTO room_types (name) VALUES ('SERVER');
+INSERT INTO room_types (name) VALUES ('HALLWAY');
+INSERT INTO room_types (name) VALUES ('OFFICE');
+INSERT INTO room_types (name) VALUES ('POWER');
+INSERT INTO room_types (name) VALUES ('COOLING');
+
+/*
+* A room consists of tiles that have a quantized (x,y) position. The same tile can't be in multiple rooms. All tiles
+* in a room must touch at least one edge to another tile in that room. A tile is occupied by a single object, which
+* has a type from the object_types table.
+*/
+
+-- Tiles in a room
+DROP TABLE IF EXISTS tiles;
+CREATE TABLE tiles (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ position_x INTEGER NOT NULL,
+ position_y INTEGER NOT NULL,
+ room_id INTEGER NOT NULL,
+ object_id INTEGER,
+ topology_id INTEGER,
+
+ FOREIGN KEY (room_id) REFERENCES rooms (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE,
+ FOREIGN KEY (object_id) REFERENCES objects (id),
+ FOREIGN KEY (topology_id) REFERENCES tiles (id)
+ ON DELETE NO ACTION
+ ON UPDATE CASCADE,
+
+ UNIQUE (position_x, position_y, room_id), -- only one tile can be in the same position in a room
+ UNIQUE (object_id) -- an object can only be on one tile
+);
+
+DELIMITER //
+
+-- Make sure this datacenter doesn't already have a tile in this location
+-- and tiles in a room are connected.
+DROP TRIGGER IF EXISTS before_insert_tiles_check_existence;
+CREATE TRIGGER before_insert_tiles_check_existence
+ BEFORE INSERT
+ ON tiles
+ FOR EACH ROW
+ BEGIN
+ -- checking tile overlap
+ -- a tile already exists such that..
+ IF EXISTS(SELECT datacenter_id
+ FROM tiles
+ JOIN rooms ON tiles.room_id = rooms.id
+ WHERE (
+
+ -- it's in the same datacenter as the new tile...
+ datacenter_id = (SELECT datacenter_id
+ FROM rooms
+ WHERE rooms.id = NEW.room_id)
+
+               -- and in the same position as the new tile.
+ AND NEW.position_x = tiles.position_x AND NEW.position_y = tiles.position_y
+ ))
+ THEN
+ -- raise an error
+ SIGNAL SQLSTATE '45000'
+ SET MESSAGE_TEXT = 'OccupiedTilePosition';
+ END IF;
+
+ -- checking tile adjacency
+ -- this isn't the first tile, ...
+ IF (EXISTS(SELECT *
+ FROM tiles
+ WHERE (NEW.room_id = tiles.room_id))
+
+       -- and the new tile isn't directly to the right, to the left, above, or below an existing tile.
+ AND NOT EXISTS(SELECT *
+ FROM tiles
+ WHERE (
+ NEW.room_id = tiles.room_id AND (
+ (NEW.position_x + 1 = tiles.position_x AND NEW.position_y = tiles.position_y) -- right
+ OR (NEW.position_x - 1 = tiles.position_x AND NEW.position_y = tiles.position_y) -- left
+ OR (NEW.position_x = tiles.position_x AND NEW.position_y + 1 = tiles.position_y) -- above
+ OR (NEW.position_x = tiles.position_x AND NEW.position_y - 1 = tiles.position_y) -- below
+ )
+ )))
+ THEN
+ -- raise an error
+ SIGNAL SQLSTATE '45000'
+ SET MESSAGE_TEXT = 'InvalidTilePosition';
+ END IF;
+ END//
+
+DELIMITER ;
+
+/*
+* Objects are on tiles and have a type. They form an extra abstraction layer to make it easier to find what object is
+* on a tile, as well as to enforce that only objects of the right type are in a certain room.
+*
+* To add a PSU, cooling item, or rack to a tile, first add an object. Then use that object's ID as the value for the
+* object_id column of the PSU, cooling item, or rack table.
+*
+* The allowed_object table specifies what types of objects are allowed in what types of rooms.
+*/
+
+-- Objects
+DROP TABLE IF EXISTS objects;
+CREATE TABLE objects (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ type VARCHAR(50) NOT NULL,
+
+ FOREIGN KEY (type) REFERENCES object_types (name)
+);
+
+-- Object types
+DROP TABLE IF EXISTS object_types;
+CREATE TABLE object_types (
+ name VARCHAR(50) PRIMARY KEY NOT NULL
+);
+INSERT INTO object_types (name) VALUES ('PSU');
+INSERT INTO object_types (name) VALUES ('COOLING_ITEM');
+INSERT INTO object_types (name) VALUES ('RACK');
+
+-- Allowed objects table
+DROP TABLE IF EXISTS allowed_objects;
+CREATE TABLE allowed_objects (
+ room_type VARCHAR(50) NOT NULL,
+ object_type VARCHAR(50) NOT NULL,
+
+ FOREIGN KEY (room_type) REFERENCES room_types (name),
+ FOREIGN KEY (object_type) REFERENCES object_types (name)
+);
+
+-- Allowed objects per room
+INSERT INTO allowed_objects (room_type, object_type) VALUES ('SERVER', 'RACK');
+-- INSERT INTO allowed_objects (room_type, object_type) VALUES ('POWER', 'PSU');
+-- INSERT INTO allowed_objects (room_type, object_type) VALUES ('COOLING', 'COOLING_ITEM');
+
+DELIMITER //
+
+-- Make sure objects are added to tiles in rooms they're allowed to be in.
+DROP TRIGGER IF EXISTS before_update_tiles;
+CREATE TRIGGER before_update_tiles
+ BEFORE UPDATE
+ ON tiles
+ FOR EACH ROW
+ BEGIN
+
+ IF ((NEW.object_id IS NOT NULL) AND (
+
+ -- the type of the object being added to the tile...
+ (
+ SELECT objects.type
+ FROM objects
+ JOIN tiles ON tiles.object_id = objects.id
+ WHERE tiles.id = NEW.id
+ )
+
+ -- is not in the set of allowed object types for the room the tile is in.
+ NOT IN (
+ SELECT object_type
+ FROM allowed_objects
+ JOIN rooms ON rooms.type = allowed_objects.room_type
+ WHERE rooms.id = NEW.room_id
+ )
+ ))
+ THEN
+ -- raise an error
+ SIGNAL SQLSTATE '45000'
+ SET MESSAGE_TEXT = 'ForbiddenObjectType';
+ END IF;
+ END//
+
+DELIMITER ;
+
+/*
+* PSUs are a type of object.
+*/
+
+-- PSUs on tiles
+DROP TABLE IF EXISTS psus;
+CREATE TABLE psus (
+ id INTEGER NOT NULL,
+ energy_kwh INTEGER NOT NULL CHECK (energy_kwh > 0),
+ type VARCHAR(50) NOT NULL,
+ failure_model_id INTEGER NOT NULL,
+
+ FOREIGN KEY (id) REFERENCES objects (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE,
+ FOREIGN KEY (failure_model_id) REFERENCES failure_models (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE,
+
+ PRIMARY KEY (id)
+);
+
+/*
+* Cooling items are a type of object.
+*/
+
+-- Cooling items on tiles
+DROP TABLE IF EXISTS cooling_items;
+CREATE TABLE cooling_items (
+ id INTEGER NOT NULL,
+ energy_consumption_w INTEGER NOT NULL CHECK (energy_consumption_w > 0),
+ type VARCHAR(50) NOT NULL,
+ failure_model_id INTEGER NOT NULL,
+
+ FOREIGN KEY (id) REFERENCES objects (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE,
+ FOREIGN KEY (failure_model_id) REFERENCES failure_models (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE,
+
+ PRIMARY KEY (id)
+);
+
+/*
+* Racks are a type of object.
+*/
+
+-- Racks on tiles
+DROP TABLE IF EXISTS racks;
+CREATE TABLE racks (
+ id INTEGER NOT NULL,
+ name TEXT,
+ capacity INTEGER NOT NULL CHECK (capacity > 0),
+ power_capacity_w INTEGER NOT NULL CHECK (power_capacity_w > 0),
+ topology_id INTEGER,
+
+ FOREIGN KEY (id) REFERENCES objects (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE,
+ FOREIGN KEY (topology_id) REFERENCES racks (id)
+ ON DELETE NO ACTION
+ ON UPDATE CASCADE,
+
+ PRIMARY KEY (id)
+);
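As a concrete sketch of the two-step workflow described above (create an object, then reuse its ID), the following adds a rack to an existing tile in a SERVER room; the IDs are illustrative and assume the new object receives id 1:

```sql
-- Step 1: create the abstract object with type RACK.
INSERT INTO objects (type) VALUES ('RACK');

-- Step 2: reuse that object's id (assumed to be 1 here) for the rack's details
-- and place the object on a tile.
INSERT INTO racks (id, name, capacity, power_capacity_w) VALUES (1, 'Rack A', 10, 5000);
UPDATE tiles SET object_id = 1 WHERE id = 1;
```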
+
+/*
+* A rack contains a number of machines. A rack cannot have more than its capacity of machines in it. No more than one
+* machine can occupy a position in a rack at the same time.
+*/
+
+-- Machines in racks
+DROP TABLE IF EXISTS machines;
+CREATE TABLE machines (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ rack_id INTEGER NOT NULL,
+ position INTEGER NOT NULL CHECK (position > 0),
+ topology_id INTEGER,
+
+ FOREIGN KEY (rack_id) REFERENCES racks (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE,
+ FOREIGN KEY (topology_id) REFERENCES machines (id)
+ ON DELETE NO ACTION
+ ON UPDATE CASCADE,
+
+ -- Prevent machines from occupying the same position in a rack.
+ UNIQUE (rack_id, position)
+);
+
+DELIMITER //
+
+-- Make sure a machine is not inserted at a position that does not exist for its rack.
+DROP TRIGGER IF EXISTS before_insert_machine;
+CREATE TRIGGER before_insert_machine
+ BEFORE INSERT
+ ON machines
+ FOR EACH ROW
+ BEGIN
+ IF (
+ NEW.position > (SELECT capacity
+ FROM racks
+ WHERE racks.id = NEW.rack_id)
+ )
+ THEN
+ -- raise an error
+ SIGNAL SQLSTATE '45000'
+ SET MESSAGE_TEXT = 'InvalidMachinePosition';
+ END IF;
+ END//
+
+DELIMITER ;
+
+/*
+* A machine can have a tag for easy search and filtering.
+*/
+
+-- Tags for machines
+DROP TABLE IF EXISTS machine_tags;
+CREATE TABLE machine_tags (
+ name TEXT NOT NULL,
+ machine_id INTEGER NOT NULL,
+
+ FOREIGN KEY (machine_id) REFERENCES machines (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE
+);
+
+/*
+* A failure model defines the probability of a machine breaking at any given time.
+*/
+
+-- Failure models
+DROP TABLE IF EXISTS failure_models;
+CREATE TABLE failure_models (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ name TEXT NOT NULL,
+ rate REAL NOT NULL CHECK (rate >= 0 AND rate <= 1)
+);
+
+/*
+* A cpu stores information about a type of cpu. The machine_cpu table keeps track of which cpus are in which machines.
+*/
+
+-- CPU specs
+DROP TABLE IF EXISTS cpus;
+CREATE TABLE cpus (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ manufacturer TEXT NOT NULL,
+ family TEXT NOT NULL,
+ generation TEXT NOT NULL,
+ model TEXT NOT NULL,
+ clock_rate_mhz INTEGER NOT NULL CHECK (clock_rate_mhz > 0),
+ number_of_cores INTEGER NOT NULL CHECK (number_of_cores > 0),
+ energy_consumption_w REAL NOT NULL CHECK (energy_consumption_w > 0),
+ failure_model_id INTEGER NOT NULL,
+
+ FOREIGN KEY (failure_model_id) REFERENCES failure_models (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE
+);
+
+-- CPUs in machines
+DROP TABLE IF EXISTS machine_cpus;
+CREATE TABLE machine_cpus (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ machine_id INTEGER NOT NULL,
+ cpu_id INTEGER NOT NULL,
+
+ FOREIGN KEY (machine_id) REFERENCES machines (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE,
+ FOREIGN KEY (cpu_id) REFERENCES cpus (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE
+);
+
+/*
+* A gpu stores information about a type of gpu. The machine_gpu table keeps track of which gpus are in which machines.
+*/
+
+-- GPU specs
+DROP TABLE IF EXISTS gpus;
+CREATE TABLE gpus (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ manufacturer TEXT NOT NULL,
+ family TEXT NOT NULL,
+ generation TEXT NOT NULL,
+ model TEXT NOT NULL,
+ clock_rate_mhz INTEGER NOT NULL CHECK (clock_rate_mhz > 0),
+ number_of_cores INTEGER NOT NULL CHECK (number_of_cores > 0),
+ energy_consumption_w REAL NOT NULL CHECK (energy_consumption_w > 0),
+ failure_model_id INTEGER NOT NULL,
+
+ FOREIGN KEY (failure_model_id) REFERENCES failure_models (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE
+);
+
+-- GPUs in machines
+DROP TABLE IF EXISTS machine_gpus;
+CREATE TABLE machine_gpus (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ machine_id INTEGER NOT NULL,
+ gpu_id INTEGER NOT NULL,
+
+ FOREIGN KEY (machine_id) REFERENCES machines (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE,
+ FOREIGN KEY (gpu_id) REFERENCES gpus (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE
+);
+
+/*
+* A memory stores information about a type of memory. The machine_memory table keeps track of which memories are in
+* which machines.
+*/
+
+-- Memory specs
+DROP TABLE IF EXISTS memories;
+CREATE TABLE memories (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ manufacturer TEXT NOT NULL,
+ family TEXT NOT NULL,
+ generation TEXT NOT NULL,
+ model TEXT NOT NULL,
+ speed_mb_per_s INTEGER NOT NULL CHECK (speed_mb_per_s > 0),
+ size_mb INTEGER NOT NULL CHECK (size_mb > 0),
+ energy_consumption_w REAL NOT NULL CHECK (energy_consumption_w > 0),
+ failure_model_id INTEGER NOT NULL,
+
+ FOREIGN KEY (failure_model_id) REFERENCES failure_models (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE
+);
+
+-- Memory in machines
+DROP TABLE IF EXISTS machine_memories;
+CREATE TABLE machine_memories (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ machine_id INTEGER NOT NULL,
+ memory_id INTEGER NOT NULL,
+
+ FOREIGN KEY (machine_id) REFERENCES machines (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE,
+ FOREIGN KEY (memory_id) REFERENCES memories (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE
+);
+
+/*
+* A storage stores information about a type of storage. The machine_storage table keeps track of which storages are in
+* which machines.
+*/
+
+-- Storage specs
+DROP TABLE IF EXISTS storages;
+CREATE TABLE storages (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ manufacturer TEXT NOT NULL,
+ family TEXT NOT NULL,
+ generation TEXT NOT NULL,
+ model TEXT NOT NULL,
+ speed_mb_per_s INTEGER NOT NULL CHECK (speed_mb_per_s > 0),
+ size_mb INTEGER NOT NULL CHECK (size_mb > 0),
+ energy_consumption_w REAL NOT NULL CHECK (energy_consumption_w > 0),
+ failure_model_id INTEGER NOT NULL,
+
+ FOREIGN KEY (failure_model_id) REFERENCES failure_models (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE
+);
+
+-- Storage in machines
+DROP TABLE IF EXISTS machine_storages;
+CREATE TABLE machine_storages (
+ id INTEGER PRIMARY KEY NOT NULL AUTO_INCREMENT,
+ machine_id INTEGER NOT NULL,
+ storage_id INTEGER NOT NULL,
+
+ FOREIGN KEY (machine_id) REFERENCES machines (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE,
+ FOREIGN KEY (storage_id) REFERENCES storages (id)
+ ON DELETE CASCADE
+ ON UPDATE CASCADE
+);
diff --git a/core/database/test.sql b/core/database/test.sql
new file mode 100644
index 00000000..55801b76
--- /dev/null
+++ b/core/database/test.sql
@@ -0,0 +1,381 @@
+-- Users
+INSERT INTO users (google_id, email, given_name, family_name)
+VALUES ('106671218963420759042', 'l.overweel@gmail.com', 'Leon', 'Overweel');
+INSERT INTO users (google_id, email, given_name, family_name)
+VALUES ('118147174005839766927', 'jorgos.andreadis@gmail.com', 'Jorgos', 'Andreadis');
+
+-- Simulations
+INSERT INTO simulations (name, datetime_created, datetime_last_edited)
+VALUES ('Test Simulation 1', '2016-07-11T11:00:00', '2016-07-11T11:00:00');
+
+-- Authorizations
+INSERT INTO authorizations (user_id, simulation_id, authorization_level)
+VALUES (1, 1, 'OWN');
+INSERT INTO authorizations (user_id, simulation_id, authorization_level)
+VALUES (2, 1, 'OWN');
+
+-- Paths
+INSERT INTO paths (simulation_id, datetime_created)
+VALUES (1, '2016-07-11T11:00:00');
+INSERT INTO paths (simulation_id, datetime_created)
+VALUES (1, '2016-07-18T09:00:00');
+
+-- Datacenter
+INSERT INTO datacenters (starred, simulation_id) VALUES (0, 1);
+INSERT INTO datacenters (starred, simulation_id) VALUES (0, 1);
+INSERT INTO datacenters (starred, simulation_id) VALUES (0, 1);
+
+-- Sections
+INSERT INTO sections (path_id, datacenter_id, start_tick) VALUES (1, 1, 0);
+INSERT INTO sections (path_id, datacenter_id, start_tick) VALUES (1, 2, 50);
+INSERT INTO sections (path_id, datacenter_id, start_tick) VALUES (1, 3, 100);
+
+INSERT INTO sections (path_id, datacenter_id, start_tick) VALUES (2, 3, 0);
+
+-- Default Test Trace
+INSERT INTO traces (name) VALUES ('Default');
+
+-- Jobs
+INSERT INTO jobs (name, trace_id) VALUES ('Default', 1);
+
+-- Tasks
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (0, 400000, 1, 1);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (25, 10000, 1, 1);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (25, 10000, 1, 1);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (26, 10000, 1, 1);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (80, 200000, 1, 1);
+
+INSERT INTO task_dependencies (first_task_id, second_task_id) VALUES (1, 5);
+
+-- Image Processing Trace
+INSERT INTO traces (name) VALUES ('Image Processing');
+
+-- Jobs
+INSERT INTO jobs (name, trace_id) VALUES ('Image Processing', 2);
+
+-- Tasks
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (0, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (10, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (20, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (0, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (10, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (20, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (1, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (11, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (21, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (1, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (11, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (21, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (0, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (10, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (20, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (0, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (10, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (20, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (1, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (11, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (21, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (1, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (11, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (21, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (0, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (10, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (20, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (0, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (10, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (20, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (1, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (11, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (21, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (1, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (11, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (21, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (0, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (10, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (20, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (0, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (10, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (20, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (1, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (11, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (21, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (1, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (11, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (21, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (0, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (10, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (20, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (0, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (10, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (20, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (1, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (11, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (21, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (1, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (11, 100000, 1, 2);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (21, 100000, 1, 2);
+
+-- Path Planning Trace
+INSERT INTO traces (name) VALUES ('Path planning');
+
+-- Jobs
+INSERT INTO jobs (name, trace_id) VALUES ('Path planning', 3);
+
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (0, 1000000, 1, 3);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (11, 200000, 1, 3);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (12, 200000, 1, 3);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (13, 200000, 1, 3);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (14, 200000, 1, 3);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (11, 200000, 1, 3);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (12, 200000, 1, 3);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (13, 200000, 1, 3);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (14, 200000, 1, 3);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (11, 200000, 1, 3);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (12, 200000, 1, 3);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (13, 200000, 1, 3);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (14, 200000, 1, 3);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (11, 200000, 1, 3);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (12, 200000, 1, 3);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (13, 200000, 1, 3);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (14, 200000, 1, 3);
+
+INSERT INTO task_dependencies (first_task_id, second_task_id) VALUES (66, 67);
+INSERT INTO task_dependencies (first_task_id, second_task_id) VALUES (66, 68);
+INSERT INTO task_dependencies (first_task_id, second_task_id) VALUES (66, 69);
+INSERT INTO task_dependencies (first_task_id, second_task_id) VALUES (66, 70);
+INSERT INTO task_dependencies (first_task_id, second_task_id) VALUES (66, 71);
+INSERT INTO task_dependencies (first_task_id, second_task_id) VALUES (66, 72);
+INSERT INTO task_dependencies (first_task_id, second_task_id) VALUES (66, 73);
+INSERT INTO task_dependencies (first_task_id, second_task_id) VALUES (66, 74);
+INSERT INTO task_dependencies (first_task_id, second_task_id) VALUES (66, 75);
+INSERT INTO task_dependencies (first_task_id, second_task_id) VALUES (66, 76);
+INSERT INTO task_dependencies (first_task_id, second_task_id) VALUES (66, 77);
+INSERT INTO task_dependencies (first_task_id, second_task_id) VALUES (66, 78);
+INSERT INTO task_dependencies (first_task_id, second_task_id) VALUES (66, 79);
+INSERT INTO task_dependencies (first_task_id, second_task_id) VALUES (66, 80);
+INSERT INTO task_dependencies (first_task_id, second_task_id) VALUES (66, 81);
+INSERT INTO task_dependencies (first_task_id, second_task_id) VALUES (66, 82);
+
+-- Parallelizable Trace
+INSERT INTO traces (name) VALUES ('Parallel heavy trace');
+
+-- Jobs
+INSERT INTO jobs (name, trace_id) VALUES ('Parallel heavy trace', 4);
+
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (0, 100000, 1, 4);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (0, 900000, 1, 4);
+
+-- Sequential Trace
+INSERT INTO traces (name) VALUES ('Sequential heavy trace');
+
+-- Jobs
+INSERT INTO jobs (name, trace_id) VALUES ('Sequential heavy trace', 5);
+
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (0, 100000, 1, 5);
+INSERT INTO tasks (start_tick, total_flop_count, core_count, job_id) VALUES (0, 900000, 1, 5);
+
+-- Experiments
+INSERT INTO experiments (simulation_id, path_id, trace_id, scheduler_name, name, state, last_simulated_tick)
+VALUES (1, 1, 3, 'fifo-bestfit', 'Path planning trace, FIFO', 'QUEUED', 0);
+INSERT INTO experiments (simulation_id, path_id, trace_id, scheduler_name, name, state, last_simulated_tick)
+VALUES (1, 1, 1, 'srtf-firstfit', 'Default trace, SRTF', 'QUEUED', 0);
+INSERT INTO experiments (simulation_id, path_id, trace_id, scheduler_name, name, state, last_simulated_tick)
+VALUES (1, 1, 2, 'srtf-firstfit', 'Image processing trace, SRTF', 'QUEUED', 0);
+INSERT INTO experiments (simulation_id, path_id, trace_id, scheduler_name, name, state, last_simulated_tick)
+VALUES (1, 1, 3, 'fifo-firstfit', 'Path planning trace, FIFO', 'QUEUED', 0);
+
+-- Rooms
+INSERT INTO rooms (name, datacenter_id, type) VALUES ('room 1', 1, 'SERVER');
+INSERT INTO rooms (name, datacenter_id, type, topology_id) VALUES ('room 1', 2, 'SERVER', 1);
+INSERT INTO rooms (name, datacenter_id, type, topology_id) VALUES ('room 1', 3, 'SERVER', 1);
+INSERT INTO rooms (name, datacenter_id, type) VALUES ('room 2', 3, 'SERVER');
+INSERT INTO rooms (name, datacenter_id, type) VALUES ('Power Room', 1, 'POWER');
+
+-- Tiles
+INSERT INTO tiles (position_x, position_y, room_id) VALUES (10, 10, 1);
+INSERT INTO tiles (position_x, position_y, room_id) VALUES (9, 10, 1);
+INSERT INTO tiles (position_x, position_y, room_id) VALUES (10, 11, 1);
+
+INSERT INTO tiles (position_x, position_y, room_id, topology_id) VALUES (10, 10, 2, 1);
+INSERT INTO tiles (position_x, position_y, room_id, topology_id) VALUES (9, 10, 2, 2);
+INSERT INTO tiles (position_x, position_y, room_id, topology_id) VALUES (10, 11, 2, 3);
+INSERT INTO tiles (position_x, position_y, room_id) VALUES (11, 11, 2);
+
+INSERT INTO tiles (position_x, position_y, room_id, topology_id) VALUES (10, 10, 3, 1);
+INSERT INTO tiles (position_x, position_y, room_id, topology_id) VALUES (9, 10, 3, 2);
+INSERT INTO tiles (position_x, position_y, room_id, topology_id) VALUES (10, 11, 3, 3);
+INSERT INTO tiles (position_x, position_y, room_id, topology_id) VALUES (11, 11, 3, 7);
+
+INSERT INTO tiles (position_x, position_y, room_id) VALUES (11, 10, 4);
+INSERT INTO tiles (position_x, position_y, room_id) VALUES (12, 10, 4);
+
+INSERT INTO tiles (position_x, position_y, room_id) VALUES (10, 12, 5);
+INSERT INTO tiles (position_x, position_y, room_id) VALUES (10, 13, 5);
+
+-- Racks
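+-- Each rack takes three statements: an 'objects' row of type RACK, a 'racks' row that
+-- reuses the same ID, and an UPDATE that places the object on a tile.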
+INSERT INTO objects (type) VALUES ('RACK');
+INSERT INTO racks (id, capacity, name, power_capacity_w) VALUES (1, 42, 'Rack 1', 5000);
+UPDATE tiles
+SET object_id = 1
+WHERE id = 1;
+INSERT INTO objects (type) VALUES ('RACK');
+INSERT INTO racks (id, capacity, name, power_capacity_w) VALUES (2, 42, 'Rack 2', 5000);
+UPDATE tiles
+SET object_id = 2
+WHERE id = 2;
+
+INSERT INTO objects (type) VALUES ('RACK');
+INSERT INTO racks (id, capacity, name, power_capacity_w, topology_id) VALUES (3, 42, 'Rack 1', 5000, 1);
+UPDATE tiles
+SET object_id = 3
+WHERE id = 4;
+INSERT INTO objects (type) VALUES ('RACK');
+INSERT INTO racks (id, capacity, name, power_capacity_w, topology_id) VALUES (4, 42, 'Rack 2', 5000, 2);
+UPDATE tiles
+SET object_id = 4
+WHERE id = 5;
+INSERT INTO objects (type) VALUES ('RACK');
+INSERT INTO racks (id, capacity, name, power_capacity_w) VALUES (5, 42, 'Rack 3', 5000);
+UPDATE tiles
+SET object_id = 5
+WHERE id = 7;
+
+INSERT INTO objects (type) VALUES ('RACK');
+INSERT INTO racks (id, capacity, name, power_capacity_w, topology_id) VALUES (6, 42, 'Rack 1', 5000, 1);
+UPDATE tiles
+SET object_id = 6
+WHERE id = 8;
+
+INSERT INTO objects (type) VALUES ('RACK');
+INSERT INTO racks (id, capacity, name, power_capacity_w, topology_id) VALUES (7, 42, 'Rack 2', 5000, 2);
+UPDATE tiles
+SET object_id = 7
+WHERE id = 9;
+
+INSERT INTO objects (type) VALUES ('RACK');
+INSERT INTO racks (id, capacity, name, power_capacity_w, topology_id) VALUES (8, 42, 'Rack 3', 5000, 5);
+UPDATE tiles
+SET object_id = 8
+WHERE id = 11;
+
+INSERT INTO objects (type) VALUES ('RACK');
+INSERT INTO racks (id, capacity, name, power_capacity_w) VALUES (9, 42, 'Rack 4', 5000);
+UPDATE tiles
+SET object_id = 9
+WHERE id = 12;
+
+-- Machines
+INSERT INTO machines (rack_id, position) VALUES (1, 1);
+INSERT INTO machines (rack_id, position) VALUES (1, 2);
+INSERT INTO machines (rack_id, position) VALUES (1, 6);
+INSERT INTO machines (rack_id, position) VALUES (1, 10);
+INSERT INTO machines (rack_id, position) VALUES (2, 1);
+INSERT INTO machines (rack_id, position) VALUES (2, 2);
+
+INSERT INTO machines (rack_id, position, topology_id) VALUES (3, 1, 1);
+INSERT INTO machines (rack_id, position, topology_id) VALUES (3, 2, 2);
+INSERT INTO machines (rack_id, position, topology_id) VALUES (3, 6, 3);
+INSERT INTO machines (rack_id, position, topology_id) VALUES (3, 10, 4);
+INSERT INTO machines (rack_id, position, topology_id) VALUES (4, 1, 5);
+INSERT INTO machines (rack_id, position, topology_id) VALUES (4, 2, 6);
+INSERT INTO machines (rack_id, position) VALUES (5, 1);
+INSERT INTO machines (rack_id, position) VALUES (5, 2);
+INSERT INTO machines (rack_id, position) VALUES (5, 3);
+
+INSERT INTO machines (rack_id, position, topology_id) VALUES (6, 1, 1);
+INSERT INTO machines (rack_id, position, topology_id) VALUES (6, 2, 2);
+INSERT INTO machines (rack_id, position, topology_id) VALUES (6, 6, 3);
+INSERT INTO machines (rack_id, position, topology_id) VALUES (6, 10, 4);
+INSERT INTO machines (rack_id, position, topology_id) VALUES (7, 1, 5);
+INSERT INTO machines (rack_id, position, topology_id) VALUES (7, 2, 6);
+INSERT INTO machines (rack_id, position, topology_id) VALUES (8, 1, 13);
+INSERT INTO machines (rack_id, position, topology_id) VALUES (8, 2, 14);
+INSERT INTO machines (rack_id, position, topology_id) VALUES (8, 3, 15);
+INSERT INTO machines (rack_id, position) VALUES (9, 4);
+INSERT INTO machines (rack_id, position) VALUES (9, 5);
+INSERT INTO machines (rack_id, position) VALUES (9, 6);
+INSERT INTO machines (rack_id, position) VALUES (9, 7);
+
+-- Tags
+INSERT INTO machine_tags (name, machine_id) VALUES ('my fave machine', 1);
+INSERT INTO machine_tags (name, machine_id) VALUES ('my best machine', 2);
+
+-- Failure models
+INSERT INTO failure_models (name, rate) VALUES ('test_model', 0);
+
+-- CPUs
+INSERT INTO cpus (manufacturer, family, generation, model, clock_rate_mhz, number_of_cores, energy_consumption_w,
+ failure_model_id) VALUES ('intel', 'i7', 'v6', '6700k', 4100, 4, 70, 1);
+INSERT INTO cpus (manufacturer, family, generation, model, clock_rate_mhz, number_of_cores, energy_consumption_w,
+ failure_model_id) VALUES ('intel', 'i5', 'v6', '6700k', 3500, 2, 50, 1);
+
+-- GPUs
+INSERT INTO gpus (manufacturer, family, generation, model, clock_rate_mhz, number_of_cores, energy_consumption_w,
+ failure_model_id) VALUES ('NVIDIA', 'GTX', '4', '1080', 1200, 200, 250, 1);
+
+-- CPUs in machines
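+-- A (machine_id, cpu_id) pair may appear more than once; each row adds one CPU of that
+-- model to the machine.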
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (1, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (1, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (1, 2);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (2, 2);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (2, 2);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (3, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (3, 2);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (3, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (4, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (4, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (4, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (5, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (6, 1);
+
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (7, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (7, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (7, 2);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (8, 2);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (8, 2);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (9, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (9, 2);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (9, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (10, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (10, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (10, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (11, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (12, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (13, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (14, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (15, 1);
+
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (16, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (16, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (16, 2);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (17, 2);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (17, 2);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (18, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (18, 2);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (18, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (19, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (19, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (19, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (20, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (21, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (22, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (23, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (24, 1);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (25, 2);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (26, 2);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (27, 2);
+INSERT INTO machine_cpus (machine_id, cpu_id) VALUES (28, 2);
+
+-- GPUs
+INSERT INTO gpus (manufacturer, family, generation, model, clock_rate_mhz, number_of_cores, energy_consumption_w,
+ failure_model_id) VALUES ('nvidia', 'GeForce GTX Series', '10', '80', 1607, 2560, 70, 1);
+
+-- Memories
+
+INSERT INTO memories (manufacturer, family, generation, model, speed_mb_per_s, size_mb, energy_consumption_w,
+ failure_model_id) VALUES ('samsung', 'PC DRAM', 'K4A4G045WD', 'DDR4', 16000, 4000, 10, 1);
+
+-- Storages
+
+INSERT INTO storages (manufacturer, family, generation, model, speed_mb_per_s, size_mb, energy_consumption_w,
+ failure_model_id) VALUES ('samsung', 'EVO', '2016', 'SATA III', 6000, 250000, 10, 1);
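+
+-- Quick sanity check after loading this file (example only):
+-- SELECT name, scheduler_name, state FROM experiments WHERE state = 'QUEUED';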
diff --git a/core/database/view-table.py b/core/database/view-table.py
new file mode 100644
index 00000000..615b4081
--- /dev/null
+++ b/core/database/view-table.py
@@ -0,0 +1,17 @@
+import os
+import sqlite3
+import sys
+
+try:
+    BASE_DIR = sys.argv[1]
+except IndexError:
+ BASE_DIR = os.path.dirname(os.path.abspath(__file__))
+db_location = os.path.join(BASE_DIR, 'opendc.db')
+
+conn = sqlite3.connect(db_location)
+c = conn.cursor()
+
+rows = c.execute('SELECT * FROM ' + sys.argv[2])
+
+for row in rows:
+    print(row)
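+
+# Example usage (assumes opendc.db exists in the given directory):
+#   python3 view-table.py . experiments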
diff --git a/core/docker-compose.yml b/core/docker-compose.yml
new file mode 100644
index 00000000..3f4ad20a
--- /dev/null
+++ b/core/docker-compose.yml
@@ -0,0 +1,84 @@
+version: "3"
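+# Typical usage, assuming this file's relative paths resolve (i.e. the opendc-simulator
+# sources are checked out under ./opendc-simulator): docker-compose up --build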
+services:
+ frontend:
+ build: ./
+ image: frontend
+ restart: on-failure
+ ports:
+ - "8081:8081"
+ links:
+ - mariadb
+ depends_on:
+ - mariadb
+ environment:
+ - MYSQL_DATABASE=opendc
+ - MYSQL_USER=opendc
+ - MYSQL_PASSWORD=opendcpassword
+ - MONGO_DB=opendc
+ - MONGO_DB_USERNAME=opendc
+ - MONGO_DB_PASSWORD=opendcpassword
+ - MONGO_DB_HOST=mongo
+ - MONGO_DB_PORT=27017
+
+ simulator:
+ build:
+ context: ./opendc-simulator
+ dockerfile: opendc-model-odc/setup/Dockerfile
+ image: simulator
+ restart: on-failure
+ links:
+ - mariadb
+ depends_on:
+ - mariadb
+ environment:
+ - PERSISTENCE_URL=jdbc:mysql://mariadb:3306/opendc
+ - PERSISTENCE_USER=opendc
+ - PERSISTENCE_PASSWORD=opendcpassword
+ - COLLECT_MACHINE_STATES=ON
+ - COLLECT_TASK_STATES=ON
+ - COLLECT_STAGE_MEASUREMENTS=OFF
+ - COLLECT_TASK_METRICS=OFF
+ - COLLECT_JOB_METRICS=OFF
+ mariadb:
+ build:
+ context: ./database
+ image: database
+ restart: on-failure
+ ports:
+ - "3306:3306" # comment this line out in production
+ environment:
+ - MYSQL_DATABASE=opendc
+ - MYSQL_USER=opendc
+ - MYSQL_PASSWORD=opendcpassword
+ - MYSQL_RANDOM_ROOT_PASSWORD=yes
+ # uncomment in production
+ # volumes:
+ # - "/data/mariadb:/var/lib/mysql"
+ mongo:
+ build:
+ context: ./mongodb
+ restart: on-failure
+ environment:
+ - MONGO_INITDB_ROOT_USERNAME=root
+ - MONGO_INITDB_ROOT_PASSWORD=rootpassword
+ - MONGO_INITDB_DATABASE=admin
+ - OPENDC_DB=opendc
+ - OPENDC_DB_USERNAME=opendc
+ - OPENDC_DB_PASSWORD=opendcpassword
+ ports:
+ - 27017:27017
+ #volumes:
+ # - mongo-volume:/data/db
+
+ mongo-express:
+ image: mongo-express
+ restart: on-failure
+ ports:
+ - 8082:8081
+ environment:
+ ME_CONFIG_MONGODB_ADMINUSERNAME: root
+ ME_CONFIG_MONGODB_ADMINPASSWORD: rootpassword
+
+volumes:
+ mongo-volume:
+ external: false \ No newline at end of file
diff --git a/core/images/logo.png b/core/images/logo.png
new file mode 100644
index 00000000..d743038b
--- /dev/null
+++ b/core/images/logo.png
Binary files differ
diff --git a/core/images/opendc-component-diagram.png b/core/images/opendc-component-diagram.png
new file mode 100644
index 00000000..4aa535b9
--- /dev/null
+++ b/core/images/opendc-component-diagram.png
Binary files differ
diff --git a/core/images/opendc-frontend-construction.PNG b/core/images/opendc-frontend-construction.PNG
new file mode 100644
index 00000000..223e8d48
--- /dev/null
+++ b/core/images/opendc-frontend-construction.PNG
Binary files differ
diff --git a/core/images/opendc-frontend-simulation-zoom.PNG b/core/images/opendc-frontend-simulation-zoom.PNG
new file mode 100644
index 00000000..d7744926
--- /dev/null
+++ b/core/images/opendc-frontend-simulation-zoom.PNG
Binary files differ
diff --git a/core/images/opendc-frontend-simulation.PNG b/core/images/opendc-frontend-simulation.PNG
new file mode 100644
index 00000000..bbf4cbd6
--- /dev/null
+++ b/core/images/opendc-frontend-simulation.PNG
Binary files differ
diff --git a/core/mongodb/Dockerfile b/core/mongodb/Dockerfile
new file mode 100644
index 00000000..b4eb9dd1
--- /dev/null
+++ b/core/mongodb/Dockerfile
@@ -0,0 +1,5 @@
+FROM mongo:4.2.5
+MAINTAINER Jacob Burley <j.burley@vu.nl>
+
+# Import init script
+ADD mongo-init-opendc-db.sh /docker-entrypoint-initdb.d \ No newline at end of file
diff --git a/core/mongodb/docker-compose.yml b/core/mongodb/docker-compose.yml
new file mode 100644
index 00000000..aa54a74c
--- /dev/null
+++ b/core/mongodb/docker-compose.yml
@@ -0,0 +1,30 @@
+version: "3"
+services:
+ mongo:
+ build:
+ context: ./
+ restart: on-failure
+ environment:
+ MONGO_INITDB_ROOT_USERNAME: root
+ MONGO_INITDB_ROOT_PASSWORD: rootpassword
+ MONGO_INITDB_DATABASE: admin
+ OPENDC_DB: opendc
+ OPENDC_DB_USERNAME: opendc
+ OPENDC_DB_PASSWORD: opendcpassword
+ ports:
+ - 27017:27017
+ #volumes:
+ # - mongo-volume:/data/db
+
+ mongo-express:
+ image: mongo-express
+ restart: on-failure
+ ports:
+ - 8082:8081
+ environment:
+ ME_CONFIG_MONGODB_ADMINUSERNAME: root
+ ME_CONFIG_MONGODB_ADMINPASSWORD: rootpassword
+
+volumes:
+ mongo-volume:
+ external: false
diff --git a/core/mongodb/mongo-init-opendc-db.sh b/core/mongodb/mongo-init-opendc-db.sh
new file mode 100644
index 00000000..e7a787fe
--- /dev/null
+++ b/core/mongodb/mongo-init-opendc-db.sh
@@ -0,0 +1,122 @@
+#!/bin/bash
+
+echo 'Creating opendc user and db'
+
+mongo opendc --host localhost \
+ --port 27017 \
+ -u "$MONGO_INITDB_ROOT_USERNAME" \
+ -p "$MONGO_INITDB_ROOT_PASSWORD" \
+ --authenticationDatabase admin \
+ --eval "db.createUser({user: '$OPENDC_DB_USERNAME', pwd: '$OPENDC_DB_PASSWORD', roles:[{role:'dbOwner', db: '$OPENDC_DB'}]});"
+
+MONGO_CMD="mongo $OPENDC_DB -u $OPENDC_DB_USERNAME -p $OPENDC_DB_PASSWORD --authenticationDatabase $OPENDC_DB"
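+# Every command below runs against the opendc database, authenticated as the user created above.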
+
+echo 'Creating collections'
+
+$MONGO_CMD --eval 'db.createCollection("users");'
+$MONGO_CMD --eval 'db.createCollection("simulations");'
+$MONGO_CMD --eval 'db.createCollection("topologies");'
+$MONGO_CMD --eval 'db.createCollection("experiments");'
+$MONGO_CMD --eval 'db.createCollection("prefabs");'
+
+echo 'Loading test data'
+
+$MONGO_CMD --eval 'db.users.insertOne(
+ {
+ "googleId": "23483578932789231",
+ "email": "jorgos.andreadis@gmail.com",
+ "givenName": "Jorgos",
+ "familyName": "Andreadis",
+ "authorizations": []
+ });'
+
+$MONGO_CMD --eval 'db.prefabs.insertOne(
+ {
+ "type": "rack",
+ "name": "testRack3",
+ "size": 42,
+ "depth": 42,
+ "author": "Jacob Burley",
+ "visibility": "public",
+ "children": [
+ {
+ "type": "switch",
+ "ports": 48,
+ "powerDraw": 150,
+ "psus": 1,
+ "size": 1
+ },
+ {
+ "type": "chassis",
+ "size": 4,
+ "children": [
+ {
+ "type": "mainboard",
+ "sockets": 1,
+ "dimmSlots": 4,
+ "nics": 1,
+ "pcieSlots": 2,
+ "children": [
+ {
+ "type": "CPU",
+ "coreCount": 4,
+ "SMT": true,
+ "baseClk": 3.5,
+ "boostClk": 3.9,
+ "brand": "Intel",
+ "SKU": "i7-3770K",
+ "socket": "LGA1155",
+ "TDP": 77
+ },
+ {
+ "type": "DDR3",
+ "capacity": 4096,
+ "memfreq": 1333,
+ "ecc": false
+ },
+ {
+ "type": "DDR3",
+ "capacity": 4096,
+ "memfreq": 1333,
+ "ecc": false
+ },
+ {
+ "type": "DDR3",
+ "capacity": 4096,
+ "memfreq": 1333,
+ "ecc": false
+ },
+ {
+ "type": "DDR3",
+ "capacity": 4096,
+ "memfreq": 1333,
+ "ecc": false
+ },
+ {
+ "type": "GPU",
+ "VRAM": 8192,
+ "coreCount": 2304,
+ "brand": "AMD",
+ "technologies": "OpenCL",
+ "pcieGen": "3x16",
+ "tdp": 169,
+ "slots": 2
+ }
+ ]
+ },
+ {
+ "type": "PSU",
+ "wattage": 550,
+ "ac": true
+ },
+ {
+ "type": "disk",
+ "size": 2000,
+ "interface": "SATA",
+ "media": "flash",
+ "formFactor": 2.5
+ }
+ ]
+ }
+ ]
+ });'
diff --git a/core/mongodb/prefab.py b/core/mongodb/prefab.py
new file mode 100755
index 00000000..124f45e3
--- /dev/null
+++ b/core/mongodb/prefab.py
@@ -0,0 +1,112 @@
+#!/usr/bin/env python3
+# encoding: utf-8
+"""
+prefab
+
+CLI frontend for viewing, modifying and creating prefabs in OpenDC.
+
+"""
+import sys
+import prefabs
+
+def usage():
+ print("Usage: prefab add <prefab>: imports a prefab from JSON")
+ print(" list: lists all (public) prefabs")
+ print(" export <prefab> [json|yaml]: exports the specified prefab to the specified filetype (with JSON used by default)")
+ print(" clone <prefab> [new prefab name]: clones the specified prefab, giving the new prefab a name if specified")
+ print(" remove <prefab>: removes the specified prefab from the database")
+
+def interactive(): #interactive CLI mode: recommended
+ print("OpenDC Prefab CLI")
+ running = True
+    while running:
+ print(">", end=" ")
+ try:
+ command = input()
+ command = command.split()
+ except EOFError as e:
+ print("exit")
+ print("bye!")
+ exit()
+ except KeyboardInterrupt as KI:
+ print("\nbye!")
+ exit()
+ if(len(command) >= 1):
+ if(command[0] == "exit"):
+ print("bye!")
+ exit()
+            elif(command[0] == "list"):
+                prefabs.list()
+            elif(command[0] == "help"):
+                usage()
+ elif(command[0] == "add"):
+ if(len(command) == 3):
+ prefabs.add(command[1], command[2])
+ else:
+ prefabs.add(command[1], None)
+ elif(command[0] == "clone"):
+ if(len(command) == 3):
+ prefabs.clone(command[1], command[2])
+ else:
+ prefabs.clone(command[1], None)
+ elif(command[0] == "export"):
+ #print(sys.argv[2])
+ prefabs.export(command[1], "json")
+ elif(command[0] == "remove"):
+                print("WARNING: This will permanently remove the specified prefab.\nThis action CANNOT be undone. Please type the name of the prefab to confirm deletion.")
+ confirm = input()
+ if confirm == command[1]:
+ prefabs.remove(command[1])
+ print(f'Prefab {command[1]} has been removed.')
+ else:
+ print("Confirmation failed. The prefab has not been removed.")
+ else:
+ print("prefabs: try 'help' for more information\n")
+ else:
+ print("prefabs: try 'help' for more information\n")
+
+
+def main():
+ if(len(sys.argv) >= 2):
+        if(sys.argv[1] == "list"):
+            prefabs.list()
+            exit()
+        elif(sys.argv[1] == "help"):
+            usage()
+            exit()
+        elif(sys.argv[1] == "add"):
+            if(len(sys.argv) >= 4):
+                prefabs.add(sys.argv[2], sys.argv[3])
+            else:
+                prefabs.add(sys.argv[2], None)
+ exit()
+ elif(sys.argv[1] == "export"):
+ #print(sys.argv[2])
+ prefabs.export(sys.argv[2], "json")
+ exit()
+ elif(sys.argv[1] == "remove"):
+            print("WARNING: This will permanently remove the specified prefab.\nThis action CANNOT be undone. Please type the name of the prefab to confirm deletion.")
+ confirm = input()
+ if confirm == sys.argv[2]:
+ prefabs.remove(sys.argv[2])
+ print(f'Prefab {sys.argv[2]} has been removed.')
+ else:
+ print("Confirmation failed. The prefab has not been removed.")
+ exit()
+ else:
+ print("prefabs: try 'prefabs help' for more information\n")
+ elif(len(sys.argv) == 1):
+ interactive()
+
+ else:
+ # print "Incorrect number of arguments!\n"
+ print("prefabs: try 'prefabs help' for more information\n")
+
+
+if __name__ == "__main__":
+ main() \ No newline at end of file
diff --git a/core/mongodb/prefabs.py b/core/mongodb/prefabs.py
new file mode 100755
index 00000000..f6f46cbc
--- /dev/null
+++ b/core/mongodb/prefabs.py
@@ -0,0 +1,124 @@
+#!/usr/bin/env python3
+# encoding: utf-8
+"""
+prefabs
+
+Python Library for interacting with mongoDB prefabs collection.
+
+"""
+import urllib.parse
+import pprint
+import sys
+import os
+import json
+import re
+#import pyyaml
+
+from pymongo import MongoClient
+from pymongo.errors import ConnectionFailure, DuplicateKeyError
+from bson.json_util import dumps, RELAXED_JSON_OPTIONS
+
+#mongodb_opendc_db = os.environ['OPENDC_DB']
+#mongodb_opendc_user = os.environ['OPENDC_DB_USERNAME']
+#mongodb_opendc_password = os.environ['OPENDC_DB_PASSWORD']
+
+#if mongodb_opendc_db == None or mongodb_opendc_user == None or mongodb_opendc_password == None:
+# print("One or more environment variables are not set correctly. \nYou may experience issues connecting to the mongodb database.")
+
+user = urllib.parse.quote_plus('opendc') #TODO: replace this with environment variable
+password = urllib.parse.quote_plus('opendcpassword') #TODO: same as above
+database = urllib.parse.quote_plus('opendc')
+
+client = MongoClient('mongodb://%s:%s@localhost/default_db?authSource=%s' % (user, password, database))
+opendcdb = client.opendc
+prefabs_collection = opendcdb.prefabs
+
+
+def add(prefab_file, name):
+ if(re.match(r"\w+(\\\ \w*)*\.json", prefab_file)):
+ try:
+ with open(prefab_file, "r") as json_file:
+ json_prefab = json.load(json_file)
+ #print(json_prefab)
+ if name != None:
+ json_prefab["name"] = name
+                prefab_id = None
+                try:
+                    prefab_id = prefabs_collection.insert_one(json_prefab)
+ except ConnectionFailure:
+ print("ERROR: Could not connect to the mongoDB database.")
+ except DuplicateKeyError:
+ print("ERROR: A prefab with the same unique ID already exists in the database. \nPlease remove the '_id' before trying again.\nYour prefab has not been imported.")
+ except:
+ print("ERROR: A general error has occurred. Your prefab has not been imported.")
+ if prefab_id != None:
+ if name != None:
+ print(f'Prefab "{name}" has been imported successfully.')
+ else:
+ print(f'Prefab "{prefab_file}" has been imported successfully.')
+ except FileNotFoundError:
+ print(f"ERROR: {prefab_file} could not be found in the specified path. No prefabs have been imported.")
+ elif(re.match(r"\w+(\\\ \w*)*\.yml", prefab_file)):
+ print("expecting a yaml file here")
+ #yaml
+ else:
+ print("The filetype provided is an unsupported filetype.")
+ #unsupported filetype
+
+def clone(prefab_name, new_name):
+ bson = prefabs_collection.find_one({'name': prefab_name})
+ json_string = dumps(bson) #convert BSON representation to JSON
+ chosen_prefab = json.loads(json_string) #load as a JSON object
+
+ chosen_prefab.pop("_id") # clean out our _id field from the export: mongo will generate a new one if this is imported back in
+
+ if new_name != None:
+ chosen_prefab["name"] = new_name
+    prefab_id = None
+    try:
+        prefab_id = prefabs_collection.insert_one(chosen_prefab)
+ except ConnectionFailure:
+ print("ERROR: Could not connect to the mongoDB database.")
+ except:
+ print("ERROR: A general error has occurred. Your selected prefab has not been cloned.")
+ if prefab_id != None:
+ if new_name != None:
+ print(f'Prefab "{prefab_name}" has been cloned successfully as {new_name}.')
+ else:
+ print(f'Prefab "{prefab_name}" has been cloned successfully.')
+
+def export(prefab_name, type):
+ bson = prefabs_collection.find_one({'name': prefab_name})
+ json_string = dumps(bson) #convert BSON representation to JSON
+ chosen_prefab = json.loads(json_string) #load as a JSON object
+
+ chosen_prefab.pop("_id") # clean out our _id field from the export: mongo will generate a new one if this is imported back in
+
+ with open(f'{prefab_name}.json', 'w', encoding='utf8') as f:
+ json.dump(chosen_prefab, f, ensure_ascii=False, indent=4)
+ print(f'Prefab {prefab_name} written to {os.getcwd()}/{prefab_name}.json.')
+ #pprint.pprint(json_string)
+ #pprint.pprint(json.loads(str(json_string)))
+
+def list():
+ #TODO: why does it output in single quotations?
+ cursor = prefabs_collection.find()
+ prefabs = []
+ for record in cursor:
+ #pprint.pprint(record)
+ #print(record)
+ json_string = dumps(record, json_options=RELAXED_JSON_OPTIONS) ##pymongo retrieves BSON objects, which need to be converted to json for pythons json module
+ prefabs.append(json.loads(json_string))
+
+ #print(f'There are {str(len(prefabs))} prefabs in the database. They are:')
+    print(f"{'Name':<24} Author")
+ for prefab in prefabs:
+ if(prefab['visibility'] == "private"):
+ continue
+        print(f"{prefab['name']:<24} {prefab['author']}")
+ #pprint.pprint(prefab)
+
+
+def remove(prefab_name):
+ prefabs_collection.delete_one({'name': prefab_name})
+
+
diff --git a/core/opendc-api-spec.yml b/core/opendc-api-spec.yml
new file mode 100644
index 00000000..56f1d529
--- /dev/null
+++ b/core/opendc-api-spec.yml
@@ -0,0 +1,999 @@
+swagger: '2.0'
+info:
+ version: 1.0.0
+ title: OpenDC API
+ description: 'OpenDC is an open-source datacenter simulator for education, featuring real-time online collaboration, diverse simulation models, and detailed performance feedback statistics.'
+host: opendc.org
+basePath: /v2
+schemes:
+ - https
+
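+# Example request against this spec (using the host and basePath above):
+#   GET https://opendc.org/v2/users?email=jorgos.andreadis@gmail.com
+#   returns 200 with the matching User, or 404 if no User has that email.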
+paths:
+ '/users':
+ get:
+ tags:
+ - users
+ description: Search for a User using their email address.
+ parameters:
+ - name: email
+ in: query
+ description: User's email address.
+ required: true
+ type: string
+ responses:
+ '200':
+ description: Successfully searched Users.
+ schema:
+ $ref: '#/definitions/User'
+ '400':
+ description: Missing or incorrectly typed parameter.
+ '401':
+ description: Unauthorized.
+ '404':
+ description: User not found.
+ post:
+ tags:
+ - users
+ description: Add a new User.
+ parameters:
+ - name: user
+ in: body
+ description: The new User.
+ required: true
+ schema:
+ $ref: '#/definitions/User'
+ responses:
+ '200':
+ description: Successfully added User.
+ schema:
+ $ref: '#/definitions/User'
+ '400':
+ description: Missing or incorrectly typed parameter.
+ '401':
+ description: Unauthorized.
+ '409':
+ description: User already exists.
+ '/users/{userId}':
+ get:
+ tags:
+ - users
+ description: Get this User.
+ parameters:
+ - name: userId
+ in: path
+ description: User's ID.
+ required: true
+ type: string
+ responses:
+ '200':
+ description: Successfully retrieved User.
+ schema:
+ $ref: '#/definitions/User'
+ '400':
+ description: Missing or incorrectly typed parameter.
+ '401':
+ description: Unauthorized.
+ '404':
+ description: User not found.
+ put:
+ tags:
+ - users
+      description: Update this User's given name and/or family name.
+ parameters:
+ - name: userId
+ in: path
+ description: User's ID.
+ required: true
+ type: string
+ - name: user
+ in: body
+ description: User's new properties.
+ required: true
+ schema:
+ properties:
+ givenName:
+ type: string
+ familyName:
+ type: string
+ responses:
+ '200':
+ description: Successfully updated User.
+ schema:
+ $ref: '#/definitions/User'
+ '400':
+ description: Missing or incorrectly typed parameter.
+ '401':
+ description: Unauthorized.
+ '403':
+ description: Forbidden from updating User.
+ '404':
+ description: User not found.
+ delete:
+ tags:
+ - users
+ description: Delete this User.
+ parameters:
+ - name: userId
+ in: path
+ description: User's ID.
+ required: true
+ type: string
+ responses:
+ '200':
+ description: Successfully deleted User.
+ schema:
+ $ref: '#/definitions/User'
+ '400':
+ description: Missing or incorrectly typed parameter.
+ '401':
+ description: Unauthorized.
+ '403':
+ description: Forbidden from deleting User.
+ '404':
+ description: User not found.
+ '/simulations':
+ post:
+ tags:
+ - simulations
+ description: Add a Simulation.
+ parameters:
+ - name: simulation
+ in: body
+ description: The new Simulation.
+ required: true
+ schema:
+ properties:
+ name:
+ type: string
+ responses:
+ '200':
+ description: Successfully added Simulation.
+ schema:
+ $ref: '#/definitions/Simulation'
+ '400':
+ description: Missing or incorrectly typed parameter.
+ '401':
+ description: Unauthorized.
+ '/simulations/{simulationId}':
+ get:
+ tags:
+ - simulations
+ description: Get this Simulation.
+ parameters:
+ - name: simulationId
+ in: path
+ description: Simulation's ID.
+ required: true
+ type: string
+ responses:
+ '200':
+ description: Successfully retrieved Simulation.
+ schema:
+ $ref: '#/definitions/Simulation'
+ '400':
+ description: Missing or incorrectly typed parameter.
+ '401':
+ description: Unauthorized.
+ '403':
+ description: Forbidden from retrieving Simulation.
+ '404':
+          description: Simulation not found.
+ put:
+ tags:
+ - simulations
+ description: Update this Simulation.
+ parameters:
+ - name: simulationId
+ in: path
+ description: Simulation's ID.
+ required: true
+ type: string
+ - name: simulation
+ in: body
+ description: Simulation's new properties.
+ required: true
+ schema:
+ properties:
+ simulation:
+ $ref: '#/definitions/Simulation'
+ responses:
+ '200':
+ description: Successfully updated Simulation.
+ schema:
+ $ref: '#/definitions/Simulation'
+ '400':
+ description: Missing or incorrectly typed parameter.
+ '401':
+ description: Unauthorized.
+ '403':
+ description: Forbidden from updating Simulation.
+ '404':
+ description: Simulation not found.
+ delete:
+ tags:
+ - simulations
+ description: Delete this simulation.
+ parameters:
+ - name: simulationId
+ in: path
+ description: Simulation's ID.
+ required: true
+ type: string
+ responses:
+ '200':
+ description: Successfully deleted Simulation.
+ schema:
+ $ref: '#/definitions/Simulation'
+ '400':
+ description: Missing or incorrectly typed parameter.
+ '401':
+ description: Unauthorized.
+ '403':
+ description: Forbidden from deleting Simulation.
+ '404':
+ description: Simulation not found.
+ '/simulations/{simulationId}/authorizations':
+ get:
+ tags:
+ - simulations
+ description: Get this Simulation's Authorizations.
+ parameters:
+ - name: simulationId
+ in: path
+ description: Simulation's ID.
+ required: true
+ type: string
+ responses:
+ '200':
+ description: Successfully retrieved Simulation's Authorizations.
+ schema:
+ type: array
+ items:
+ type: object
+ properties:
+ userId:
+ type: string
+ simulationId:
+ type: string
+ authorizationLevel:
+ type: string
+ '400':
+ description: Missing or incorrectly typed parameter.
+ '401':
+ description: Unauthorized.
+ '403':
+ description: Forbidden from retrieving this Simulation's Authorizations.
+ '404':
+ description: Simulation not found.
+ '/simulations/{simulationId}/topologies':
+ post:
+ tags:
+ - simulations
+ description: Add a Topology.
+ parameters:
+ - name: simulationId
+ in: path
+ description: Simulation's ID.
+ required: true
+ type: string
+ - name: topology
+ in: body
+ description: The new Topology.
+ required: true
+ schema:
+ properties:
+ topology:
+ $ref: '#/definitions/Topology'
+ responses:
+ '200':
+ description: Successfully added Topology.
+ schema:
+ $ref: '#/definitions/Topology'
+ '400':
+ description: Missing or incorrectly typed parameter.
+ '401':
+ description: Unauthorized.
+ '/topologies/{topologyId}':
+ get:
+ tags:
+ - topologies
+ description: Get this Topology.
+ parameters:
+ - name: topologyId
+ in: path
+ description: Topology's ID.
+ required: true
+ type: string
+ responses:
+ '200':
+ description: Successfully retrieved Topology.
+ schema:
+ $ref: '#/definitions/Topology'
+ '400':
+ description: Missing or incorrectly typed parameter.
+ '401':
+ description: Unauthorized.
+ '403':
+ description: Forbidden from retrieving Topology.
+ '404':
+ description: Topology not found.
+ put:
+ tags:
+ - topologies
+ description: Update this Topology's name.
+ parameters:
+ - name: topologyId
+ in: path
+ description: Topology's ID.
+ required: true
+ type: string
+ - name: topology
+ in: body
+ description: Topology's new properties.
+ required: true
+ schema:
+ properties:
+ topology:
+ $ref: '#/definitions/Topology'
+ responses:
+ '200':
+ description: Successfully updated Topology.
+ schema:
+ $ref: '#/definitions/Topology'
+ '400':
+ description: Missing or incorrectly typed parameter.
+ '401':
+ description: Unauthorized.
+ '403':
+ description: Forbidden from updating Topology.
+ '404':
+ description: Topology not found.
+ delete:
+ tags:
+ - topologies
+ description: Delete this Topology.
+ parameters:
+ - name: topologyId
+ in: path
+ description: Topology's ID.
+ required: true
+ type: string
+ responses:
+ '200':
+ description: Successfully deleted Topology.
+ schema:
+ $ref: '#/definitions/Topology'
+ '400':
+ description: Missing or incorrectly typed parameter.
+ '401':
+ description: Unauthorized.
+ '403':
+ description: Forbidden from deleting Topology.
+ '404':
+ description: Topology not found.
+ '/simulations/{simulationId}/experiments':
+ get:
+ tags:
+ - experiments
+ description: Get this Simulation's Experiments.
+ parameters:
+ - name: simulationId
+ in: path
+ description: Simulation's ID.
+ required: true
+ type: string
+ responses:
+ '200':
+ description: Successfully retrieved Experiments.
+ schema:
+ type: array
+ items:
+ $ref: '#/definitions/Experiment'
+ '400':
+ description: Missing or incorrectly typed parameter.
+ '401':
+ description: Unauthorized.
+ '403':
+ description: Forbidden from retrieving Simulation's Experiments.
+ '404':
+ description: Simulation not found.
+ post:
+ tags:
+ - experiments
+ description: Add a new Experiment for this Simulation.
+ parameters:
+ - name: simulationId
+ in: path
+ description: Simulation's ID.
+ required: true
+ type: string
+ - name: experiment
+ in: body
+ description: Experiment to add to this Simulation.
+ required: true
+ schema:
+ $ref: '#/definitions/Experiment'
+ responses:
+ '200':
+ description: Successfully added new Experiment.
+ schema:
+ $ref: '#/definitions/Experiment'
+ '400':
+ description: Missing or incorrectly typed parameter.
+ '401':
+ description: Unauthorized.
+ '403':
+ description: Forbidden from adding an Experiment to this Simulation.
+ '404':
+ description: 'Simulation, Topology, Scheduler or Trace not found.'
+ '/experiments/{experimentId}':
+ get:
+ tags:
+ - experiments
+ description: Get this Experiment.
+ parameters:
+ - name: experimentId
+ in: path
+ description: Experiment's ID.
+ required: true
+ type: string
+ responses:
+ '200':
+ description: Successfully retrieved Experiment.
+ schema:
+ $ref: '#/definitions/Experiment'
+ '400':
+ description: Missing or incorrectly typed parameter.
+ '401':
+ description: Unauthorized.
+ '403':
+ description: Forbidden from retrieving Experiment.
+ '404':
+ description: Experiment not found.
+ put:
+ tags:
+ - experiments
+ description: "Update this Experiment's Topology, Trace, Scheduler, and/or name."
+ parameters:
+ - name: experimentId
+ in: path
+ description: Experiment's ID.
+ required: true
+ type: string
+ - name: experiment
+ in: body
+ description: Experiment's new properties.
+ required: true
+ schema:
+ properties:
+ topologyId:
+ type: string
+ traceId:
+ type: string
+ schedulerName:
+ type: string
+ name:
+ type: string
+ responses:
+ '200':
+ description: Successfully updated Experiment.
+ schema:
+ $ref: '#/definitions/Experiment'
+ '400':
+ description: Missing or incorrectly typed parameter.
+ '401':
+ description: Unauthorized.
+ '403':
+ description: Forbidden from updating Experiment.
+ '404':
+ description: 'Experiment, Topology, Trace, or Scheduler not found.'
+ delete:
+ tags:
+ - experiments
+ description: Delete this Experiment.
+ parameters:
+ - name: experimentId
+ in: path
+ description: Experiment's ID.
+ required: true
+ type: string
+ responses:
+ '200':
+ description: Successfully deleted Experiment.
+ schema:
+ $ref: '#/definitions/Experiment'
+ '401':
+ description: Unauthorized.
+ '403':
+ description: Forbidden from deleting Experiment.
+ '404':
+ description: Experiment not found.
+ '/experiments/{experimentId}/last-simulated-tick':
+ get:
+ tags:
+ - simulations
+ description: Get this Experiment's last simulated tick.
+ parameters:
+ - name: experimentId
+ in: path
+ description: Experiment's ID.
+ required: true
+ type: string
+ responses:
+ '200':
+ description: Successfully retrieved Experiment's last simulated tick.
+ schema:
+ properties:
+ lastSimulatedTick:
+ type: integer
+ '400':
+ description: Missing or incorrectly typed parameter.
+ '401':
+          description: Unauthorized.
+        '403':
+          description: Forbidden from retrieving Experiment's last simulated tick.
+        '404':
+          description: Experiment not found.
+ '/experiments/{experimentId}/machine-states':
+ get:
+ tags:
+ - simulations
+ - states
+ description: Get this experiment's Machine States.
+ parameters:
+ - name: experimentId
+ in: path
+ description: Experiment's ID.
+ required: true
+ type: string
+ - name: tick
+ in: query
+ description: Tick to filter on.
+ required: false
+ type: integer
+ - name: machineId
+ in: query
+ description: Machine's ID to filter on.
+ required: false
+ type: string
+ - name: rackId
+ in: query
+ description: Rack's ID to filter on.
+ required: false
+ type: string
+ - name: roomId
+ in: query
+ description: Room's ID to filter on.
+ required: false
+ type: string
+ responses:
+ '200':
+ description: Successfully retrieved Machine States.
+ schema:
+ type: array
+ items:
+ $ref: '#/definitions/MachineState'
+ '400':
+ description: Missing or incorrectly typed parameter.
+ '401':
+ description: Unauthorized.
+ '403':
+ description: Forbidden from getting Experiment's Machine States.
+ '404':
+ description: 'Experiment, Machine, Rack, Room or Tick not found.'
+ '/experiments/{experimentId}/rack-states':
+ get:
+ tags:
+ - simulations
+ - states
+ description: Get this Experiment's Rack States.
+ parameters:
+ - name: experimentId
+ in: path
+ description: Experiment's ID.
+ required: true
+ type: string
+ - name: tick
+ in: query
+ description: Tick to filter on.
+ required: false
+ type: integer
+ - name: rackId
+ in: query
+ description: Rack's ID to filter on.
+ required: false
+ type: string
+ - name: roomId
+ in: query
+ description: Room's ID to filter on.
+ required: false
+ type: string
+ responses:
+ '200':
+ description: Successfully retrieved Rack States.
+ schema:
+ type: array
+ items:
+ $ref: '#/definitions/RackState'
+ '400':
+ description: Missing or incorrectly typed parameter.
+ '401':
+ description: Unauthorized.
+ '403':
+ description: Forbidden from getting Experiment's Rack States.
+ '404':
+ description: 'Experiment, Room, Rack or Tick not found.'
+ '/experiments/{experimentId}/room-states':
+ get:
+ tags:
+ - simulations
+ - states
+ description: Get this Experiment's Room States.
+ parameters:
+ - name: experimentId
+ in: path
+ description: Experiment's ID.
+ required: true
+ type: string
+ - name: tick
+ in: query
+ description: Tick to filter on.
+ required: false
+ type: integer
+ - name: roomId
+ in: query
+ description: Room's ID to filter on.
+ required: false
+ type: string
+ responses:
+ '200':
+ description: Successfully retrieved Room States.
+ schema:
+ type: array
+ items:
+ $ref: '#/definitions/RoomState'
+ '400':
+ description: Missing or incorrectly typed parameter.
+ '401':
+ description: Unauthorized.
+ '403':
+ description: Forbidden from getting Experiment's Room States.
+ '404':
+ description: 'Experiment, Room or Tick not found.'
+ '/experiments/{experimentId}/task-states':
+ get:
+ tags:
+ - simulations
+ - states
+ description: Get this Experiment's Task States.
+ parameters:
+ - name: experimentId
+ in: path
+ description: Experiment's ID.
+ required: true
+ type: string
+ - name: tick
+ in: query
+ description: Tick to filter on.
+ required: false
+ type: integer
+ - name: taskId
+ in: query
+ description: Task's ID to filter on.
+ required: false
+ type: string
+ - name: machineId
+ in: query
+ description: Machine's ID to filter on.
+ required: false
+ type: string
+ - name: rackId
+ in: query
+          description: Rack's ID to filter on.
+ required: false
+ type: string
+ - name: roomId
+ in: query
+          description: Room's ID to filter on.
+ required: false
+ type: string
+ responses:
+ '200':
+ description: Successfully retrieved Task States.
+ schema:
+ type: array
+ items:
+ $ref: '#/definitions/TaskState'
+ '400':
+ description: Missing or incorrectly typed parameter.
+ '401':
+ description: Unauthorized.
+ '403':
+ description: Forbidden from retrieving Experiment's Task States.
+ '404':
+ description: 'Experiment, Tick, Task, Machine, Rack or Room not found.'
+ /schedulers:
+ get:
+ tags:
+ - experiments
+      description: Get all available Schedulers.
+ responses:
+ '200':
+ description: Successfully retrieved Schedulers.
+ schema:
+ type: array
+ items:
+ $ref: '#/definitions/Scheduler'
+ '401':
+ description: Unauthorized.
+ /traces:
+ get:
+ tags:
+ - experiments
+ description: Get all available Traces (non-populated).
+ responses:
+ '200':
+ description: Successfully retrieved Traces (non-populated).
+ schema:
+ type: array
+ items:
+ type: object
+ properties:
+ _id:
+ type: string
+ name:
+ type: string
+ '401':
+ description: Unauthorized.
+ '/traces/{traceId}':
+ get:
+ tags:
+ - experiments
+ description: Get this Trace.
+ parameters:
+ - name: traceId
+ in: path
+ description: Trace's ID.
+ required: true
+ type: string
+ responses:
+ '200':
+ description: Successfully retrieved Trace.
+ schema:
+ $ref: '#/definitions/Trace'
+ '401':
+ description: Unauthorized.
+ '404':
+ description: Trace not found.
+
+definitions:
+ Experiment:
+ type: object
+ properties:
+ _id:
+ type: string
+ simulationId:
+ type: string
+ topologyId:
+ type: string
+ traceId:
+ type: string
+ schedulerName:
+ type: string
+ name:
+ type: string
+ MachineState:
+ type: object
+ properties:
+ _id:
+ type: string
+ machineId:
+ type: string
+ experimentId:
+ type: string
+ tick:
+ type: integer
+ inUseMemoryMb:
+ type: integer
+ loadFraction:
+ type: number
+ format: float
+ RackState:
+ type: object
+ properties:
+ _id:
+ type: string
+ rackId:
+ type: string
+ experimentId:
+ type: string
+ tick:
+ type: integer
+ inUseMemoryMb:
+ type: integer
+ loadFraction:
+ type: number
+ format: float
+ RoomState:
+ type: object
+ properties:
+ _id:
+ type: string
+ roomId:
+ type: string
+ experimentId:
+ type: string
+ tick:
+ type: integer
+ inUseMemoryMb:
+ type: integer
+ loadFraction:
+ type: number
+ format: float
+ Scheduler:
+ type: object
+ properties:
+ name:
+ type: string
+ Simulation:
+ type: object
+ properties:
+ _id:
+ type: string
+ name:
+ type: string
+ datetimeCreated:
+ type: string
+ format: dateTime
+ datetimeLastEdited:
+ type: string
+ format: dateTime
+ topologyIds:
+ type: array
+ items:
+ type: string
+ TaskState:
+ type: object
+ properties:
+ _id:
+ type: string
+ taskId:
+ type: string
+ experimentId:
+ type: string
+ tick:
+ type: integer
+ flopsLeft:
+ type: integer
+ Topology:
+ type: object
+ properties:
+ _id:
+ type: string
+ name:
+ type: string
+ rooms:
+ type: array
+ items:
+ type: object
+ properties:
+ _id:
+ type: string
+ name:
+ type: string
+ tiles:
+ type: array
+ items:
+ type: object
+ properties:
+ _id:
+ type: string
+ positionX:
+ type: integer
+ positionY:
+ type: integer
+ object:
+ type: object
+ properties:
+ capacity:
+ type: integer
+ powerCapacityW:
+ type: integer
+ machines:
+ type: array
+ items:
+ type: object
+ properties:
+ position:
+ type: integer
+ cpuItems:
+ type: array
+ items:
+ type: object
+ properties:
+ name:
+ type: string
+ clockRateMhz:
+ type: integer
+ numberOfCores:
+ type: integer
+ gpuItems:
+ type: array
+ items:
+ type: object
+ properties:
+ name:
+ type: string
+ clockRateMhz:
+ type: integer
+ numberOfCores:
+ type: integer
+ memoryItems:
+ type: array
+ items:
+ type: object
+ properties:
+ name:
+ type: string
+ speedMbPerS:
+ type: integer
+ sizeMb:
+ type: integer
+ storageItems:
+ type: array
+ items:
+                                type: object
+ properties:
+ name:
+ type: string
+ speedMbPerS:
+ type: integer
+ sizeMb:
+ type: integer
+ Trace:
+ type: object
+ properties:
+ _id:
+ type: string
+ name:
+ type: string
+ jobs:
+ type: array
+ items:
+ type: object
+ properties:
+ _id:
+ type: string
+ name:
+ type: string
+ tasks:
+ type: array
+ items:
+ type: object
+ properties:
+ startTick:
+ type: integer
+ totalFlopCount:
+ type: integer
+ User:
+ type: object
+ properties:
+ _id:
+ type: string
+ googleId:
+ type: integer
+ email:
+ type: string
+ givenName:
+ type: string
+ familyName:
+ type: string
+ authorizations:
+ type: array
+ items:
+ type: object
+ properties:
+ simulationId:
+ type: string
+ authorizationLevel:
+ type: string