123 Commits

Author SHA1 Message Date
mr
c87245e83f Oc-Datacenter Allowed Resource And Prepull Images For Efficient process 2026-03-25 11:11:03 +01:00
mr
dab61463f0 WatchDog Kube 2026-03-24 10:50:36 +01:00
mr
a7ffede3e2 Run to UTC 2026-03-19 08:22:52 +01:00
mr
6c0b07b49d Missing Create Namespace 2026-03-18 16:43:44 +01:00
mr
e4834db518 defer log live 2026-02-25 13:22:11 +01:00
mr
b45c795002 Datacenter no more handle booking but is fully charged with Kube & minio allocate per NATS 2026-02-25 09:08:40 +01:00
mr
41750dc054 ForwardAuth API 2026-02-20 10:37:05 +01:00
mr
3d27da3e7c Prometheus X Vector 2026-02-17 17:23:54 +01:00
mr
c99f161a51 brokeback alpine 2026-02-09 15:47:50 +01:00
mr
bcd82675fc better tagging 2026-02-09 09:44:56 +01:00
mr
6dbee462b4 dockerfile scratch 2026-02-09 09:02:14 +01:00
mr
9bfcf910ed remove apk git 2026-02-09 08:55:11 +01:00
mr
bb8adef43e publish-registry ci 2026-02-05 12:02:30 +01:00
mr
276c0f793a publish-kind ci way 2026-02-05 11:58:48 +01:00
mr
68ab051103 publish-registry: 2026-02-05 11:57:14 +01:00
mr
466ca6984c security injection appname 2026-02-04 09:45:36 +01:00
mr
1a4dbb172b compact conf 2026-02-03 16:18:18 +01:00
mr
df076755f6 oclib-debug 2026-02-03 09:40:51 +01:00
mr
f3add8d75c oclib update 2026-02-03 08:46:56 +01:00
mr
1b3eb0e61c Update DataCenter to Oclib 2026-02-02 14:27:17 +01:00
mr
66de8d7541 CLUSTER NAME in Makefile 2026-01-20 11:19:09 +01:00
mr
e1eb658037 add swagger 2026-01-08 10:40:37 +01:00
mr
fde7031bf4 Adjust makefile dockerfile 2026-01-08 09:50:58 +01:00
mr
607c357273 no kube host because ENV ENV 2025-11-13 10:06:37 +01:00
mr
be18c0bfb3 Merge branch 'main' of https://cloud.o-forge.io/core/oc-datacenter (gitignore) 2025-11-13 09:54:34 +01:00
mr
03ae522e73 gitignore 2025-11-13 09:53:57 +01:00
pb
39137c4f2a Added the method to create a bucket when creating a new service account on minio 2025-07-28 12:20:01 +02:00
pb
53043e7781 removed the label in Target 2025-07-28 11:45:36 +02:00
pb
32ce1ef444 Added the method to create a bucket when creating a new service account on minio 2025-07-28 11:44:29 +02:00
pb
e141793144 changed getConcatenatedName() with the new method defined in oclib entrypoint 2025-07-15 15:02:01 +02:00
pb
067947862c added handling of a duplicate request to create a namespace 2025-07-10 12:36:48 +02:00
pb
ba8e7d3169 improved error message 2025-07-09 12:08:45 +02:00
pb
271d74b4dd return b.Data rather than the whole b object 2025-07-09 12:07:54 +02:00
pb
ad56c42c2f Changed the serviceaccount route to POST 2025-06-30 17:08:01 +02:00
pb
d5e8db60be changed the returned json for /minio/serviceaccount 2025-06-30 12:36:21 +02:00
pb
a664423842 implemented the /minio/serviceaccount route to create new serviceAccount in the minio corresponding to the parameter, then store it in secret in the namespace corresponding to the executionsId 2025-06-30 12:33:24 +02:00
pb
625f34ed21 retrieve the admin creds for local Minio 2025-06-26 10:47:54 +02:00
pb
55b88077d4 started implementing the routes and service to interract with a given Minio server 2025-06-26 10:31:48 +02:00
mr
1d363e8f2a booking test 2025-06-24 16:41:33 +02:00
mr
27c27fef15 test 2025-06-24 16:39:57 +02:00
mr
8899673347 bookin' extension 2025-06-24 16:32:54 +02:00
mr
55da778ae1 datacenter search filter for datacenter live 2025-06-24 14:46:27 +02:00
mr
b06050bfae nats search 2025-06-24 09:11:27 +02:00
mr
243db11a63 comment health check 2025-06-19 10:33:09 +02:00
mr
4e1e3f20af add websocket route 2025-06-18 11:18:12 +02:00
mr
81167f7b86 kill process... may be infinite if no end 2025-06-18 11:07:25 +02:00
mr
cb23289097 add delete namespace with route deleteAdmiraltySession 2025-06-18 11:04:15 +02:00
mr
c0b8ac1eee launch streaming to update booking 2025-06-18 09:14:30 +02:00
mr
f61c9f5df9 draft prometheus update booking... 2025-06-18 08:34:33 +02:00
mr
c7290b0ead draft... metrics by booking... 2025-06-18 08:26:10 +02:00
mr
001539fb36 deploy adjustment 2025-06-16 09:13:43 +02:00
pb
b372c10ab0 changed how peerId and namespace are concatenated to name admiralty resources 2025-05-16 11:09:57 +02:00
pb
fd6186c6df updated how we search for nodes 2025-05-15 10:28:55 +02:00
pb
5069b3455a updated how we search for nodes 2025-05-15 10:27:28 +02:00
pb
cb2e4f6028 added the right naming convention to kubeConfigSecret in target 2025-05-15 09:37:35 +02:00
pb
35facf1b74 added :peer to admiralty routes to create peer related resources 2025-05-13 16:33:48 +02:00
pb
24e0137444 shortened how targets are named 2025-05-12 15:00:56 +02:00
pb
ba940bfc80 changed the way kube manifest are applyied 2025-05-12 12:23:38 +02:00
pb
063d57d9e7 updated comments and logs 2025-05-12 12:21:12 +02:00
pb
484c742c31 corrected a typo from a copy/pasted log line 2025-05-06 18:10:53 +02:00
pb
cc3b2a6cfc uncommenting createNamespace method 2025-05-06 18:10:09 +02:00
pb
8e8d0d3e01 added a new parameter to the /admiralty/targets route to specify the peerId of the peer targeted, allowing to name differently peers targeted in a namespace 2025-05-05 16:13:49 +02:00
pb
03f81c66f9 Changed name of the method to create source to be coherent with the one to create target 2025-05-05 15:51:39 +02:00
pb
be721059e5 Merge branch 'main' of https://cloud.o-forge.io/core/oc-datacenter 2025-04-29 11:55:15 +02:00
mr
e4f0f6f4ca Merge branch 'main' of https://cloud.o-forge.io/core/oc-datacenter into main 2025-04-28 14:39:56 +02:00
mr
cf92b46ce6 data 2025-04-28 14:39:53 +02:00
pb
aa42f5f49c debug some typo 2025-04-11 15:45:28 +02:00
pb
98c54eb080 typo when passing gvr for target 2025-04-11 15:34:43 +02:00
pb
afe442d17f refactored the way we apply Source and Target with dynamic client 2025-04-11 15:24:47 +02:00
pb
46b7713404 corrected Apply target 2025-04-11 12:10:54 +02:00
pb
d5ad32e2e4 [NEED REFACTORING] added DynamicClient constructor to make API calls on CDRs 2025-04-11 12:00:21 +02:00
pb
e4ecb8c1db replaced k8s go-client create with apply for the creation of the Secret 2025-04-10 15:48:08 +02:00
pb
cca59faeab when creating source or target returns a 409, don't return an error. Post should be replaced by Put but not working 2025-04-10 14:22:37 +02:00
pb
2cf8923d95 more logs 2025-04-08 10:31:29 +02:00
pb
47ed1b4562 added the label multicluster-scheduler=enabled when creating namespace 2025-04-08 10:31:29 +02:00
pb
063f47c87b Merge branch 'main' of https://cloud.o-forge.io/core/oc-datacenter into feature/admiralty 2025-04-04 18:03:18 +02:00
pb
bb03307b9e corrected how the kubeconfig info are stored in /admiralty/kubeconfig (POST) 2025-04-04 18:01:14 +02:00
pb
4bfb16cba6 added node to the returned data when it was found 2025-04-01 11:46:32 +02:00
mr
3ae9f69525 add all 2025-04-01 10:09:25 +02:00
pb
b08e6a1e70 corrected the options in bee run 2025-03-14 16:58:37 +01:00
pb
1aa6c68b2c Merge branch 'feature/admiralty' 2025-03-14 12:30:58 +01:00
pb
9d865f193d .gitignore for swagger/ 2025-03-14 12:30:36 +01:00
pb
88e6c89612 new libraries for admiralty 2025-03-14 11:41:09 +01:00
pb
2dca4aac62 Merge branch 'feature/admiralty' 2025-03-14 11:39:52 +01:00
pb
7b43fa19ff Admiralty controller able to handle the setup of an admiralty connection between two peers 2025-03-13 16:39:50 +01:00
pb
150d6591be changed status code to 201 when creating resource 2025-03-12 11:07:45 +01:00
pb
5a0489048a changed status code to 201 when creating resource 2025-03-12 10:53:48 +01:00
pb
3c692d2026 monitord is able to create an admiralty Source on remote peer 2025-03-10 17:57:14 +01:00
pb
0a15357445 added a controller to manage controller's errors 2025-03-10 17:36:04 +01:00
pb
4f2a713516 replaced :id in controllers' comment by :execution for better readability 2025-03-10 16:27:16 +01:00
pb
3bbb15c459 removed code used for test from CreateSource() 2025-03-10 15:31:16 +01:00
pb
b35f1c5ff3 Remove test parameters in swagger comments 2025-03-10 12:23:07 +01:00
pb
7eac712e01 added files deleted by previous gitignore 2025-03-06 10:35:03 +01:00
pb
0baee5e339 Added the 'timeout 15' before bee run --downdoc=true' to make sure swagger is dowloaded 2025-03-06 10:35:03 +01:00
pb
f69e9bf2fa Started the implementation of new routes for admiralty 2025-03-06 10:35:03 +01:00
mr
293b52da4c dev launch mode 2025-03-06 10:33:22 +01:00
mr
be7dc4f639 dev launch mode 2025-03-06 09:33:38 +01:00
pb
699e5b910c remove gitignore because its a pain in the ass 2025-03-05 17:57:49 +01:00
pb
7f4c193b0f recreate conf erased by gitignore 2025-03-05 17:47:13 +01:00
pb
1896236cab correction 2025-03-05 17:40:19 +01:00
pb
429b034cb0 cleaning 2025-03-05 17:40:19 +01:00
pb
9e7b81061a Started the implementation of new routes for admiralty 2025-03-05 17:39:57 +01:00
pb
fadbc4f503 Pulling from main 2025-03-05 16:50:57 +01:00
pb
7147531e9d Started the implementation of new routes for admiralty 2025-03-05 16:50:20 +01:00
mr
fa97228e2b neo oclib 2025-03-05 16:48:31 +01:00
pb
693571ddd7 Modified path to correspond those declared in oclib peer_cache 2025-03-03 11:40:53 +01:00
pb
ba3afc69be Changed the return of the /kubeconfig to encoded kubeconfig 2025-02-28 17:57:13 +01:00
pb
f75499d827 Added new route to retrieve the host's kubeconfig with the execution's SA token 2025-02-28 14:07:45 +01:00
pb
7b27945493 cleaning 2025-02-27 17:09:37 +01:00
pb
1d45b470b7 add admiralty to routers 2025-02-27 17:01:07 +01:00
pb
74ac2b6d9c Uniformisation and verification of admiralty link with nodes + token 2025-02-27 17:00:36 +01:00
pb
44abc073c4 First proposition of sequence for admiralty executions/setup 2025-02-27 16:53:39 +01:00
pb
f60474681b Added routes and methods to create admiralty resources : secrets, target sources 2025-02-25 13:03:52 +01:00
mr
026e46450b neo oclib 2025-02-21 11:22:14 +01:00
pb
d26f0d6b1b Added two routes to get all and one admiralty targets from kube 2025-02-20 12:46:24 +01:00
pb
3c313171c3 Merge remote-tracking branch 'origin/master' into feature/admiralty 2025-02-19 14:40:56 +01:00
mr
331c0835f1 go.sum pb 2025-02-19 14:04:32 +01:00
pb
198c1e1a28 Started the implementation of new routes for admiralty 2025-02-19 12:28:48 +01:00
mr
f7dd297b14 traefik 2025-02-19 12:05:43 +01:00
mr
2d3224704a oclib update + controller adjustment 2025-02-18 15:05:47 +01:00
mr
8eeae90b67 Merge branch 'feature/namespace' 2025-02-17 09:28:34 +01:00
plm
fd8e397e16 Support CORS 2025-01-15 11:38:39 +01:00
plm
1c21667195 modifications for k8s integration 2025-01-10 21:21:58 +01:00
42 changed files with 4023 additions and 789 deletions

.gitignore (vendored, 24 changes)

@@ -1 +1,23 @@
oc-datacenter
# ---> Go
# If you prefer the allow list template instead of the deny list, see community template:
# https://github.com/github/gitignore/blob/main/community/Golang/Go.AllowList.gitignore
#
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
# Test binary, built with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
# Dependency directories (remove the comment below to include it)
# vendor/
# Go workspace file
go.work

Dockerfile

@@ -1,30 +1,46 @@
ARG KUBERNETES_HOST=${KUBERNETES_HOST:-"127.0.0.1"}
FROM golang:alpine AS deps
WORKDIR /app
COPY go.mod go.sum ./
RUN sed -i '/replace/d' go.mod
RUN go mod download
#----------------------------------------------------------------------------------------------
FROM golang:alpine AS builder
WORKDIR /app
RUN go install github.com/beego/bee/v2@latest
WORKDIR /oc-datacenter
COPY --from=deps /go/pkg /go/pkg
COPY --from=deps /app/go.mod /app/go.sum ./
RUN export CGO_ENABLED=0 && \
export GOOS=linux && \
export GOARCH=amd64 && \
export BUILD_FLAGS="-ldflags='-w -s'"
COPY . .
RUN apk add git
RUN sed -i '/replace/d' go.mod
RUN bee pack
RUN mkdir -p /app/extracted && tar -zxvf oc-datacenter.tar.gz -C /app/extracted
#----------------------------------------------------------------------------------------------
RUN go get github.com/beego/bee/v2 && go install github.com/beego/bee/v2@master
FROM golang:alpine
RUN timeout 15 bee run -gendoc=true -downdoc=true -runmode=dev || :
RUN sed -i 's/http:\/\/127.0.0.1:8080\/swagger\/swagger.json/swagger.json/g' swagger/index.html
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags="-w -s" -o setup .
RUN ls /app
FROM scratch
ENV KUBERNETES_SERVICE_HOST=$KUBERNETES_HOST
WORKDIR /app
COPY --from=builder /app/setup /usr/bin/setup
COPY --from=builder /app/swagger /app/swagger
COPY docker_datacenter.json /etc/oc/datacenter.json
COPY --from=builder /app/extracted/oc-datacenter /usr/bin/
COPY --from=builder /app/extracted/swagger /app/swagger
COPY --from=builder /app/extracted/docker_datacenter.json /etc/oc/datacenter.json
EXPOSE 8080
ENTRYPOINT ["setup"]
ENTRYPOINT ["oc-datacenter"]

Makefile (new file, 42 lines)

@@ -0,0 +1,42 @@
.DEFAULT_GOAL := all
build: clean
bee pack
run:
bee run -gendoc=true -downdoc=true
purge:
lsof -t -i:8092 | xargs kill | true
run-dev:
bee generate routers && bee run -gendoc=true -downdoc=true -runmode=prod
dev: purge run-dev
debug:
bee run -downdebug -gendebug
clean:
rm -rf oc-datacenter.tar.gz
docker:
DOCKER_BUILDKIT=1 docker build -t oc-datacenter -f Dockerfile . --build-arg=HOST=$(HOST) --build-arg=KUBERNETES_HOST=$(KUBERNETES_HOST) --build-arg=KUBERNETES_SERVICE_PORT=$(KUBERNETES_SERVICE_PORT) --build-arg=KUBE_CA=$(KUBE_CA) --build-arg=KUBE_CERT=$(KUBE_CERT) --build-arg=KUBE_DATA=$(KUBE_DATA)
docker tag oc-datacenter opencloudregistry/oc-datacenter:latest
publish-kind:
kind load docker-image opencloudregistry/oc-datacenter:latest --name $(CLUSTER_NAME) | true
publish-registry:
docker push opencloudregistry/oc-datacenter:latest
docker-deploy:
docker compose up -d
run-docker: docker publish-kind publish-registry docker-deploy
all: docker publish-kind
ci: docker publish-registry
.PHONY: build run clean docker publish-kind publish-registry


@@ -7,6 +7,9 @@ To build :
bee generate routers
bee run -gendoc=true -downdoc=true
OR
make dev
If the default Swagger page is displayed instead of your API, change the url in the swagger/index.html file to :
url: "swagger.json"
@@ -14,7 +17,52 @@ If the default Swagger page is displayed instead of your API, change the url in swagger
Note on a particular process :
- setting a booking deletes all related workflow bookings before creating new ones (existing bookings are not updated)
## Admiralty
The routes in /admiralty will trigger actions on the DC's Kubernetes API to retrieve information on Admiralty resources.
### Targets
Remote clusters that can be used by Admiralty to delegate pods.
To set up a target, Admiralty needs to associate a `secret` which contains an edited version of the target's `kubeconfig`.
Once the target is set, the remote cluster appears in the output of `kubectl get nodes` under the name `admiralty-<namespace>-<target name>-*`.
**TODO** : We might need a way to test whether an IP is associated with an Admiralty target
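A quick way to check that a target is live is to look for those virtual nodes programmatically. A minimal client-go sketch (the in-cluster config and the `admiralty-` prefix are assumptions based on the naming convention above):

```go
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes the check runs inside the cluster; use clientcmd for a local kubeconfig.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Virtual nodes created by Admiralty follow admiralty-<namespace>-<target name>-*.
		if strings.HasPrefix(n.Name, "admiralty-") {
			fmt.Println("target node up:", n.Name)
		}
	}
}
```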
# Docker Kube Settings
Set the base64-encoded keys from your ~/.kube/config.
Don't forget to set your external IP in docker_datacenter.json


@@ -1,5 +1,5 @@
appname = oc-datacenter
httpport = 8080
httpport = 8092
runmode = dev
autorender = false
copyrequestbody = true


@@ -3,12 +3,20 @@ package conf
import "sync"
type Config struct {
Mode string
KubeHost string
KubePort string
KubeCA string
KubeCert string
KubeData string
Mode string
KubeHost string
KubePort string
// KubeExternalHost is the externally reachable address of this cluster's API server.
// Used when generating kubeconfigs for remote peers. Must be an IP or hostname
// reachable from outside the cluster (NOT kubernetes.default.svc.cluster.local).
KubeExternalHost string
KubeCA string
KubeCert string
KubeData string
MinioRootKey string
MinioRootSecret string
MonitorMode string
MonitorAddress string
}
var instance *Config
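The hunk ends at the `instance` pointer; the accessor that the controllers call as `conf.GetConfig()` is not part of this diff. A minimal sketch of the usual lazy-singleton shape it implies (the body below is an assumption, not the repository's actual code):

```go
var once sync.Once

// GetConfig lazily builds the shared Config on first call and
// returns the same instance to every subsequent caller.
func GetConfig() *Config {
	once.Do(func() {
		instance = &Config{}
	})
	return instance
}
```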


@@ -0,0 +1,96 @@
package controllers
import (
"encoding/json"
"slices"
oclib "cloud.o-forge.io/core/oc-lib"
beego "github.com/beego/beego/v2/server/web"
"cloud.o-forge.io/core/oc-lib/models/allowed_image"
)
// AllowedImageController manages the local list of images allowed to persist
// on this peer after a workflow execution.
//
// GET /allowed-image/ → any authenticated user
// GET /allowed-image/:id → any authenticated user
// POST /allowed-image/ → peer admin only
// DELETE /allowed-image/:id → peer admin only (blocked if IsDefault)
type AllowedImageController struct {
beego.Controller
}
// isAdmin checks that the caller is a peer admin ("admin" group in the JWT token).
func isAdmin(groups []string) bool {
return slices.Contains(groups, "admin")
}
// @Title GetAll
// @Description Return all images allowed to persist on this peer
// @Success 200 {object} []allowed_image.AllowedImage
// @router / [get]
func (o *AllowedImageController) GetAll() {
user, peerID, groups := oclib.ExtractTokenInfo(*o.Ctx.Request)
res := oclib.NewRequest(oclib.LibDataEnum(oclib.ALLOWED_IMAGE), user, peerID, groups, nil).LoadAll(false)
o.Data["json"] = res
o.ServeJSON()
}
// @Title Get
// @Description Return an allowed image by its ID
// @Param id path string true "allowed image ID"
// @Success 200 {object} allowed_image.AllowedImage
// @router /:id [get]
func (o *AllowedImageController) Get() {
user, peerID, groups := oclib.ExtractTokenInfo(*o.Ctx.Request)
id := o.Ctx.Input.Param(":id")
res := oclib.NewRequest(oclib.LibDataEnum(oclib.ALLOWED_IMAGE), user, peerID, groups, nil).LoadOne(id)
o.Data["json"] = res
o.ServeJSON()
}
// @Title Post
// @Description Add an image to the list of allowed images (peer admin only)
// @Param body body allowed_image.AllowedImage true "image to allow"
// @Success 200 {object} allowed_image.AllowedImage
// @router / [post]
func (o *AllowedImageController) Post() {
user, peerID, groups := oclib.ExtractTokenInfo(*o.Ctx.Request)
if !isAdmin(groups) {
o.Ctx.Output.SetStatus(403)
o.Data["json"] = map[string]string{"err": "peer admin required"}
o.ServeJSON()
return
}
var img allowed_image.AllowedImage
if err := json.Unmarshal(o.Ctx.Input.RequestBody, &img); err != nil {
o.Ctx.Output.SetStatus(400)
o.Data["json"] = map[string]string{"err": err.Error()}
o.ServeJSON()
return
}
img.IsDefault = false // operators cannot create bootstrap entries through the API
res := oclib.NewRequest(oclib.LibDataEnum(oclib.ALLOWED_IMAGE), user, peerID, groups, nil).StoreOne(img.Serialize(&img))
o.Data["json"] = res
o.ServeJSON()
}
// @Title Delete
// @Description Remove an image from the list of allowed images (peer admin only; bootstrap entries cannot be deleted)
// @Param id path string true "allowed image ID"
// @Success 200 {object} allowed_image.AllowedImage
// @router /:id [delete]
func (o *AllowedImageController) Delete() {
user, peerID, groups := oclib.ExtractTokenInfo(*o.Ctx.Request)
if !isAdmin(groups) {
o.Ctx.Output.SetStatus(403)
o.Data["json"] = map[string]string{"err": "peer admin required"}
o.ServeJSON()
return
}
id := o.Ctx.Input.Param(":id")
res := oclib.NewRequest(oclib.LibDataEnum(oclib.ALLOWED_IMAGE), user, peerID, groups, nil).DeleteOne(id)
o.Data["json"] = res
o.ServeJSON()
}
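For illustration, a hedged sketch of how a peer admin might call the POST route from Go; the host, port (8092, from app.conf), route prefix, token, and JSON shape are all assumptions:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical payload; the exact AllowedImage JSON shape is assumed.
	body := []byte(`{"name": "registry.example.com/tools/alpine:3.20"}`)
	req, err := http.NewRequest(http.MethodPost, "http://localhost:8092/allowed-image/", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	// The JWT must carry the "admin" group, otherwise the controller answers 403.
	req.Header.Set("Authorization", "Bearer <admin-jwt>")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```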


@@ -1,288 +0,0 @@
package controllers
import (
"encoding/json"
"errors"
"fmt"
"oc-datacenter/infrastructure"
"time"
oclib "cloud.o-forge.io/core/oc-lib"
"cloud.o-forge.io/core/oc-lib/dbs"
"cloud.o-forge.io/core/oc-lib/models/booking"
b "cloud.o-forge.io/core/oc-lib/models/booking"
beego "github.com/beego/beego/v2/server/web"
"go.mongodb.org/mongo-driver/bson/primitive"
)
// Operations about workspace
type BookingController struct {
beego.Controller
}
// @Title Search
// @Description search bookings by execution
// @Param id path string true "id execution"
// @Param is_draft query string false "draft wished"
// @Success 200 {workspace} models.workspace
// @router /search/execution/:id [get]
func (o *BookingController) ExecutionSearch() {
/*
* This is a sample of how to use the search function
* The search function is used to search for data in the database
* The search function takes in a filter and a data type
* The filter is a struct that contains the search parameters
* The data type is an enum that specifies the type of data to search for
* The search function returns a list of data that matches the filter
* The data is then returned as a json object
*/
// store and return Id or post with UUID
user, peerID, groups := oclib.ExtractTokenInfo(*o.Ctx.Request)
id := o.Ctx.Input.Param(":id")
isDraft := o.Ctx.Input.Query("is_draft")
f := dbs.Filters{
Or: map[string][]dbs.Filter{ // filter by name if no filters are provided
"execution_id": {{Operator: dbs.EQUAL.String(), Value: id}},
},
}
o.Data["json"] = oclib.NewRequest(oclib.LibDataEnum(oclib.BOOKING), user, peerID, groups, nil).Search(&f, "", isDraft == "true")
o.ServeJSON()
}
// @Title Search
// @Description search bookings
// @Param start_date path string true "the word search you want to get"
// @Param end_date path string true "the word search you want to get"
// @Param is_draft query string false "draft wished"
// @Success 200 {workspace} models.workspace
// @router /search/:start_date/:end_date [get]
func (o *BookingController) Search() {
/*
* This is a sample of how to use the search function
* The search function is used to search for data in the database
* The search function takes in a filter and a data type
* The filter is a struct that contains the search parameters
* The data type is an enum that specifies the type of data to search for
* The search function returns a list of data that matches the filter
* The data is then returned as a json object
*/
// store and return Id or post with UUID
user, peerID, groups := oclib.ExtractTokenInfo(*o.Ctx.Request)
start_date, _ := time.Parse("2006-01-02", o.Ctx.Input.Param(":start_date"))
end_date, _ := time.Parse("2006-01-02", o.Ctx.Input.Param(":end_date"))
isDraft := o.Ctx.Input.Query("is_draft")
sd := primitive.NewDateTimeFromTime(start_date)
ed := primitive.NewDateTimeFromTime(end_date)
f := dbs.Filters{
And: map[string][]dbs.Filter{
"execution_date": {{Operator: "gte", Value: sd}, {Operator: "lte", Value: ed}},
},
}
o.Data["json"] = oclib.NewRequest(oclib.LibDataEnum(oclib.BOOKING), user, peerID, groups, nil).Search(&f, "", isDraft == "true")
o.ServeJSON()
}
// @Title GetAll
// @Description find booking by id
// @Param is_draft query string false "draft wished"
// @Success 200 {booking} models.booking
// @router / [get]
func (o *BookingController) GetAll() {
user, peerID, groups := oclib.ExtractTokenInfo(*o.Ctx.Request)
isDraft := o.Ctx.Input.Query("is_draft")
o.Data["json"] = oclib.NewRequest(oclib.LibDataEnum(oclib.BOOKING), user, peerID, groups, nil).LoadAll(isDraft == "true")
o.ServeJSON()
}
// @Title Get
// @Description find booking by id
// @Param id path string true "the id you want to get"
// @Success 200 {booking} models.booking
// @router /:id [get]
func (o *BookingController) Get() {
user, peerID, groups := oclib.ExtractTokenInfo(*o.Ctx.Request)
id := o.Ctx.Input.Param(":id")
o.Data["json"] = oclib.NewRequest(oclib.LibDataEnum(oclib.BOOKING), user, peerID, groups, nil).LoadOne(id)
o.ServeJSON()
}
// @Title Update
// @Description create computes
// @Param id path string true "the compute id you want to get"
// @Param body body models.compute true "The compute content"
// @Success 200 {compute} models.compute
// @router /:id [put]
func (o *BookingController) Put() {
user, peerID, groups := oclib.ExtractTokenInfo(*o.Ctx.Request)
// store and return Id or post with UUID
var res map[string]interface{}
id := o.Ctx.Input.Param(":id")
book := oclib.NewRequest(oclib.LibDataEnum(oclib.BOOKING), user, peerID, groups, nil).LoadOne(id)
if book.Code != 200 {
o.Data["json"] = map[string]interface{}{
"data": nil,
"code": book.Code,
"error": book.Err,
}
o.ServeJSON()
return
}
booking := book.Data.(*b.Booking)
if time.Now().After(booking.ExpectedStartDate) {
o.Data["json"] = oclib.NewRequest(oclib.LibDataEnum(oclib.BOOKING), user, peerID, groups, nil).UpdateOne(res, id)
} else {
o.Data["json"] = map[string]interface{}{
"data": nil,
"code": 409,
"error": "booking is not already started",
}
}
o.ServeJSON()
}
// @Title Check
// @Description check booking
// @Param id path string "id of the datacenter"
// @Param start_date path string "the booking start date" format "2006-01-02T15:04:05"
// @Param end_date path string "the booking end date" format "2006-01-02T15:04:05"
// @Param is_draft query string false "draft wished"
// @Success 200 {object} models.object
// @router /check/:id/:start_date/:end_date [get]
func (o *BookingController) Check() {
/*
* This function is used to check if a booking is available for a specific datacenter.
* It takes the following parameters:
* - id: the id of the datacenter
* - start_date: the start date of the booking/search/execution/:id
* - end_date: the end date of the booking
*/
id := o.Ctx.Input.Param(":id")
date, err := time.Parse("2006-01-02T15:04:05", o.Ctx.Input.Param(":start_date"))
date2, err2 := time.Parse("2006-01-02T15:04:05", o.Ctx.Input.Param(":end_date"))
if err != nil || err2 != nil {
o.Data["json"] = map[string]interface{}{
"data": map[string]interface{}{
"is_available": false,
},
"code": 400,
"error": errors.New("invalid date format"),
}
} else {
booking := &b.Booking{} // create a new booking object
isAvailable, err2 := booking.Check(id, date, &date2, 1) // check if the booking is available
fmt.Println(isAvailable, err2)
code := 200
err := ""
if !isAvailable {
code = 409
err = "booking not available"
if err2 != nil {
err += " - " + err2.Error()
}
}
o.Data["json"] = map[string]interface{}{
"data": map[string]interface{}{
"is_available": isAvailable,
},
"code": code,
"error": err,
}
}
o.ServeJSON()
}
// @Title Post.
// @Description create booking
// @Param booking body string true "the booking you want to post"
// @Param is_draft query string false "draft wished"
// @Success 200 {object} models.object
// @router / [post]
func (o *BookingController) Post() {
fmt.Println("POST")
/*
* This function is used to create a booking.
* It takes the following parameters:
* - booking: the booking you want to post
* The booking is a JSON object that contains the following fields:
* - datacenter_resource_id: the id of the datacenter
* - workflow_execution: the workflow execution
*/
var resp booking.Booking
user, peerID, groups := oclib.ExtractTokenInfo(*o.Ctx.Request)
json.Unmarshal(o.Ctx.Input.CopyBody(10000000), &resp)
dc_id := resp.ResourceID
// delete all previous bookings
isDraft := o.Ctx.Input.Query("is_draft")
res := oclib.NewRequest(oclib.LibDataEnum(oclib.BOOKING), user, peerID, groups, nil).Search(&dbs.Filters{And: map[string][]dbs.Filter{
"workflow_id": {{Operator: dbs.EQUAL.String(), Value: resp.WorkflowID}},
"resource_id": {{Operator: dbs.EQUAL.String(), Value: dc_id}},
}}, "", isDraft == "true")
if res.Code != 200 {
o.Data["json"] = map[string]interface{}{
"data": nil,
"code": res.Code,
"error": res.Err,
}
o.ServeJSON()
return
}
for _, b := range res.Data { // delete all previous bookings
oclib.NewRequest(oclib.LibDataEnum(oclib.BOOKING), user, peerID, groups, nil).DeleteOne(b.GetID())
}
b := oclib.NewRequest(oclib.LibDataEnum(oclib.BOOKING), user, peerID, groups, nil).StoreOne(resp.Serialize(&resp))
if b.Code != 200 {
o.Data["json"] = map[string]interface{}{
"data": nil,
"code": b.Code,
"error": b.Err,
}
o.ServeJSON()
return
}
fmt.Println("there was an error creating the namespace", o.createNamespace(resp.ExecutionsID))
o.Data["json"] = map[string]interface{}{
"data": []interface{}{b},
"code": 200,
"error": "",
}
o.ServeJSON()
}
func (o *BookingController) createNamespace(ns string) error {
/*
* This function is used to create a namespace.
* It takes the following parameters:
* - ns: the namespace you want to create
*/
serv, err := infrastructure.NewService()
if err != nil {
return nil
}
err = serv.CreateNamespace(o.Ctx.Request.Context(), ns)
if err != nil {
return err
}
err = serv.CreateServiceAccount(o.Ctx.Request.Context(), ns)
if err != nil {
return err
}
role := "argo-role"
err = serv.CreateRole(o.Ctx.Request.Context(), ns, role,
[][]string{
{"coordination.k8s.io"},
{""},
{""}},
[][]string{
{"leases"},
{"secrets"},
{"pods"}},
[][]string{
{"get", "create", "update"},
{"get"},
{"patch"}})
if err != nil {
return err
}
fmt.Println("ROLLLLLE BIND")
return serv.CreateRoleBinding(o.Ctx.Request.Context(), ns, "argo-role-binding", role)
}


@@ -1,9 +1,14 @@
package controllers
import (
"net/http"
"oc-datacenter/infrastructure/monitor"
"time"
oclib "cloud.o-forge.io/core/oc-lib"
"cloud.o-forge.io/core/oc-lib/dbs"
beego "github.com/beego/beego/v2/server/web"
"github.com/gorilla/websocket"
)
// Operations about workspace
@@ -19,14 +24,14 @@ type DatacenterController struct {
func (o *DatacenterController) GetAll() {
user, peerID, groups := oclib.ExtractTokenInfo(*o.Ctx.Request)
isDraft := o.Ctx.Input.Query("is_draft")
storages := oclib.NewRequest(oclib.LibDataEnum(oclib.STORAGE_RESOURCE), user, peerID, groups, nil).Search(&dbs.Filters{
storages := oclib.NewRequest(oclib.LibDataEnum(oclib.LIVE_STORAGE), user, peerID, groups, nil).Search(&dbs.Filters{
Or: map[string][]dbs.Filter{
"abstractintanciatedresource.abstractresource.abstractobject.creator_id": {{Operator: dbs.EQUAL.String(), Value: peerID}},
"abstractinstanciatedresource.abstractresource.abstractobject.creator_id": {{Operator: dbs.EQUAL.String(), Value: peerID}},
},
}, "", isDraft == "true")
computes := oclib.NewRequest(oclib.LibDataEnum(oclib.COMPUTE_RESOURCE), user, peerID, groups, nil).Search(&dbs.Filters{
computes := oclib.NewRequest(oclib.LibDataEnum(oclib.LIVE_DATACENTER), user, peerID, groups, nil).Search(&dbs.Filters{
Or: map[string][]dbs.Filter{
"abstractintanciatedresource.abstractresource.abstractobject.creator_id": {{Operator: dbs.EQUAL.String(), Value: peerID}},
"abstractinstanciatedresource.abstractresource.abstractobject.creator_id": {{Operator: dbs.EQUAL.String(), Value: peerID}},
},
}, "", isDraft == "true")
storages.Data = append(storages.Data, computes.Data...)
@@ -47,17 +52,17 @@ func (o *DatacenterController) Get() {
user, peerID, groups := oclib.ExtractTokenInfo(*o.Ctx.Request)
isDraft := o.Ctx.Input.Query("is_draft")
id := o.Ctx.Input.Param(":id")
storages := oclib.NewRequest(oclib.LibDataEnum(oclib.STORAGE_RESOURCE), user, peerID, groups, nil).Search(&dbs.Filters{
storages := oclib.NewRequest(oclib.LibDataEnum(oclib.LIVE_STORAGE), user, peerID, groups, nil).Search(&dbs.Filters{
Or: map[string][]dbs.Filter{
"abstractintanciatedresource.abstractresource.abstractobject.id": {{Operator: dbs.EQUAL.String(), Value: id}},
"abstractintanciatedresource.abstractresource.abstractobject.creator_id": {{Operator: dbs.EQUAL.String(), Value: peerID}},
"abstractinstanciatedresource.abstractresource.abstractobject.id": {{Operator: dbs.EQUAL.String(), Value: id}},
"abstractinstanciatedresource.abstractresource.abstractobject.creator_id": {{Operator: dbs.EQUAL.String(), Value: peerID}},
},
}, "", isDraft == "true")
if len(storages.Data) == 0 {
computes := oclib.NewRequest(oclib.LibDataEnum(oclib.COMPUTE_RESOURCE), user, peerID, groups, nil).Search(&dbs.Filters{
computes := oclib.NewRequest(oclib.LibDataEnum(oclib.LIVE_DATACENTER), user, peerID, groups, nil).Search(&dbs.Filters{
Or: map[string][]dbs.Filter{
"abstractintanciatedresource.abstractresource.abstractobject.id": {{Operator: dbs.EQUAL.String(), Value: id}},
"abstractintanciatedresource.abstractresource.abstractobject.creator_id": {{Operator: dbs.EQUAL.String(), Value: peerID}},
"abstractinstanciatedresource.abstractresource.abstractobject.id": {{Operator: dbs.EQUAL.String(), Value: id}},
"abstractinstanciatedresource.abstractresource.abstractobject.creator_id": {{Operator: dbs.EQUAL.String(), Value: peerID}},
},
}, "", isDraft == "true")
if len(computes.Data) == 0 {
@@ -83,3 +88,30 @@ func (o *DatacenterController) Get() {
}
o.ServeJSON()
}
var upgrader = websocket.Upgrader{
CheckOrigin: func(r *http.Request) bool { return true }, // allow all origins
}
// @Title Log
// @Description stream live monitoring output for the given id over a websocket
// @Param id path string true "the id you want to stream"
// @Success 200
// @router /:id [get]
func (o *DatacenterController) Log() {
// user, peerID, groups := oclib.ExtractTokenInfo(*o.Ctx.Request)
id := o.Ctx.Input.Param(":id")
conn, err := upgrader.Upgrade(o.Ctx.ResponseWriter, o.Ctx.Request, nil)
if err != nil {
o.Ctx.WriteString("WebSocket upgrade failed: " + err.Error())
return
}
defer conn.Close()
monitors, err := monitor.NewMonitorService()
if err != nil {
o.Ctx.WriteString("Monitor service unavailable: " + err.Error())
return
}
ctx := monitor.StreamRegistry.Register(id)
monitors.Stream(ctx, id, 1*time.Second, conn)
}
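A minimal client sketch for consuming this stream with gorilla/websocket; the URL path is an assumption, since it depends on how the router mounts the controller:

```go
package main

import (
	"fmt"

	"github.com/gorilla/websocket"
)

func main() {
	// Hypothetical endpoint; adjust host, port, and prefix to the deployed router.
	conn, _, err := websocket.DefaultDialer.Dial("ws://localhost:8092/datacenter/<id>", nil)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	for {
		// The server presumably pushes one message per tick (1s in the controller above).
		_, msg, err := conn.ReadMessage()
		if err != nil {
			return // stream closed by the server
		}
		fmt.Println(string(msg))
	}
}
```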


@@ -1,10 +1,10 @@
package controllers
import (
"fmt"
"oc-datacenter/infrastructure"
"oc-datacenter/conf"
"strconv"
"cloud.o-forge.io/core/oc-lib/tools"
beego "github.com/beego/beego/v2/server/web"
)
@@ -31,7 +31,8 @@ func (o *SessionController) GetToken() {
return
}
serv, err := infrastructure.NewService()
serv, err := tools.NewKubernetesService(conf.GetConfig().KubeHost+":"+conf.GetConfig().KubePort,
conf.GetConfig().KubeCA, conf.GetConfig().KubeCert, conf.GetConfig().KubeData)
if err != nil {
// change code to 500
o.Ctx.Output.SetStatus(500)
@@ -39,8 +40,7 @@ func (o *SessionController) GetToken() {
o.ServeJSON()
return
}
fmt.Println("BLAPO", id, duration)
token, err := serv.GetToken(o.Ctx.Request.Context(), id, duration)
token, err := serv.GenerateToken(o.Ctx.Request.Context(), id, duration)
if err != nil {
// change code to 500
o.Ctx.Output.SetStatus(500)


@@ -15,7 +15,10 @@ type VersionController struct {
// @Success 200
// @router / [get]
func (c *VersionController) GetAll() {
c.Data["json"] = map[string]string{"version": "1"}
c.Data["json"] = map[string]string{
"service": "oc-datacenter",
"version": "1",
}
c.ServeJSON()
}


@@ -1,5 +1,7 @@
{
"port": 8080,
"MONGO_URL":"mongodb://localhost:27017/",
"MONGO_DATABASE":"DC_myDC"
"MONGO_URL": "mongodb://mongo:27017/",
"NATS_URL": "nats://localhost:4222",
"MONGO_DATABASE": "DC_myDC",
"KUBERNETES_SERVICE_HOST": "172.16.0.183",
"port": "8092"
}


@@ -1,34 +0,0 @@
version: '3.4'
services:
mongo:
image: 'mongo:latest'
networks:
- catalog
ports:
- 27017:27017
container_name: mongo
volumes:
- oc-catalog-data:/data/db
- oc-catalog-data:/data/configdb
mongo-express:
image: "mongo-express:latest"
restart: always
depends_on:
- mongo
networks:
- catalog
ports:
- 8081:8081
environment:
- ME_CONFIG_BASICAUTH_USERNAME=test
- ME_CONFIG_BASICAUTH_PASSWORD=test
volumes:
oc-catalog-data:
networks:
catalog:
external: true
# name: catalog


@@ -2,6 +2,9 @@ version: '3.4'
services:
oc-datacenter:
env_file:
- path: ./env.env
required: false
environment:
- MONGO_DATABASE=DC_myDC
image: 'oc-datacenter:latest'
@@ -10,14 +13,20 @@ services:
labels:
- "traefik.enable=true"
- "traefik.http.routers.datacenter.entrypoints=web"
- "traefik.http.middlewares.auth.forwardauth.address=http://oc-auth:8080/oc/forward"
- "traefik.http.routers.workflow.rule=PathPrefix(/datacenter)"
- "traefik.http.routers.datacenter.tls=false"
- "traefik.http.routers.datacenter.middlewares=auth"
- "traefik.http.routers.datacenter.rule=PathPrefix(`/datacenter`)"
- "traefik.http.services.datacenter.loadbalancer.server.port=8080"
- "traefik.http.middlewares.datacenter-rewrite.replacepathregex.regex=^/datacenter(.*)"
- "traefik.http.middlewares.datacenter-rewrite.replacepathregex.replacement=/oc$$1"
- "traefik.http.routers.datacenter.middlewares=datacenter-rewrite,auth-datacenter"
- "traefik.http.middlewares.auth-datacenter.forwardauth.address=http://oc-auth:8080/oc/forward"
- "traefik.http.middlewares.auth-datacenter.forwardauth.trustForwardHeader=true"
- "traefik.http.middlewares.auth-datacenter.forwardauth.authResponseHeaders=X-Auth-Request-User,X-Auth-Request-Email"
container_name: oc-datacenter
networks:
- catalog
- oc
networks:
catalog:
oc:
external: true

docker_datacenter.json

@@ -2,8 +2,10 @@
"MONGO_URL":"mongodb://mongo:27017/",
"NATS_URL":"nats://nats:4222",
"MONGO_DATABASE":"DC_myDC",
"KUBERNETES_SERVICE_HOST" : "192.168.1.69",
"KUBE_CA" : "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkekNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUzTWpNeE1USXdNell3SGhjTk1qUXdPREE0TVRBeE16VTJXaGNOTXpRd09EQTJNVEF4TXpVMgpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUzTWpNeE1USXdNell3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFTVlk3ZHZhNEdYTVdkMy9jMlhLN3JLYjlnWXgyNSthaEE0NmkyNVBkSFAKRktQL2UxSVMyWVF0dzNYZW1TTUQxaStZdzJSaVppNUQrSVZUamNtNHdhcnFvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVWtlUVJpNFJiODduME5yRnZaWjZHClc2SU55NnN3Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUlnRXA5ck04WmdNclRZSHYxZjNzOW5DZXZZeWVVa3lZUk4KWjUzazdoaytJS1FDSVFDbk05TnVGKzlTakIzNDFacGZ5ays2NEpWdkpSM3BhcmVaejdMd2lhNm9kdz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K",
"KUBE_CERT":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrVENDQVRlZ0F3SUJBZ0lJWUxWNkFPQkdrU1F3Q2dZSUtvWkl6ajBFQXdJd0l6RWhNQjhHQTFVRUF3d1kKYXpOekxXTnNhV1Z1ZEMxallVQXhOekl6TVRFeU1ETTJNQjRYRFRJME1EZ3dPREV3TVRNMU5sb1hEVEkxTURndwpPREV3TVRNMU5sb3dNREVYTUJVR0ExVUVDaE1PYzNsemRHVnRPbTFoYzNSbGNuTXhGVEFUQmdOVkJBTVRESE41CmMzUmxiVHBoWkcxcGJqQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJGQ2Q1MFdPeWdlQ2syQzcKV2FrOWY4MVAvSkJieVRIajRWOXBsTEo0ck5HeHFtSjJOb2xROFYxdUx5RjBtOTQ2Nkc0RmRDQ2dqaXFVSk92Swp3NVRPNnd5alNEQkdNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBakFmCkJnTlZIU01FR0RBV2dCVFJkOFI5cXVWK2pjeUVmL0ovT1hQSzMyS09XekFLQmdncWhrak9QUVFEQWdOSUFEQkYKQWlFQTArbThqTDBJVldvUTZ0dnB4cFo4NVlMalF1SmpwdXM0aDdnSXRxS3NmUVVDSUI2M2ZNdzFBMm5OVWU1TgpIUGZOcEQwSEtwcVN0Wnk4djIyVzliYlJUNklZCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlRENDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdFkyeHAKWlc1MExXTmhRREUzTWpNeE1USXdNell3SGhjTk1qUXdPREE0TVRBeE16VTJXaGNOTXpRd09EQTJNVEF4TXpVMgpXakFqTVNFd0h3WURWUVFEREJock0zTXRZMnhwWlc1MExXTmhRREUzTWpNeE1USXdNell3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFRc3hXWk9pbnIrcVp4TmFEQjVGMGsvTDF5cE01VHAxOFRaeU92ektJazQKRTFsZWVqUm9STW0zNmhPeVljbnN3d3JoNnhSUnBpMW5RdGhyMzg0S0Z6MlBvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVTBYZkVmYXJsZm8zTWhIL3lmemx6Cnl0OWlqbHN3Q2dZSUtvWkl6ajBFQXdJRFNRQXdSZ0loQUxJL2dNYnNMT3MvUUpJa3U2WHVpRVMwTEE2cEJHMXgKcnBlTnpGdlZOekZsQWlFQW1wdjBubjZqN3M0MVI0QzFNMEpSL0djNE53MHdldlFmZWdEVGF1R2p3cFk9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K",
"KUBE_DATA": "LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSU5ZS1BFb1dhd1NKUzJlRW5oWmlYMk5VZlY1ZlhKV2krSVNnV09TNFE5VTlvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFVUozblJZN0tCNEtUWUx0WnFUMS96VS84a0Z2Sk1lUGhYMm1Vc25pczBiR3FZblkyaVZEeApYVzR2SVhTYjNqcm9iZ1YwSUtDT0twUWs2OHJEbE03ckRBPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo="
"KUBERNETES_SERVICE_HOST": "kubernetes.default.svc.cluster.local",
"KUBERNETES_SERVICE_PORT": "6443",
"KUBE_EXTERNAL_HOST": "",
"KUBE_CA": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkekNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUzTnpReU56STVNVEF3SGhjTk1qWXdNekl6TVRNek5URXdXaGNOTXpZd016SXdNVE16TlRFdwpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUzTnpReU56STVNVEF3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFSSGpYRDVpbnRIYWZWSk5VaDFlRnIxcXBKdFlkUmc5NStKVENEa0tadTIKYjUxRXlKaG1zanRIY3BDUndGL1VGMzlvdzY4TFBUcjBxaUorUHlhQTBLZUtvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVTdWQkNzZVN3ajJ2cmczMFE5UG8vCnV6ZzAvMjR3Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUloQUlEOVY2aFlUSS83ZW1hRzU0dDdDWVU3TXFSdDdESUkKNlgvSUwrQ0RLbzlNQWlCdlFEMGJmT0tVWDc4UmRGdUplcEhEdWFUMUExaGkxcWdIUGduM1dZdDBxUT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K",
"KUBE_CERT": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrVENDQVRlZ0F3SUJBZ0lJUU5KbFNJQUJPMDR3Q2dZSUtvWkl6ajBFQXdJd0l6RWhNQjhHQTFVRUF3d1kKYXpOekxXTnNhV1Z1ZEMxallVQXhOemMwTWpjeU9URXdNQjRYRFRJMk1ETXlNekV6TXpVeE1Gb1hEVEkzTURNeQpNekV6TXpVeE1Gb3dNREVYTUJVR0ExVUVDaE1PYzNsemRHVnRPbTFoYzNSbGNuTXhGVEFUQmdOVkJBTVRESE41CmMzUmxiVHBoWkcxcGJqQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJMY3Uwb2pUbVg4RFhTQkYKSHZwZDZNVEoyTHdXc1lRTmdZVURXRDhTVERIUWlCczlMZ0x5ZTdOMEFvZk85RkNZVW1HamhiaVd3WFVHR3dGTgpUdlRMU2lXalNEQkdNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBakFmCkJnTlZIU01FR0RBV2dCUlJhRW9wQzc5NGJyTHlnR0g5SVhvbDZTSmlFREFLQmdncWhrak9QUVFEQWdOSUFEQkYKQWlFQWhaRUlrSWV3Y1loL1NmTFVCVjE5MW1CYTNRK0J5S2J5eTVlQmpwL3kzeWtDSUIxWTJicTVOZTNLUUU4RAprNnNzeFJrbjJmN0VoWWVRQU1pUlJ2MjIweDNLCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkekNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdFkyeHAKWlc1MExXTmhRREUzTnpReU56STVNVEF3SGhjTk1qWXdNekl6TVRNek5URXdXaGNOTXpZd016SXdNVE16TlRFdwpXakFqTVNFd0h3WURWUVFEREJock0zTXRZMnhwWlc1MExXTmhRREUzTnpReU56STVNVEF3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFTcTdVTC85MEc1ZmVTaE95NjI3eGFZWlM5dHhFdWFoWFQ3Vk5wZkpQSnMKaEdXd2UxOXdtbXZzdlp6dlNPUWFRSzJaMmttN0hSb1IrNlA1YjIyamczbHVvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVVVXaEtLUXUvZUc2eThvQmgvU0Y2Ckpla2lZaEF3Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUloQUk3cGxHczFtV20ySDErbjRobDBNTk13RmZzd0o5ZXIKTzRGVkM0QzhwRG44QWlCN3NZMVFwd2M5VkRUeGNZaGxuZzZNUzRXai85K0lHWjJxcy94UStrMjdTQT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K",
"KUBE_DATA": "LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUROZDRnWXd6aVRhK1hwNnFtNVc3SHFzc1JJNkREaUJTbUV2ZHoxZzk3VGxvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFdHk3U2lOT1pmd05kSUVVZStsM294TW5ZdkJheGhBMkJoUU5ZUHhKTU1kQ0lHejB1QXZKNwpzM1FDaDg3MFVKaFNZYU9GdUpiQmRRWWJBVTFPOU10S0pRPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo="
}

docs/admiralty_setup.puml (new file, 21 lines)

@@ -0,0 +1,21 @@
@startuml
boundary "oc-workflow" as workflow
boundary "oc-monitord" as monitord
boundary "local oc-datacenter" as locdc
boundary "remote oc-datacenter" as rocdc
workflow --> locdc : POST /booking/ {booking object}
locdc --> locdc : create Namespace + ServiceAccount
workflow --> rocdc : POST /booking/
rocdc --> rocdc : create \nNamespace + \nServiceAccount
monitord --> monitord : retrieves a Workflow to execute
monitord --> monitord : the workflow requires distributed execution
' monitord --> rocdc : POST /????? (route that use the same \nmethods as /booking/ to create NS & SA)
monitord --> rocdc : POST /admiralty/source
monitord --> rocdc : GET /admiralty/kubeconfig/:execution_id
rocdc -> monitord : base64 encoded edited kubeconfig with token (**how to make it secure** ???)
monitord --> locdc : POST /admiralty/secret/:execution_id
monitord --> locdc : POST /admiralty/target/:execution_id
monitord --> locdc : GET /admiralty/nodes/:execution_id \n(if the node is up it means ALL GOOD)
@enduml
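To make the sequence concrete, a hedged Go sketch of the monitord side of this exchange; the routes are taken from the diagram, while hosts, ports, payloads, and error handling are assumptions:

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// post is a tiny helper; real code would add auth headers and parse the bodies.
func post(url, body string) error {
	resp, err := http.Post(url, "application/json", strings.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("%s: %s", url, resp.Status)
	}
	return nil
}

func main() {
	remote, local, exec := "http://remote-dc:8092", "http://local-dc:8092", "<execution_id>"
	// 1. Ask the remote datacenter to create the Admiralty Source.
	_ = post(remote+"/admiralty/source", "{}")
	// 2. Fetch the edited kubeconfig for this execution from the remote peer.
	if resp, err := http.Get(remote + "/admiralty/kubeconfig/" + exec); err == nil {
		resp.Body.Close()
	}
	// 3. Store it as a secret on the local side, then create the Target.
	_ = post(local+"/admiralty/secret/"+exec, "{}")
	_ = post(local+"/admiralty/target/"+exec, "{}")
	// 4. Poll the nodes route; if the virtual node is up, the link works.
	if resp, err := http.Get(local + "/admiralty/nodes/" + exec); err == nil {
		resp.Body.Close()
	}
}
```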

env.env (new file, 4 lines)

@@ -0,0 +1,4 @@
KUBERNETES_SERVICE_HOST=192.168.47.20
KUBE_CA="LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkekNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUzTWpNeE1USXdNell3SGhjTk1qUXdPREE0TVRBeE16VTJXaGNOTXpRd09EQTJNVEF4TXpVMgpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUzTWpNeE1USXdNell3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFTVlk3ZHZhNEdYTVdkMy9jMlhLN3JLYjlnWXgyNSthaEE0NmkyNVBkSFAKRktQL2UxSVMyWVF0dzNYZW1TTUQxaStZdzJSaVppNUQrSVZUamNtNHdhcnFvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVWtlUVJpNFJiODduME5yRnZaWjZHClc2SU55NnN3Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUlnRXA5ck04WmdNclRZSHYxZjNzOW5DZXZZeWVVa3lZUk4KWjUzazdoaytJS1FDSVFDbk05TnVGKzlTakIzNDFacGZ5ays2NEpWdkpSM3BhcmVaejdMd2lhNm9kdz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
KUBE_CERT="LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrVENDQVRlZ0F3SUJBZ0lJWUxWNkFPQkdrU1F3Q2dZSUtvWkl6ajBFQXdJd0l6RWhNQjhHQTFVRUF3d1kKYXpOekxXTnNhV1Z1ZEMxallVQXhOekl6TVRFeU1ETTJNQjRYRFRJME1EZ3dPREV3TVRNMU5sb1hEVEkxTURndwpPREV3TVRNMU5sb3dNREVYTUJVR0ExVUVDaE1PYzNsemRHVnRPbTFoYzNSbGNuTXhGVEFUQmdOVkJBTVRESE41CmMzUmxiVHBoWkcxcGJqQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJGQ2Q1MFdPeWdlQ2syQzcKV2FrOWY4MVAvSkJieVRIajRWOXBsTEo0ck5HeHFtSjJOb2xROFYxdUx5RjBtOTQ2Nkc0RmRDQ2dqaXFVSk92Swp3NVRPNnd5alNEQkdNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBakFmCkJnTlZIU01FR0RBV2dCVFJkOFI5cXVWK2pjeUVmL0ovT1hQSzMyS09XekFLQmdncWhrak9QUVFEQWdOSUFEQkYKQWlFQTArbThqTDBJVldvUTZ0dnB4cFo4NVlMalF1SmpwdXM0aDdnSXRxS3NmUVVDSUI2M2ZNdzFBMm5OVWU1TgpIUGZOcEQwSEtwcVN0Wnk4djIyVzliYlJUNklZCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlRENDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdFkyeHAKWlc1MExXTmhRREUzTWpNeE1USXdNell3SGhjTk1qUXdPREE0TVRBeE16VTJXaGNOTXpRd09EQTJNVEF4TXpVMgpXakFqTVNFd0h3WURWUVFEREJock0zTXRZMnhwWlc1MExXTmhRREUzTWpNeE1USXdNell3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFRc3hXWk9pbnIrcVp4TmFEQjVGMGsvTDF5cE01VHAxOFRaeU92ektJazQKRTFsZWVqUm9STW0zNmhPeVljbnN3d3JoNnhSUnBpMW5RdGhyMzg0S0Z6MlBvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVTBYZkVmYXJsZm8zTWhIL3lmemx6Cnl0OWlqbHN3Q2dZSUtvWkl6ajBFQXdJRFNRQXdSZ0loQUxJL2dNYnNMT3MvUUpJa3U2WHVpRVMwTEE2cEJHMXgKcnBlTnpGdlZOekZsQWlFQW1wdjBubjZqN3M0MVI0QzFNMEpSL0djNE53MHdldlFmZWdEVGF1R2p3cFk9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
KUBE_DATA="LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSU5ZS1BFb1dhd1NKUzJlRW5oWmlYMk5VZlY1ZlhKV2krSVNnV09TNFE5VTlvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFVUozblJZN0tCNEtUWUx0WnFUMS96VS84a0Z2Sk1lUGhYMm1Vc25pczBiR3FZblkyaVZEeApYVzR2SVhTYjNqcm9iZ1YwSUtDT0twUWs2OHJEbE03ckRBPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo="

go.mod (119 changes)

@@ -1,87 +1,112 @@
module oc-datacenter
go 1.23.0
toolchain go1.23.3
go 1.25.0
require (
cloud.o-forge.io/core/oc-lib v0.0.0-20250213085018-271cc2caa026
github.com/beego/beego/v2 v2.3.1
go.mongodb.org/mongo-driver v1.17.1
k8s.io/api v0.32.1
k8s.io/apimachinery v0.32.1
k8s.io/client-go v0.32.1
github.com/beego/beego/v2 v2.3.8
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674
github.com/minio/madmin-go/v4 v4.1.1
github.com/minio/minio-go/v7 v7.0.94
github.com/necmettindev/randomstring v0.1.0
go.mongodb.org/mongo-driver v1.17.4
k8s.io/api v0.35.1
k8s.io/apimachinery v0.35.1
k8s.io/client-go v0.35.1
)
require (
cloud.o-forge.io/core/oc-lib v0.0.0-20260325092016-4580200e8057 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/biter777/countries v1.7.5 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
github.com/fxamacker/cbor/v2 v2.7.0 // indirect
github.com/gabriel-vasile/mimetype v1.4.6 // indirect
github.com/go-logr/logr v1.4.2 // indirect
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/ebitengine/purego v0.8.4 // indirect
github.com/emicklei/go-restful/v3 v3.12.2 // indirect
github.com/fxamacker/cbor/v2 v2.9.0 // indirect
github.com/gabriel-vasile/mimetype v1.4.9 // indirect
github.com/go-ini/ini v1.67.0 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-ole/go-ole v1.3.0 // indirect
github.com/go-openapi/jsonpointer v0.21.0 // indirect
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/jsonreference v0.21.0 // indirect
github.com/go-openapi/swag v0.23.0 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/go-playground/validator/v10 v10.22.1 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/go-playground/validator/v10 v10.27.0 // indirect
github.com/goccy/go-json v0.10.5 // indirect
github.com/golang-jwt/jwt/v4 v4.5.2 // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/golang/snappy v0.0.4 // indirect
github.com/google/gnostic-models v0.6.8 // indirect
github.com/google/go-cmp v0.6.0 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/golang/snappy v1.0.0 // indirect
github.com/google/gnostic-models v0.7.0 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/goraz/onion v0.1.3 // indirect
github.com/hashicorp/golang-lru v1.0.2 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.17.11 // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/klauspost/cpuid/v2 v2.2.10 // indirect
github.com/leodido/go-urn v1.4.0 // indirect
github.com/libp2p/go-libp2p/core v0.43.0-rc2 // indirect
github.com/lufia/plan9stats v0.0.0-20250317134145-8bc96cf8fc35 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-colorable v0.1.14 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/minio/crc64nvme v1.0.2 // indirect
github.com/minio/md5-simd v1.1.2 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee // indirect
github.com/montanaflynn/stats v0.7.1 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/nats-io/nats.go v1.37.0 // indirect
github.com/nats-io/nkeys v0.4.7 // indirect
github.com/nats-io/nats.go v1.43.0 // indirect
github.com/nats-io/nkeys v0.4.11 // indirect
github.com/nats-io/nuid v1.0.1 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/prometheus/client_golang v1.20.5 // indirect
github.com/prometheus/client_model v0.6.1 // indirect
github.com/prometheus/common v0.60.1 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/robfig/cron v1.2.0 // indirect
github.com/rs/zerolog v1.33.0 // indirect
github.com/philhofer/fwd v1.1.3-0.20240916144458-20a13a1f6b7c // indirect
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 // indirect
github.com/prometheus/client_golang v1.22.0 // indirect
github.com/prometheus/client_model v0.6.2 // indirect
github.com/prometheus/common v0.65.0 // indirect
github.com/prometheus/procfs v0.17.0 // indirect
github.com/prometheus/prom2json v1.4.2 // indirect
github.com/prometheus/prometheus v0.304.1 // indirect
github.com/rs/xid v1.6.0 // indirect
github.com/rs/zerolog v1.34.0 // indirect
github.com/safchain/ethtool v0.6.1 // indirect
github.com/secure-io/sio-go v0.3.1 // indirect
github.com/shiena/ansicolor v0.0.0-20230509054315-a9deabde6e02 // indirect
github.com/shirou/gopsutil/v4 v4.25.5 // indirect
github.com/smartystreets/goconvey v1.7.2 // indirect
github.com/tinylib/msgp v1.3.0 // indirect
github.com/tklauser/go-sysconf v0.3.15 // indirect
github.com/tklauser/numcpus v0.10.0 // indirect
github.com/x448/float16 v0.8.4 // indirect
github.com/xdg-go/pbkdf2 v1.0.0 // indirect
github.com/xdg-go/scram v1.1.2 // indirect
github.com/xdg-go/stringprep v1.0.4 // indirect
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78 // indirect
golang.org/x/crypto v0.28.0 // indirect
golang.org/x/net v0.30.0 // indirect
golang.org/x/oauth2 v0.23.0 // indirect
golang.org/x/sync v0.8.0 // indirect
golang.org/x/sys v0.26.0 // indirect
golang.org/x/term v0.25.0 // indirect
golang.org/x/text v0.19.0 // indirect
golang.org/x/time v0.7.0 // indirect
google.golang.org/protobuf v1.35.1 // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
go.yaml.in/yaml/v2 v2.4.3 // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
golang.org/x/crypto v0.44.0 // indirect
golang.org/x/net v0.47.0 // indirect
golang.org/x/oauth2 v0.30.0 // indirect
golang.org/x/sync v0.18.0 // indirect
golang.org/x/sys v0.38.0 // indirect
golang.org/x/term v0.37.0 // indirect
golang.org/x/text v0.31.0 // indirect
golang.org/x/time v0.11.0 // indirect
google.golang.org/protobuf v1.36.8 // indirect
gopkg.in/evanphx/json-patch.v4 v4.13.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/klog/v2 v2.130.1 // indirect
k8s.io/kube-openapi v0.0.0-20241105132330-32ad38e42d3f // indirect
k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 // indirect
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.4.2 // indirect
sigs.k8s.io/yaml v1.4.0 // indirect
k8s.io/kube-openapi v0.0.0-20250910181357-589584f1c912 // indirect
k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 // indirect
sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 // indirect
sigs.k8s.io/randfill v1.0.0 // indirect
sigs.k8s.io/structured-merge-diff/v6 v6.3.0 // indirect
sigs.k8s.io/yaml v1.6.0 // indirect
)

go.sum (354 changes)

@@ -1,14 +1,24 @@
cloud.o-forge.io/core/oc-lib v0.0.0-20250205160221-88b7cfe2fd0f h1:6V+Z81ywYoDYSVMnM4PVaJYXFgCN3xSG3ddiUPn4jL8=
cloud.o-forge.io/core/oc-lib v0.0.0-20250205160221-88b7cfe2fd0f/go.mod h1:2roQbUpv3a6mTIr5oU1ux31WbN8YucyyQvCQ0FqwbcE=
cloud.o-forge.io/core/oc-lib v0.0.0-20250212150815-c7c1535ba91a h1:kfTSMCOxYiVGNJWD4OrV7YYTf6t4geKxWpGz4EucpEA=
cloud.o-forge.io/core/oc-lib v0.0.0-20250212150815-c7c1535ba91a/go.mod h1:2roQbUpv3a6mTIr5oU1ux31WbN8YucyyQvCQ0FqwbcE=
cloud.o-forge.io/core/oc-lib v0.0.0-20250213072626-4920322d0afb h1:EybP8jPpIiN5RLiBxr3cvvF9KIaC+uWvzM23ga0t1yI=
cloud.o-forge.io/core/oc-lib v0.0.0-20250213072626-4920322d0afb/go.mod h1:2roQbUpv3a6mTIr5oU1ux31WbN8YucyyQvCQ0FqwbcE=
cloud.o-forge.io/core/oc-lib v0.0.0-20250213085018-271cc2caa026 h1:CYwpofGfpAhMDrT6jqvu9NI/tcgxCD8PKJZDKEfTvVI=
cloud.o-forge.io/core/oc-lib v0.0.0-20250213085018-271cc2caa026/go.mod h1:2roQbUpv3a6mTIr5oU1ux31WbN8YucyyQvCQ0FqwbcE=
cloud.o-forge.io/core/oc-lib v0.0.0-20260319071818-28b5b7d39ffe h1:CHiWQAX7j/bMfbytCWGL2mUgSWYoDY4+bFQbCHEfypk=
cloud.o-forge.io/core/oc-lib v0.0.0-20260319071818-28b5b7d39ffe/go.mod h1:+ENuvBfZdESSvecoqGY/wSvRlT3vinEolxKgwbOhUpA=
cloud.o-forge.io/core/oc-lib v0.0.0-20260323080307-5bdd2554a769 h1:TYluuZ28s58KqXrh3Z4nTYje3TVcLJN3VJwVwF9uP0M=
cloud.o-forge.io/core/oc-lib v0.0.0-20260323080307-5bdd2554a769/go.mod h1:+ENuvBfZdESSvecoqGY/wSvRlT3vinEolxKgwbOhUpA=
cloud.o-forge.io/core/oc-lib v0.0.0-20260323105321-14b449f5473b h1:ouGEzCLGLjUOQ0ciowv9yJv3RhylvUg1GTUlOqXHCSc=
cloud.o-forge.io/core/oc-lib v0.0.0-20260323105321-14b449f5473b/go.mod h1:+ENuvBfZdESSvecoqGY/wSvRlT3vinEolxKgwbOhUpA=
cloud.o-forge.io/core/oc-lib v0.0.0-20260323111629-fa9893e1508c h1:4T+SJgpeK9+lpVQq68chTiAKdaevwvKYo/veP/cOFRY=
cloud.o-forge.io/core/oc-lib v0.0.0-20260323111629-fa9893e1508c/go.mod h1:+ENuvBfZdESSvecoqGY/wSvRlT3vinEolxKgwbOhUpA=
cloud.o-forge.io/core/oc-lib v0.0.0-20260323112935-b76b22a8fbee h1:XQ85OdhYry8zolODV0ezS6+Ari36SpXcnRSbP4E6v2k=
cloud.o-forge.io/core/oc-lib v0.0.0-20260323112935-b76b22a8fbee/go.mod h1:+ENuvBfZdESSvecoqGY/wSvRlT3vinEolxKgwbOhUpA=
cloud.o-forge.io/core/oc-lib v0.0.0-20260323152020-211339947c46 h1:71WVrnLj0SM6PfQxCh25b2JGcL/1MZ2lYt254R/8n28=
cloud.o-forge.io/core/oc-lib v0.0.0-20260323152020-211339947c46/go.mod h1:+ENuvBfZdESSvecoqGY/wSvRlT3vinEolxKgwbOhUpA=
cloud.o-forge.io/core/oc-lib v0.0.0-20260324114937-6d0c78946e8b h1:y0rppyzGIQTIyvapWwHZ8t20wMaSaMU6NoZLkMCui8w=
cloud.o-forge.io/core/oc-lib v0.0.0-20260324114937-6d0c78946e8b/go.mod h1:+ENuvBfZdESSvecoqGY/wSvRlT3vinEolxKgwbOhUpA=
cloud.o-forge.io/core/oc-lib v0.0.0-20260325092016-4580200e8057 h1:pR+lZzcCWZ0kke2r2xXa7OpdbLpPW3gZSWZ8gGHh274=
cloud.o-forge.io/core/oc-lib v0.0.0-20260325092016-4580200e8057/go.mod h1:+ENuvBfZdESSvecoqGY/wSvRlT3vinEolxKgwbOhUpA=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/beego/beego/v2 v2.3.1 h1:7MUKMpJYzOXtCUsTEoXOxsDV/UcHw6CPbaWMlthVNsc=
github.com/beego/beego/v2 v2.3.1/go.mod h1:5cqHsOHJIxkq44tBpRvtDe59GuVRVv/9/tyVDxd5ce4=
github.com/Masterminds/semver/v3 v3.4.0 h1:Zog+i5UMtVoCU8oKka5P7i9q9HgrJeGzI9SA1Xbatp0=
github.com/Masterminds/semver/v3 v3.4.0/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM=
github.com/beego/beego/v2 v2.3.8 h1:wplhB1pF4TxR+2SS4PUej8eDoH4xGfxuHfS7wAk9VBc=
github.com/beego/beego/v2 v2.3.8/go.mod h1:8vl9+RrXqvodrl9C8yivX1e6le6deCK6RWeq8R7gTTg=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/biter777/countries v1.7.5 h1:MJ+n3+rSxWQdqVJU8eBy9RqcdH6ePPn4PJHocVWUa+Q=
@@ -18,29 +28,39 @@ github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XL
github.com/coreos/etcd v3.3.17+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/decred/dcrd/crypto/blake256 v1.1.0 h1:zPMNGQCm0g4QTY27fOCorQW7EryeQ/U0x++OzVrdms8=
github.com/decred/dcrd/crypto/blake256 v1.1.0/go.mod h1:2OfgNZ5wDpcsFmHmCK5gZTPcCXqlm2ArzUIkw9czNJo=
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 h1:NMZiJj8QnKe1LgsbDayM4UoHwbvwDRwnI3hwNaAHRnc=
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0/go.mod h1:ZXNYxsqcloTdSy/rNShjYzMhyjf0LaoftYK0p+A3h40=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/ebitengine/purego v0.8.4 h1:CF7LEKg5FFOsASUj0+QwaXf8Ht6TlFxg09+S9wz0omw=
github.com/ebitengine/purego v0.8.4/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ=
github.com/elazarl/go-bindata-assetfs v1.0.1 h1:m0kkaHRKEu7tUIUFVwhGGGYClXvyl4RE03qmvRTNfbw=
github.com/elazarl/go-bindata-assetfs v1.0.1/go.mod h1:v+YaWX3bdea5J/mo8dSETolEo7R71Vk1u8bnjau5yw4=
github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g=
github.com/emicklei/go-restful/v3 v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/emicklei/go-restful/v3 v3.12.2 h1:DhwDP0vY3k8ZzE0RunuJy8GhNpPL6zqLkDf9B/a0/xU=
github.com/emicklei/go-restful/v3 v3.12.2/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/etcd-io/etcd v3.3.17+incompatible/go.mod h1:cdZ77EstHBwVtD6iTgzgvogwcjo9m4iOqoijouPJ4bs=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E=
github.com/fxamacker/cbor/v2 v2.7.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ=
github.com/gabriel-vasile/mimetype v1.4.6 h1:3+PzJTKLkvgjeTbts6msPJt4DixhT4YtFNf1gtGe3zc=
github.com/gabriel-vasile/mimetype v1.4.6/go.mod h1:JX1qVKqZd40hUPpAfiNTe0Sne7hdfKSbOqqmkq8GCXc=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs=
github.com/fxamacker/cbor/v2 v2.9.0 h1:NpKPmjDBgUfBms6tr6JZkTHtfFGcMKsw3eGcmD/sapM=
github.com/fxamacker/cbor/v2 v2.9.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ=
github.com/gabriel-vasile/mimetype v1.4.9 h1:5k+WDwEsD9eTLL8Tz3L0VnmVh9QxGjRmjBvAG7U/oYY=
github.com/gabriel-vasile/mimetype v1.4.9/go.mod h1:WnSQhFKJuBlRyLiKohA/2DtIlPFAbguNaG7QCHcyGok=
github.com/go-ini/ini v1.67.0 h1:z6ZrTEZqSWOTyH2FlglNbNgARyHG8oLW9gMELqKr06A=
github.com/go-ini/ini v1.67.0/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/go-ole/go-ole v1.3.0 h1:Dt6ye7+vXGIKZ7Xtk4s6/xVdGDQynvom7xCFEdWr6uE=
github.com/go-ole/go-ole v1.3.0/go.mod h1:5LS6F96DhAwUc7C+1HLexzMXY1xGRSryjyPPKW6zv78=
github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ=
github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY=
github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE=
github.com/go-openapi/jsonreference v0.20.2/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En5Ap4rVB5KVcIDZG2k=
github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14=
github.com/go-openapi/jsonreference v0.21.0 h1:Rs+Y7hSXT83Jacb7kFyjn4ijOuVGSvOdF2+tg1TRrwQ=
github.com/go-openapi/jsonreference v0.21.0/go.mod h1:LmZmgsrTkVg9LG4EaHeY8cBDslNPMo06cago5JNLkm4=
github.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE=
github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ=
github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s=
@@ -49,36 +69,40 @@ github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/o
github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY=
github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY=
github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
github.com/go-playground/validator/v10 v10.22.1 h1:40JcKH+bBNGFczGuoBYgX4I6m/i27HYW8P9FDk5PbgA=
github.com/go-playground/validator/v10 v10.22.1/go.mod h1:dbuPbCMFw/DrkbEynArYaCwl3amGuJotoKCe95atGMM=
github.com/go-playground/validator/v10 v10.27.0 h1:w8+XrWVMhGkxOaaowyKH35gFydVHOvC0/uWoy2Fzwn4=
github.com/go-playground/validator/v10 v10.27.0/go.mod h1:I5QpIEbmr8On7W0TktmJAumgzX4CA1XNl4ZmDuVHKKo=
github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
github.com/goccy/go-json v0.10.5 h1:Fq85nIqj+gXn/S5ahsiTlK3TmC85qgirsdTP/+DeaC4=
github.com/goccy/go-json v0.10.5/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt/v4 v4.5.2 h1:YtQM7lnr8iZ+j5q71MGKkNw9Mn7AjHM68uc9g5fXeUI=
github.com/golang-jwt/jwt/v4 v4.5.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/gnostic-models v0.6.8 h1:yo/ABAfM5IMRsS1VnXjTBvUb61tFIHozhlYvRgGre9I=
github.com/google/gnostic-models v0.6.8/go.mod h1:5n7qKqH0f5wFt+aWF8CW6pZLLNOfYuF5OpfBSENuI8U=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/golang/snappy v1.0.0 h1:Oy607GVXHs7RtbggtPBnr2RmDArIsAefDwvrdWvRhGs=
github.com/golang/snappy v1.0.0/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/gnostic-models v0.7.0 h1:qwTtogB15McXDaNqTZdzPJRHvaVJlAl+HVQnLmJEJxo=
github.com/google/gnostic-models v0.7.0/go.mod h1:whL5G0m6dmc5cPxKc5bdKdEN3UjI7OUGxBlw57miDrQ=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db h1:097atOisP2aRj7vFgYQBbFN4U4JNXUNYpxael3UzMyo=
github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db/go.mod h1:vavhavw2zAxS5dIdcRluK6cSGGPlZynqzFM8NdvU144=
github.com/google/pprof v0.0.0-20250403155104-27863c87afa6 h1:BHT72Gu3keYf3ZEu2J0b1vyeLSOYI8bm5wbJM/8yDe8=
github.com/google/pprof v0.0.0-20250403155104-27863c87afa6/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 h1:EGx4pi6eqNxGaHF6qqu48+N2wcFQ5qg5FXgOdqsJ5d8=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/goraz/onion v0.1.3 h1:KhyvbDA2b70gcz/d5izfwTiOH8SmrvV43AsVzpng3n0=
github.com/goraz/onion v0.1.3/go.mod h1:XEmz1XoBz+wxTgWB8NwuvRm4RAu3vKxvrmYtzK+XCuQ=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 h1:JeSE6pjso5THxAzdVpqr6/geYxZytqFMBCOtn/ujyeo=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674/go.mod h1:r4w70xmWCQKmi1ONH4KIaBptdivuRPyosB9RmPlGEwA=
github.com/hashicorp/golang-lru v1.0.2 h1:dV3g9Z/unq5DpblPpw+Oqcv4dU/1omnb4Ok8iPY6p1c=
github.com/hashicorp/golang-lru v1.0.2/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
github.com/imdario/mergo v0.3.8/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/ipfs/go-cid v0.5.0 h1:goEKKhaGm0ul11IHA7I6p1GmKz8kEYniqFopaB5Otwg=
github.com/ipfs/go-cid v0.5.0/go.mod h1:0L7vmeNXpQpUS9vt+yEARkJ8rOg43DF3iPgn4GIN0mk=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/json-iterator/go v1.1.8/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
@@ -86,30 +110,47 @@ github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnr
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.17.11 h1:In6xLpyWOi1+C7tXUUWv2ot1QvBjxevKAaI6IXrJmUc=
github.com/klauspost/compress v1.17.11/go.mod h1:pMDklpSncoRMuLFrf1W9Ss9KT+0rH90U12bZKk7uwG0=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/klauspost/cpuid/v2 v2.0.1/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.2.10 h1:tBs3QSyvjDyFTq3uoc/9xFpCuOsJQFNPiAhYdw2skhE=
github.com/klauspost/cpuid/v2 v2.2.10/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
github.com/libp2p/go-buffer-pool v0.1.0 h1:oK4mSFcQz7cTQIfqbe4MIj9gLW+mnanjyFtc6cdF0Y8=
github.com/libp2p/go-buffer-pool v0.1.0/go.mod h1:N+vh8gMqimBzdKkSMVuydVDq+UV5QTWy5HSiZacSbPg=
github.com/libp2p/go-libp2p/core v0.43.0-rc2 h1:1X1aDJNWhMfodJ/ynbaGLkgnC8f+hfBIqQDrzxFZOqI=
github.com/libp2p/go-libp2p/core v0.43.0-rc2/go.mod h1:NYeJ9lvyBv9nbDk2IuGb8gFKEOkIv/W5YRIy1pAJB2Q=
github.com/lufia/plan9stats v0.0.0-20250317134145-8bc96cf8fc35 h1:PpXWgLPs+Fqr325bN2FD2ISlRRztXibcX6e8f5FR5Dc=
github.com/lufia/plan9stats v0.0.0-20250317134145-8bc96cf8fc35/go.mod h1:autxFIvghDt3jPTLoqZ9OZ7s9qTGNAWmYCjVFWPX/zg=
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE=
github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/matttproud/golang_protobuf_extensions v1.0.4 h1:mmDVorXM7PCGKw94cs5zkfA9PSy5pEvNWRP0ET0TIVo=
github.com/matttproud/golang_protobuf_extensions v1.0.4/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
github.com/minio/crc64nvme v1.0.2 h1:6uO1UxGAD+kwqWWp7mBFsi5gAse66C4NXO8cmcVculg=
github.com/minio/crc64nvme v1.0.2/go.mod h1:eVfm2fAzLlxMdUGc0EEBGSMmPwmXD5XiNRpnu9J3bvg=
github.com/minio/madmin-go/v4 v4.1.1 h1:Y7JHamjTwnyvXO9aike5SC0c0sNsbs/NpG37c525oe4=
github.com/minio/madmin-go/v4 v4.1.1/go.mod h1:16tMDVIHWcp1zrmL6XrgCwPlZC7kB3UNNs5GDNnjYLQ=
github.com/minio/md5-simd v1.1.2 h1:Gdi1DZK69+ZVMoNHRXJyNcxrMA4dSxoYHZSQbirFg34=
github.com/minio/md5-simd v1.1.2/go.mod h1:MzdKDxYpY2BT9XQFocsiZf/NKVtR7nkE4RoEpN+20RM=
github.com/minio/minio-go/v7 v7.0.94 h1:1ZoksIKPyaSt64AVOyaQvhDOgVC3MfZsWM6mZXRUGtM=
github.com/minio/minio-go/v7 v7.0.94/go.mod h1:71t2CqDt3ThzESgZUlU1rBN54mksGGlkLcFgguDnnAc=
github.com/minio/sha256-simd v1.0.1 h1:6kaan5IFmwTNynnKKpDHe6FWHohJOHhCPchzK49dzMM=
github.com/minio/sha256-simd v1.0.1/go.mod h1:Pz6AKMiUdngCLpeTL/RJY1M9rUuPMYujV5xJjtbRSN8=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
@@ -118,46 +159,77 @@ github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee h1:W5t00kpgFdJifH4BDsTlE89Zl93FEloxaWZfGcifgq8=
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/montanaflynn/stats v0.7.1 h1:etflOAAHORrCC44V+aR6Ftzort912ZU+YLiSTuV8eaE=
github.com/montanaflynn/stats v0.7.1/go.mod h1:etXPPgVO6n31NxCd9KQUMvCM+ve0ruNzt6R8Bnaayow=
github.com/mr-tron/base58 v1.2.0 h1:T/HDJBh4ZCPbU39/+c3rRvE0uKBQlU27+QI8LJ4t64o=
github.com/mr-tron/base58 v1.2.0/go.mod h1:BinMc/sQntlIE1frQmRFPUoPA1Zkr8VRgBdjWI2mNwc=
github.com/multiformats/go-base32 v0.1.0 h1:pVx9xoSPqEIQG8o+UbAe7DNi51oej1NtK+aGkbLYxPE=
github.com/multiformats/go-base32 v0.1.0/go.mod h1:Kj3tFY6zNr+ABYMqeUNeGvkIC/UYgtWibDcT0rExnbI=
github.com/multiformats/go-base36 v0.2.0 h1:lFsAbNOGeKtuKozrtBsAkSVhv1p9D0/qedU9rQyccr0=
github.com/multiformats/go-base36 v0.2.0/go.mod h1:qvnKE++v+2MWCfePClUEjE78Z7P2a1UV0xHgWc0hkp4=
github.com/multiformats/go-multiaddr v0.16.0 h1:oGWEVKioVQcdIOBlYM8BH1rZDWOGJSqr9/BKl6zQ4qc=
github.com/multiformats/go-multiaddr v0.16.0/go.mod h1:JSVUmXDjsVFiW7RjIFMP7+Ev+h1DTbiJgVeTV/tcmP0=
github.com/multiformats/go-multibase v0.2.0 h1:isdYCVLvksgWlMW9OZRYJEa9pZETFivncJHmHnnd87g=
github.com/multiformats/go-multibase v0.2.0/go.mod h1:bFBZX4lKCA/2lyOFSAoKH5SS6oPyjtnzK/XTFDPkNuk=
github.com/multiformats/go-multicodec v0.9.1 h1:x/Fuxr7ZuR4jJV4Os5g444F7xC4XmyUaT/FWtE+9Zjo=
github.com/multiformats/go-multicodec v0.9.1/go.mod h1:LLWNMtyV5ithSBUo3vFIMaeDy+h3EbkMTek1m+Fybbo=
github.com/multiformats/go-multihash v0.2.3 h1:7Lyc8XfX/IY2jWb/gI7JP+o7JEq9hOa7BFvVU9RSh+U=
github.com/multiformats/go-multihash v0.2.3/go.mod h1:dXgKXCXjBzdscBLk9JkjINiEsCKRVch90MdaGiKsvSM=
github.com/multiformats/go-varint v0.0.7 h1:sWSGR+f/eu5ABZA2ZpYKBILXTTs9JWpdEM/nEGOHFS8=
github.com/multiformats/go-varint v0.0.7/go.mod h1:r8PUYw/fD/SjBCiKOoDlGF6QawOELpZAu9eioSos/OU=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/nats-io/nats.go v1.37.0 h1:07rauXbVnnJvv1gfIyghFEo6lUcYRY0WXc3x7x0vUxE=
github.com/nats-io/nats.go v1.37.0/go.mod h1:Ubdu4Nh9exXdSz0RVWRFBbRfrbSxOYd26oF0wkWclB8=
github.com/nats-io/nkeys v0.4.7 h1:RwNJbbIdYCoClSDNY7QVKZlyb/wfT6ugvFCiKy6vDvI=
github.com/nats-io/nkeys v0.4.7/go.mod h1:kqXRgRDPlGy7nGaEDMuYzmiJCIAAWDK0IMBtDmGD0nc=
github.com/nats-io/nats.go v1.43.0 h1:uRFZ2FEoRvP64+UUhaTokyS18XBCR/xM2vQZKO4i8ug=
github.com/nats-io/nats.go v1.43.0/go.mod h1:iRWIPokVIFbVijxuMQq4y9ttaBTMe0SFdlZfMDd+33g=
github.com/nats-io/nkeys v0.4.11 h1:q44qGV008kYd9W1b1nEBkNzvnWxtRSQ7A8BoqRrcfa0=
github.com/nats-io/nkeys v0.4.11/go.mod h1:szDimtgmfOi9n25JpfIdGw12tZFYXqhGxjhVxsatHVE=
github.com/nats-io/nuid v1.0.1 h1:5iA8DT8V7q8WK2EScv2padNa/rTESc1KdnPw4TC2paw=
github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c=
github.com/necmettindev/randomstring v0.1.0 h1:HeU/mfLCd/5E9At7xznbTeEw5YldGW92fvK8lWtvPwE=
github.com/necmettindev/randomstring v0.1.0/go.mod h1:h2nX9Jl0TLImuMt++XfLStVr8N76BmmP5D5EhLq0KEQ=
github.com/ogier/pflag v0.0.1/go.mod h1:zkFki7tvTa0tafRvTBIZTvzYyAu6kQhPZFnshFFPE+g=
github.com/onsi/ginkgo/v2 v2.21.0 h1:7rg/4f3rB88pb5obDgNZrNHrQ4e6WpjonchcpuBRnZM=
github.com/onsi/ginkgo/v2 v2.21.0/go.mod h1:7Du3c42kxCUegi0IImZ1wUQzMBVecgIHjR1C+NkhLQo=
github.com/onsi/gomega v1.35.1 h1:Cwbd75ZBPxFSuZ6T+rN/WCb/gOc6YgFBXLlZLhC7Ds4=
github.com/onsi/gomega v1.35.1/go.mod h1:PvZbdDc8J6XJEpDK4HCuRBm8a6Fzp9/DmhC9C7yFlog=
github.com/onsi/ginkgo/v2 v2.27.2 h1:LzwLj0b89qtIy6SSASkzlNvX6WktqurSHwkk2ipF/Ns=
github.com/onsi/ginkgo/v2 v2.27.2/go.mod h1:ArE1D/XhNXBXCBkKOLkbsb2c81dQHCRcF5zwn/ykDRo=
github.com/onsi/gomega v1.38.2 h1:eZCjf2xjZAqe+LeWvKb5weQ+NcPwX84kqJ0cZNxok2A=
github.com/onsi/gomega v1.38.2/go.mod h1:W2MJcYxRGV63b418Ai34Ud0hEdTVXq9NW9+Sx6uXf3k=
github.com/pelletier/go-toml v1.6.0/go.mod h1:5N711Q9dKgbdkxHL+MEfF31hpT7l0S0s/t2kKREewys=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/philhofer/fwd v1.1.3-0.20240916144458-20a13a1f6b7c h1:dAMKvw0MlJT1GshSTtih8C2gDs04w8dReiOGXrGLNoY=
github.com/philhofer/fwd v1.1.3-0.20240916144458-20a13a1f6b7c/go.mod h1:RqIHx9QI14HlwKwm98g9Re5prTQ6LdeRQn+gXJFxsJM=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v1.20.5 h1:cxppBPuYhUnsO6yo/aoRol4L7q7UFfdm+bR9r+8l63Y=
github.com/prometheus/client_golang v1.20.5/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.60.1 h1:FUas6GcOw66yB/73KC+BOZoFJmbo/1pojoILArPAaSc=
github.com/prometheus/common v0.60.1/go.mod h1:h0LYf1R1deLSKtD4Vdg8gy4RuOvENW2J/h19V5NADQw=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/robfig/cron v1.2.0 h1:ZjScXvvxeQ63Dbyxy76Fj3AT3Ut0aKsyd2/tl3DTMuQ=
github.com/robfig/cron v1.2.0/go.mod h1:JGuDeoQd7Z6yL4zQhZ3OPEVHB7fL6Ka6skscFHfmt2k=
github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU5NdKM8=
github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4=
github.com/rs/xid v1.5.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
github.com/rs/zerolog v1.33.0 h1:1cU2KZkvPxNyfgEmhHAz/1A9Bz+llsdYzklWFzgp0r8=
github.com/rs/zerolog v1.33.0/go.mod h1:/7mN4D5sKwJLZQ2b/znpjC3/GQWY/xaDXUM0kKWRHss=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 h1:o4JXh1EVt9k/+g42oCprj/FisM4qX9L3sZB3upGN2ZU=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/prometheus/client_golang v1.22.0 h1:rb93p9lokFEsctTys46VnV1kLCDpVZ0a/Y92Vm0Zc6Q=
github.com/prometheus/client_golang v1.22.0/go.mod h1:R7ljNsLXhuQXYZYtw6GAE9AZg8Y7vEW5scdCXrWRXC0=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
github.com/prometheus/common v0.65.0 h1:QDwzd+G1twt//Kwj/Ww6E9FQq1iVMmODnILtW1t2VzE=
github.com/prometheus/common v0.65.0/go.mod h1:0gZns+BLRQ3V6NdaerOhMbwwRbNh9hkGINtQAsP5GS8=
github.com/prometheus/procfs v0.17.0 h1:FuLQ+05u4ZI+SS/w9+BWEM2TXiHKsUQ9TADiRH7DuK0=
github.com/prometheus/procfs v0.17.0/go.mod h1:oPQLaDAMRbA+u8H5Pbfq+dl3VDAvHxMUOVhe0wYB2zw=
github.com/prometheus/prom2json v1.4.2 h1:PxCTM+Whqi/eykO1MKsEL0p/zMpxp9ybpsmdFamw6po=
github.com/prometheus/prom2json v1.4.2/go.mod h1:zuvPm7u3epZSbXPWHny6G+o8ETgu6eAK3oPr6yFkRWE=
github.com/prometheus/prometheus v0.304.1 h1:e4kpJMb2Vh/PcR6LInake+ofcvFYHT+bCfmBvOkaZbY=
github.com/prometheus/prometheus v0.304.1/go.mod h1:ioGx2SGKTY+fLnJSQCdTHqARVldGNS8OlIe3kvp98so=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/rs/xid v1.6.0 h1:fV591PaemRlL6JfRxGDEPl69wICngIQ3shQtzfy2gxU=
github.com/rs/xid v1.6.0/go.mod h1:7XoLgs4eV+QndskICGsho+ADou8ySMSjJKDIan90Nz0=
github.com/rs/zerolog v1.34.0 h1:k43nTLIwcTVQAncfCw4KZ2VY6ukYoZaBPNOE8txlOeY=
github.com/rs/zerolog v1.34.0/go.mod h1:bJsvje4Z08ROH4Nhs5iH600c3IkWhwp44iRc54W6wYQ=
github.com/safchain/ethtool v0.6.1 h1:mhRnXE1H8fV8TTXh/HdqE4tXtb57r//BQh5pPYMuM5k=
github.com/safchain/ethtool v0.6.1/go.mod h1:JzoNbG8xeg/BeVeVoMCtCb3UPWoppZZbFpA+1WFh+M0=
github.com/secure-io/sio-go v0.3.1 h1:dNvY9awjabXTYGsTF1PiCySl9Ltofk9GA3VdWlo7rRc=
github.com/secure-io/sio-go v0.3.1/go.mod h1:+xbkjDzPjwh4Axd07pRKSNriS9SCiYksWnZqdnfpQxs=
github.com/shiena/ansicolor v0.0.0-20230509054315-a9deabde6e02 h1:v9ezJDHA1XGxViAUSIoO/Id7Fl63u6d0YmsAm+/p2hs=
github.com/shiena/ansicolor v0.0.0-20230509054315-a9deabde6e02/go.mod h1:RF16/A3L0xSa0oSERcnhd8Pu3IXSDZSK2gmGIMsttFE=
github.com/shirou/gopsutil/v4 v4.25.5 h1:rtd9piuSMGeU8g1RMXjZs9y9luK5BwtnG7dZaQUJAsc=
github.com/shirou/gopsutil/v4 v4.25.5/go.mod h1:PfybzyydfZcN+JMMjkF6Zb8Mq1A/VcogFFg7hj50W9c=
github.com/skarademir/naturalsort v0.0.0-20150715044055-69a5d87bef62/go.mod h1:oIdVclZaltY1Nf7OQUkg1/2jImBJ+ZfKZuDIRSwk3p0=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/assertions v1.2.0 h1:42S6lae5dvLc7BrLu/0ugRtcFVjoJNMC/N3yZFZkDFs=
@@ -165,17 +237,22 @@ github.com/smartystreets/assertions v1.2.0/go.mod h1:tcbTF8ujkAEcZ8TElKY+i30BzYl
github.com/smartystreets/goconvey v0.0.0-20190731233626-505e41936337/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/smartystreets/goconvey v1.7.2 h1:9RBaZCeXEQ3UselpuwUQHltGVXvdwm6cv1hgR6gDIPg=
github.com/smartystreets/goconvey v1.7.2/go.mod h1:Vw0tHAZW6lzCRk3xgdin6fKYcG+G3Pg9vgXWeJpQFMM=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spaolacci/murmur3 v1.1.0 h1:7c1g84S4BPRrfL5Xrdp6fOJ206sU9y293DDHaoy0bLI=
github.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/pflag v1.0.9 h1:9exaQaMOCwffKiiiYk6/BndUBv+iRViNW+4lEMi0PvY=
github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/tinylib/msgp v1.3.0 h1:ULuf7GPooDaIlbyvgAxBV/FI7ynli6LZ1/nVUNu+0ww=
github.com/tinylib/msgp v1.3.0/go.mod h1:ykjzy2wzgrlvpDCRc4LA8UXy6D8bzMSuAF3WD57Gok0=
github.com/tklauser/go-sysconf v0.3.15 h1:VE89k0criAymJ/Os65CSn1IXaol+1wrsFHEB8Ol49K4=
github.com/tklauser/go-sysconf v0.3.15/go.mod h1:Dmjwr6tYFIseJw7a3dRLJfsHAMXZ3nEnL/aZY+0IuI4=
github.com/tklauser/numcpus v0.10.0 h1:18njr6LDBk1zuna922MgdjQuJFjrdppsZG60sHGfjso=
github.com/tklauser/numcpus v0.10.0/go.mod h1:BiTKazU708GQTYF4mB+cmlpT2Is1gLk7XVuEeem8LsQ=
github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
github.com/xdg-go/pbkdf2 v1.0.0 h1:Su7DPu48wXMwC3bs7MCNG+z4FhcyEuz5dlvchbq0B0c=
@@ -186,104 +263,107 @@ github.com/xdg-go/stringprep v1.0.4 h1:XLI/Ng3O1Atzq0oBs3TWm+5ZVgkq2aqdlvP9JtoZ6
github.com/xdg-go/stringprep v1.0.4/go.mod h1:mPGuuIYwz7CmR2bT9j4GbQqutWS1zV24gijq1dTyGkM=
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78 h1:ilQV1hzziu+LLM3zUTJ0trRztfwgjqKnBWNtSRkbmwM=
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78/go.mod h1:aL8wCCfTfSfmXjznFBSZNN13rSJjlIOI1fUNAtF7rmI=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
go.mongodb.org/mongo-driver v1.17.1 h1:Wic5cJIwJgSpBhe3lx3+/RybR5PiYRMpVFgO7cOHyIM=
go.mongodb.org/mongo-driver v1.17.1/go.mod h1:wwWm/+BuOddhcq3n68LKRmgk2wXzmF6s0SFOa0GINL4=
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
go.mongodb.org/mongo-driver v1.17.4 h1:jUorfmVzljjr0FLzYQsGP8cgN/qzzxlY9Vh0C9KFXVw=
go.mongodb.org/mongo-driver v1.17.4/go.mod h1:Hy04i7O2kC4RS06ZrhPRqj/u4DTYkFDAAccj+rVKqgQ=
go.yaml.in/yaml/v2 v2.4.3 h1:6gvOSjQoTB3vt1l+CU+tSyi/HOjfOjRLJ4YwYZGwRO0=
go.yaml.in/yaml/v2 v2.4.3/go.mod h1:zSxWcmIDjOzPXpjlTTbAsKokqkDNAVtZO0WOMiT90s8=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191112222119-e1110fd1c708/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200302210943-78000ba7a073/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.28.0 h1:GBDwsMXVQi34v5CCYUm2jkJvu4cbtru2U4TN2PSyQnw=
golang.org/x/crypto v0.28.0/go.mod h1:rmgy+3RHxRZMyY0jjAJShp2zgEdOqj2AO7U0pYmeQ7U=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/crypto v0.44.0 h1:A97SsFvM3AIwEEmTBiaxPPTYpDC47w720rdiiUvgoAU=
golang.org/x/crypto v0.44.0/go.mod h1:013i+Nw79BMiQiMsOPcVCB5ZIJbYkerPrGnOa00tvmc=
golang.org/x/exp v0.0.0-20250606033433-dcc06ee1d476 h1:bsqhLWFR6G6xiQcb+JoGqdKdRU6WzPWmK8E0jxTjzo4=
golang.org/x/exp v0.0.0-20250606033433-dcc06ee1d476/go.mod h1:3//PLf8L/X+8b4vuAfHzxeRUl04Adcb341+IGKfnqS8=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA=
golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.30.0 h1:AcW1SDZMkb8IpzCdQUaIq2sP4sZ4zw+55h6ynffypl4=
golang.org/x/net v0.30.0/go.mod h1:2wGyMJ5iFasEhkwi13ChkO/t1ECNC4X4eBKkVFyYFlU=
golang.org/x/oauth2 v0.23.0 h1:PbgcYx2W7i4LvjJWEbf0ngHV6qJYr86PkAV3bXdLEbs=
golang.org/x/oauth2 v0.23.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=
golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU=
golang.org/x/oauth2 v0.30.0 h1:dnDm7JmhM45NNpd8FDDeLhK6FwqbOf4MLCM9zb1BOHI=
golang.org/x/oauth2 v0.30.0/go.mod h1:B++QgG3ZKulg6sRPGD/mqlHQs5rB3Ml9erfeDY7xKlU=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.8.0 h1:3NFvSEYkUoMifnESzZl15y791HH1qU2xm6eCJU5ZPXQ=
golang.org/x/sync v0.8.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191115151921-52ab43148777/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.26.0 h1:KHjCJyddX0LoSTb3J+vWpupP9p0oznkqVk/IfjymZbo=
golang.org/x/sys v0.26.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.25.0 h1:WtHI/ltw4NvSUig5KARz9h521QvRC8RmF/cuYqifU24=
golang.org/x/term v0.25.0/go.mod h1:RPyXicDX+6vLxogjjRxjgD2TKtmAO6NZBsBRfrOLu7M=
golang.org/x/term v0.37.0 h1:8EGAD0qCmHYZg6J17DvsMy9/wJ7/D/4pV/wfnld5lTU=
golang.org/x/term v0.37.0/go.mod h1:5pB4lxRNYYVZuTLmy8oR2BH8dflOR+IbTYFD8fi3254=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
golang.org/x/text v0.19.0 h1:kTxAhCbGbxhK0IwgSKiMO5awPoDQ0RpfiVYBfK860YM=
golang.org/x/text v0.19.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY=
golang.org/x/time v0.7.0 h1:ntUhktv3OPE6TgYxXWv9vKvUSJyIFJlyohwbkEwPrKQ=
golang.org/x/time v0.7.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
golang.org/x/time v0.11.0 h1:/bpjEDfN9tkoN/ryeYHnv5hcMlc8ncjMcM4XBk5NWV0=
golang.org/x/time v0.11.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.26.0 h1:v/60pFQmzmT9ExmjDv2gGIfi3OqfKoEP6I5+umXlbnQ=
golang.org/x/tools v0.26.0/go.mod h1:TPVVj70c7JJ3WCazhD8OdXcZg/og+b9+tH/KxylGwH0=
golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ=
golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v1.35.1 h1:m3LfL6/Ca+fqnjnlqQXNpFPABW1UD7mjh8KO2mKFytA=
google.golang.org/protobuf v1.35.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
google.golang.org/protobuf v1.36.8 h1:xHScyCOEuuwZEc6UtSOvPbAT4zRh0xcNRYekJwfqyMc=
google.golang.org/protobuf v1.36.8/go.mod h1:fuxRtAxBytpl4zzqUh6/eyUujkJdNiuEkXntxiD/uRU=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/evanphx/json-patch.v4 v4.12.0 h1:n6jtcsulIzXPJaxegRbvFNNrZDjbij7ny3gmSPG+6V4=
gopkg.in/evanphx/json-patch.v4 v4.12.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M=
gopkg.in/evanphx/json-patch.v4 v4.13.0 h1:czT3CmqEaQ1aanPc5SdlgQrrEIb8w/wwCvWWnfEbYzo=
gopkg.in/evanphx/json-patch.v4 v4.13.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
k8s.io/api v0.32.1 h1:f562zw9cy+GvXzXf0CKlVQ7yHJVYzLfL6JAS4kOAaOc=
k8s.io/api v0.32.1/go.mod h1:/Yi/BqkuueW1BgpoePYBRdDYfjPF5sgTr5+YqDZra5k=
k8s.io/apimachinery v0.32.1 h1:683ENpaCBjma4CYqsmZyhEzrGz6cjn1MY/X2jB2hkZs=
k8s.io/apimachinery v0.32.1/go.mod h1:GpHVgxoKlTxClKcteaeuF1Ul/lDVb74KpZcxcmLDElE=
k8s.io/client-go v0.32.1 h1:otM0AxdhdBIaQh7l1Q0jQpmo7WOFIk5FFa4bg6YMdUU=
k8s.io/client-go v0.32.1/go.mod h1:aTTKZY7MdxUaJ/KiUs8D+GssR9zJZi77ZqtzcGXIiDg=
k8s.io/api v0.35.1 h1:0PO/1FhlK/EQNVK5+txc4FuhQibV25VLSdLMmGpDE/Q=
k8s.io/api v0.35.1/go.mod h1:28uR9xlXWml9eT0uaGo6y71xK86JBELShLy4wR1XtxM=
k8s.io/apimachinery v0.35.1 h1:yxO6gV555P1YV0SANtnTjXYfiivaTPvCTKX6w6qdDsU=
k8s.io/apimachinery v0.35.1/go.mod h1:jQCgFZFR1F4Ik7hvr2g84RTJSZegBc8yHgFWKn//hns=
k8s.io/client-go v0.35.1 h1:+eSfZHwuo/I19PaSxqumjqZ9l5XiTEKbIaJ+j1wLcLM=
k8s.io/client-go v0.35.1/go.mod h1:1p1KxDt3a0ruRfc/pG4qT/3oHmUj1AhSHEcxNSGg+OA=
k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk=
k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE=
k8s.io/kube-openapi v0.0.0-20241105132330-32ad38e42d3f h1:GA7//TjRY9yWGy1poLzYYJJ4JRdzg3+O6e8I+e+8T5Y=
k8s.io/kube-openapi v0.0.0-20241105132330-32ad38e42d3f/go.mod h1:R/HEjbvWI0qdfb8viZUeVZm0X6IZnxAydC7YU42CMw4=
k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 h1:M3sRQVHv7vB20Xc2ybTt7ODCeFj6JSWYFzOFnYeS6Ro=
k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 h1:/Rv+M11QRah1itp8VhT6HoVx1Ray9eB4DBr+K+/sCJ8=
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3/go.mod h1:18nIHnGi6636UCz6m8i4DhaJ65T6EruyzmoQqI2BVDo=
sigs.k8s.io/structured-merge-diff/v4 v4.4.2 h1:MdmvkGuXi/8io6ixD5wud3vOLwc1rj0aNqRlpuvjmwA=
sigs.k8s.io/structured-merge-diff/v4 v4.4.2/go.mod h1:N8f93tFZh9U6vpxwRArLiikrE5/2tiu1w1AGfACIGE4=
sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E=
sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY=
k8s.io/kube-openapi v0.0.0-20250910181357-589584f1c912 h1:Y3gxNAuB0OBLImH611+UDZcmKS3g6CthxToOb37KgwE=
k8s.io/kube-openapi v0.0.0-20250910181357-589584f1c912/go.mod h1:kdmbQkyfwUagLfXIad1y2TdrjPFWp2Q89B3qkRwf/pQ=
k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 h1:SjGebBtkBqHFOli+05xYbK8YF1Dzkbzn+gDM4X9T4Ck=
k8s.io/utils v0.0.0-20251002143259-bc988d571ff4/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
lukechampine.com/blake3 v1.4.1 h1:I3Smz7gso8w4/TunLKec6K2fn+kyKtDxr/xcQEN84Wg=
lukechampine.com/blake3 v1.4.1/go.mod h1:QFosUxmjB8mnrWFSNwKmvxHpfY72bmD2tQ0kBMM3kwo=
sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 h1:IpInykpT6ceI+QxKBbEflcR5EXP7sU1kvOlxwZh5txg=
sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730/go.mod h1:mdzfpAEoE6DHQEN0uh9ZbOCuHbLK5wOm7dK4ctXE9Tg=
sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU=
sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
sigs.k8s.io/structured-merge-diff/v6 v6.3.0 h1:jTijUJbW353oVOd9oTlifJqOGEkUw2jB/fXCbTiQEco=
sigs.k8s.io/structured-merge-diff/v6 v6.3.0/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE=
sigs.k8s.io/yaml v1.6.0 h1:G8fkbMSAFqgEFgh4b1wmtzDnioxFCUgTZhlbj5P9QYs=
sigs.k8s.io/yaml v1.6.0/go.mod h1:796bPqUfzR/0jLAl6XjHl3Ck7MiyVv8dbTdyT3/pMf4=


@@ -0,0 +1,445 @@
package admiralty
import (
"context"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"strings"
"sync"
"time"
"oc-datacenter/conf"
"oc-datacenter/infrastructure/kubernetes/models"
"oc-datacenter/infrastructure/monitor"
"oc-datacenter/infrastructure/storage"
oclib "cloud.o-forge.io/core/oc-lib"
"cloud.o-forge.io/core/oc-lib/dbs"
bookingmodel "cloud.o-forge.io/core/oc-lib/models/booking"
"cloud.o-forge.io/core/oc-lib/models/workflow_execution"
"cloud.o-forge.io/core/oc-lib/tools"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
)
// kubeconfigChannels holds channels waiting for kubeconfig delivery (keyed by executionID).
var kubeconfigChannels sync.Map
// admiraltyConsidersPayload is the PB_CONSIDERS payload emitted after admiralty provisioning.
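// For illustration, a successful payload serialized with the tags below would
// look like this (values hypothetical):
//
//	{"origin_id":"wf-42","executions_id":"exec-7","peer_id":"peer-a","secret":"<base64 kubeconfig>"}
//
// On failure, secret is empty and error carries the provisioning message.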
type admiraltyConsidersPayload struct {
OriginID string `json:"origin_id"`
ExecutionsID string `json:"executions_id"`
// PeerID is the compute peer (SourcePeerID of the original ArgoKubeEvent).
// oc-monitord uses it to build a unique considers key per peer, avoiding
// broadcast collisions when multiple compute peers run in parallel.
PeerID string `json:"peer_id,omitempty"`
Secret string `json:"secret,omitempty"`
Error *string `json:"error,omitempty"`
}
// emitAdmiraltyConsiders publishes a PB_CONSIDERS back to OriginID with the result
// of the admiralty provisioning. secret is the base64-encoded kubeconfig; provErr is nil on success.
// When self is true, the origin is the local peer and the payload is emitted directly
// on CONSIDERS_EVENT instead of being routed through PROPALGATION_EVENT.
func emitAdmiraltyConsiders(executionsID, originID, peerID, secret string, provErr error, self bool) {
var errStr *string
if provErr != nil {
s := provErr.Error()
errStr = &s
}
payload, _ := json.Marshal(admiraltyConsidersPayload{
OriginID: originID,
ExecutionsID: executionsID,
PeerID: peerID,
Secret: secret,
Error: errStr,
})
if self {
go tools.NewNATSCaller().SetNATSPub(tools.CONSIDERS_EVENT, tools.NATSResponse{
FromApp: "oc-datacenter",
Datatype: tools.COMPUTE_RESOURCE,
Method: int(tools.CONSIDERS_EVENT),
Payload: payload,
})
return
}
b, _ := json.Marshal(&tools.PropalgationMessage{
DataType: tools.COMPUTE_RESOURCE.EnumIndex(),
Action: tools.PB_CONSIDERS,
Payload: payload,
})
go tools.NewNATSCaller().SetNATSPub(tools.PROPALGATION_EVENT, tools.NATSResponse{
FromApp: "oc-datacenter",
Datatype: -1,
Method: int(tools.PROPALGATION_EVENT),
Payload: b,
})
}
// AdmiraltySetter carries the execution context for an admiralty pairing.
type AdmiraltySetter struct {
ExecutionsID string // execution ID, used as the Kubernetes namespace
NodeName string // name of the virtual node created by Admiralty on the target cluster
}
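// NewAdmiraltySetter returns a setter bound to execIDS. As a minimal sketch of
// the intended pairing (illustration only; ctx, the peer IDs, images, and the
// received event are placeholders, nothing here is wired up):
//
//	setter := NewAdmiraltySetter(execID)
//	// on the compute (source) peer:
//	err := setter.InitializeAsSource(ctx, localPeerID, destPeerID, originID, false, images)
//	// on the scheduler (target) peer, once the KubeconfigEvent arrives over NATS:
//	setter.InitializeAsTarget(ctx, kubeconfigEvent, false)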
func NewAdmiraltySetter(execIDS string) *AdmiraltySetter {
return &AdmiraltySetter{
ExecutionsID: execIDS,
}
}
// InitializeAsSource is called on the peer that acts as the SOURCE cluster (compute provider).
// It creates the AdmiraltySource resource, generates a kubeconfig for the target peer,
// and publishes it on NATS so the target peer can complete its side of the setup.
func (s *AdmiraltySetter) InitializeAsSource(ctx context.Context, localPeerID string, destPeerID string, originID string, self bool, images []string) error {
logger := oclib.GetLogger()
// Local execution: no Admiralty resources needed — just emit PB_CONSIDERS.
if localPeerID == destPeerID {
emitAdmiraltyConsiders(s.ExecutionsID, originID, localPeerID, "", nil, true)
return nil
}
serv, err := tools.NewKubernetesService(conf.GetConfig().KubeHost+":"+conf.GetConfig().KubePort,
conf.GetConfig().KubeCA, conf.GetConfig().KubeCert, conf.GetConfig().KubeData)
if err != nil {
return errors.New("InitializeAsSource: failed to create service: " + err.Error())
}
// Create the AdmiraltySource resource on this cluster (inlined from CreateAdmiraltySource controller)
logger.Info().Msg("Creating AdmiraltySource ns-" + s.ExecutionsID)
_, err = serv.CreateAdmiraltySource(ctx, s.ExecutionsID)
if err != nil && !strings.Contains(err.Error(), "already exists") {
return errors.New("InitializeAsSource: failed to create service: " + err.Error())
}
// Generate a service-account token for the namespace (inlined from GetAdmiraltyKubeconfig controller)
token, err := serv.GenerateToken(ctx, s.ExecutionsID, 3600)
if err != nil {
return errors.New("InitializeAsSource: failed to generate token for ns-" + s.ExecutionsID + ": " + err.Error())
}
kubeconfig, err := buildHostKubeWithToken(token)
if err != nil {
return errors.New("InitializeAsSource: " + err.Error())
}
b, err := json.Marshal(kubeconfig)
if err != nil {
return errors.New("InitializeAsSource: failed to marshal kubeconfig: " + err.Error())
}
encodedKubeconfig := base64.StdEncoding.EncodeToString(b)
kube := models.KubeconfigEvent{
ExecutionsID: s.ExecutionsID,
Kubeconfig: encodedKubeconfig,
SourcePeerID: localPeerID,
DestPeerID: destPeerID,
OriginID: originID,
SourceExecutionsID: s.ExecutionsID,
Images: images,
}
// Publish the kubeconfig on NATS so the target peer can proceed
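// (SourceExecutionsID mirrors ExecutionsID here; it is forwarded so the target
// peer can name its PVC claims after the source execution, see provisionPVCsForTarget.)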
payload, err := json.Marshal(kube)
if err != nil {
return errors.New("InitializeAsSource: failed to marshal kubeconfig event: " + err.Error())
}
if b, err := json.Marshal(&tools.PropalgationMessage{
DataType: -1,
Action: tools.PB_ADMIRALTY_CONFIG,
Payload: payload,
}); err == nil {
go tools.NewNATSCaller().SetNATSPub(tools.PROPALGATION_EVENT, tools.NATSResponse{
FromApp: "oc-datacenter",
Datatype: tools.COMPUTE_RESOURCE,
User: "",
Method: int(tools.PROPALGATION_EVENT),
Payload: b,
})
}
logger.Info().Msg("InitializeAsSource: kubeconfig published for ns-" + s.ExecutionsID)
return nil
}
// InitializeAsTarget is called on the peer that acts as the TARGET cluster (scheduler),
// with the kubeconfig event published by the source peer via NATS. It creates the
// namespace, ServiceAccount, Role, RoleBinding, Secret, and AdmiraltyTarget, then
// polls until the virtual node appears.
// self must be true when the origin peer is the local peer (direct CONSIDERS_EVENT emission).
func (s *AdmiraltySetter) InitializeAsTarget(ctx context.Context, kubeconfigObj models.KubeconfigEvent, self bool) {
logger := oclib.GetLogger()
defer kubeconfigChannels.Delete(s.ExecutionsID)
logger.Info().Msg("InitializeAsTarget: waiting for kubeconfig from source peer ns-" + s.ExecutionsID)
kubeconfigData := kubeconfigObj.Kubeconfig
serv, err := tools.NewKubernetesService(conf.GetConfig().KubeHost+":"+conf.GetConfig().KubePort,
conf.GetConfig().KubeCA, conf.GetConfig().KubeCert, conf.GetConfig().KubeData)
if err != nil {
logger.Error().Msg("InitializeAsTarget: failed to create service: " + err.Error())
return
}
// 1. Create the namespace
logger.Info().Msg("InitializeAsTarget: creating Namespace " + s.ExecutionsID)
if err := serv.CreateNamespace(ctx, s.ExecutionsID); err != nil && !strings.Contains(err.Error(), "already exists") {
logger.Error().Msg("InitializeAsTarget: failed to create namespace: " + err.Error())
emitAdmiraltyConsiders(s.ExecutionsID, kubeconfigObj.OriginID, kubeconfigObj.SourcePeerID, "", err, self)
return
}
// 2. Create the ServiceAccount sa-{executionID}
logger.Info().Msg("InitializeAsTarget: creating ServiceAccount sa-" + s.ExecutionsID)
if err := serv.CreateServiceAccount(ctx, s.ExecutionsID); err != nil && !strings.Contains(err.Error(), "already exists") {
logger.Error().Msg("InitializeAsTarget: failed to create service account: " + err.Error())
emitAdmiraltyConsiders(s.ExecutionsID, kubeconfigObj.OriginID, kubeconfigObj.SourcePeerID, "", err, self)
return
}
// 3. Create the Role
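// CreateRole takes parallel slices, and the row-wise pairing below is an
// assumption about oc-lib's CreateRole contract: row i pairs apiGroups[i] with
// resources[i] and verbs[i]. Read that way, the role grants: leases
// (coordination.k8s.io) get/create/update, secrets get, pods patch.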
roleName := "role-" + s.ExecutionsID
logger.Info().Msg("InitializeAsTarget: creating Role " + roleName)
if err := serv.CreateRole(ctx, s.ExecutionsID, roleName,
[][]string{
{"coordination.k8s.io"},
{""},
{""}},
[][]string{
{"leases"},
{"secrets"},
{"pods"}},
[][]string{
{"get", "create", "update"},
{"get"},
{"patch"}},
); err != nil && !strings.Contains(err.Error(), "already exists") {
logger.Error().Msg("InitializeAsTarget: failed to create role: " + err.Error())
emitAdmiraltyConsiders(s.ExecutionsID, kubeconfigObj.OriginID, kubeconfigObj.SourcePeerID, "", err, self)
return
}
// 4. Create the RoleBinding
rbName := "rb-" + s.ExecutionsID
logger.Info().Msg("InitializeAsTarget: creating RoleBinding " + rbName)
if err := serv.CreateRoleBinding(ctx, s.ExecutionsID, rbName, roleName); err != nil && !strings.Contains(err.Error(), "already exists") {
logger.Error().Msg("InitializeAsTarget: failed to create role binding: " + err.Error())
emitAdmiraltyConsiders(s.ExecutionsID, kubeconfigObj.OriginID, kubeconfigObj.SourcePeerID, "", err, self)
return
}
// Create the Secret from the source peer's kubeconfig (inlined from CreateKubeSecret controller)
logger.Info().Msg("InitializeAsTarget: creating Secret ns-" + s.ExecutionsID)
if _, err := serv.CreateKubeconfigSecret(ctx, kubeconfigData, s.ExecutionsID, kubeconfigObj.SourcePeerID); err != nil {
logger.Error().Msg("InitializeAsTarget: failed to create kubeconfig secret: " + err.Error())
emitAdmiraltyConsiders(s.ExecutionsID, kubeconfigObj.OriginID, kubeconfigObj.SourcePeerID, "", err, self)
return
}
// Create the AdmiraltyTarget resource (inlined from CreateAdmiraltyTarget controller)
logger.Info().Msg("InitializeAsTarget: creating AdmiraltyTarget ns-" + s.ExecutionsID)
resp, err := serv.CreateAdmiraltyTarget(ctx, s.ExecutionsID, kubeconfigObj.SourcePeerID)
if err != nil || resp == nil {
logger.Error().Msg(fmt.Sprintf("InitializeAsTarget: failed to create admiralty target: %v", err))
if err == nil {
err = fmt.Errorf("CreateAdmiraltyTarget returned nil response")
}
emitAdmiraltyConsiders(s.ExecutionsID, kubeconfigObj.OriginID, kubeconfigObj.SourcePeerID, "", err, self)
return
}
// 5. Provision PVCs in the target namespace so Admiralty shadow pods can mount them.
// The claim names must match what oc-monitord generates: {storageName}-{sourceExecutionsID}.
if kubeconfigObj.SourceExecutionsID != "" {
logger.Info().Msg("InitializeAsTarget: provisioning PVCs for source exec " + kubeconfigObj.SourceExecutionsID)
provisionPVCsForTarget(ctx, s.ExecutionsID, kubeconfigObj.SourceExecutionsID, kubeconfigObj.SourcePeerID)
}
// Poll until the virtual node appears (inlined from GetNodeReady controller)
logger.Info().Msg("InitializeAsTarget: waiting for virtual node ns-" + s.ExecutionsID)
s.waitForNode(ctx, serv, kubeconfigObj.SourcePeerID)
emitAdmiraltyConsiders(s.ExecutionsID, kubeconfigObj.OriginID, kubeconfigObj.SourcePeerID, kubeconfigData, nil, self)
}
// provisionPVCsForTarget creates PVCs in the Admiralty target namespace for all local
// storages booked under sourceExecutionsID. The claim names use sourceExecutionsID as
// suffix so they match what oc-monitord generates in the workflow spec.
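// For example, assuming storage.ClaimName joins its arguments with a hyphen
// (as the {storageName}-{sourceExecutionsID} convention above suggests), a
// storage named "shared-data" under source execution "abc123" yields the
// claim "shared-data-abc123" on both clusters.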
func provisionPVCsForTarget(ctx context.Context, targetNS string, sourceExecutionsID string, peerID string) {
logger := oclib.GetLogger()
res := oclib.NewRequest(oclib.LibDataEnum(oclib.BOOKING), "", peerID, []string{}, nil).
Search(&dbs.Filters{
And: map[string][]dbs.Filter{
"executions_id": {{Operator: dbs.EQUAL.String(), Value: sourceExecutionsID}},
"resource_type": {{Operator: dbs.EQUAL.String(), Value: tools.LIVE_STORAGE.EnumIndex()}},
},
}, "", false)
if res.Err != "" || len(res.Data) == 0 {
return
}
for _, dbo := range res.Data {
b, ok := dbo.(*bookingmodel.Booking)
if !ok {
continue
}
storageName := storage.ResolveStorageName(b.ResourceID, peerID)
if storageName == "" {
continue
}
event := storage.PVCProvisionEvent{
ExecutionsID: targetNS,
StorageID: b.ResourceID,
StorageName: storageName,
SourcePeerID: peerID,
DestPeerID: peerID,
OriginID: peerID,
}
// Use sourceExecutionsID as claim name suffix so it matches oc-monitord's claimName.
setter := storage.NewPVCSetterWithClaimSuffix(b.ResourceID, sourceExecutionsID)
logger.Info().Msgf("InitializeAsTarget: provisioning PVC %s in ns %s", storage.ClaimName(storageName, sourceExecutionsID), targetNS)
setter.InitializeAsSource(ctx, event, true)
}
}
// waitForNode polls GetOneNode until the Admiralty virtual node appears on this cluster.
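// It sleeps 10 seconds before each of its 5 attempts (up to ~50s total) and
// does not report failure to its caller: InitializeAsTarget still emits a
// CONSIDERS payload with a nil error afterwards, so a node that never appears
// surfaces only in the logs.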
func (s *AdmiraltySetter) waitForNode(ctx context.Context, serv *tools.KubernetesService, sourcePeerID string) {
logger := oclib.GetLogger()
for i := range 5 {
time.Sleep(10 * time.Second)
node, err := serv.GetOneNode(ctx, s.ExecutionsID, sourcePeerID)
if err == nil && node != nil {
s.NodeName = node.Name
logger.Info().Msg("waitForNode: node ready: " + s.NodeName)
return
}
if i == 4 {
logger.Error().Msg("waitForNode: node never appeared for ns-" + s.ExecutionsID)
return
}
logger.Info().Msg("waitForNode: node not ready yet, retrying...")
}
}
// TeardownAsTarget destroys all Admiralty resources created by InitializeAsTarget on the
// target (scheduler) cluster: the AdmiraltyTarget CRD, the ServiceAccount, the Role,
// the RoleBinding, and the namespace (namespace deletion cascades the rest).
func (s *AdmiraltySetter) TeardownAsTarget(ctx context.Context, originID string) {
logger := oclib.GetLogger()
serv, err := tools.NewKubernetesService(conf.GetConfig().KubeHost+":"+conf.GetConfig().KubePort,
conf.GetConfig().KubeCA, conf.GetConfig().KubeCert, conf.GetConfig().KubeData)
if err != nil {
logger.Error().Msg("TeardownAsTarget: failed to create k8s service: " + err.Error())
return
}
if err := serv.DeleteNamespace(ctx, s.ExecutionsID, func() {
logger.Info().Msg("TeardownAsTarget: namespace " + s.ExecutionsID + " deleted")
monitor.StreamRegistry.Register(s.ExecutionsID)
}); err != nil {
logger.Error().Msg("TeardownAsTarget: " + err.Error())
return
}
}
// TeardownAsSource destroys all Admiralty resources created by InitializeAsSource on the
// source (compute) cluster: the AdmiraltySource CRD, the ServiceAccount, and the namespace.
// The namespace deletion cascades the Role and RoleBinding.
func (s *AdmiraltySetter) TeardownAsSource(ctx context.Context) {
logger := oclib.GetLogger()
host := conf.GetConfig().KubeHost + ":" + conf.GetConfig().KubePort
ca := conf.GetConfig().KubeCA
cert := conf.GetConfig().KubeCert
data := conf.GetConfig().KubeData
// Delete the AdmiraltySource CRD via dynamic client
gvrSources := schema.GroupVersionResource{
Group: "multicluster.admiralty.io", Version: "v1alpha1", Resource: "sources",
}
if dyn, err := tools.NewDynamicClient(host, ca, cert, data); err != nil {
logger.Error().Msg("TeardownAsSource: failed to create dynamic client: " + err.Error())
} else if err := dyn.Resource(gvrSources).Namespace(s.ExecutionsID).Delete(
ctx, "source-"+s.ExecutionsID, metav1.DeleteOptions{},
); err != nil {
logger.Error().Msg("TeardownAsSource: failed to delete AdmiraltySource: " + err.Error())
}
// Delete the namespace (cascades SA, Role, RoleBinding)
serv, err := tools.NewKubernetesService(host, ca, cert, data)
if err != nil {
logger.Error().Msg("TeardownAsSource: failed to create k8s service: " + err.Error())
return
}
if err := serv.Set.CoreV1().Namespaces().Delete(ctx, s.ExecutionsID, metav1.DeleteOptions{}); err != nil {
logger.Error().Msg("TeardownAsSource: failed to delete namespace: " + err.Error())
return
}
logger.Info().Msg("TeardownAsSource: namespace " + s.ExecutionsID + " deleted")
}
// buildHostKubeWithToken builds a kubeconfig pointing to this peer's cluster,
// authenticated with the provided service-account token.
func buildHostKubeWithToken(token string) (*models.KubeConfigValue, error) {
if len(token) == 0 {
return nil, fmt.Errorf("buildHostKubeWithToken: empty token")
}
apiHost := conf.GetConfig().KubeExternalHost
if apiHost == "" {
apiHost = conf.GetConfig().KubeHost
}
encodedCA := conf.GetConfig().KubeCA
return &models.KubeConfigValue{
APIVersion: "v1",
CurrentContext: "default",
Kind: "Config",
Preferences: struct{}{},
Clusters: []models.KubeconfigNamedCluster{{
Name: "default",
Cluster: models.KubeconfigCluster{
Server: "https://" + apiHost + ":6443",
CertificateAuthorityData: encodedCA,
},
}},
Contexts: []models.KubeconfigNamedContext{{
Name: "default",
Context: models.KubeconfigContext{Cluster: "default", User: "default"},
}},
Users: []models.KubeconfigUser{{
Name: "default",
User: models.KubeconfigUserKeyPair{Token: token},
}},
}, nil
}
// TeardownIfRemote triggers Admiralty TeardownAsTarget only when at
// least one compute booking for the execution is on a remote peer.
// Local executions do not involve Admiralty.
func (s *AdmiraltySetter) TeardownIfRemote(exec *workflow_execution.WorkflowExecution, selfPeerID string) {
logger := oclib.GetLogger()
res := oclib.NewRequest(oclib.LibDataEnum(oclib.BOOKING), "", selfPeerID, []string{}, nil).
Search(&dbs.Filters{
And: map[string][]dbs.Filter{
"executions_id": {{Operator: dbs.EQUAL.String(), Value: exec.ExecutionsID}},
"resource_type": {{Operator: dbs.EQUAL.String(), Value: tools.COMPUTE_RESOURCE.EnumIndex()}},
},
}, "", false)
if res.Err != "" || len(res.Data) == 0 {
return
}
for _, dbo := range res.Data {
b, ok := dbo.(*bookingmodel.Booking)
if !ok {
continue
}
if b.DestPeerID != selfPeerID {
logger.Info().Msgf("InfraTeardown: Admiralty teardown exec=%s (remote peer=%s)",
exec.ExecutionsID, b.DestPeerID)
s.TeardownAsTarget(context.Background(), selfPeerID)
return // one teardown per execution is enough
}
}
}

View File

@@ -0,0 +1,39 @@
package infrastructure
import (
oclib "cloud.o-forge.io/core/oc-lib"
"cloud.o-forge.io/core/oc-lib/dbs"
"cloud.o-forge.io/core/oc-lib/models/allowed_image"
)
// defaultAllowedImages is the list of lightweight utility images allowed to
// persist on every peer without any operator action.
//
// These entries are flagged IsDefault:true and cannot be deleted through the
// API; they are under the exclusive control of platform code.
var defaultAllowedImages = []allowed_image.AllowedImage{
{Image: "natsio/nats-box", TagConstraint: "", IsDefault: true}, // outil NATS utilisé par les native tools
{Image: "library/alpine", TagConstraint: "", IsDefault: true}, // base image légère standard
{Image: "library/busybox", TagConstraint: "", IsDefault: true}, // utilitaire shell minimal
}
// BootstrapAllowedImages inserts the default images when they are missing from
// the database. Existing entries are left untouched.
// Call once at startup, before beego.Run().
func BootstrapAllowedImages() {
req := oclib.NewRequestAdmin(oclib.LibDataEnum(oclib.ALLOWED_IMAGE), nil)
for _, img := range defaultAllowedImages {
// Check whether an entry with this image name already exists.
existing := req.Search(&dbs.Filters{
And: map[string][]dbs.Filter{
"image": {{Operator: dbs.EQUAL.String(), Value: img.Image}},
},
}, "", false)
if existing.Err != "" || len(existing.Data) > 0 {
continue // already present, or the search failed: skip
}
local := img // copy to avoid capturing the loop variable
req.StoreOne(local.Serialize(&local))
}
}

View File

@@ -0,0 +1,112 @@
package infrastructure
import (
"encoding/json"
"fmt"
"sync"
"time"
oclib "cloud.o-forge.io/core/oc-lib"
"cloud.o-forge.io/core/oc-lib/dbs"
bookingmodel "cloud.o-forge.io/core/oc-lib/models/booking"
"cloud.o-forge.io/core/oc-lib/models/common/enum"
"cloud.o-forge.io/core/oc-lib/tools"
"go.mongodb.org/mongo-driver/bson/primitive"
)
// processedBookings tracks booking IDs already handled this process lifetime.
var processedBookings sync.Map
// ClosingStates is the set of terminal booking states.
var ClosingStates = map[enum.BookingStatus]bool{
enum.FAILURE: true,
enum.SUCCESS: true,
enum.FORGOTTEN: true,
enum.CANCELLED: true,
}
// WatchBookings is a safety-net fallback for when oc-monitord fails to launch.
// It detects bookings that are past expected_start_date by at least 1 minute and
// are still in a non-terminal state. Instead of writing to the database directly,
// it emits WORKFLOW_STEP_DONE_EVENT with State=FAILURE on NATS so that oc-scheduler
// handles the state transition — keeping a single source of truth for booking state.
//
// Must be launched in a goroutine from main.
func WatchBookings() {
logger := oclib.GetLogger()
logger.Info().Msg("BookingWatchdog: started")
ticker := time.NewTicker(time.Minute)
defer ticker.Stop()
for range ticker.C {
if err := scanStaleBookings(); err != nil {
logger.Error().Msg("BookingWatchdog: " + err.Error())
}
}
}
// scanStaleBookings queries all bookings whose ExpectedStartDate passed more than
// 1 minute ago. Non-terminal ones get a WORKFLOW_STEP_DONE_EVENT FAILURE emitted
// on NATS so oc-scheduler closes them.
func scanStaleBookings() error {
myself, err := oclib.GetMySelf()
if err != nil {
return fmt.Errorf("could not resolve local peer: %w", err)
}
peerID := myself.GetID()
deadline := time.Now().UTC().Add(-time.Minute)
res := oclib.NewRequest(oclib.LibDataEnum(oclib.BOOKING), "", peerID, []string{}, nil).
Search(&dbs.Filters{
And: map[string][]dbs.Filter{
"expected_start_date": {{
Operator: dbs.LTE.String(),
Value: primitive.NewDateTimeFromTime(deadline),
}},
},
}, "", false)
if res.Err != "" {
return fmt.Errorf("stale booking search failed: %s", res.Err)
}
for _, dbo := range res.Data {
b, ok := dbo.(*bookingmodel.Booking)
if !ok {
continue
}
go emitWatchdogFailure(b)
}
return nil
}
// emitWatchdogFailure publishes a WORKFLOW_STEP_DONE_EVENT FAILURE for a stale
// booking. oc-scheduler is the single authority for booking state transitions.
func emitWatchdogFailure(b *bookingmodel.Booking) {
logger := oclib.GetLogger()
if _, done := processedBookings.Load(b.GetID()); done {
return
}
if ClosingStates[b.State] {
processedBookings.Store(b.GetID(), struct{}{})
return
}
now := time.Now().UTC()
payload, err := json.Marshal(tools.WorkflowLifecycleEvent{
BookingID: b.GetID(),
State: enum.FAILURE.EnumIndex(),
RealEnd: &now,
})
if err != nil {
return
}
tools.NewNATSCaller().SetNATSPub(tools.WORKFLOW_STEP_DONE_EVENT, tools.NATSResponse{
FromApp: "oc-datacenter",
Method: int(tools.WORKFLOW_STEP_DONE_EVENT),
Payload: payload,
})
logger.Info().Msgf("BookingWatchdog: booking %s stale → emitting FAILURE", b.GetID())
processedBookings.Store(b.GetID(), struct{}{})
}

View File

@@ -1,28 +0,0 @@
package infrastructure
import (
"context"
"errors"
"oc-datacenter/conf"
)
type Infrastructure interface {
CreateNamespace(ctx context.Context, ns string) error
DeleteNamespace(ctx context.Context, ns string) error
GetToken(ctx context.Context, ns string, duration int) (string, error)
CreateServiceAccount(ctx context.Context, ns string) error
CreateRoleBinding(ctx context.Context, ns string, roleBinding string, role string) error
CreateRole(ctx context.Context, ns string, role string, groups [][]string, resources [][]string, verbs [][]string) error
}
var _service = map[string]func() (Infrastructure, error){
"kubernetes": NewKubernetesService,
}
func NewService() (Infrastructure, error) {
service, ok := _service[conf.GetConfig().Mode]
if !ok {
return nil, errors.New("service not found")
}
return service()
}

View File

@@ -1,158 +0,0 @@
package infrastructure
import (
"context"
"errors"
"fmt"
"oc-datacenter/conf"
authv1 "k8s.io/api/authentication/v1"
v1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
)
type KubernetesService struct {
Set *kubernetes.Clientset
}
func NewKubernetesService() (Infrastructure, error) {
config := &rest.Config{
Host: conf.GetConfig().KubeHost + ":" + conf.GetConfig().KubePort,
TLSClientConfig: rest.TLSClientConfig{
CAData: []byte(conf.GetConfig().KubeCA),
CertData: []byte(conf.GetConfig().KubeCert),
KeyData: []byte(conf.GetConfig().KubeData),
},
}
// Create clientset
clientset, err := kubernetes.NewForConfig(config)
fmt.Println("NewForConfig", clientset, err)
if err != nil {
return nil, errors.New("Error creating Kubernetes client: " + err.Error())
}
if clientset == nil {
return nil, errors.New("Error creating Kubernetes client: clientset is nil")
}
return &KubernetesService{
Set: clientset,
}, nil
}
func (k *KubernetesService) CreateNamespace(ctx context.Context, ns string) error {
// Define the namespace
namespace := &v1.Namespace{
ObjectMeta: metav1.ObjectMeta{
Name: ns,
},
}
// Create the namespace
fmt.Println("Creating namespace...", k.Set)
if _, err := k.Set.CoreV1().Namespaces().Create(ctx, namespace, metav1.CreateOptions{}); err != nil {
return errors.New("Error creating namespace: " + err.Error())
}
fmt.Println("Namespace created successfully!")
return nil
}
func (k *KubernetesService) CreateServiceAccount(ctx context.Context, ns string) error {
// Create the ServiceAccount object
serviceAccount := &v1.ServiceAccount{
ObjectMeta: metav1.ObjectMeta{
Name: "sa-" + ns,
Namespace: ns,
},
}
// Create the ServiceAccount in the specified namespace
_, err := k.Set.CoreV1().ServiceAccounts(ns).Create(ctx, serviceAccount, metav1.CreateOptions{})
if err != nil {
return errors.New("Failed to create ServiceAccount: " + err.Error())
}
return nil
}
func (k *KubernetesService) CreateRole(ctx context.Context, ns string, role string, groups [][]string, resources [][]string, verbs [][]string) error {
// Create the Role object
if len(groups) != len(resources) || len(resources) != len(verbs) {
return errors.New("Invalid input: groups, resources, and verbs must have the same length")
}
rules := []rbacv1.PolicyRule{}
for i, group := range groups {
rules = append(rules, rbacv1.PolicyRule{
APIGroups: group,
Resources: resources[i],
Verbs: verbs[i],
})
}
r := &rbacv1.Role{
ObjectMeta: metav1.ObjectMeta{
Name: role,
Namespace: ns,
},
Rules: rules,
}
// Create the Role in the specified namespace
_, err := k.Set.RbacV1().Roles(ns).Create(ctx, r, metav1.CreateOptions{})
if err != nil {
return errors.New("Failed to create Role: " + err.Error())
}
return nil
}
func (k *KubernetesService) CreateRoleBinding(ctx context.Context, ns string, roleBinding string, role string) error {
// Create the RoleBinding object
rb := &rbacv1.RoleBinding{
ObjectMeta: metav1.ObjectMeta{
Name: roleBinding,
Namespace: ns,
},
Subjects: []rbacv1.Subject{
{
Kind: "ServiceAccount",
Name: "sa-" + ns,
Namespace: ns,
},
},
RoleRef: rbacv1.RoleRef{
Kind: "Role",
Name: role,
APIGroup: "rbac.authorization.k8s.io",
},
}
// Create the RoleBinding in the specified namespace
_, err := k.Set.RbacV1().RoleBindings(ns).Create(ctx, rb, metav1.CreateOptions{})
if err != nil {
return errors.New("Failed to create RoleBinding: " + err.Error())
}
return nil
}
func (k *KubernetesService) DeleteNamespace(ctx context.Context, ns string) error {
// Delete the namespace
if err := k.Set.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{}); err != nil {
return errors.New("Error deleting namespace: " + err.Error())
}
fmt.Println("Namespace deleted successfully!")
return nil
}
func (k *KubernetesService) GetToken(ctx context.Context, ns string, duration int) (string, error) {
// Define TokenRequest (valid for 1 hour)
d := int64(duration)
tokenRequest := &authv1.TokenRequest{
Spec: authv1.TokenRequestSpec{
ExpirationSeconds: &d, // 1 hour validity
},
}
// Generate the token
token, err := k.Set.CoreV1().
ServiceAccounts(ns).
CreateToken(ctx, "sa-"+ns, tokenRequest, metav1.CreateOptions{})
if err != nil {
return "", errors.New("Failed to create token for ServiceAccount: " + err.Error())
}
return token.Status.Token, nil
}

View File

@@ -0,0 +1,323 @@
package kubernetes
import (
"context"
"encoding/base64"
"fmt"
"strings"
"sync"
"time"
"oc-datacenter/conf"
oclib "cloud.o-forge.io/core/oc-lib"
"cloud.o-forge.io/core/oc-lib/dbs"
"cloud.o-forge.io/core/oc-lib/models/allowed_image"
"cloud.o-forge.io/core/oc-lib/tools"
appsv1 "k8s.io/api/apps/v1"
batchv1 "k8s.io/api/batch/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
)
type KubernetesService struct {
ExecutionsID string
}
func NewKubernetesService(executionsID string) *KubernetesService {
return &KubernetesService{
ExecutionsID: executionsID,
}
}
// prepullRegistry maps executionsID → images pre-pulled for this run.
// Used by CleanupImages after WORKFLOW_DONE_EVENT.
var prepullRegistry sync.Map
// RunPrepull creates a k8s Job in the executionsID namespace that pre-pulls each
// image in the list (imagePullPolicy: IfNotPresent). It blocks until the Job
// completes or times out (5 min), and records the images for post-exec cleanup.
func (s *KubernetesService) RunPrepull(ctx context.Context, images []string) error {
logger := oclib.GetLogger()
// Always record the images for cleanup, even if the pull fails.
prepullRegistry.Store(s.ExecutionsID, images)
if len(images) == 0 {
return nil
}
cs, err := s.newClientset()
if err != nil {
return fmt.Errorf("RunPrepull: failed to build clientset: %w", err)
}
// One container per image; they all run in parallel inside the same pod.
containers := make([]corev1.Container, 0, len(images))
for i, img := range images {
containers = append(containers, corev1.Container{
Name: fmt.Sprintf("prepull-%d", i),
Image: img,
ImagePullPolicy: corev1.PullIfNotPresent,
Command: []string{"true"},
})
}
var backoff int32 = 0
jobName := "prepull-" + s.ExecutionsID
job := &batchv1.Job{
ObjectMeta: metav1.ObjectMeta{
Name: jobName,
Namespace: s.ExecutionsID,
},
Spec: batchv1.JobSpec{
BackoffLimit: &backoff,
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
RestartPolicy: corev1.RestartPolicyNever,
Containers: containers,
},
},
},
}
if _, err := cs.BatchV1().Jobs(s.ExecutionsID).Create(ctx, job, metav1.CreateOptions{}); err != nil {
return fmt.Errorf("RunPrepull: failed to create job: %w", err)
}
timeout := int64(300) // 5 min, consistent with waitForConsiders
watcher, err := cs.BatchV1().Jobs(s.ExecutionsID).Watch(ctx, metav1.ListOptions{
FieldSelector: "metadata.name=" + jobName,
TimeoutSeconds: &timeout,
})
if err != nil {
return fmt.Errorf("RunPrepull: failed to watch job: %w", err)
}
defer watcher.Stop()
for event := range watcher.ResultChan() {
j, ok := event.Object.(*batchv1.Job)
if !ok {
continue
}
for _, cond := range j.Status.Conditions {
if cond.Type == batchv1.JobComplete && cond.Status == corev1.ConditionTrue {
logger.Info().Msgf("RunPrepull: job %s completed for ns %s", jobName, s.ExecutionsID)
return nil
}
if cond.Type == batchv1.JobFailed && cond.Status == corev1.ConditionTrue {
return fmt.Errorf("RunPrepull: job %s failed for ns %s", jobName, s.ExecutionsID)
}
}
}
return fmt.Errorf("RunPrepull: timeout waiting for job %s", jobName)
}
// CleanupImages retrieves the images pre-pulled for this run, keeps only those
// absent from AllowedImages, and schedules their removal on every node of the
// cluster through a privileged DaemonSet (crictl rmi).
// Called from TeardownForExecution on WORKFLOW_DONE_EVENT.
func (s *KubernetesService) CleanupImages(ctx context.Context) {
logger := oclib.GetLogger()
raw, ok := prepullRegistry.LoadAndDelete(s.ExecutionsID)
if !ok {
return
}
images := raw.([]string)
if len(images) == 0 {
return
}
toRemove := s.filterNonAllowed(images)
if len(toRemove) == 0 {
logger.Info().Msgf("CleanupImages: all images for %s are in AllowedImages, keeping", s.ExecutionsID)
return
}
logger.Info().Msgf("CleanupImages: scheduling removal of %d image(s) for %s: %v",
len(toRemove), s.ExecutionsID, toRemove)
go s.scheduleImageRemoval(ctx, toRemove)
}
// filterNonAllowed returns the images that are not present in AllowedImages.
func (s *KubernetesService) filterNonAllowed(images []string) []string {
var toRemove []string
for _, img := range images {
registry, name, tag := s.parseImage(img)
res := oclib.NewRequestAdmin(oclib.LibDataEnum(oclib.ALLOWED_IMAGE), nil).Search(
&dbs.Filters{
And: map[string][]dbs.Filter{
"image": {{Operator: dbs.EQUAL.String(), Value: name}},
},
}, "", false)
if len(res.Data) == 0 {
toRemove = append(toRemove, img)
continue
}
allowed := false
for _, d := range res.Data {
a, ok := d.(*allowed_image.AllowedImage)
if !ok {
continue
}
if a.Registry != "" && a.Registry != registry {
continue
}
if s.matchesTagConstraint(a.TagConstraint, tag) {
allowed = true
break
}
}
if !allowed {
toRemove = append(toRemove, img)
}
}
return toRemove
}
// scheduleImageRemoval creates a privileged DaemonSet on every node of the cluster
// that runs "crictl rmi" for each image to remove, then deletes the DaemonSet.
func (s *KubernetesService) scheduleImageRemoval(ctx context.Context, images []string) {
logger := oclib.GetLogger()
cs, err := s.newClientset()
if err != nil {
logger.Error().Msgf("scheduleImageRemoval: failed to build clientset: %v", err)
return
}
// Shell command: crictl rmi image1 image2 ... || true (best-effort)
args := strings.Join(images, " ")
cmd := fmt.Sprintf("crictl rmi %s || true", args)
privileged := true
dsName := "oc-cleanup-" + s.ExecutionsID
ds := &appsv1.DaemonSet{
ObjectMeta: metav1.ObjectMeta{
Name: dsName,
Namespace: "default",
Labels: map[string]string{"app": dsName},
},
Spec: appsv1.DaemonSetSpec{
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{"app": dsName},
},
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{"app": dsName},
},
Spec: corev1.PodSpec{
// Tolerate every taint so the pod reaches all nodes.
Tolerations: []corev1.Toleration{
{Operator: corev1.TolerationOpExists},
},
HostPID: true,
Containers: []corev1.Container{{
Name: "cleanup",
Image: "alpine:3",
// nsenter enters the host's mount namespace (PID 1) to reach the
// crictl binary installed on the node.
Command: []string{"sh", "-c",
"nsenter -t 1 -m -u -i -n -- sh -c '" + cmd + "'"},
SecurityContext: &corev1.SecurityContext{
Privileged: &privileged,
},
}},
},
},
},
}
if _, err := cs.AppsV1().DaemonSets("default").Create(ctx, ds, metav1.CreateOptions{}); err != nil {
logger.Error().Msgf("scheduleImageRemoval: failed to create DaemonSet: %v", err)
return
}
// Give the DaemonSet time to run on every node.
time.Sleep(30 * time.Second)
if err := cs.AppsV1().DaemonSets("default").Delete(ctx, dsName, metav1.DeleteOptions{}); err != nil {
logger.Error().Msgf("scheduleImageRemoval: failed to delete DaemonSet: %v", err)
}
logger.Info().Msgf("scheduleImageRemoval: completed for %s", s.ExecutionsID)
}
// parseImage splits "registry/name:tag" into its three components. The
// registry is empty when no hostname-like component is detected.
// e.g. "ghcr.io/org/app:1.2" → ("ghcr.io", "org/app", "1.2").
func (s *KubernetesService) parseImage(image string) (registry, name, tag string) {
nameWithRegistry := image
tag = "latest"
// Treat ":" as the tag separator only when it appears after the last "/",
// so registry ports ("localhost:5000/app") are not mistaken for tags.
if idx := strings.LastIndex(image, ":"); idx > strings.LastIndex(image, "/") {
nameWithRegistry = image[:idx]
tag = image[idx+1:]
}
slashIdx := strings.Index(nameWithRegistry, "/")
if slashIdx == -1 {
return "", nameWithRegistry, tag
}
prefix := nameWithRegistry[:slashIdx]
// A "." or ":" in the first component, or "localhost", marks a registry hostname.
if strings.ContainsAny(prefix, ".:") || prefix == "localhost" {
return prefix, nameWithRegistry[slashIdx+1:], tag
}
return "", nameWithRegistry, tag
}
// matchesTagConstraint reports whether tag satisfies the constraint.
// Empty = every version. Supports exact matches and suffix globs ("3.*").
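// e.g. matchesTagConstraint("3.*", "3.19") == true, matchesTagConstraint("3.*", "4.0") == false.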
func (s *KubernetesService) matchesTagConstraint(constraint, tag string) bool {
if constraint == "" {
return true
}
if strings.HasSuffix(constraint, "*") {
return strings.HasPrefix(tag, strings.TrimSuffix(constraint, "*"))
}
return constraint == tag
}
// newClientset builds a k8s client from the base64-encoded credentials in conf.
func (s *KubernetesService) newClientset() (*kubernetes.Clientset, error) {
caData, err := base64.StdEncoding.DecodeString(conf.GetConfig().KubeCA)
if err != nil {
return nil, fmt.Errorf("newClientset: invalid KubeCA: %w", err)
}
certData, err := base64.StdEncoding.DecodeString(conf.GetConfig().KubeCert)
if err != nil {
return nil, fmt.Errorf("newClientset: invalid KubeCert: %w", err)
}
keyData, err := base64.StdEncoding.DecodeString(conf.GetConfig().KubeData)
if err != nil {
return nil, fmt.Errorf("newClientset: invalid KubeData: %w", err)
}
cfg := &rest.Config{
Host: "https://" + conf.GetConfig().KubeHost + ":" + conf.GetConfig().KubePort,
TLSClientConfig: rest.TLSClientConfig{
CAData: caData,
CertData: certData,
KeyData: keyData,
},
}
return kubernetes.NewForConfig(cfg)
}
func (s *KubernetesService) CreateNamespace() error {
logger := oclib.GetLogger()
serv, err := tools.NewKubernetesService(
conf.GetConfig().KubeHost+":"+conf.GetConfig().KubePort, conf.GetConfig().KubeCA,
conf.GetConfig().KubeCert, conf.GetConfig().KubeData)
if err != nil {
logger.Error().Msg("CreateNamespace: failed to init k8s service: " + err.Error())
return err
}
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
return serv.ProvisionExecutionNamespace(ctx, s.ExecutionsID)
}

View File

@@ -0,0 +1,72 @@
package models
// KubeConfigValue is a struct used to create a kubectl configuration YAML file.
type KubeConfigValue struct {
APIVersion string `yaml:"apiVersion" json:"apiVersion"`
Kind string `yaml:"kind" json:"kind"`
Clusters []KubeconfigNamedCluster `yaml:"clusters" json:"clusters"`
Users []KubeconfigUser `yaml:"users" json:"users"`
Contexts []KubeconfigNamedContext `yaml:"contexts" json:"contexts"`
CurrentContext string `yaml:"current-context" json:"current-context"`
Preferences struct{} `yaml:"preferences" json:"preferences"`
}
// KubeconfigUser is a struct used to create a kubectl configuration YAML file
type KubeconfigUser struct {
Name string `yaml:"name" json:"name"`
User KubeconfigUserKeyPair `yaml:"user" json:"user"`
}
// KubeconfigUserKeyPair is a struct used to create a kubectl configuration YAML file
type KubeconfigUserKeyPair struct {
Token string `yaml:"token" json:"token"`
}
// KubeconfigAuthProvider is a struct used to create a kubectl authentication provider
type KubeconfigAuthProvider struct {
Name string `yaml:"name" json:"name"`
Config map[string]string `yaml:"config" json:"config"`
}
// KubeconfigNamedCluster is a struct used to create a kubectl configuration YAML file
type KubeconfigNamedCluster struct {
Name string `yaml:"name" json:"name"`
Cluster KubeconfigCluster `yaml:"cluster" json:"cluster"`
}
// KubeconfigCluster is a struct used to create a kubectl configuration YAML file
type KubeconfigCluster struct {
Server string `yaml:"server" json:"server"`
CertificateAuthorityData string `yaml:"certificate-authority-data" json:"certificate-authority-data"`
CertificateAuthority string `yaml:"certificate-authority" json:"certificate-authority"`
}
// KubeconfigNamedContext is a struct used to create a kubectl configuration YAML file
type KubeconfigNamedContext struct {
Name string `yaml:"name" json:"name"`
Context KubeconfigContext `yaml:"context" json:"context"`
}
// KubeconfigContext is a struct used to create a kubectl configuration YAML file
type KubeconfigContext struct {
Cluster string `yaml:"cluster" json:"cluster"`
Namespace string `yaml:"namespace,omitempty" json:"namespace,omitempty"`
User string `yaml:"user" json:"user"`
}
// KubeconfigEvent is the NATS payload used to transfer the kubeconfig from the source peer to the target peer.
type KubeconfigEvent struct {
DestPeerID string `json:"dest_peer_id"`
ExecutionsID string `json:"executions_id"`
Kubeconfig string `json:"kubeconfig"`
SourcePeerID string `json:"source_peer_id"`
// OriginID is the peer that initiated the provisioning request.
// The PB_CONSIDERS response is routed back to this peer.
OriginID string `json:"origin_id"`
// SourceExecutionsID is the execution namespace on the source cluster.
// Used by the target to provision PVCs with the correct claim name.
SourceExecutionsID string `json:"source_executions_id,omitempty"`
// Images is the list of container images to pre-pull on the compute peer
// before the workflow starts.
Images []string `json:"images,omitempty"`
}

View File

@@ -0,0 +1,359 @@
package kubernetes
import (
"context"
"fmt"
"regexp"
"strings"
"time"
"oc-datacenter/conf"
"oc-datacenter/infrastructure"
"oc-datacenter/infrastructure/admiralty"
"oc-datacenter/infrastructure/storage"
oclib "cloud.o-forge.io/core/oc-lib"
"cloud.o-forge.io/core/oc-lib/dbs"
bookingmodel "cloud.o-forge.io/core/oc-lib/models/booking"
"cloud.o-forge.io/core/oc-lib/models/workflow_execution"
"cloud.o-forge.io/core/oc-lib/tools"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// uuidNsPattern matches Kubernetes namespace names that are execution UUIDs.
var uuidNsPattern = regexp.MustCompile(`^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$`)
// Watch is a safety-net watchdog that periodically scans Kubernetes for
// execution namespaces whose WorkflowExecution has reached a terminal state
// but whose infra was never torn down (e.g. because WORKFLOW_DONE_EVENT was
// missed due to oc-monitord or oc-datacenter crash/restart).
//
// Must be launched in a goroutine from main.
func (s *KubernetesService) Watch() {
logger := oclib.GetLogger()
logger.Info().Msg("InfraWatchdog: started")
ticker := time.NewTicker(5 * time.Minute)
defer ticker.Stop()
for range ticker.C {
if err := s.scanOrphaned(); err != nil {
logger.Error().Msg("InfraWatchdog: " + err.Error())
}
if err := s.scanOrphanedMinio(); err != nil {
logger.Error().Msg("InfraWatchdog(minio): " + err.Error())
}
if err := s.scanOrphanedAdmiraltyNodes(); err != nil {
logger.Error().Msg("InfraWatchdog(admiralty-nodes): " + err.Error())
}
if err := s.scanOrphanedPVC(); err != nil {
logger.Error().Msg("InfraWatchdog(pvc): " + err.Error())
}
}
}
// scanOrphaned lists all UUID-named Kubernetes namespaces, looks up their
// WorkflowExecution in the DB, and triggers teardown for any that are in a
// terminal state. Namespaces already in Terminating phase are skipped.
func (s *KubernetesService) scanOrphaned() error {
logger := oclib.GetLogger()
serv, err := tools.NewKubernetesService(
conf.GetConfig().KubeHost+":"+conf.GetConfig().KubePort,
conf.GetConfig().KubeCA,
conf.GetConfig().KubeCert,
conf.GetConfig().KubeData,
)
if err != nil {
return fmt.Errorf("failed to init k8s service: %w", err)
}
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
nsList, err := serv.Set.CoreV1().Namespaces().List(ctx, metav1.ListOptions{})
if err != nil {
return fmt.Errorf("failed to list namespaces: %w", err)
}
myself, err := oclib.GetMySelf()
if err != nil {
return fmt.Errorf("could not resolve local peer: %w", err)
}
peerID := myself.GetID()
for _, ns := range nsList.Items {
executionsID := ns.Name
if !uuidNsPattern.MatchString(executionsID) {
continue
}
// Skip namespaces already being deleted by a previous teardown.
if ns.Status.Phase == v1.NamespaceTerminating {
continue
}
exec := findTerminalExecution(executionsID, peerID)
if exec == nil {
continue
}
logger.Info().Msgf("InfraWatchdog: orphaned infra detected for execution %s (state=%v) → teardown",
executionsID, exec.State)
go s.TeardownForExecution(exec.GetID())
}
return nil
}
// scanOrphanedMinio scans LIVE_STORAGE bookings for executions that are in a
// terminal state and triggers Minio teardown for each unique executionsID found.
// This covers the case where the Kubernetes namespace is already gone (manual
// deletion, prior partial teardown) but Minio SA and bucket were never revoked.
func (s *KubernetesService) scanOrphanedMinio() error {
logger := oclib.GetLogger()
myself, err := oclib.GetMySelf()
if err != nil {
return fmt.Errorf("could not resolve local peer: %w", err)
}
peerID := myself.GetID()
res := oclib.NewRequest(oclib.LibDataEnum(oclib.BOOKING), "", peerID, []string{}, nil).
Search(&dbs.Filters{
And: map[string][]dbs.Filter{
"resource_type": {{Operator: dbs.EQUAL.String(), Value: tools.LIVE_STORAGE.EnumIndex()}},
},
}, "", false)
if res.Err != "" {
return fmt.Errorf("failed to search LIVE_STORAGE bookings: %s", res.Err)
}
// Collect unique executionsIDs to avoid redundant teardowns.
seen := map[string]bool{}
ctx := context.Background()
for _, dbo := range res.Data {
b, ok := dbo.(*bookingmodel.Booking)
if !ok || seen[b.ExecutionsID] {
continue
}
exec := findTerminalExecution(b.ExecutionsID, peerID)
if exec == nil {
continue
}
seen[b.ExecutionsID] = true
minio := storage.NewMinioSetter(b.ExecutionsID, b.ResourceID)
// Determine this peer's role and call the appropriate teardown.
if b.DestPeerID == peerID {
logger.Info().Msgf("InfraWatchdog(minio): orphaned target resources for exec %s → TeardownAsTarget", b.ExecutionsID)
event := storage.MinioDeleteEvent{
ExecutionsID: b.ExecutionsID,
MinioID: b.ResourceID,
SourcePeerID: b.DestPeerID,
DestPeerID: peerID,
}
go minio.TeardownAsTarget(ctx, event)
} else {
logger.Info().Msgf("InfraWatchdog(minio): orphaned source resources for exec %s → TeardownAsSource", b.ExecutionsID)
event := storage.MinioDeleteEvent{
ExecutionsID: b.ExecutionsID,
MinioID: b.ResourceID,
SourcePeerID: peerID,
DestPeerID: b.DestPeerID,
}
go minio.TeardownAsSource(ctx, event)
}
}
return nil
}
// scanOrphanedAdmiraltyNodes lists all Kubernetes nodes, identifies Admiralty
// virtual nodes (name prefix "admiralty-{UUID}-") that are NotReady, and
// explicitly deletes them when their WorkflowExecution is in a terminal state.
//
// This covers the gap where the namespace is already gone (or Terminating) but
// the virtual node was never cleaned up by the Admiralty controller — which can
// happen when the node goes NotReady before the AdmiraltyTarget CRD is deleted.
func (s *KubernetesService) scanOrphanedAdmiraltyNodes() error {
logger := oclib.GetLogger()
serv, err := tools.NewKubernetesService(
conf.GetConfig().KubeHost+":"+conf.GetConfig().KubePort,
conf.GetConfig().KubeCA,
conf.GetConfig().KubeCert,
conf.GetConfig().KubeData,
)
if err != nil {
return fmt.Errorf("failed to init k8s service: %w", err)
}
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
nodeList, err := serv.Set.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
if err != nil {
return fmt.Errorf("failed to list nodes: %w", err)
}
myself, err := oclib.GetMySelf()
if err != nil {
return fmt.Errorf("could not resolve local peer: %w", err)
}
peerID := myself.GetID()
for _, node := range nodeList.Items {
// Admiralty virtual nodes are named: admiralty-{executionID}-target-{...}
rest := strings.TrimPrefix(node.Name, "admiralty-")
if rest == node.Name {
continue // not an admiralty node
}
// UUID is exactly 36 chars: 8-4-4-4-12
if len(rest) < 36 {
continue
}
executionsID := rest[:36]
if !uuidNsPattern.MatchString(executionsID) {
continue
}
// Only act on NotReady nodes.
ready := false
for _, cond := range node.Status.Conditions {
if cond.Type == v1.NodeReady {
ready = cond.Status == v1.ConditionTrue
break
}
}
if ready {
continue
}
exec := findTerminalExecution(executionsID, peerID)
if exec == nil {
continue
}
logger.Info().Msgf("InfraWatchdog(admiralty-nodes): NotReady orphaned node %s for terminal execution %s → deleting",
node.Name, executionsID)
if delErr := serv.Set.CoreV1().Nodes().Delete(ctx, node.Name, metav1.DeleteOptions{}); delErr != nil {
logger.Error().Msgf("InfraWatchdog(admiralty-nodes): failed to delete node %s: %v", node.Name, delErr)
}
}
return nil
}
// scanOrphanedPVC scans LIVE_STORAGE bookings for executions that are in a
// terminal state and triggers PVC teardown for each one where this peer holds
// the local storage. This covers the case where the Kubernetes namespace was
// already deleted (or its teardown was partial) but the PersistentVolume
// (cluster-scoped) was never reclaimed.
//
// A LIVE_STORAGE booking is treated as a local PVC only when ResolveStorageName
// returns a non-empty name — the same guard used by teardownPVCForExecution.
func (s *KubernetesService) scanOrphanedPVC() error {
logger := oclib.GetLogger()
myself, err := oclib.GetMySelf()
if err != nil {
return fmt.Errorf("could not resolve local peer: %w", err)
}
peerID := myself.GetID()
res := oclib.NewRequest(oclib.LibDataEnum(oclib.BOOKING), "", peerID, []string{}, nil).
Search(&dbs.Filters{
And: map[string][]dbs.Filter{
"resource_type": {{Operator: dbs.EQUAL.String(), Value: tools.LIVE_STORAGE.EnumIndex()}},
},
}, "", false)
if res.Err != "" {
return fmt.Errorf("failed to search LIVE_STORAGE bookings: %s", res.Err)
}
seen := map[string]bool{}
ctx := context.Background()
for _, dbo := range res.Data {
b, ok := dbo.(*bookingmodel.Booking)
if !ok || seen[b.ExecutionsID+b.ResourceID] {
continue
}
storageName := storage.ResolveStorageName(b.ResourceID, peerID)
if storageName == "" {
continue // not a local PVC booking
}
exec := findTerminalExecution(b.ExecutionsID, peerID)
if exec == nil {
continue
}
seen[b.ExecutionsID+b.ResourceID] = true
logger.Info().Msgf("InfraWatchdog(pvc): orphaned PVC for exec %s storage %s → TeardownAsSource",
b.ExecutionsID, b.ResourceID)
event := storage.PVCDeleteEvent{
ExecutionsID: b.ExecutionsID,
StorageID: b.ResourceID,
StorageName: storageName,
SourcePeerID: peerID,
DestPeerID: b.DestPeerID,
}
go storage.NewPVCSetter(b.ExecutionsID, b.ResourceID).TeardownAsSource(ctx, event)
}
return nil
}
// findTerminalExecution returns the WorkflowExecution for the given executionsID
// if it exists in the DB and is in a terminal state, otherwise nil.
func findTerminalExecution(executionsID string, peerID string) *workflow_execution.WorkflowExecution {
res := oclib.NewRequest(oclib.LibDataEnum(oclib.WORKFLOW_EXECUTION), "", peerID, []string{}, nil).
Search(&dbs.Filters{
And: map[string][]dbs.Filter{
"executions_id": {{Operator: dbs.EQUAL.String(), Value: executionsID}},
},
}, "", false)
if res.Err != "" || len(res.Data) == 0 {
return nil
}
exec, ok := res.Data[0].(*workflow_execution.WorkflowExecution)
if !ok {
return nil
}
if !infrastructure.ClosingStates[exec.State] {
return nil
}
return exec
}
// TeardownForExecution handles infrastructure cleanup when a workflow terminates.
// oc-datacenter is responsible only for infra here; booking/execution state
// is managed by oc-scheduler.
func (s *KubernetesService) TeardownForExecution(executionID string) {
logger := oclib.GetLogger()
myself, err := oclib.GetMySelf()
if err != nil || myself == nil {
return
}
selfPeerID := myself.GetID()
adminReq := &tools.APIRequest{Admin: true}
res, _, loadErr := workflow_execution.NewAccessor(adminReq).LoadOne(executionID)
if loadErr != nil || res == nil {
logger.Warn().Msgf("teardownInfraForExecution: execution %s not found", executionID)
return
}
exec := res.(*workflow_execution.WorkflowExecution)
ctx := context.Background()
admiralty.NewAdmiraltySetter(s.ExecutionsID).TeardownIfRemote(exec, selfPeerID)
storage.NewMinioSetter(s.ExecutionsID, "").TeardownForExecution(ctx, selfPeerID)
storage.NewPVCSetter(s.ExecutionsID, "").TeardownForExecution(ctx, selfPeerID)
s.CleanupImages(ctx)
}

View File

@@ -0,0 +1,100 @@
# Analysis of `infrastructure/prometheus.go`
## What the file does
This file implements a monitoring service that queries a **Prometheus** instance to collect metrics for the Kubernetes containers associated with a reservation (Booking).
### Data structures
| Struct | Role |
|---|---|
| `MetricsSnapshot` | Metrics snapshot tied to an origin (source). **Note: this local struct is declared but never used**; the code actually uses `models.MetricsSnapshot` from oc-lib. |
| `Metric` | Name/value pair for a single metric. **Same remark**; the code uses `models.Metric`. |
| `PrometheusResponse` | Mapping of the JSON response from the Prometheus `/api/v1/query` API. |
### Collected metrics (`queriesMetrics`)
| # | PromQL query | Measures |
|---|---|---|
| 1 | `rate(container_cpu_usage_seconds_total{namespace}[1m]) * 100` | CPU usage (%) |
| 2 | `container_memory_usage_bytes{namespace}` | Memory used (bytes) |
| 3 | `container_fs_usage_bytes / container_fs_limit_bytes * 100` | Disk usage (%) |
| 4 | `DCGM_FI_DEV_GPU_UTIL{namespace}` | GPU utilization (NVIDIA DCGM) |
| 5 | `rate(container_fs_reads_bytes_total[1m])` | Disk read throughput (bytes/s) |
| 6 | `rate(container_fs_writes_bytes_total[1m])` | Disk write throughput (bytes/s) |
| 7 | `rate(container_network_receive_bytes_total[1m])` | Inbound network bandwidth (bytes/s) |
| 8 | `rate(container_network_transmit_bytes_total[1m])` | Outbound network bandwidth (bytes/s) |
| 9 | `rate(http_requests_total[1m])` | HTTP requests/s |
| 10 | `rate(http_requests_total{status=~"5.."}[1m]) / rate(http_requests_total[1m]) * 100` | HTTP 5xx error rate (%) |
Commented-out metrics (inactive): `system_load_average`, `system_network_latency_ms`, `app_mean_time_to_repair_seconds`, `app_mean_time_between_failure_seconds`.
### Methods
#### `queryPrometheus(promURL, expr, namespace) Metric`
- Builds a GET request to Prometheus's `/api/v1/query` endpoint.
- Injects the namespace into the PromQL expression via `fmt.Sprintf`.
- Parses the JSON response and extracts the first value of the first result.
- Returns `-1` when there is no result.
#### `Call(book, user, peerID, groups) (Booking, map[string]MetricsSnapshot)`
- Loads the compute resource (`ComputeResource`) linked to the booking.
- For each instance of the resource, looks up the matching `LiveDatacenter`.
- Runs all the PromQL queries in a **goroutine** (in parallel) for every datacenter that has a `MonitorPath`.
- Waits for all goroutines (`sync.WaitGroup`), then returns the metrics grouped by instance.
#### `Stream(bookingID, interval, user, peerID, groups, websocket)`
- Continuous monitoring loop until the booking's `ExpectedEndDate` or a kill signal.
- On each tick (`interval`), calls `Call()` in a goroutine.
- Sends the metrics in real time over **WebSocket**.
- Accumulates the metrics in memory and persists them into the booking every `max` (100) cycles.
- Supports a kill mechanism through the global `Kill` variable.
---
## Problems and points of attention
### Potential bugs
1. **Race condition in `Stream`** — the variables `mets`, `bookIDS`, and `book` are shared between the main loop and the goroutines launched on each tick, **with no synchronization** (no mutex). If `interval` is short, several goroutines can write to `mets` and `bookIDS` at the same time.
2. **Race condition on `Kill`** — the global `Kill` variable is read in the loop without locking `LockKill`. The mutex is only used for writes.
3. **Unused local structs** — `MetricsSnapshot` and `Metric` (lines 22-31) are declared locally, but the code uses `models.MetricsSnapshot` and `models.Metric`. Dead code to clean up.
4. **PromQL query with a double placeholder** — the filesystem query (line 47) contains two `%s`, but `queryPrometheus` performs a single `fmt.Sprintf(expr, namespace)`. This yields a **`%!s(MISSING)`** in the query. Either pass the namespace twice or rewrite the function (see the sketch after this list).
5. **No HTTP timeout** — `http.Get()` uses the default client, which has no timeout. A slow Prometheus can block indefinitely.
6. **No error handling on `WriteJSON`** — if the WebSocket is closed client-side, the write fails silently.
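The rewritten `queryPrometheus` later in this diff addresses bug 4 by substituting the namespace into every placeholder instead of relying on `fmt.Sprintf`. A minimal sketch of that substitution:

```go
// Replace every "%s" placeholder so expressions with one or two
// placeholders both yield valid PromQL, then URL-encode the query.
query := strings.ReplaceAll(expr, "%s", namespace)
reqURL := promURL + "/api/v1/query?query=" + url.QueryEscape(query)
```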
### Possible improvements
#### Reliability
- **Add a `context.Context`** to `queryPrometheus` and `Call` to support timeouts and cancellation.
- **Use an `http.Client` with a timeout** instead of `http.Get`.
- **Guard the concurrent accesses** in `Stream` with a `sync.Mutex` around `mets`/`bookIDS`.
- **Replace the global `Kill` variable** with a `context.WithCancel` or a channel, which is more idiomatic Go (see the sketch below).
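The `StreamRegistry` introduced later in this diff follows the `context.WithCancel` route. A minimal sketch of the pattern, with illustrative names (`register`, `cancelByNS`):

```go
// cancelByNS replaces the global Kill flag: each namespace owns a context,
// and stopping a stream becomes a single cancel() call.
var (
	regMu      sync.Mutex
	cancelByNS = map[string]context.CancelFunc{}
)

func register(ns string) context.Context {
	regMu.Lock()
	defer regMu.Unlock()
	if cancel, ok := cancelByNS[ns]; ok {
		cancel() // stop any previous stream for this namespace
	}
	ctx, cancel := context.WithCancel(context.Background())
	cancelByNS[ns] = cancel
	return ctx
}
```

The streaming loop then selects on `ctx.Done()` instead of polling `Kill`.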
#### Additional metrics worth considering
- `container_cpu_cfs_throttled_seconds_total` — CPU throttling (the container is being capped).
- `kube_pod_container_status_restarts_total` — restart count (instability signal).
- `container_memory_working_set_bytes` — memory actually in use (excludes cache; more accurate than `memory_usage_bytes`).
- `kube_pod_status_phase` — pod phase (Running, Pending, Failed...).
- `container_oom_events_total` or `kube_pod_container_status_last_terminated_reason` — OOM-kill detection.
- `kubelet_volume_stats_used_bytes` / `kubelet_volume_stats_capacity_bytes` — PVC usage.
- `DCGM_FI_DEV_MEM_COPY_UTIL` — GPU memory utilization.
- `DCGM_FI_DEV_GPU_TEMP` — GPU temperature.
- `node_cpu_seconds_total` / `node_memory_MemAvailable_bytes` — node-level metrics (global view).
#### Architecture
- **Range queries** (`/api/v1/query_range`) — only instant queries are used today. For streaming over a period, `query_range` would return complete time series and allow computing averages/percentiles (see the sketch after this list).
- **Labels in the results** — only the first series is read (`Result[0]`). Information is lost whenever several pods/containers match. Aggregate or return every series.
- **Readable metric names** — map the PromQL expressions to human-friendly names (`cpu_usage_percent`, `memory_bytes`, etc.) instead of storing the raw expression as the name.
- **Prometheus health check** — add a method that checks Prometheus is reachable (`/-/healthy`).
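A minimal sketch of the range-query variant, assuming the timeout-bearing `httpClient` from the rewritten `prometheus.go`; the `queryRange` name, parameters, and the 15s step are illustrative. Note the `matrix` result type nests a `values` array per series, so the response struct would need that extra field:

```go
// queryRange fetches a full time series over the given window instead of a
// single instant sample, using Prometheus's /api/v1/query_range endpoint.
func queryRange(ctx context.Context, promURL, query string, window time.Duration) (*http.Response, error) {
	end := time.Now().UTC()
	start := end.Add(-window)
	params := url.Values{}
	params.Set("query", query)
	params.Set("start", strconv.FormatInt(start.Unix(), 10))
	params.Set("end", strconv.FormatInt(end.Unix(), 10))
	params.Set("step", "15s") // one sample every 15 seconds
	req, err := http.NewRequestWithContext(ctx, http.MethodGet,
		promURL+"/api/v1/query_range?"+params.Encode(), nil)
	if err != nil {
		return nil, err
	}
	return httpClient.Do(req)
}
```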
---
## Summary
The file is **functional** for a basic use case (one-shot collection plus WebSocket streaming), but it has **race conditions** in `Stream`, a **bug in the filesystem query** (double `%s`), and some **dead code**. The priority fixes are guarding the concurrent accesses and adding HTTP timeouts.

View File

@@ -0,0 +1,75 @@
package monitor
import (
"context"
"errors"
"fmt"
"oc-datacenter/conf"
"sync"
"time"
oclib "cloud.o-forge.io/core/oc-lib"
"cloud.o-forge.io/core/oc-lib/dbs"
"cloud.o-forge.io/core/oc-lib/models/booking"
"cloud.o-forge.io/core/oc-lib/models/common/models"
"cloud.o-forge.io/core/oc-lib/models/live"
"cloud.o-forge.io/core/oc-lib/models/resources"
"github.com/gorilla/websocket"
)
type MonitorInterface interface {
Stream(ctx context.Context, bookingID string, interval time.Duration, ws *websocket.Conn)
}
var _monitorService = map[string]func() MonitorInterface{
"prometheus": func() MonitorInterface { return NewPrometheusService() },
"vector": func() MonitorInterface { return NewVectorService() },
}
func NewMonitorService() (MonitorInterface, error) {
service, ok := _monitorService[conf.GetConfig().MonitorMode]
if !ok {
return nil, errors.New("monitor service not found")
}
return service(), nil
}
func Call(book *booking.Booking,
f func(*live.LiveDatacenter, *resources.ComputeResourceInstance,
map[string]models.MetricsSnapshot, *sync.WaitGroup, *sync.Mutex)) (*booking.Booking, map[string]models.MetricsSnapshot) {
logger := oclib.GetLogger()
metrics := map[string]models.MetricsSnapshot{}
var wg sync.WaitGroup
var mu sync.Mutex
cUAccess := oclib.NewRequestAdmin(oclib.LibDataEnum(oclib.LIVE_DATACENTER), nil)
cRAccess := oclib.NewRequestAdmin(oclib.LibDataEnum(oclib.COMPUTE_RESOURCE), nil)
rr := cRAccess.LoadOne(book.ResourceID)
if rr.Err != "" {
logger.Err(fmt.Errorf("can't proceed because of unfound resource %s : %s", book.ResourceID, rr.Err))
return book, metrics
}
computeRes := rr.ToComputeResource()
for _, instance := range computeRes.Instances {
res := cUAccess.Search(&dbs.Filters{
And: map[string][]dbs.Filter{
"source": {{Operator: dbs.EQUAL.String(), Value: instance.Source}},
"abstractlive.resources_id": {{Operator: dbs.EQUAL.String(), Value: computeRes.GetID()}},
},
}, "", false)
if res.Err != "" {
continue
}
for _, r := range res.Data {
dc := r.(*live.LiveDatacenter)
if dc.MonitorPath == "" {
continue
}
wg.Add(1)
go f(dc, instance, metrics, &wg, &mu)
}
}
wg.Wait()
return book, metrics
}

View File

@@ -0,0 +1,194 @@
package monitor
import (
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
"strconv"
"strings"
"sync"
"time"
oclib "cloud.o-forge.io/core/oc-lib"
"cloud.o-forge.io/core/oc-lib/models/booking"
"cloud.o-forge.io/core/oc-lib/models/common/models"
"cloud.o-forge.io/core/oc-lib/models/live"
"cloud.o-forge.io/core/oc-lib/models/resources"
"github.com/gorilla/websocket"
)
type PrometheusResponse struct {
Status string `json:"status"`
Data struct {
ResultType string `json:"resultType"`
Result []struct {
Metric map[string]string `json:"metric"`
Value []interface{} `json:"value"` // [timestamp, value]
} `json:"result"`
} `json:"data"`
}
var queriesMetrics = []string{
"rate(container_cpu_usage_seconds_total{namespace=\"%s\"}[1m]) * 100",
"container_memory_usage_bytes{namespace=\"%s\"}",
"(container_fs_usage_bytes{namespace=\"%s\"}) / (container_fs_limit_bytes{namespace=\"%s\"}) * 100",
"DCGM_FI_DEV_GPU_UTIL{namespace=\"%s\"}",
"rate(container_fs_reads_bytes_total{namespace=\"%s\"}[1m])",
"rate(container_fs_writes_bytes_total{namespace=\"%s\"}[1m])",
"rate(container_network_receive_bytes_total{namespace=\"%s\"}[1m])",
"rate(container_network_transmit_bytes_total{namespace=\"%s\"}[1m])",
"rate(http_requests_total{namespace=\"%s\"}[1m])",
"(rate(http_requests_total{status=~\"5..\", namespace=\"%s\"}[1m]) / rate(http_requests_total{namespace=\"%s\"}[1m])) * 100",
}
var httpClient = &http.Client{
Timeout: 10 * time.Second,
}
// StreamRegistry manages cancellation of active monitoring streams by namespace.
var StreamRegistry = &streamRegistry{
streams: map[string]context.CancelFunc{},
}
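// Typical flow: a stream owner calls StreamRegistry.Register(ns) and passes the
// returned ctx to Stream, whose loop selects on ctx.Done(); teardown paths call
// Cancel(ns) to stop the stream and drop the entry.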
type streamRegistry struct {
mu sync.Mutex
streams map[string]context.CancelFunc
}
func (r *streamRegistry) Register(namespace string) context.Context {
r.mu.Lock()
defer r.mu.Unlock()
if cancel, ok := r.streams[namespace]; ok {
cancel()
}
ctx, cancel := context.WithCancel(context.Background())
r.streams[namespace] = cancel
return ctx
}
func (r *streamRegistry) Cancel(namespace string) {
r.mu.Lock()
defer r.mu.Unlock()
if cancel, ok := r.streams[namespace]; ok {
cancel()
delete(r.streams, namespace)
}
}
type PrometheusService struct {
}
func NewPrometheusService() *PrometheusService {
return &PrometheusService{}
}
func (p *PrometheusService) queryPrometheus(ctx context.Context, promURL string, expr string, namespace string) models.Metric {
metric := models.Metric{Name: expr, Value: -1}
query := strings.ReplaceAll(expr, "%s", namespace)
reqURL := promURL + "/api/v1/query?query=" + url.QueryEscape(query)
req, err := http.NewRequestWithContext(ctx, http.MethodGet, reqURL, nil)
if err != nil {
metric.Error = err
return metric
}
resp, err := httpClient.Do(req)
if err != nil {
metric.Error = err
return metric
}
defer resp.Body.Close()
body, err := io.ReadAll(resp.Body)
if err != nil {
metric.Error = err
return metric
}
var result PrometheusResponse
if err = json.Unmarshal(body, &result); err != nil {
metric.Error = err
return metric
}
if len(result.Data.Result) > 0 && len(result.Data.Result[0].Value) == 2 {
metric.Value, metric.Error = strconv.ParseFloat(fmt.Sprintf("%s", result.Data.Result[0].Value[1]), 64)
}
return metric
}
func (p *PrometheusService) Stream(ctx context.Context, bookingID string, interval time.Duration, ws *websocket.Conn) {
logger := oclib.GetLogger()
max := 100
count := 0
mets := map[string][]models.MetricsSnapshot{}
bAccess := oclib.NewRequestAdmin(oclib.LibDataEnum(oclib.BOOKING), nil)
book := bAccess.LoadOne(bookingID)
if book.Err != "" {
logger.Err(fmt.Errorf("stop because of empty : %s", book.Err))
return
}
isActive := func(e *booking.Booking) bool {
if e.ExpectedEndDate == nil {
return true
}
return time.Now().UTC().Before(*e.ExpectedEndDate)
}
ticker := time.NewTicker(interval)
defer ticker.Stop()
for isActive(book.Data.(*booking.Booking)) {
select {
case <-ctx.Done():
return
case <-ticker.C:
}
b, metrics := Call(book.Data.(*booking.Booking),
func(dc *live.LiveDatacenter, instance *resources.ComputeResourceInstance,
metrics map[string]models.MetricsSnapshot,
wg *sync.WaitGroup, mu *sync.Mutex) {
defer wg.Done()
for _, expr := range queriesMetrics {
// Query outside the lock so a slow HTTP call doesn't serialize the workers.
m := p.queryPrometheus(ctx, dc.MonitorPath, expr, book.Data.(*booking.Booking).ExecutionsID)
mu.Lock()
mm, ok := metrics[instance.Name]
if !ok {
mm = models.MetricsSnapshot{From: instance.Source}
}
// Reassign the map entry: MetricsSnapshot is a value type, so an append
// on the copy alone would be lost; the map read also belongs under the lock.
mm.Metrics = append(mm.Metrics, m)
metrics[instance.Name] = mm
mu.Unlock()
}
})
_ = b
// Accumulate this tick's snapshots so the periodic persist below has
// data to write (mets was otherwise never filled).
for k, v := range metrics {
mets[k] = append(mets[k], v)
}
count++
if ws != nil {
if err := ws.WriteJSON(metrics); err != nil {
logger.Err(fmt.Errorf("websocket write error: %w", err))
return
}
}
if count < max {
continue
}
bk := book.Data.(*booking.Booking)
if bk.ExecutionMetrics == nil {
bk.ExecutionMetrics = mets
} else {
for kk, vv := range mets {
bk.ExecutionMetrics[kk] = append(bk.ExecutionMetrics[kk], vv...)
}
}
bk.GetAccessor(nil).UpdateOne(bk.Serialize(bk), bookingID)
mets = map[string][]models.MetricsSnapshot{}
count = 0
}
}

View File

@@ -0,0 +1,153 @@
package monitor
import (
"context"
"encoding/json"
"fmt"
"log"
"sync"
"time"
oclib "cloud.o-forge.io/core/oc-lib"
"cloud.o-forge.io/core/oc-lib/models/booking"
"cloud.o-forge.io/core/oc-lib/models/common/models"
"cloud.o-forge.io/core/oc-lib/models/live"
"cloud.o-forge.io/core/oc-lib/models/resources"
"github.com/gorilla/websocket"
)
// --- Vector metric structure ---
type VectorMetric struct {
Name string `json:"name"`
Value float64 `json:"value"`
Labels map[string]string `json:"labels"`
Timestamp int64 `json:"timestamp"`
}
// --- Vector service ---
type VectorService struct {
mu sync.Mutex
ExecutionMetrics map[string][]models.MetricsSnapshot // bookingID -> snapshots
sessions map[string]context.CancelFunc // optional: WS clients
}
func NewVectorService() *VectorService {
return &VectorService{
ExecutionMetrics: make(map[string][]models.MetricsSnapshot),
sessions: make(map[string]context.CancelFunc),
}
}
// --- Connects to a Vector stream over WebSocket ---
func (v *VectorService) ListenVector(ctx context.Context, b *booking.Booking, interval time.Duration, ws *websocket.Conn) error {
max := 100
count := 0
mets := map[string][]models.MetricsSnapshot{}
isActive := func() bool {
if b.ExpectedEndDate == nil {
return true
}
return time.Now().UTC().Before(*b.ExpectedEndDate)
}
ticker := time.NewTicker(interval)
defer ticker.Stop()
for isActive() {
select {
case <-ctx.Done():
return nil
case <-ticker.C:
}
bb, metrics := Call(b,
func(dc *live.LiveDatacenter, instance *resources.ComputeResourceInstance, metrics map[string]models.MetricsSnapshot,
wg *sync.WaitGroup, mu *sync.Mutex) {
defer wg.Done() // Call blocks on wg.Wait(); without this the stream deadlocks
c, _, err := websocket.DefaultDialer.Dial(dc.MonitorPath, nil)
if err != nil {
return
}
defer c.Close()
_, msg, err := c.ReadMessage()
if err != nil {
log.Println("vector ws read error:", err)
return
}
var m models.Metric
if err := json.Unmarshal(msg, &m); err != nil {
log.Println("json unmarshal error:", err)
return
}
mu.Lock()
mm, ok := metrics[instance.Name]
if !ok {
mm = models.MetricsSnapshot{From: instance.Source}
}
// Reassign the map entry: MetricsSnapshot is a value type, so an
// append on the copy alone would be lost.
mm.Metrics = append(mm.Metrics, m)
metrics[instance.Name] = mm
mu.Unlock()
})
_ = bb
for k, v := range metrics {
mets[k] = append(mets[k], v)
}
count++
if ws != nil {
if err := ws.WriteJSON(metrics); err != nil {
return fmt.Errorf("websocket write error: %w", err)
}
}
if count < max {
continue
}
if b.ExecutionMetrics == nil {
b.ExecutionMetrics = mets
} else {
for kk, vv := range mets {
b.ExecutionMetrics[kk] = append(b.ExecutionMetrics[kk], vv...)
}
}
b.GetAccessor(nil).UpdateOne(b.Serialize(b), b.GetID())
mets = map[string][]models.MetricsSnapshot{}
count = 0
}
return nil
}
// --- Attaches a front-end WebSocket to receive the live metrics ---
// --- Retrieves the cached history for a booking ---
func (v *VectorService) GetCache(bookingID string) []models.MetricsSnapshot {
v.mu.Lock()
defer v.mu.Unlock()
snapshots := v.ExecutionMetrics[bookingID]
return snapshots
}
// --- Integration example with a booking ---
func (v *VectorService) Stream(ctx context.Context, bookingID string, interval time.Duration, ws *websocket.Conn) {
logger := oclib.GetLogger()
go func() {
bAccess := oclib.NewRequestAdmin(oclib.LibDataEnum(oclib.BOOKING), nil)
book := bAccess.LoadOne(bookingID)
if book.Err != "" {
logger.Err(fmt.Errorf("stop because of empty : %s", book.Err))
return
}
b := book.ToBookings()
if b == nil {
logger.Err(fmt.Errorf("stop because of empty is not a booking"))
return
}
if err := v.ListenVector(ctx, b, interval, ws); err != nil {
log.Printf("Vector listen error for booking %s: %v\n", b.GetID(), err)
}
}()
}

infrastructure/nats/nats.go Normal file
View File

@@ -0,0 +1,269 @@
package nats
import (
"context"
"encoding/json"
"fmt"
"oc-datacenter/infrastructure/admiralty"
"oc-datacenter/infrastructure/kubernetes"
"oc-datacenter/infrastructure/kubernetes/models"
"oc-datacenter/infrastructure/storage"
"sync"
oclib "cloud.o-forge.io/core/oc-lib"
"cloud.o-forge.io/core/oc-lib/tools"
)
// roleWaiters maps executionID → channel expecting the role-assignment message from OC discovery.
var roleWaiters sync.Map
// ArgoKubeEvent carries the peer-routing metadata for a resource provisioning event.
//
// When MinioID is non-empty and Local is false, the event concerns Minio credential provisioning.
// When Local is true, the event concerns local PVC provisioning.
// Otherwise it concerns Admiralty kubeconfig provisioning.
type ArgoKubeEvent struct {
ExecutionsID string `json:"executions_id"`
DestPeerID string `json:"dest_peer_id"`
Type tools.DataType `json:"data_type"`
SourcePeerID string `json:"source_peer_id"`
MinioID string `json:"minio_id,omitempty"`
// Local signals that this STORAGE_RESOURCE event is for a local PVC (not Minio).
Local bool `json:"local,omitempty"`
StorageName string `json:"storage_name,omitempty"`
// OriginID is the peer that initiated the request; the PB_CONSIDERS
// response is routed back to this peer once provisioning completes.
OriginID string `json:"origin_id,omitempty"`
// Images is the list of container images to pre-pull on the target peer
// before the workflow starts. Empty for STORAGE_RESOURCE events.
Images []string `json:"images,omitempty"`
}
// ListenNATS starts all NATS subscriptions for the infrastructure layer.
// Must be launched in a goroutine from main.
func ListenNATS() {
tools.NewNATSCaller().ListenNats(map[tools.NATSMethod]func(tools.NATSResponse){
// ─── ARGO_KUBE_EVENT ────────────────────────────────────────────────────────
// Triggered by oc-discovery to notify this peer of a provisioning task.
// Dispatches to Admiralty, Minio, or local PVC based on event fields.
tools.ARGO_KUBE_EVENT: func(resp tools.NATSResponse) {
argo := &ArgoKubeEvent{}
if err := json.Unmarshal(resp.Payload, argo); err != nil {
return
}
kube := kubernetes.NewKubernetesService(argo.ExecutionsID)
if argo.Type == tools.STORAGE_RESOURCE {
if argo.Local {
fmt.Println("DETECT LOCAL PVC ARGO_KUBE_EVENT")
// ── Local PVC provisioning ──────────────────────────────────
setter := storage.NewPVCSetter(argo.ExecutionsID, argo.MinioID)
event := storage.PVCProvisionEvent{
ExecutionsID: argo.ExecutionsID,
StorageID: argo.MinioID,
StorageName: argo.StorageName,
SourcePeerID: argo.SourcePeerID,
DestPeerID: argo.DestPeerID,
OriginID: argo.OriginID,
}
if argo.SourcePeerID == argo.DestPeerID {
fmt.Println("CONFIG PVC MYSELF")
err := kube.CreateNamespace()
fmt.Println("NS", err)
go setter.InitializeAsSource(context.Background(), event, true)
} else {
// Cross-peer: route to dest peer via PB_PVC_CONFIG.
if b, err := json.Marshal(event); err == nil {
if b2, err := json.Marshal(&tools.PropalgationMessage{
Payload: b,
Action: tools.PB_PVC_CONFIG,
}); err == nil {
fmt.Println("CONFIG PVC THEM")
go tools.NewNATSCaller().SetNATSPub(tools.PROPALGATION_EVENT, tools.NATSResponse{
FromApp: "oc-datacenter",
Datatype: -1,
User: resp.User,
Method: int(tools.PROPALGATION_EVENT),
Payload: b2,
})
}
}
}
} else {
fmt.Println("DETECT STORAGE ARGO_KUBE_EVENT")
// ── Minio credential provisioning ──────────────────────────────
setter := storage.NewMinioSetter(argo.ExecutionsID, argo.MinioID)
if argo.SourcePeerID == argo.DestPeerID {
fmt.Println("CONFIG MYSELF")
if err := kube.CreateNamespace(); err != nil {
oclib.GetLogger().Error().Msgf("CreateNamespace: %v", err)
}
go setter.InitializeAsSource(context.Background(), argo.SourcePeerID, argo.DestPeerID, argo.OriginID, true)
} else {
// Different peers: publish Phase-1 PB_MINIO_CONFIG (Access == "")
// so oc-discovery routes the role-assignment to the Minio host.
phase1 := storage.MinioCredentialEvent{
ExecutionsID: argo.ExecutionsID,
MinioID: argo.MinioID,
SourcePeerID: argo.SourcePeerID,
DestPeerID: argo.DestPeerID,
OriginID: argo.OriginID,
}
if b, err := json.Marshal(phase1); err == nil {
if b2, err := json.Marshal(&tools.PropalgationMessage{
Payload: b,
Action: tools.PB_MINIO_CONFIG,
}); err == nil {
fmt.Println("CONFIG THEM")
go tools.NewNATSCaller().SetNATSPub(tools.PROPALGATION_EVENT, tools.NATSResponse{
FromApp: "oc-datacenter",
Datatype: -1,
User: resp.User,
Method: int(tools.PROPALGATION_EVENT),
Payload: b2,
})
}
}
}
}
} else {
fmt.Println("DETECT COMPUTE ARGO_KUBE_EVENT")
// ── Pre-pull + Admiralty kubeconfig provisioning ─────────────
fmt.Println(argo.SourcePeerID, argo.DestPeerID)
if argo.SourcePeerID == argo.DestPeerID {
fmt.Println("CONFIG MYSELF")
kube := kubernetes.NewKubernetesService(argo.ExecutionsID)
if err := kube.CreateNamespace(); err != nil {
oclib.GetLogger().Error().Msgf("CreateNamespace: %v", err)
}
go func(a ArgoKubeEvent) {
ctx := context.Background()
// Pre-pull first: PB_CONSIDERS is only sent once the pull completes.
if len(a.Images) > 0 {
if err := kube.RunPrepull(ctx, a.Images); err != nil {
logger := oclib.GetLogger()
logger.Error().Msgf("RunPrepull local: %v", err)
}
}
admiralty.NewAdmiraltySetter(a.ExecutionsID).InitializeAsSource(
ctx, a.SourcePeerID, a.DestPeerID, a.OriginID, true, a.Images)
}(*argo)
} else if b, err := json.Marshal(argo); err == nil {
if b2, err := json.Marshal(&tools.PropalgationMessage{
Payload: b,
Action: tools.PB_ADMIRALTY_CONFIG,
}); err == nil {
fmt.Println("CONFIG THEM")
go tools.NewNATSCaller().SetNATSPub(tools.PROPALGATION_EVENT, tools.NATSResponse{
FromApp: "oc-datacenter",
Datatype: -1,
User: resp.User,
Method: int(tools.PROPALGATION_EVENT),
Payload: b2,
})
}
}
}
},
// ─── ADMIRALTY_CONFIG_EVENT ─────────────────────────────────────────────────
// Forwarded by oc-discovery after receiving via libp2p ProtocolAdmiraltyConfigResource.
// Payload is a KubeconfigEvent (phase discriminated by Kubeconfig presence).
tools.ADMIRALTY_CONFIG_EVENT: func(resp tools.NATSResponse) {
kubeconfigEvent := models.KubeconfigEvent{}
if err := json.Unmarshal(resp.Payload, &kubeconfigEvent); err == nil {
if kubeconfigEvent.Kubeconfig != "" {
// Phase 2: kubeconfig present → this peer is the TARGET (scheduler).
fmt.Println("CreateAdmiraltyTarget")
admiralty.NewAdmiraltySetter(kubeconfigEvent.ExecutionsID).InitializeAsTarget(
context.Background(), kubeconfigEvent, false)
} else {
kube := kubernetes.NewKubernetesService(kubeconfigEvent.ExecutionsID)
if err := kube.CreateNamespace(); err != nil {
oclib.GetLogger().Error().Msgf("CreateNamespace: %v", err)
}
// Phase 1: no kubeconfig → this peer is the SOURCE (compute).
if len(kubeconfigEvent.Images) > 0 {
if err := kube.RunPrepull(context.Background(), kubeconfigEvent.Images); err != nil {
logger := oclib.GetLogger()
logger.Error().Msgf("RunPrepull local: %v", err)
}
}
fmt.Println("CreateAdmiraltySource")
admiralty.NewAdmiraltySetter(kubeconfigEvent.ExecutionsID).InitializeAsSource(
context.Background(), kubeconfigEvent.SourcePeerID, kubeconfigEvent.DestPeerID,
kubeconfigEvent.OriginID, false, kubeconfigEvent.Images)
}
}
},
// ─── MINIO_CONFIG_EVENT ──────────────────────────────────────────────────────
// Forwarded by oc-discovery after receiving via libp2p ProtocolMinioConfigResource.
// Payload is a MinioCredentialEvent (phase discriminated by Access presence).
tools.MINIO_CONFIG_EVENT: func(resp tools.NATSResponse) {
minioEvent := storage.MinioCredentialEvent{}
if err := json.Unmarshal(resp.Payload, &minioEvent); err == nil {
if minioEvent.Access != "" {
// Phase 2: credentials present → this peer is the TARGET (compute).
storage.NewMinioSetter(minioEvent.ExecutionsID, minioEvent.MinioID).InitializeAsTarget(
context.Background(), minioEvent, false)
} else {
if err := kubernetes.NewKubernetesService(minioEvent.ExecutionsID).CreateNamespace(); err != nil {
oclib.GetLogger().Error().Msgf("CreateNamespace: %v", err)
}
// Phase 1: no credentials → this peer is the SOURCE (Minio host).
storage.NewMinioSetter(minioEvent.ExecutionsID, minioEvent.MinioID).InitializeAsSource(
context.Background(), minioEvent.SourcePeerID, minioEvent.DestPeerID, minioEvent.OriginID, false)
}
}
},
// ─── PVC_CONFIG_EVENT ────────────────────────────────────────────────────────
// Forwarded by oc-discovery for cross-peer local PVC provisioning.
// The dest peer creates the PVC in its own cluster.
tools.PVC_CONFIG_EVENT: func(resp tools.NATSResponse) {
event := storage.PVCProvisionEvent{}
if err := json.Unmarshal(resp.Payload, &event); err == nil {
if err := kubernetes.NewKubernetesService(event.ExecutionsID).CreateNamespace(); err != nil {
oclib.GetLogger().Error().Msgf("CreateNamespace: %v", err)
}
storage.NewPVCSetter(event.ExecutionsID, event.StorageID).InitializeAsSource(
context.Background(), event, false)
}
},
// ─── WORKFLOW_DONE_EVENT ─────────────────────────────────────────────────────
// Emitted by oc-monitord when the top-level Argo workflow reaches a terminal
// phase. oc-datacenter is responsible only for infrastructure teardown here:
// booking/execution state management is handled entirely by oc-scheduler.
tools.WORKFLOW_DONE_EVENT: func(resp tools.NATSResponse) {
var evt tools.WorkflowLifecycleEvent
if err := json.Unmarshal(resp.Payload, &evt); err != nil || evt.ExecutionsID == "" {
return
}
go kubernetes.NewKubernetesService(evt.ExecutionsID).TeardownForExecution(evt.ExecutionID)
},
// ─── REMOVE_RESOURCE ────────────────────────────────────────────────────────
// Routed by oc-discovery via ProtocolDeleteResource for datacenter teardown.
// Only STORAGE_RESOURCE and COMPUTE_RESOURCE deletions are handled here.
tools.REMOVE_RESOURCE: func(resp tools.NATSResponse) {
switch resp.Datatype {
case tools.STORAGE_RESOURCE:
// Try PVC delete first (Local=true), fall back to Minio.
pvcEvent := storage.PVCDeleteEvent{}
if err := json.Unmarshal(resp.Payload, &pvcEvent); err == nil && pvcEvent.ExecutionsID != "" && pvcEvent.StorageName != "" {
go storage.NewPVCSetter(pvcEvent.ExecutionsID, pvcEvent.StorageID).
TeardownAsSource(context.Background(), pvcEvent)
} else {
deleteEvent := storage.MinioDeleteEvent{}
if err := json.Unmarshal(resp.Payload, &deleteEvent); err == nil && deleteEvent.ExecutionsID != "" {
go storage.NewMinioSetter(deleteEvent.ExecutionsID, deleteEvent.MinioID).
TeardownAsSource(context.Background(), deleteEvent)
}
}
case tools.COMPUTE_RESOURCE:
argo := &ArgoKubeEvent{}
if err := json.Unmarshal(resp.Payload, argo); err == nil && argo.ExecutionsID != "" {
go admiralty.NewAdmiraltySetter(argo.ExecutionsID).TeardownAsSource(context.Background())
}
}
},
})
}
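Per its doc comment, ListenNATS must run in its own goroutine; main.go later in this diff wires it up alongside the watchdogs (presumably through a re-export in the parent infrastructure package):

go infrastructure.ListenNATS() // the subscriptions defined in this file
go infrastructure.WatchBookings()
go infrastructure.WatchInfra()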


@@ -0,0 +1,219 @@
package storage
import (
"context"
"encoding/json"
"fmt"
"oc-datacenter/conf"
oclib "cloud.o-forge.io/core/oc-lib"
"github.com/minio/madmin-go/v4"
"github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/credentials"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"github.com/necmettindev/randomstring"
)
type MinioService struct {
Url string
RootKey string
RootSecret string
MinioAdminClient *madmin.AdminClient
}
type StatementEntry struct {
Effect string `json:"Effect"`
Action []string `json:"Action"`
Resource string `json:"Resource"`
}
type PolicyDocument struct {
Version string `json:"Version"`
Statement []StatementEntry `json:"Statement"`
}
func NewMinioService(url string) *MinioService {
return &MinioService{
Url: url,
RootKey: conf.GetConfig().MinioRootKey,
RootSecret: conf.GetConfig().MinioRootSecret,
}
}
func (m *MinioService) CreateClient() error {
cred := credentials.NewStaticV4(m.RootKey, m.RootSecret, "")
cli, err := madmin.NewWithOptions(m.Url, &madmin.Options{Creds: cred, Secure: false}) // TODO: consider enabling Secure (TLS) for production deployments.
if err != nil {
return err
}
m.MinioAdminClient = cli
return nil
}
func (m *MinioService) CreateCredentials(executionId string) (string, string, error) {
policy := PolicyDocument{
Version: "2012-10-17",
Statement: []StatementEntry{
{
Effect: "Allow",
Action: []string{"s3:GetObject", "s3:PutObject"},
Resource: "arn:aws:s3:::" + executionId + "/*",
},
},
}
p, err := json.Marshal(policy)
if err != nil {
return "", "", err
}
randAccess, randSecret := getRandomCreds()
req := madmin.AddServiceAccountReq{
Policy: p,
TargetUser: m.RootKey,
AccessKey: randAccess,
SecretKey: randSecret,
}
res, err := m.MinioAdminClient.AddServiceAccount(context.Background(), req)
if err != nil {
return "", "", err
}
return res.AccessKey, res.SecretKey, nil
}
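// For illustration, with executionId "exec-42" (hypothetical) the marshalled
// policy handed to AddServiceAccount is:
//
//	{
//	  "Version": "2012-10-17",
//	  "Statement": [{
//	    "Effect": "Allow",
//	    "Action": ["s3:GetObject", "s3:PutObject"],
//	    "Resource": "arn:aws:s3:::exec-42/*"
//	  }]
//	}
//
// Note the Resource ARN scopes to a bucket named after the execution ID alone,
// while CreateBucket below creates "<minioID>-<executionId>"; if those are
// meant to match, the ARN would need the minio ID prefix as well.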
func getRandomCreds() (string, string) {
opts := randomstring.GenerationOptions{
Length: 20,
}
a, _ := randomstring.GenerateString(opts)
opts.Length = 40
s, _ := randomstring.GenerateString(opts)
return a, s
}
func (m *MinioService) CreateMinioConfigMap(minioID string, executionId string, url string) error {
config, err := rest.InClusterConfig()
if err != nil {
return err
}
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
return err
}
configMap := &v1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: minioID + "artifact-repository",
Namespace: executionId,
},
Data: map[string]string{
minioID + "s3-local": fmt.Sprintf(`
s3:
bucket: %s
endpoint: %s
insecure: true
accessKeySecret:
name: %s-secret-s3
key: accesskey
secretKeySecret:
name: %s-secret-s3
key: secretkey
`, minioID+"-"+executionId, url, minioID, minioID),
},
}
existing, err := clientset.CoreV1().
ConfigMaps(executionId).
Get(context.Background(), minioID+"artifact-repository", metav1.GetOptions{})
if err == nil {
// Update
existing.Data = configMap.Data
_, err = clientset.CoreV1().
ConfigMaps(executionId).
Update(context.Background(), existing, metav1.UpdateOptions{})
} else {
// Create
_, err = clientset.CoreV1().
ConfigMaps(executionId).
Create(context.Background(), configMap, metav1.CreateOptions{})
}
return err
}
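// For illustration, with minioID "minio1", executionId "exec-42" and url
// "minio.example.org:9000" (all hypothetical), the resulting ConfigMap
// "minio1artifact-repository" in namespace "exec-42" carries one key,
// "minio1s3-local", whose value renders as:
//
//	s3:
//	  bucket: minio1-exec-42
//	  endpoint: minio.example.org:9000
//	  insecure: true
//	  accessKeySecret:
//	    name: minio1-secret-s3
//	    key: accesskey
//	  secretKeySecret:
//	    name: minio1-secret-s3
//	    key: secretkey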
func (m *MinioService) CreateBucket(minioID string, executionId string) error {
l := oclib.GetLogger()
cred := credentials.NewStaticV4(m.RootKey, m.RootSecret, "")
client, err := minio.New(m.Url, &minio.Options{
Creds: cred,
Secure: false,
})
if err != nil {
l.Error().Msg("Error when creating the minio client for the data plane")
return err
}
err = client.MakeBucket(context.Background(), minioID+"-"+executionId, minio.MakeBucketOptions{})
if err != nil {
l.Error().Msg("Error when creating the bucket for namespace " + executionId)
return err
}
l.Info().Msg("Created the bucket " + minioID + "-" + executionId + " on " + m.Url + " minio")
return nil
}
// DeleteCredentials revokes a scoped Minio service account by its access key.
func (m *MinioService) DeleteCredentials(accessKey string) error {
if err := m.MinioAdminClient.DeleteServiceAccount(context.Background(), accessKey); err != nil {
return fmt.Errorf("DeleteCredentials: %w", err)
}
return nil
}
// DeleteBucket removes the execution bucket from Minio.
func (m *MinioService) DeleteBucket(minioID, executionId string) error {
l := oclib.GetLogger()
cred := credentials.NewStaticV4(m.RootKey, m.RootSecret, "")
client, err := minio.New(m.Url, &minio.Options{Creds: cred, Secure: false})
if err != nil {
l.Error().Msg("Error when creating minio client for bucket deletion")
return err
}
bucketName := minioID + "-" + executionId
if err := client.RemoveBucket(context.Background(), bucketName); err != nil {
l.Error().Msg("Error when deleting bucket " + bucketName)
return err
}
l.Info().Msg("Deleted bucket " + bucketName + " on " + m.Url)
return nil
}
// DeleteMinioConfigMap removes the artifact-repository ConfigMap from the execution namespace.
func (m *MinioService) DeleteMinioConfigMap(minioID, executionId string) error {
cfg, err := rest.InClusterConfig()
if err != nil {
return err
}
clientset, err := kubernetes.NewForConfig(cfg)
if err != nil {
return err
}
return clientset.CoreV1().ConfigMaps(executionId).Delete(
context.Background(), minioID+"artifact-repository", metav1.DeleteOptions{},
)
}


@@ -0,0 +1,362 @@
package storage
import (
"context"
"encoding/json"
"fmt"
"slices"
"oc-datacenter/conf"
oclib "cloud.o-forge.io/core/oc-lib"
"cloud.o-forge.io/core/oc-lib/dbs"
bookingmodel "cloud.o-forge.io/core/oc-lib/models/booking"
"cloud.o-forge.io/core/oc-lib/models/live"
"cloud.o-forge.io/core/oc-lib/tools"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// MinioCredentialEvent is the NATS payload used to transfer Minio credentials between peers.
//
// Two-phase protocol over PROPALGATION_EVENT (Action = PB_MINIO_CONFIG):
// - Phase 1 role assignment (Access == ""):
// oc-discovery routes this to the SOURCE peer (Minio host) → InitializeAsSource.
// - Phase 2 credential delivery (Access != ""):
// oc-discovery routes this to the TARGET peer (compute host) → InitializeAsTarget.
type MinioCredentialEvent struct {
ExecutionsID string `json:"executions_id"`
MinioID string `json:"minio_id"`
Access string `json:"access"`
Secret string `json:"secret"`
SourcePeerID string `json:"source_peer_id"`
DestPeerID string `json:"dest_peer_id"`
URL string `json:"url"`
// OriginID is the peer that initiated the provisioning request.
// The PB_CONSIDERS response is routed back to this peer.
OriginID string `json:"origin_id"`
}
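// Illustrative only: the two phases are told apart by Access, mirroring the
// MINIO_CONFIG_EVENT handler in infrastructure/nats/nats.go:
//
//	if ev.Access == "" {
//		// Phase 1: role assignment → run InitializeAsSource (Minio host).
//	} else {
//		// Phase 2: credential delivery → run InitializeAsTarget (compute host).
//	}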
// minioConsidersPayload is the PB_CONSIDERS payload emitted after minio provisioning.
type minioConsidersPayload struct {
OriginID string `json:"origin_id"`
ExecutionsID string `json:"executions_id"`
Secret string `json:"secret,omitempty"`
Error *string `json:"error,omitempty"`
}
// MinioSetter carries the execution context for a Minio credential provisioning.
type MinioSetter struct {
ExecutionsID string // used as the K8s namespace and as the suffix of the bucket name
MinioID string // ID of the Minio storage resource
}
func NewMinioSetter(execID, minioID string) *MinioSetter {
return &MinioSetter{ExecutionsID: execID, MinioID: minioID}
}
// emitConsiders publishes a PB_CONSIDERS back to OriginID with the result of
// the minio provisioning. secret is the provisioned credential; err is nil on success.
// When self is true the origin is the local peer: emits directly on CONSIDERS_EVENT
// instead of routing through PROPALGATION_EVENT.
func (m *MinioSetter) emitConsiders(executionsID, originID, secret string, provErr error, self bool) {
fmt.Println("emitConsiders !")
var errStr *string
if provErr != nil {
s := provErr.Error()
errStr = &s
}
payload, _ := json.Marshal(minioConsidersPayload{
OriginID: originID,
ExecutionsID: executionsID,
Secret: secret,
Error: errStr,
})
if self {
go tools.NewNATSCaller().SetNATSPub(tools.CONSIDERS_EVENT, tools.NATSResponse{
FromApp: "oc-datacenter",
Datatype: tools.STORAGE_RESOURCE,
Method: int(tools.CONSIDERS_EVENT),
Payload: payload,
})
return
}
b, _ := json.Marshal(&tools.PropalgationMessage{
DataType: tools.STORAGE_RESOURCE.EnumIndex(),
Action: tools.PB_CONSIDERS,
Payload: payload,
})
go tools.NewNATSCaller().SetNATSPub(tools.PROPALGATION_EVENT, tools.NATSResponse{
FromApp: "oc-datacenter",
Datatype: -1,
Method: int(tools.PROPALGATION_EVENT),
Payload: b,
})
}
// InitializeAsSource is called on the peer that hosts the Minio instance.
//
// It:
// 1. Looks up the live-storage endpoint URL for MinioID.
// 2. Creates a scoped service account (access + secret limited to the execution bucket).
// 3. Creates the execution bucket.
// 4. If source and dest are the same peer, calls InitializeAsTarget directly.
// Otherwise, publishes a MinioCredentialEvent via NATS (Phase 2) so that
// oc-discovery can route the credentials to the compute peer.
func (m *MinioSetter) InitializeAsSource(ctx context.Context, localPeerID, destPeerID, originID string, self bool) {
logger := oclib.GetLogger()
url, err := m.loadMinioURL(localPeerID)
if err != nil {
logger.Error().Msg("MinioSetter.InitializeAsSource: " + err.Error())
return
}
service := NewMinioService(url)
if err := service.CreateClient(); err != nil {
logger.Error().Msg("MinioSetter.InitializeAsSource: failed to create admin client: " + err.Error())
return
}
access, secret, err := service.CreateCredentials(m.ExecutionsID)
if err != nil {
logger.Error().Msg("MinioSetter.InitializeAsSource: failed to create service account: " + err.Error())
return
}
if err := service.CreateBucket(m.MinioID, m.ExecutionsID); err != nil {
logger.Error().Msg("MinioSetter.InitializeAsSource: failed to create bucket: " + err.Error())
return
}
logger.Info().Msg("MinioSetter.InitializeAsSource: bucket and service account ready for " + m.ExecutionsID)
event := MinioCredentialEvent{
ExecutionsID: m.ExecutionsID,
MinioID: m.MinioID,
Access: access,
Secret: secret,
SourcePeerID: localPeerID,
DestPeerID: destPeerID,
OriginID: originID,
}
if destPeerID == localPeerID {
// Same peer: store the secret locally without going through NATS.
m.InitializeAsTarget(ctx, event, true)
return
}
// Cross-peer: publish credentials (Phase 2) so oc-discovery routes them to the compute peer.
payload, err := json.Marshal(event)
if err != nil {
logger.Error().Msg("MinioSetter.InitializeAsSource: failed to marshal credential event: " + err.Error())
return
}
if b, err := json.Marshal(&tools.PropalgationMessage{
DataType: -1,
Action: tools.PB_MINIO_CONFIG,
Payload: payload,
}); err == nil {
go tools.NewNATSCaller().SetNATSPub(tools.PROPALGATION_EVENT, tools.NATSResponse{
FromApp: "oc-datacenter",
Datatype: tools.STORAGE_RESOURCE,
User: "",
Method: int(tools.PROPALGATION_EVENT),
Payload: b,
})
logger.Info().Msg("MinioSetter.InitializeAsSource: credentials published via NATS for " + m.ExecutionsID)
}
}
// InitializeAsTarget is called on the peer that runs the compute workload.
//
// It stores the Minio credentials received from the source peer (via NATS or directly)
// as a Kubernetes secret inside the execution namespace, making them available to pods.
// self must be true when the origin peer is the local peer (direct CONSIDERS_EVENT emission).
func (m *MinioSetter) InitializeAsTarget(ctx context.Context, event MinioCredentialEvent, self bool) {
fmt.Println("InitializeAsTarget is Self :", self)
logger := oclib.GetLogger()
k, err := tools.NewKubernetesService(
conf.GetConfig().KubeHost+":"+conf.GetConfig().KubePort,
conf.GetConfig().KubeCA, conf.GetConfig().KubeCert, conf.GetConfig().KubeData,
)
if err != nil {
logger.Error().Msg("MinioSetter.InitializeAsTarget: failed to create k8s service: " + err.Error())
return
}
if err := k.CreateSecret(ctx, event.MinioID, event.ExecutionsID, event.Access, event.Secret); err != nil {
logger.Error().Msg("MinioSetter.InitializeAsTarget: failed to create k8s secret: " + err.Error())
m.emitConsiders(event.ExecutionsID, event.OriginID, "", err, self)
return
}
if err := NewMinioService(event.URL).CreateMinioConfigMap(event.MinioID, event.ExecutionsID, event.URL); err != nil {
logger.Error().Msg("MinioSetter.InitializeAsTarget: failed to create config map: " + err.Error())
m.emitConsiders(event.ExecutionsID, event.OriginID, "", err, self)
return
}
logger.Info().Msg("MinioSetter.InitializeAsTarget: Minio credentials stored in namespace " + event.ExecutionsID)
m.emitConsiders(event.ExecutionsID, event.OriginID, event.Secret, nil, self)
}
// MinioDeleteEvent is the NATS payload used to tear down Minio resources.
// It mirrors MinioCredentialEvent but carries the access key for revocation.
type MinioDeleteEvent struct {
ExecutionsID string `json:"executions_id"`
MinioID string `json:"minio_id"`
Access string `json:"access"` // service account access key to revoke on the Minio host
SourcePeerID string `json:"source_peer_id"`
DestPeerID string `json:"dest_peer_id"`
OriginID string `json:"origin_id"`
}
// TeardownAsTarget is called on the peer that runs the compute workload.
// It reads the stored access key from the K8s secret, then removes both the secret
// and the artifact-repository ConfigMap from the execution namespace.
// For same-peer deployments it chains TeardownAsSource directly with the
// recovered access key; for cross-peer deployments the revocation reaches the
// Minio host through the REMOVE_RESOURCE NATS handler, which runs
// TeardownAsSource there.
func (m *MinioSetter) TeardownAsTarget(ctx context.Context, event MinioDeleteEvent) {
logger := oclib.GetLogger()
k, err := tools.NewKubernetesService(
conf.GetConfig().KubeHost+":"+conf.GetConfig().KubePort,
conf.GetConfig().KubeCA, conf.GetConfig().KubeCert, conf.GetConfig().KubeData,
)
if err != nil {
logger.Error().Msg("MinioSetter.TeardownAsTarget: failed to create k8s service: " + err.Error())
m.emitConsiders(event.ExecutionsID, event.OriginID, "", err, event.SourcePeerID == event.DestPeerID)
return
}
// Read the access key from the K8s secret before deleting it.
accessKey := event.Access
if accessKey == "" {
if secret, err := k.Set.CoreV1().Secrets(event.ExecutionsID).Get(
ctx, event.MinioID+"-secret-s3", metav1.GetOptions{},
); err == nil {
accessKey = string(secret.Data["access-key"])
}
}
// Delete K8s credentials secret.
if err := k.Set.CoreV1().Secrets(event.ExecutionsID).Delete(
ctx, event.MinioID+"-secret-s3", metav1.DeleteOptions{},
); err != nil {
logger.Error().Msg("MinioSetter.TeardownAsTarget: failed to delete secret: " + err.Error())
}
// Delete artifact-repository ConfigMap.
if err := NewMinioService("").DeleteMinioConfigMap(event.MinioID, event.ExecutionsID); err != nil {
logger.Error().Msg("MinioSetter.TeardownAsTarget: failed to delete configmap: " + err.Error())
}
logger.Info().Msg("MinioSetter.TeardownAsTarget: K8s resources removed for " + event.ExecutionsID)
// For same-peer deployments the source cleanup runs directly here so the
// caller (REMOVE_EXECUTION handler) doesn't have to distinguish roles.
if event.SourcePeerID == event.DestPeerID {
event.Access = accessKey
m.TeardownAsSource(ctx, event)
}
}
// TeardownAsSource is called on the peer that hosts the Minio instance.
// It revokes the scoped service account and removes the execution bucket.
func (m *MinioSetter) TeardownAsSource(ctx context.Context, event MinioDeleteEvent) {
logger := oclib.GetLogger()
url, err := m.loadMinioURL(event.SourcePeerID)
if err != nil {
logger.Error().Msg("MinioSetter.TeardownAsSource: " + err.Error())
return
}
svc := NewMinioService(url)
if err := svc.CreateClient(); err != nil {
logger.Error().Msg("MinioSetter.TeardownAsSource: failed to create admin client: " + err.Error())
return
}
if event.Access != "" {
if err := svc.DeleteCredentials(event.Access); err != nil {
logger.Error().Msg("MinioSetter.TeardownAsSource: failed to delete service account: " + err.Error())
}
}
if err := svc.DeleteBucket(event.MinioID, event.ExecutionsID); err != nil {
logger.Error().Msg("MinioSetter.TeardownAsSource: failed to delete bucket: " + err.Error())
}
logger.Info().Msg("MinioSetter.TeardownAsSource: Minio resources removed for " + event.ExecutionsID)
}
// loadMinioURL searches through all live storages accessible by peerID to find
// the one that references MinioID, and returns its endpoint URL.
func (m *MinioSetter) loadMinioURL(peerID string) (string, error) {
res := oclib.NewRequest(oclib.LibDataEnum(oclib.LIVE_STORAGE), "", peerID, []string{}, nil).LoadAll(false)
if res.Err != "" {
return "", fmt.Errorf("loadMinioURL: failed to load live storages: %s", res.Err)
}
for _, dbo := range res.Data {
l := dbo.(*live.LiveStorage)
if slices.Contains(l.ResourcesID, m.MinioID) {
return l.Source, nil
}
}
return "", fmt.Errorf("loadMinioURL: no live storage found for minio ID %s", m.MinioID)
}
// TeardownForExecution tears down all Minio configuration for the execution:
// - storage bookings where this peer is the compute target → TeardownAsTarget
// - storage bookings where this peer is the Minio source → TeardownAsSource
func (m *MinioSetter) TeardownForExecution(ctx context.Context, localPeerID string) {
logger := oclib.GetLogger()
res := oclib.NewRequest(oclib.LibDataEnum(oclib.BOOKING), "", localPeerID, []string{}, nil).
Search(&dbs.Filters{
And: map[string][]dbs.Filter{
"executions_id": {{Operator: dbs.EQUAL.String(), Value: m.ExecutionsID}},
"resource_type": {{Operator: dbs.EQUAL.String(), Value: tools.LIVE_STORAGE.EnumIndex()}},
},
}, "", false)
if res.Err != "" || len(res.Data) == 0 {
return
}
for _, dbo := range res.Data {
b, ok := dbo.(*bookingmodel.Booking)
if !ok {
continue
}
if b.DestPeerID == localPeerID {
// This peer is the compute target: tear down K8s secret + configmap.
logger.Info().Msgf("InfraTeardown: Minio target teardown exec=%s storage=%s", m.ExecutionsID, b.ResourceID)
event := MinioDeleteEvent{
ExecutionsID: m.ExecutionsID,
MinioID: b.ResourceID,
SourcePeerID: b.DestPeerID,
DestPeerID: localPeerID,
OriginID: "",
}
m.TeardownAsTarget(ctx, event)
} else {
// This peer is the Minio source: revoke SA + remove execution bucket.
logger.Info().Msgf("InfraTeardown: Minio source teardown exec=%s storage=%s", m.ExecutionsID, b.ResourceID)
event := MinioDeleteEvent{
ExecutionsID: m.ExecutionsID,
MinioID: b.ResourceID,
SourcePeerID: localPeerID,
DestPeerID: b.DestPeerID,
OriginID: "",
}
m.TeardownAsSource(ctx, event)
}
}
}


@@ -0,0 +1,230 @@
package storage
import (
"context"
"encoding/json"
"fmt"
"slices"
"strings"
"oc-datacenter/conf"
oclib "cloud.o-forge.io/core/oc-lib"
"cloud.o-forge.io/core/oc-lib/dbs"
bookingmodel "cloud.o-forge.io/core/oc-lib/models/booking"
"cloud.o-forge.io/core/oc-lib/models/live"
"cloud.o-forge.io/core/oc-lib/tools"
)
// PVCProvisionEvent is the NATS payload for local PVC provisioning.
// Same-peer deployments are handled directly; cross-peer routes via PB_PVC_CONFIG.
type PVCProvisionEvent struct {
ExecutionsID string `json:"executions_id"`
StorageID string `json:"storage_id"`
StorageName string `json:"storage_name"`
SourcePeerID string `json:"source_peer_id"`
DestPeerID string `json:"dest_peer_id"`
OriginID string `json:"origin_id"`
}
// PVCDeleteEvent is the NATS payload for local PVC teardown.
type PVCDeleteEvent struct {
ExecutionsID string `json:"executions_id"`
StorageID string `json:"storage_id"`
StorageName string `json:"storage_name"`
SourcePeerID string `json:"source_peer_id"`
DestPeerID string `json:"dest_peer_id"`
OriginID string `json:"origin_id"`
}
// ClaimName returns the deterministic PVC name shared by oc-datacenter and oc-monitord.
func ClaimName(storageName, executionsID string) string {
return strings.ReplaceAll(strings.ToLower(storageName), " ", "-") + "-" + executionsID
}
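// For illustration (hypothetical inputs):
//	ClaimName("My Data Set", "exec-42") == "my-data-set-exec-42"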
// PVCSetter carries the execution context for a local PVC provisioning.
type PVCSetter struct {
ExecutionsID string
StorageID string
// ClaimSuffix overrides ExecutionsID as the suffix in ClaimName when non-empty.
// Used when the PVC namespace differs from the claim name suffix (Admiralty target).
ClaimSuffix string
}
func NewPVCSetter(execID, storageID string) *PVCSetter {
return &PVCSetter{ExecutionsID: execID, StorageID: storageID}
}
// NewPVCSetterWithClaimSuffix creates a PVCSetter where the claim name suffix
// differs from the execution namespace (e.g. Admiralty target provisioning).
func NewPVCSetterWithClaimSuffix(storageID, claimSuffix string) *PVCSetter {
return &PVCSetter{StorageID: storageID, ClaimSuffix: claimSuffix}
}
func (p *PVCSetter) emitConsiders(executionsID, originID string, provErr error, self bool) {
type pvcConsidersPayload struct {
OriginID string `json:"origin_id"`
ExecutionsID string `json:"executions_id"`
Error *string `json:"error,omitempty"`
}
var errStr *string
if provErr != nil {
s := provErr.Error()
errStr = &s
}
payload, _ := json.Marshal(pvcConsidersPayload{
OriginID: originID,
ExecutionsID: executionsID,
Error: errStr,
})
if self {
go tools.NewNATSCaller().SetNATSPub(tools.CONSIDERS_EVENT, tools.NATSResponse{
FromApp: "oc-datacenter",
Datatype: tools.STORAGE_RESOURCE,
Method: int(tools.CONSIDERS_EVENT),
Payload: payload,
})
return
}
b, _ := json.Marshal(&tools.PropalgationMessage{
DataType: tools.STORAGE_RESOURCE.EnumIndex(),
Action: tools.PB_CONSIDERS,
Payload: payload,
})
go tools.NewNATSCaller().SetNATSPub(tools.PROPALGATION_EVENT, tools.NATSResponse{
FromApp: "oc-datacenter",
Datatype: -1,
Method: int(tools.PROPALGATION_EVENT),
Payload: b,
})
}
// InitializeAsSource creates the PVC in the execution namespace on the local cluster.
// self must be true when source and dest are the same peer (direct CONSIDERS_EVENT emission).
func (p *PVCSetter) InitializeAsSource(ctx context.Context, event PVCProvisionEvent, self bool) {
logger := oclib.GetLogger()
sizeStr, err := p.loadStorageSize(event.SourcePeerID)
if err != nil {
logger.Error().Msg("PVCSetter.InitializeAsSource: " + err.Error())
p.emitConsiders(event.ExecutionsID, event.OriginID, err, self)
return
}
k, err := tools.NewKubernetesService(
conf.GetConfig().KubeHost+":"+conf.GetConfig().KubePort,
conf.GetConfig().KubeCA, conf.GetConfig().KubeCert, conf.GetConfig().KubeData,
)
if err != nil {
logger.Error().Msg("PVCSetter.InitializeAsSource: failed to create k8s service: " + err.Error())
p.emitConsiders(event.ExecutionsID, event.OriginID, err, self)
return
}
claimSuffix := event.ExecutionsID
if p.ClaimSuffix != "" {
claimSuffix = p.ClaimSuffix
}
claimName := ClaimName(event.StorageName, claimSuffix)
if err := k.CreatePVC(ctx, claimName, event.ExecutionsID, sizeStr); err != nil {
logger.Error().Msg("PVCSetter.InitializeAsSource: failed to create PVC: " + err.Error())
p.emitConsiders(event.ExecutionsID, event.OriginID, err, self)
return
}
logger.Info().Msg("PVCSetter.InitializeAsSource: PVC " + claimName + " created in " + event.ExecutionsID)
p.emitConsiders(event.ExecutionsID, event.OriginID, nil, self)
}
// TeardownAsSource deletes the PVC from the execution namespace.
func (p *PVCSetter) TeardownAsSource(ctx context.Context, event PVCDeleteEvent) {
logger := oclib.GetLogger()
k, err := tools.NewKubernetesService(
conf.GetConfig().KubeHost+":"+conf.GetConfig().KubePort,
conf.GetConfig().KubeCA, conf.GetConfig().KubeCert, conf.GetConfig().KubeData,
)
if err != nil {
logger.Error().Msg("PVCSetter.TeardownAsSource: failed to create k8s service: " + err.Error())
return
}
claimName := ClaimName(event.StorageName, event.ExecutionsID)
if err := k.DeletePVC(ctx, claimName, event.ExecutionsID); err != nil {
logger.Error().Msg("PVCSetter.TeardownAsSource: failed to delete PVC: " + err.Error())
return
}
logger.Info().Msg("PVCSetter.TeardownAsSource: PVC " + claimName + " deleted from " + event.ExecutionsID)
}
// ResolveStorageName returns the live storage name for a given storageID, or "" if not found.
func ResolveStorageName(storageID, peerID string) string {
res := oclib.NewRequest(oclib.LibDataEnum(oclib.LIVE_STORAGE), "", peerID, []string{}, nil).LoadAll(false)
if res.Err != "" {
return ""
}
for _, dbo := range res.Data {
l := dbo.(*live.LiveStorage)
if slices.Contains(l.ResourcesID, storageID) {
return l.GetName()
}
}
return ""
}
// loadStorageSize looks up the SizeGB for this storage in live storages,
// falling back to 10Gi when no matching live storage declares a positive size.
func (p *PVCSetter) loadStorageSize(peerID string) (string, error) {
res := oclib.NewRequest(oclib.LibDataEnum(oclib.LIVE_STORAGE), "", peerID, []string{}, nil).LoadAll(false)
if res.Err != "" {
return "", fmt.Errorf("loadStorageSize: %s", res.Err)
}
for _, dbo := range res.Data {
l := dbo.(*live.LiveStorage)
if slices.Contains(l.ResourcesID, p.StorageID) && l.SizeGB > 0 {
return fmt.Sprintf("%dGi", l.SizeGB), nil
}
}
return "10Gi", nil
}
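// For illustration: a live storage declaring SizeGB = 25 yields "25Gi";
// when no matching live storage declares a positive size, the PVC falls
// back to the hard-coded "10Gi" default.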
// TeardownForExecution deletes all local PVCs provisioned for the execution.
// It searches LIVE_STORAGE bookings and resolves the storage name via the live storage.
func (p *PVCSetter) TeardownForExecution(ctx context.Context, localPeerID string) {
logger := oclib.GetLogger()
res := oclib.NewRequest(oclib.LibDataEnum(oclib.BOOKING), "", localPeerID, []string{}, nil).
Search(&dbs.Filters{
And: map[string][]dbs.Filter{
"executions_id": {{Operator: dbs.EQUAL.String(), Value: p.ExecutionsID}},
"resource_type": {{Operator: dbs.EQUAL.String(), Value: tools.LIVE_STORAGE.EnumIndex()}},
},
}, "", false)
if res.Err != "" || len(res.Data) == 0 {
return
}
for _, dbo := range res.Data {
b, ok := dbo.(*bookingmodel.Booking)
if !ok {
continue
}
// Resolve storage name from live storage to compute the claim name.
storageName := ResolveStorageName(b.ResourceID, localPeerID)
if storageName == "" {
continue
}
logger.Info().Msgf("InfraTeardown: PVC teardown exec=%s storage=%s", p.ExecutionsID, b.ResourceID)
event := PVCDeleteEvent{
ExecutionsID: p.ExecutionsID,
StorageID: b.ResourceID,
StorageName: storageName,
SourcePeerID: localPeerID,
DestPeerID: b.DestPeerID,
OriginID: "",
}
p.StorageID = b.ResourceID
p.TeardownAsSource(ctx, event)
}
}

main.go

@@ -1,55 +1,38 @@
package main
import (
"encoding/base64"
"oc-datacenter/conf"
"oc-datacenter/infrastructure"
_ "oc-datacenter/routers"
oclib "cloud.o-forge.io/core/oc-lib"
"cloud.o-forge.io/core/oc-lib/tools"
beego "github.com/beego/beego/v2/server/web"
)
const appname = "oc-datacenter"
func main() {
// Init the oc-lib
oclib.Init(appname)
// Load the right config file
o := oclib.GetConfLoader()
o := oclib.GetConfLoader(appname)
conf.GetConfig().Mode = o.GetStringDefault("MODE", "kubernetes")
conf.GetConfig().KubeHost = o.GetStringDefault("KUBERNETES_SERVICE_HOST", "")
conf.GetConfig().KubeHost = o.GetStringDefault("KUBERNETES_SERVICE_HOST", "kubernetes.default.svc.cluster.local")
conf.GetConfig().KubePort = o.GetStringDefault("KUBERNETES_SERVICE_PORT", "6443")
conf.GetConfig().KubeExternalHost = o.GetStringDefault("KUBE_EXTERNAL_HOST", "")
sDec, err := base64.StdEncoding.DecodeString(o.GetStringDefault("KUBE_CA", ""))
if err == nil {
conf.GetConfig().KubeCA = string(sDec)
}
sDec, err = base64.StdEncoding.DecodeString(o.GetStringDefault("KUBE_CERT", ""))
if err == nil {
conf.GetConfig().KubeCert = string(sDec)
}
sDec, err = base64.StdEncoding.DecodeString(o.GetStringDefault("KUBE_DATA", ""))
if err == nil {
conf.GetConfig().KubeData = string(sDec)
}
// feed the library with the loaded config
oclib.SetConfig(
o.GetStringDefault("MONGO_URL", "mongodb://127.0.0.1:27017"),
o.GetStringDefault("MONGO_DATABASE", "DC_myDC"),
o.GetStringDefault("NATS_URL", "nats://localhost:4222"),
o.GetStringDefault("LOKI_URL", ""),
o.GetStringDefault("LOG_LEVEL", "info"),
)
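// NOTE: the base64-encoded defaults below embed a cluster CA, a client
// certificate, and an EC private key. They look like dev-cluster fallbacks;
// any real deployment should override KUBE_CA, KUBE_CERT and KUBE_DATA via
// the environment rather than ship these credentials.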
conf.GetConfig().KubeCA = o.GetStringDefault("KUBE_CA", "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkakNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUzTnpNeE1qY3dPVFl3SGhjTk1qWXdNekV3TURjeE9ERTJXaGNOTXpZd016QTNNRGN4T0RFMgpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUzTnpNeE1qY3dPVFl3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFReG81cXQ0MGxEekczRHJKTE1wRVBrd0ZBY1FmbC8vVE1iWjZzemMreHAKbmVzVzRTSTdXK1lWdFpRYklmV2xBMTRaazQvRFlDMHc1YlgxZU94RVVuL0pvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVXBLM2pGK25IRlZSbDcwb3ZRVGZnCmZabGNQZE13Q2dZSUtvWkl6ajBFQXdJRFJ3QXdSQUlnVnkyaUx0Y0xaYm1vTnVoVHdKbU5sWlo3RVlBYjJKNW0KSjJYbG1UbVF5a2tDSUhLbzczaDBkdEtUZTlSa0NXYTJNdStkS1FzOXRFU0tBV0x1emlnYXBHYysKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=")
conf.GetConfig().KubeCert = o.GetStringDefault("KUBE_CERT", "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrakNDQVRlZ0F3SUJBZ0lJQUkvSUg2R2Rodm93Q2dZSUtvWkl6ajBFQXdJd0l6RWhNQjhHQTFVRUF3d1kKYXpOekxXTnNhV1Z1ZEMxallVQXhOemN6TVRJM01EazJNQjRYRFRJMk1ETXhNREEzTVRneE5sb1hEVEkzTURNeApNREEzTVRneE5sb3dNREVYTUJVR0ExVUVDaE1PYzNsemRHVnRPbTFoYzNSbGNuTXhGVEFUQmdOVkJBTVRESE41CmMzUmxiVHBoWkcxcGJqQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJQTTdBVEZQSmFMMjUrdzAKUU1vZUIxV2hBRW4vWnViM0tSRERrYnowOFhwQWJ2akVpdmdnTkdpdG4wVmVsaEZHamRmNHpBT29Nd1J3M21kbgpYSGtHVDB5alNEQkdNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBakFmCkJnTlZIU01FR0RBV2dCUVZLOThaMEMxcFFyVFJSMGVLZHhIa2o0ejFJREFLQmdncWhrak9QUVFEQWdOSkFEQkcKQWlFQXZYWll6Zk9iSUtlWTRtclNsRmt4ZS80a0E4K01ieDc1UDFKRmNlRS8xdGNDSVFDNnM0ZXlZclhQYmNWSgpxZm5EamkrZ1RacGttN0tWSTZTYTlZN2FSRGFabUE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCi0tLS0tQkVHSU4gQ0VSVElGSUNBVEUtLS0tLQpNSUlCZURDQ0FSMmdBd0lCQWdJQkFEQUtCZ2dxaGtqT1BRUURBakFqTVNFd0h3WURWUVFEREJock0zTXRZMnhwClpXNTBMV05oUURFM056TXhNamN3T1RZd0hoY05Nall3TXpFd01EY3hPREUyV2hjTk16WXdNekEzTURjeE9ERTIKV2pBak1TRXdId1lEVlFRRERCaHJNM010WTJ4cFpXNTBMV05oUURFM056TXhNamN3T1RZd1dUQVRCZ2NxaGtqTwpQUUlCQmdncWhrak9QUU1CQndOQ0FBUzV1NGVJbStvVnV1SFI0aTZIOU1kVzlyUHdJbFVPNFhIMEJWaDRUTGNlCkNkMnRBbFVXUW5FakxMdlpDWlVaYTlzTlhKOUVtWWt5S0dtQWR2TE9FbUVrbzBJd1FEQU9CZ05WSFE4QkFmOEUKQkFNQ0FxUXdEd1lEVlIwVEFRSC9CQVV3QXdFQi96QWRCZ05WSFE0RUZnUVVGU3ZmR2RBdGFVSzAwVWRIaW5jUgo1SStNOVNBd0NnWUlLb1pJemowRUF3SURTUUF3UmdJaEFMY2xtQnR4TnpSVlBvV2hoVEVKSkM1Z3VNSGsvcFZpCjFvYXJ2UVJxTWRKcUFpRUEyR1dNTzlhZFFYTEQwbFZKdHZMVkc1M3I0M0lxMHpEUUQwbTExMVZyL1MwPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==")
conf.GetConfig().KubeData = o.GetStringDefault("KUBE_DATA", "LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUVkSTRZN3lRU1ZwRGNrblhsQmJEaXBWZHRMWEVsYVBkN3VBZHdBWFFya2xvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFOHpzQk1VOGxvdmJuN0RSQXloNEhWYUVBU2Y5bTV2Y3BFTU9SdlBUeGVrQnUrTVNLK0NBMAphSzJmUlY2V0VVYU4xL2pNQTZnekJIRGVaMmRjZVFaUFRBPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=")
// Beego init
beego.BConfig.AppName = appname
beego.BConfig.Listen.HTTPPort = o.GetIntDefault("port", 8080)
beego.BConfig.WebConfig.DirectoryIndex = true
beego.BConfig.WebConfig.StaticDir["/swagger"] = "swagger"
api := &tools.API{}
api.Discovered(beego.BeeApp.Handlers.GetAllControllerInfo())
conf.GetConfig().MonitorMode = o.GetStringDefault("MONITOR_MODE", "prometheus")
conf.GetConfig().MinioRootKey = o.GetStringDefault("MINIO_ADMIN_ACCESS", "")
conf.GetConfig().MinioRootSecret = o.GetStringDefault("MINIO_ADMIN_SECRET", "")
oclib.InitAPI(appname)
infrastructure.BootstrapAllowedImages()
go infrastructure.ListenNATS()
go infrastructure.WatchBookings()
go infrastructure.WatchInfra()
beego.Run()
}

oc-datacenter Executable file

Binary file not shown.

@@ -7,6 +7,42 @@ import (
func init() {
beego.GlobalControllerRouter["oc-datacenter/controllers:AdmiraltyController"] = append(beego.GlobalControllerRouter["oc-datacenter/controllers:AdmiraltyController"],
beego.ControllerComments{
Method: "GetKubeSecret",
Router: `/secret/:execution/:peer`,
AllowHTTPMethods: []string{"get"},
MethodParams: param.Make(),
Filters: nil,
Params: nil})
beego.GlobalControllerRouter["oc-datacenter/controllers:AdmiraltyController"] = append(beego.GlobalControllerRouter["oc-datacenter/controllers:AdmiraltyController"],
beego.ControllerComments{
Method: "GetAllTargets",
Router: `/targets`,
AllowHTTPMethods: []string{"get"},
MethodParams: param.Make(),
Filters: nil,
Params: nil})
beego.GlobalControllerRouter["oc-datacenter/controllers:AdmiraltyController"] = append(beego.GlobalControllerRouter["oc-datacenter/controllers:AdmiraltyController"],
beego.ControllerComments{
Method: "GetOneTarget",
Router: `/targets/:execution`,
AllowHTTPMethods: []string{"get"},
MethodParams: param.Make(),
Filters: nil,
Params: nil})
beego.GlobalControllerRouter["oc-datacenter/controllers:AdmiraltyController"] = append(beego.GlobalControllerRouter["oc-datacenter/controllers:AdmiraltyController"],
beego.ControllerComments{
Method: "DeleteAdmiraltySession",
Router: `/targets/:execution`,
AllowHTTPMethods: []string{"delete"},
MethodParams: param.Make(),
Filters: nil,
Params: nil})
beego.GlobalControllerRouter["oc-datacenter/controllers:BookingController"] = append(beego.GlobalControllerRouter["oc-datacenter/controllers:BookingController"],
beego.ControllerComments{
Method: "GetAll",
@@ -34,6 +70,15 @@ func init() {
Filters: nil,
Params: nil})
beego.GlobalControllerRouter["oc-datacenter/controllers:BookingController"] = append(beego.GlobalControllerRouter["oc-datacenter/controllers:BookingController"],
beego.ControllerComments{
Method: "Log",
Router: `/:id`,
AllowHTTPMethods: []string{"get"},
MethodParams: param.Make(),
Filters: nil,
Params: nil})
beego.GlobalControllerRouter["oc-datacenter/controllers:BookingController"] = append(beego.GlobalControllerRouter["oc-datacenter/controllers:BookingController"],
beego.ControllerComments{
Method: "Put",
@@ -52,6 +97,24 @@ func init() {
Filters: nil,
Params: nil})
beego.GlobalControllerRouter["oc-datacenter/controllers:BookingController"] = append(beego.GlobalControllerRouter["oc-datacenter/controllers:BookingController"],
beego.ControllerComments{
Method: "ExtendForExecution",
Router: `/extend/:resource_id/from_execution/:execution_id/to/:duration`,
AllowHTTPMethods: []string{"post"},
MethodParams: param.Make(),
Filters: nil,
Params: nil})
beego.GlobalControllerRouter["oc-datacenter/controllers:BookingController"] = append(beego.GlobalControllerRouter["oc-datacenter/controllers:BookingController"],
beego.ControllerComments{
Method: "ExtendForNamespace",
Router: `/extend/:resource_id/from_namespace/:namespace/to/:duration`,
AllowHTTPMethods: []string{"post"},
MethodParams: param.Make(),
Filters: nil,
Params: nil})
beego.GlobalControllerRouter["oc-datacenter/controllers:BookingController"] = append(beego.GlobalControllerRouter["oc-datacenter/controllers:BookingController"],
beego.ControllerComments{
Method: "Search",
@@ -88,6 +151,15 @@ func init() {
Filters: nil,
Params: nil})
beego.GlobalControllerRouter["oc-datacenter/controllers:MinioController"] = append(beego.GlobalControllerRouter["oc-datacenter/controllers:MinioController"],
beego.ControllerComments{
Method: "CreateServiceAccount",
Router: `/serviceaccount/:minioId/:executions`,
AllowHTTPMethods: []string{"post"},
MethodParams: param.Make(),
Filters: nil,
Params: nil})
beego.GlobalControllerRouter["oc-datacenter/controllers:SessionController"] = append(beego.GlobalControllerRouter["oc-datacenter/controllers:SessionController"],
beego.ControllerComments{
Method: "GetToken",
@@ -97,6 +169,15 @@ func init() {
Filters: nil,
Params: nil})
beego.GlobalControllerRouter["oc-datacenter/controllers:VectorController"] = append(beego.GlobalControllerRouter["oc-datacenter/controllers:VectorController"],
beego.ControllerComments{
Method: "Receive",
Router: `/`,
AllowHTTPMethods: []string{"post"},
MethodParams: param.Make(),
Filters: nil,
Params: nil})
beego.GlobalControllerRouter["oc-datacenter/controllers:VersionController"] = append(beego.GlobalControllerRouter["oc-datacenter/controllers:VersionController"],
beego.ControllerComments{
Method: "GetAll",


@@ -18,21 +18,22 @@ func init() {
beego.NSInclude(
&controllers.DatacenterController{},
),
beego.NSNamespace("/session",
beego.NSInclude(
&controllers.SessionController{},
),
),
beego.NSNamespace("/booking",
beego.NSInclude(
&controllers.BookingController{},
),
),
beego.NSNamespace("/version",
beego.NSInclude(
&controllers.VersionController{},
),
),
beego.NSNamespace("/allowed-image",
beego.NSInclude(
&controllers.AllowedImageController{},
),
),
)
beego.AddNamespace(ns)


@@ -39,7 +39,7 @@
window.onload = function() {
// Begin Swagger UI call region
const ui = SwaggerUIBundle({
url: "swagger.json",
url: "https://petstore.swagger.io/v2/swagger.json",
dom_id: '#swagger-ui',
deepLinking: true,
presets: [


@@ -37,6 +37,180 @@
}
}
},
"/admiralty/kubeconfig/{execution}": {
"get": {
"tags": [
"admiralty"
],
"parameters": [
{
"in": "path",
"name": "execution",
"description": "execution id of the workflow",
"required": true,
"type": "string"
}
],
"responses": {
"200": {
"description": ""
}
}
}
},
"/admiralty/node/{execution}": {
"get": {
"tags": [
"admiralty"
],
"parameters": [
{
"in": "path",
"name": "execution",
"description": "execution id of the workflow",
"required": true,
"type": "string"
}
],
"responses": {
"200": {
"description": ""
}
}
}
},
"/admiralty/secret/{execution}": {
"get": {
"tags": [
"admiralty"
],
"parameters": [
{
"in": "path",
"name": "execution",
"description": "execution id of the workflow",
"required": true,
"type": "string"
}
],
"responses": {
"200": {
"description": ""
}
}
},
"post": {
"tags": [
"admiralty"
],
"parameters": [
{
"in": "path",
"name": "execution",
"description": "execution id of the workflow",
"required": true,
"type": "string"
},
{
"in": "body",
"name": "kubeconfig",
"description": "Kubeconfig to use when creating secret",
"required": true,
"schema": {
"$ref": "#/definitions/controllers.RemoteKubeconfig"
}
}
],
"responses": {
"201": {
"description": ""
}
}
}
},
"/admiralty/source/{execution}": {
"post": {
"tags": [
"admiralty"
],
"description": "Create an Admiralty Source on remote cluster\n\u003cbr\u003e",
"operationId": "AdmiraltyController.CreateSource",
"parameters": [
{
"in": "path",
"name": "execution",
"description": "execution id of the workflow",
"required": true,
"type": "string"
}
],
"responses": {
"201": {
"description": ""
}
}
}
},
"/admiralty/target/{execution}": {
"post": {
"tags": [
"admiralty"
],
"description": "Create an Admiralty Target in the namespace associated to the executionID\n\u003cbr\u003e",
"operationId": "AdmiraltyController.CreateAdmiraltyTarget",
"parameters": [
{
"in": "path",
"name": "execution",
"description": "execution id of the workflow",
"required": true,
"type": "string"
}
],
"responses": {
"201": {
"description": ""
}
}
}
},
"/admiralty/targets": {
"get": {
"tags": [
"admiralty"
],
"description": "find all Admiralty Target\n\u003cbr\u003e",
"operationId": "AdmiraltyController.GetAllTargets",
"responses": {
"200": {
"description": ""
}
}
}
},
"/admiralty/targets/{execution}": {
"get": {
"tags": [
"admiralty"
],
"description": "find one Admiralty Target\n\u003cbr\u003e",
"operationId": "AdmiraltyController.GetOneTarget",
"parameters": [
{
"in": "path",
"name": "id",
"description": "the name of the target to get",
"required": true,
"type": "string"
}
],
"responses": {
"200": {
"description": ""
}
}
}
},
"/booking/": {
"get": {
"tags": [
@@ -342,6 +516,15 @@
}
},
"definitions": {
"controllers.RemoteKubeconfig": {
"title": "RemoteKubeconfig",
"type": "object",
"properties": {
"Data": {
"type": "string"
}
}
},
"models.compute": {
"title": "compute",
"type": "object"
@@ -363,6 +546,10 @@
{
"name": "version",
"description": "VersionController operations for Version\n"
},
{
"name": "admiralty",
"description": "Operations about the admiralty objects of the datacenter\n"
}
]
}


@@ -49,6 +49,125 @@ paths:
      responses:
        "200":
          description: '{booking} models.booking'
  /admiralty/kubeconfig/{execution}:
    get:
      tags:
      - admiralty
      parameters:
      - in: path
        name: execution
        description: execution id of the workflow
        required: true
        type: string
      responses:
        "200":
          description: ""
  /admiralty/node/{execution}:
    get:
      tags:
      - admiralty
      parameters:
      - in: path
        name: execution
        description: execution id of the workflow
        required: true
        type: string
      responses:
        "200":
          description: ""
  /admiralty/secret/{execution}:
    get:
      tags:
      - admiralty
      parameters:
      - in: path
        name: execution
        description: execution id of the workflow
        required: true
        type: string
      responses:
        "200":
          description: ""
    post:
      tags:
      - admiralty
      parameters:
      - in: path
        name: execution
        description: execution id of the workflow
        required: true
        type: string
      - in: body
        name: kubeconfig
        description: Kubeconfig to use when creating secret
        required: true
        schema:
          $ref: '#/definitions/controllers.RemoteKubeconfig'
      responses:
        "201":
          description: ""
  /admiralty/source/{execution}:
    post:
      tags:
      - admiralty
      description: |-
        Create an Admiralty Source on remote cluster
        <br>
      operationId: AdmiraltyController.CreateSource
      parameters:
      - in: path
        name: execution
        description: execution id of the workflow
        required: true
        type: string
      responses:
        "201":
          description: ""
  /admiralty/target/{execution}:
    post:
      tags:
      - admiralty
      description: |-
        Create an Admiralty Target in the namespace associated to the executionID
        <br>
      operationId: AdmiraltyController.CreateAdmiraltyTarget
      parameters:
      - in: path
        name: execution
        description: execution id of the workflow
        required: true
        type: string
      responses:
        "201":
          description: ""
  /admiralty/targets:
    get:
      tags:
      - admiralty
      description: |-
        find all Admiralty Target
        <br>
      operationId: AdmiraltyController.GetAllTargets
      responses:
        "200":
          description: ""
  /admiralty/targets/{execution}:
    get:
      tags:
      - admiralty
      description: |-
        find one Admiralty Target
        <br>
      operationId: AdmiraltyController.GetOneTarget
      parameters:
      - in: path
        name: execution
        description: execution id of the target to get
        required: true
        type: string
      responses:
        "200":
          description: ""
  /booking/:
    get:
      tags:
@@ -250,6 +369,12 @@ paths:
"200":
description: ""
definitions:
controllers.RemoteKubeconfig:
title: RemoteKubeconfig
type: object
properties:
Data:
type: string
models.compute:
title: compute
type: object
@@ -266,3 +391,6 @@ tags:
- name: version
  description: |
    VersionController operations for Version
- name: admiralty
  description: |
    Operations about the admiralty objects of the datacenter