54 Commits

| Author | SHA1 | Message | CI | Date |
|---|---|---|---|---|
| mr | 83cef6e6f6 | saved | | 2026-03-09 14:57:41 +01:00 |
| mr | 3751ec554d | Full Flow : Catalog + Peer | | 2026-03-05 15:22:02 +01:00 |
| mr | ef3d998ead | demo test + Peer | | 2026-03-03 16:38:24 +01:00 |
| mr | 79aa3cc2b3 | adjust | | 2026-02-26 09:14:34 +01:00 |
| mr | 779e36aaef | Pass + Doc | | 2026-02-24 14:31:37 +01:00 |
| mr | 572da29fd4 | check up if peer is sourced. | | 2026-02-20 15:01:01 +01:00 |
| mr | 3eae5791a1 | Native Indexer Mode | | 2026-02-20 12:42:18 +01:00 |
| mr | 88fd05066c | update lightest peer and nats behaviors | | 2026-02-18 14:32:44 +01:00 |
| mr | 0250c3b339 | Peer Discovery -> DHT // no more pubsub state | | 2026-02-18 13:29:50 +01:00 |
| mr | 6a5ffb9a92 | Indexer Quality Score TrustLess | | 2026-02-17 13:11:22 +01:00 |
| mr | fa914958b6 | Keep Peer Caching + Resource Verification. | | 2026-02-09 13:28:00 +01:00 |
| mr | 1c0b2b4312 | better tagging | | 2026-02-09 09:45:41 +01:00 |
| mr | 631e2846fe | remove apk | | 2026-02-09 08:55:50 +01:00 |
| mr | d985d8339a | Change of state Conn Management | | 2026-02-05 16:17:33 +01:00 |
| mr | ea14ad3933 | Closure On change of state | | 2026-02-05 16:17:14 +01:00 |
| mr | 2e31df89c2 | oc-discovery + auto create peer | | 2026-02-05 15:47:29 +01:00 |
| mr | 425cbdfe7d | stream address | | 2026-02-05 15:36:22 +01:00 |
| mr | 8ee5b84e21 | publish-registry | | 2026-02-05 12:14:02 +01:00 |
| mr | 552bb17e2b | Connectivity ok | | 2026-02-05 11:23:11 +01:00 |
| mr | 88e29073a2 | dockerfile default | | 2026-02-05 09:31:51 +01:00 |
| mr | b429ee9816 | base demo files | | 2026-02-05 08:57:00 +01:00 |
| mr | c716225283 | base demo | | 2026-02-05 08:56:55 +01:00 |
| mr | 3bc01c3a04 | Debug Spread Get Peer | | 2026-02-04 11:35:19 +01:00 |
| mr | 1ebbb54dd1 | compact conf | | 2026-02-03 16:21:29 +01:00 |
| mr | c958d106b7 | compact conf | | 2026-02-03 16:21:22 +01:00 |
| mr | 5442d625c6 | Bad Init | | 2026-02-03 15:47:28 +01:00 |
| mr | 60ed7048cd | Missing Config | | 2026-02-03 15:37:33 +01:00 |
| mr | 7b68a608dd | standardize Deployment | | 2026-02-03 15:31:06 +01:00 |
| mr | 1c2ea9ca96 | Set up -> Stream is Working | | 2026-02-03 15:25:15 +01:00 |
| mr | 0ff21c0818 | discovery | | 2026-02-03 08:47:22 +01:00 |
| mr | 6ca762abbf | Add logic service NATS Peer x Cache + Retrieve a lost peer partner. | | 2026-02-02 12:43:43 +01:00 |
| mr | 0ffe98045e | No Blacklisted + Hoping and diffused research | | 2026-02-02 12:14:01 +01:00 |
| mr | c3352499fa | First Starting debug | | 2026-02-02 09:05:58 +01:00 |
| mr | 562d86125e | FULL OC-DISCOVERY LOGIC | | 2026-01-30 16:57:36 +01:00 |
| mr | d50e5d56f7 | Update NATS | | 2026-01-28 17:31:34 +01:00 |
| mr | 38cd862947 | Daemon Search Then | | 2026-01-28 17:22:42 +01:00 |
| mr | 7fd258dc9d | Daemons Search | | 2026-01-28 17:22:29 +01:00 |
| ycc | 0ed2fc0f15 | test | push: passing | 2025-11-12 10:55:21 +01:00 |
| yc | ea5320d4be | pack without swagger, temporary | push: failing | 2025-11-12 10:36:26 +01:00 |
| yc | 25184deecb | fix sed | push: failing | 2025-11-12 10:16:36 +01:00 |
| yc | 27379cb392 | Multiarch fix | push: failing | 2025-11-12 10:11:09 +01:00 |
| yc | 66d228f143 | basic test update | push: failing | 2025-11-12 10:03:15 +01:00 |
| yc | f1444c8046 | Golang version removal | push: failing | 2025-11-12 09:57:24 +01:00 |
| yc | 0aef66207f | Multiarch fix | push: failing | 2025-11-12 09:55:04 +01:00 |
| yc | 0d9d7c9931 | multiple arch CI build | push: error | 2025-11-12 09:51:36 +01:00 |
| ycc | 90c24a9e05 | Dockerfile upgrade for multi target | push: failing | 2025-11-12 09:14:13 +01:00 |
| | fdeb933e26 | Update .drone.yml | push: killed; drone: passing | 2025-10-21 16:49:38 +02:00 |
| | 5aae779e74 | Update docker-compose.yml | push: failing; drone: failing | 2025-10-21 15:51:16 +02:00 |
| | 3731d48f81 | Update .drone.yml | push: error; drone: failing | 2025-10-21 14:33:07 +02:00 |
| pb | 4915b3e4cf | Update go.mod and go.sum files | drone: error | 2025-08-12 16:22:56 +02:00 |
| plm | 50758a7efe | k8s integration | | 2025-01-15 16:32:12 +01:00 |
| ycc | 2f2a3bf250 | fix port, models.GetConfig assignation to be cheked | | 2024-10-16 08:42:51 +02:00 |
| ycc | 6dddb43590 | license update | | 2024-10-03 09:44:32 +02:00 |
| ycc | af4b62f63b | license update | | 2024-10-03 09:44:30 +02:00 |
105 changed files with 10322 additions and 1086 deletions

`.drone.yml`

```diff
@@ -3,18 +3,37 @@ kind: pipeline
 name: unit
 steps:
-- name: build
-  image: golang
-  commands:
-  - go test
-  - go build
-- name: publish
-  image: plugins/docker
-  settings:
-    username:
-      from_secret: docker-user
-    password:
-      from_secret: docker-pw
-    repo:
-      from_secret: docker-repo
+# -------------------- tests (host arch only) --------------------
+- name: test
+  image: golang:alpine
+  pull: if-not-exists
+  commands:
+  - go test ./...
+
+# -------------------- build + push multi-arch image --------------------
+- name: publish
+  image: plugins/docker:latest
+  settings:
+    username:
+      from_secret: docker-user
+    password:
+      from_secret: docker-pw
+    #repo:
+    #  from_secret: docker-repo
+    repo: opencloudregistry/oc-discovery
+    # build context & dockerfile
+    context: .
+    dockerfile: Dockerfile
+    # enable buildx / multi-arch
+    buildx: true
+    platforms:
+    - linux/amd64
+    - linux/arm64
+    - linux/arm/v7
+    # tags to push (all as a single multi-arch manifest)
+    tags:
+    - latest
```

`ARCHITECTURE.md` (new file, 495 lines)

@@ -0,0 +1,495 @@
# oc-discovery: Architecture and technical analysis
> **Reading convention**
> Items marked ✅ have been fixed in the code. Items marked ⚠️ remain open.
## Table of contents
1. [Overview](#1-overview)
2. [Role hierarchy](#2-role-hierarchy)
3. [Core mechanisms](#3-core-mechanisms)
 - 3.1 Long-lived heartbeat (node → indexer)
 - 3.2 Trust scoring
 - 3.3 Registration with natives (indexer → native)
 - 3.4 Indexer pool: fetch + consensus
 - 3.5 Self-delegation and offload loop
 - 3.6 Native mesh resilience
 - 3.7 Shared DHT
 - 3.8 PubSub gossip (indexer registry)
 - 3.9 Application streams (node ↔ node)
4. [Summary table](#4-summary-table)
5. [Global risks and limitations](#5-global-risks-and-limitations)
6. [Improvement ideas](#6-improvement-ideas)
---
## 1. Overview
`oc-discovery` is a P2P discovery service for the OpenCloud network. It is built on
**libp2p** (TCP transport + private-network PSK) and a **Kademlia DHT** (prefix `oc`)
to index peers. The architecture is deliberately hierarchical: stable _natives_
act as authoritative hubs that _indexers_ register with, and ordinary _nodes_
discover indexers through those natives.
```
┌──────────────┐       heartbeat        ┌──────────────────┐
│     Node     │ ─────────────────────► │     Indexer      │
│   (libp2p)   │ ◄───────────────────── │   (DHT server)   │
└──────────────┘   application stream   └────────┬─────────┘
                                                 │ subscribe / heartbeat
                                                 ▼
                                        ┌──────────────────┐
                                        │  Native Indexer  │ ◄──► other natives
                                        │  (authoritative  │      (mesh)
                                        │      hub)        │
                                        └──────────────────┘
```
All participants share a **pre-shared key (PSK)** that isolates the network
from unauthorized external libp2p connections.
---
## 2. Role hierarchy
| Role | Binary | Responsibility |
|---|---|---|
| **Node** | `node_mode=node` | Gets indexed, publishes and queries DHT records |
| **Indexer** | `node_mode=indexer` | Receives heartbeats, writes to the DHT, registers with natives |
| **Native Indexer** | `node_mode=native` | Hub: keeps the registry of live indexers, runs consensus, serves as fallback |
A single process can combine the node+indexer or indexer+native roles.
---
## 3. Core mechanisms
### 3.1 Long-lived heartbeat (node → indexer)
**How it works**
A **persistent** libp2p stream (`/opencloud/heartbeat/1.0`) is opened from the node
to each indexer in its pool (`StaticIndexers`). Every 20 seconds the node sends a
JSON `Heartbeat` on this stream. The indexer responds by recording the peer in
`StreamRecords[ProtocolHeartbeat]` with a 2-minute expiry.
If `sendHeartbeat` fails (stream reset, EOF, timeout), the peer is removed from
`StaticIndexers` and `replenishIndexersFromNative` is triggered.
**Advantages**
- Fast disconnection detection (error on the next encode).
- A single stream per peer reduces pressure on TCP connections.
- The nudge channel (`indexerHeartbeatNudge`) allows an immediate reconnect without
  waiting for the 20 s ticker.
**Limitations / risks**
- ⚠️ A single persistent stream: if the TCP layer stays open but "frozen" (middlebox,
  silent NAT), the error may not surface for several minutes.
- ⚠️ `StaticIndexers` is a shared global map: if two goroutines call
  `replenishIndexersFromNative` simultaneously (multiple-loss case), concurrent
  unprotected writes can occur outside the critical sections.
---
### 3.2 Trust scoring
**How it works**
Before recording a heartbeat in `StreamRecords`, the indexer checks a **minimum
score** computed by `CheckHeartbeat`:
```
Score = (0.4 × uptime_ratio + 0.4 × bpms + 0.2 × diversity) × 100
```
- `uptime_ratio`: how long the peer has been present / time since the indexer started.
- `bpms`: throughput measured over a dedicated stream (`/opencloud/probe/1.0`), normalized by 50 Mbps.
- `diversity`: ratio of distinct /24 IP prefixes among the indexers the peer declares.
Two thresholds apply depending on the peer's state:
- **First heartbeat** (peer absent from `StreamRecords`, uptime = 0): threshold of **40**.
- **Subsequent heartbeats** (accumulated uptime): threshold of **75**.
**Advantages**
- Discourages ephemeral or slow peers from cluttering the registry.
- Network diversity reduces the risk of concentration in a single subnet.
- The dedicated probe stream avoids polluting the JSON heartbeat stream with binary data.
- The dual threshold lets new peers be admitted on their very first connection.
**Limitations / risks**
- ✅ **Startup scoring deadlock fixed**: with uptime = 0 the maximum score was 60,
  below the threshold of 75. New peers were silently rejected forever.
  → Threshold lowered to **40** for the first heartbeat (`isFirstHeartbeat`), 75 afterwards.
- ⚠️ The thresholds (40 / 75) are still hard-coded, with no configuration option.
- ⚠️ The bandwidth probe sends between 512 and 2048 bytes per heartbeat: at a 20 s
  interval and up to 500 nodes, that is ~50 KB/s of continuous probe traffic.
- ⚠️ `diversity` is computed from the addresses the node *declares*: this field is
  self-reported and unverified, and therefore easy to forge.
---
### 3.3 Registration with natives (indexer → native)
**How it works**
Each (non-native) indexer periodically (every 60 s) sends an
`IndexerRegistration` JSON over a one-shot stream (`/opencloud/native/subscribe/1.0`)
to each configured native. The native:
1. Stores the entry in its local cache with a **90 s** TTL (`IndexerTTL`).
2. Gossips the `PeerID` on the PubSub topic `oc-indexer-registry` to the other natives.
3. Persists the entry to the DHT asynchronously (retrying until success).
**Advantages**
- Throwaway stream: no long-lived resource on the native side for registrations.
- The local cache is immediately available to `handleNativeGetIndexers` without
  waiting for the DHT.
- PubSub dissemination lets other natives learn about the indexer without it
  having to register with them directly.
**Limitations / risks**
- ✅ **Overly tight TTL fixed**: the 66 s TTL was only 10% above the 60 s interval,
  so a slight network delay could expire a healthy indexer between two renewals.
  `IndexerTTL` raised to **90 s** (+50%).
- ⚠️ If the DHT `PutValue` fails permanently (partitioned network), the native holds
  the entry, but natives that missed the PubSub message never learn of it:
  a silent inconsistency.
- ⚠️ `RegisterWithNative` skips `127.0.0.1` addresses but does not handle private
  (RFC 1918) addresses, which would be unroutable from other hosts.
---
### 3.4 Indexer pool: fetch + consensus
**How it works**
During `ConnectToNatives` (startup or replenish), the node/indexer:
1. **Fetch**: sends a `GetIndexersRequest` to the first responding native
   (`/opencloud/native/indexers/1.0`) and receives a list of candidates.
2. **Consensus (round 1)**: queries **all** configured natives in parallel
   (`/opencloud/native/consensus/1.0`, 3 s timeout, 4 s collection window).
   An indexer is confirmed if **strictly more than 50%** of the responding natives
   consider it alive.
3. **Consensus (round 2)**: if the pool is still too small, the natives'
   suggestions (indexers they know that were not among the initial candidates)
   go through a second round.
**Advantages**
- The absolute-majority rule prevents a compromised or out-of-sync native from
  injecting phantom indexers.
- The second round fills out the pool with alternatives known to the natives
  without giving up verification.
- If the fetch returns a **fallback** (a native acting as indexer), consensus is
  skipped, which is consistent since there is only one source.
**Limitations / risks**
- ⚠️ With **a single native** configured (very common in dev/test), the consensus is
  trivial (100% of one vote): the majority rule protects nothing in that case.
- ⚠️ `fetchIndexersFromNative` stops at the **first responding native** (sequentially):
  if that native has a stale or partial cache, the node gets a suboptimal pool
  without consulting the others.
- ⚠️ The global collection timeout (4 s) is fixed: on a slow or geographically
  distributed network, valid natives can be eliminated simply for answering late.
- ⚠️ `replaceStaticIndexers` only **adds**, never removing stale indexers:
  the pool can accumulate dead entries that only the heartbeat later purges.
---
### 3.5 Self-delegation and offload loop
**How it works**
If a native has no live indexer when serving `handleNativeGetIndexers`, it
designates itself as a temporary indexer (`selfDelegate`): it returns its own
multiaddr and adds the requester to `responsiblePeers`, up to
`maxFallbackPeers` (50). Beyond that, the delegation is refused and an empty
response is returned so the node tries another native.
Every 30 s, `runOffloadLoop` checks whether real indexers are available again.
If so, for each responsible peer:
- **Stream present**: `Reset()` on the heartbeat stream; the peer gets an error,
  triggers `replenishIndexersFromNative`, and migrates to real indexers.
- **Stream absent** (peer never admitted by the scoring): `ClosePeer()` on the
  network connection; the peer reconnects and asks the native for indexers again.
**Advantages**
- Service continuity: a node is never stuck during a temporary lack of indexers.
- Migration is automatic and transparent for the node.
- `Reset()` (vs `Close()`) aborts both directions of the stream, guaranteeing the
  peer actually sees an error.
- The limit of 50 keeps the native from being overloaded during prolonged shortages.
**Limitations / risks**
- ✅ **Offload without a stream fixed**: if the heartbeat had never been recorded in
  `StreamRecords` (score below threshold, a case amplified by the scoring bug), the
  offload failed silently and the peer stayed in `responsiblePeers` indefinitely.
  → `else` branch added: `ClosePeer()` + removal from `responsiblePeers`.
- ✅ **Unbounded `responsiblePeers` fixed**: the native accepted an arbitrary number
  of peers in self-delegation, itself becoming an overloaded indexer.
  → `selfDelegate` checks `len(responsiblePeers) >= maxFallbackPeers` and returns
  `false` when saturated.
- ⚠️ Delegation remains uncoordinated between natives: an overloaded native refuses
  (returns empty) but does not explicitly redirect to a neighboring native with
  spare capacity.
---
### 3.6 Native mesh resilience
**How it works**
When the heartbeat to a native fails, `replenishNativesFromPeers` looks for a
replacement in this order:
1. `fetchNativeFromNatives`: asks each live native (`/opencloud/native/peers/1.0`)
   for the address of an unknown native.
2. `fetchNativeFromIndexers`: asks each known indexer
   (`/opencloud/indexer/natives/1.0`) for its configured natives.
3. If no replacement is found and `remaining ≤ 1`: `retryLostNative` starts a 30 s
   ticker that keeps retrying a direct connection to the lost native.
`EnsureNativePeers` maintains native-to-native heartbeats via `ProtocolHeartbeat`,
with a **single goroutine** covering the whole `StaticNatives` map.
**Advantages**
- Multi-hop gossip via indexers can recover a native even when no direct peer
  knows it.
- `retryLostNative` handles the single-native case (minimal deployment).
- The automatic reconnection (`retryLostNative`) also triggers
  `replenishIndexersIfNeeded` to restore the indexer pool.
**Limitations / risks**
- ✅ **Multiple heartbeat goroutines fixed**: `EnsureNativePeers` started one
  `SendHeartbeat` goroutine per native address (N natives → N goroutines → N²
  heartbeats per tick). `nativeMeshHeartbeatOnce` now ensures a single goroutine
  iterates over `StaticNatives`.
- ⚠️ `retryLostNative` runs forever with no stop condition tied to the process
  lifetime (no `context.Context`). On a graceful shutdown, this goroutine can hang.
- ⚠️ Transitive discovery (native → indexer → native) is one-way: an indexer only
  knows the natives from its own config, not natives that joined after it started.
---
### 3.7 Shared DHT
**How it works**
All indexers and natives participate in a Kademlia DHT (prefix `oc`,
`ModeServer`). Two namespaces are used:
- `/node/<DID>` → signed `PeerRecord` JSON (published by indexers on node heartbeats).
- `/indexer/<PeerID>` → `liveIndexerEntry` JSON with a TTL (published by natives).
Each native runs `refreshIndexersFromDHT` (every 30 s), which rehydrates its local
cache from the DHT for known PeerIDs (`knownPeerIDs`) whose local entry has expired.
**Advantages**
- Decentralized persistence: a record survives the loss of a single native or indexer.
- Entry validation: `PeerRecordValidator` and `IndexerRecordValidator` reject
  malformed or expired records at `PutValue` time.
- The secondary index `/name/<name>` enables resolution by human-readable name.
**Limitations / risks**
- ⚠️ The Kademlia DHT over a private (PSK) network works, but bootstrap nodes are not
  configured explicitly: discovery depends on already-established connections, which
  can slow convergence at startup.
- ⚠️ `PutValue` is retried in an infinite loop on `"failed to find any peer in table"`:
  a prolonged network outage accumulates blocked goroutines.
- ⚠️ If the PSK is compromised, an attacker can write to the DHT; indexer
  `liveIndexerEntry` records are not signed, unlike `PeerRecord`s.
- ⚠️ `refreshIndexersFromDHT` prunes `knownPeerIDs` when the DHT has no fresh entry,
  but does not prune `liveIndexers`: an expired entry stays in memory until the GC
  or the next refresh.
---
### 3.8 PubSub gossip (indexer registry)
**Fonctionnement**
Quand un indexeur s'enregistre auprès d'un natif, ce dernier publie l'adresse sur le
topic GossipSub `oc-indexer-registry`. Les autres natifs abonnés mettent à jour leur
`knownPeerIDs` sans attendre la DHT.
Le `TopicValidator` rejette tout message dont le contenu n'est pas un multiaddr
parseable valide avant qu'il n'atteigne la boucle de traitement.
**Avantages**
- Dissémination quasi-instantanée entre natifs connectés.
- Complément utile à la DHT pour les registrations récentes qui n'ont pas encore
été persistées.
- Le filtre syntaxique bloque les messages malformés avant propagation dans le mesh.
**Limites / risques**
- **`TopicValidator` sans validation corrigé** : le validateur acceptait systématiquement
tous les messages (`return true`), permettant à un natif compromis de gossiper
n'importe quelle donnée.
Le validateur vérifie désormais que le message est un multiaddr parseable
(`pp.AddrInfoFromString`).
- La validation reste syntaxique uniquement : l'origine du message (l'émetteur
est-il un natif légitime ?) n'est pas vérifiée.
- Si le natif redémarre, il perd son abonnement et manque les messages publiés
pendant son absence. La re-hydratation depuis la DHT compense, mais avec un délai
pouvant aller jusqu'à 30 s.
- Le gossip ne porte que le `Addr` de l'indexeur, pas sa TTL ni sa signature.
---
### 3.9 Application streams (node ↔ node)
**How it works**
`StreamService` manages streams between partner nodes (`PARTNER` relations stored
in the database) via dedicated protocols (`/opencloud/resource/*`). A partner
heartbeat (`ProtocolHeartbeatPartner`) keeps connections alive. Events are routed
through `handleEvent` and the NATS system in parallel.
**Advantages**
- Per-protocol TTLs (`PersistantStream`, `WaitResponse`) adapt behavior to the type
  of exchange (long-lived for the planner, short for CRUD).
- The GC (`gc()` every 8 s, started exactly once in `InitStream`) quickly frees
  expired streams.
**Limitations / risks**
- ✅ **GC goroutine leak fixed**: `HandlePartnerHeartbeat` called
  `go s.StartGC(30s)` on every received heartbeat (~every 20 s), spawning a new
  endless ticker goroutine each time. The call was removed; the GC started by
  `InitStream` is sufficient.
- ✅ **Infinite loop on EOF fixed**: `readLoop` did `s.Stream.Close(); continue`
  after a decode error, endlessly retrying reads on a closed stream.
  → Replaced with `return`; the defers (`Close`, `delete`) clean up correctly.
- ⚠️ Recovering partners from `conf.PeerIDS` is marked `TO REMOVE`:
  provisional code is present in production.
---
## 4. Summary table
| Mechanism | Protocol | Main advantage | Risk status |
|---|---|---|---|
| Node→indexer heartbeat | `/opencloud/heartbeat/1.0` | Fast loss detection | ⚠️ Frozen TCP stream undetected |
| Trust scoring | (inline in heartbeat) | Filters unstable peers | ✅ Deadlock fixed (40/75 thresholds) |
| Native registration | `/opencloud/native/subscribe/1.0` | Ample TTL, immediate cache | ✅ TTL raised to 90 s |
| Indexer pool fetch | `/opencloud/native/indexers/1.0` | Takes the first responding native | ⚠️ Stale native cache possible |
| Consensus | `/opencloud/native/consensus/1.0` | Absolute majority | ⚠️ Trivial with a single native |
| Self-delegation + offload | (in-memory) | Availability without indexers | ✅ 50-peer cap + ClosePeer |
| Native mesh | `/opencloud/native/peers/1.0` | Multi-hop gossip | ✅ Goroutines deduplicated |
| DHT | `/oc/kad/1.0.0` | Decentralized persistence | ⚠️ Infinite retry, no bootstrap |
| PubSub registry | `oc-indexer-registry` | Fast dissemination | ✅ Multiaddr validation |
| Application streams | `/opencloud/resource/*` | Per-protocol TTL | ✅ GC leak + EOF loop fixed |
---
## 5. Global risks and limitations
### Security
- ⚠️ **Unverified self-reported addresses**: the `IndexersBinded` field in the
  heartbeat is self-declared by the node and feeds the diversity score. A malicious
  peer can inflate its score by declaring fake addresses.
- ⚠️ **PSK as the only entry barrier**: if the PSK is compromised (it is static and
  file-based), all network isolation is gone. There is no key rotation and no
  additional per-peer authentication.
- ⚠️ **No ACL on DHT indexer entries**: `PeerRecord` signatures are verified on read,
  but `liveIndexerEntry` records are unsigned. PubSub validation blocks invalid
  multiaddrs but not spoofed addresses of legitimate indexers.
### Availability
- ⚠️ **Native single point of failure**: with one native, losing it stops all indexer
  assignment. `retryLostNative` mitigates this, but without indexers, nodes cannot
  publish.
- ⚠️ **DHT bootstrap**: without explicit bootstrap nodes, the DHT is slow to converge
  when initial connections are scarce.
### Consistency
- ⚠️ **`replaceStaticIndexers` never removes**: dead indexers stay in
  `StaticIndexers` until their heartbeat fails. A node can carry an inflated pool
  containing unreachable entries.
- ⚠️ **Global `TimeWatcher`**: set once when `ConnectToIndexers` starts. If the
  indexer has been running for a long time, new nodes will have a persistently low
  `uptime_ratio`. The 40 threshold for the first heartbeat softens the initial
  impact, but subsequent heartbeats must still accumulate enough uptime.
---
## 6. Improvement ideas
Ideas already implemented are marked ✅. Open ideas remain to be addressed.
### ✅ Scoring: dual threshold for new peers
~~Replace the single binary threshold~~ → **Implemented**: threshold of 40 for the
first heartbeat (peer absent from `StreamRecords`), 75 afterwards. A peer can now
be admitted on its first connection without being blocked by zero uptime.
_File: `common/common_stream.go`, `CheckHeartbeat`_
### ✅ Indexer TTL aligned with the renewal interval
~~66 s TTL too close to 60 s~~ → **Implemented**: `IndexerTTL` raised to **90 s**.
_File: `indexer/native.go`_
### ✅ Self-delegation cap
~~Unbounded `responsiblePeers`~~ → **Implemented**: `selfDelegate` returns `false`
when `len(responsiblePeers) >= maxFallbackPeers` (50). The call site returns an
empty response and logs a warning.
_File: `indexer/native.go`_
### ✅ PubSub validation of gossiped addresses
~~`TopicValidator` accepts everything~~ → **Implemented**: the validator checks
that the message is a parseable multiaddr via `pp.AddrInfoFromString`.
_File: `indexer/native.go`, `subscribeIndexerRegistry`_
### ✅ Deduplicated heartbeat goroutines in `EnsureNativePeers`
~~One goroutine per native address~~ → **Implemented**: `nativeMeshHeartbeatOnce`
guarantees a single `SendHeartbeat` goroutine covers the whole `StaticNatives` map.
_File: `common/native_stream.go`_
### ✅ GC goroutine leak in `HandlePartnerHeartbeat`
~~`go s.StartGC(30s)` on every heartbeat~~ → **Implemented**: call removed; the GC
from `InitStream` is sufficient.
_File: `stream/service.go`_
### ✅ Infinite loop on EOF in `readLoop`
~~`continue` after `Stream.Close()`~~ → **Implemented**: replaced with `return` so
the defers clean up properly.
_File: `stream/service.go`_
---
---
### ⚠️ Pool fetch: query all natives in parallel
`fetchIndexersFromNative` stops at the first responding native. Querying all
natives in parallel and merging the lists (similar to `clientSideConsensus`) would
keep a native with a stale cache from supplying a suboptimal pool.
### ⚠️ Consensus with a configurable quorum
The confirmation threshold (`count*2 > total`) is hard-coded. Making it
configurable (e.g. `consensus_quorum: 0.67`) would allow tightening the rule on
deployments with 3+ natives without touching the code.
### ⚠️ Explicit deregistration
Add a `/opencloud/native/unsubscribe/1.0` protocol: when an indexer shuts down
cleanly, it notifies the natives so its TTL is invalidated immediately instead of
waiting up to 90 s.
### ⚠️ Explicit DHT bootstrap
Configure the natives as DHT bootstrap nodes via `dht.BootstrapPeers` to speed up
Kademlia convergence at startup.
### ⚠️ Context propagation in long-lived goroutines
`retryLostNative`, `refreshIndexersFromDHT`, and `runOffloadLoop` receive no
`context.Context`. Passing one down from `InitNative` would allow a clean stop on
process shutdown.
### ⚠️ Explicit redirection when refusing self-delegation
When a native refuses self-delegation (saturated pool), returning empty forces the
node to retry blindly. Including a list of alternative natives in the response
(`AlternativeNatives []string`) would let the node go straight to a less-loaded
native.

`Dockerfile`

@@ -1,31 +1,62 @@
FROM golang:alpine as builder # ========================
# Global build arguments
# ========================
ARG CONF_NUM
WORKDIR /app # ========================
# Dependencies stage
COPY . . # ========================
FROM golang:alpine AS deps
RUN apk add git ARG CONF_NUM
RUN go get github.com/beego/bee/v2 && go install github.com/beego/bee/v2@master
RUN timeout 15 bee run -gendoc=true -downdoc=true -runmode=dev || :
RUN sed -i 's/http:\/\/127.0.0.1:8080\/swagger\/swagger.json/swagger.json/g' swagger/index.html
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags="-w -s" .
RUN ls /app
FROM scratch
WORKDIR /app WORKDIR /app
COPY --from=builder /app/oc-discovery /usr/bin/ COPY go.mod go.sum ./
COPY --from=builder /app/swagger /app/swagger RUN sed -i '/replace/d' go.mod
COPY peers.json /app/ RUN go mod download
COPY identity.json /app/
COPY docker_discovery.json /etc/oc/discovery.json
EXPOSE 8080 # ========================
# Builder stage
# ========================
FROM golang:alpine AS builder
ARG CONF_NUM
ENTRYPOINT ["oc-discovery"] WORKDIR /oc-discovery
# Reuse Go cache
COPY --from=deps /go/pkg /go/pkg
COPY --from=deps /app/go.mod /app/go.sum ./
# App sources
COPY . .
# Clean replace directives again (safety)
RUN sed -i '/replace/d' go.mod
# Build package
RUN go install github.com/beego/bee/v2@latest
RUN bee pack
# Extract bundle
RUN mkdir -p /app/extracted \
&& tar -zxvf oc-discovery.tar.gz -C /app/extracted
# ========================
# Runtime stage
# ========================
FROM golang:alpine
ARG CONF_NUM
WORKDIR /app
RUN mkdir ./pem
COPY --from=builder /app/extracted/pem/private${CONF_NUM:-1}.pem ./pem/private.pem
COPY --from=builder /app/extracted/psk ./psk
COPY --from=builder /app/extracted/pem/public${CONF_NUM:-1}.pem ./pem/public.pem
COPY --from=builder /app/extracted/oc-discovery /usr/bin/oc-discovery
COPY --from=builder /app/extracted/docker_discovery${CONF_NUM:-1}.json /etc/oc/discovery.json
EXPOSE 400${CONF_NUM:-1}
ENTRYPOINT ["oc-discovery"]

`LICENSE` (deleted file)

@@ -1,9 +0,0 @@
MIT License
Copyright (c) <year> <copyright holders>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

`LICENSE.md` (new file, 660 lines)

@@ -0,0 +1,660 @@
# GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc.
<https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
## Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains
free software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing
under this license.
The precise terms and conditions for copying, distribution and
modification follow.
## TERMS AND CONDITIONS
### 0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public
License.
"Copyright" also means copyright-like laws that apply to other kinds
of works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of
an exact copy. The resulting work is called a "modified version" of
the earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user
through a computer network, with no transfer of a copy, is not
conveying.
An interactive user interface displays "Appropriate Legal Notices" to
the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
### 1. Source Code.
The "source code" for a work means the preferred form of the work for
making modifications to it. "Object code" means any non-source form of
a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users can
regenerate automatically from other parts of the Corresponding Source.
The Corresponding Source for a work in source code form is that same
work.
### 2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not convey,
without conditions so long as your license otherwise remains in force.
You may convey covered works to others for the sole purpose of having
them make modifications exclusively for you, or provide you with
facilities for running those works, provided that you comply with the
terms of this License in conveying all material for which you do not
control copyright. Those thus making or running the covered works for
you must do so exclusively on your behalf, under your direction and
control, on terms that prohibit them from making any copies of your
copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under the
conditions stated below. Sublicensing is not allowed; section 10 makes
it unnecessary.
### 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such
circumvention is effected by exercising rights under this License with
respect to the covered work, and you disclaim any intention to limit
operation or modification of the work as a means of enforcing, against
the work's users, your or third parties' legal rights to forbid
circumvention of technological measures.
### 4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
### 5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these
conditions:
- a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
- b) The work must carry prominent notices stating that it is
released under this License and any conditions added under
section 7. This requirement modifies the requirement in section 4
to "keep intact all notices".
- c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
- d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
### 6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms of
sections 4 and 5, provided that you also convey the machine-readable
Corresponding Source under the terms of this License, in one of these
ways:
- a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
- b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the Corresponding
Source from a network server at no charge.
- c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
- d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
- e) Convey the object code using peer-to-peer transmission,
provided you inform other peers where the object code and
Corresponding Source of the work are being offered to the general
public at no charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal,
family, or household purposes, or (2) anything designed or sold for
incorporation into a dwelling. In determining whether a product is a
consumer product, doubtful cases shall be resolved in favor of
coverage. For a particular product received by a particular user,
"normally used" refers to a typical or common use of that class of
product, regardless of the status of the particular user or of the way
in which the particular user actually uses, or expects or is expected
to use, the product. A product is a consumer product regardless of
whether the product has substantial commercial, industrial or
non-consumer uses, unless such uses represent the only significant
mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to
install and execute modified versions of a covered work in that User
Product from a modified version of its Corresponding Source. The
information must suffice to ensure that the continued functioning of
the modified object code is in no case prevented or interfered with
solely because modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or
updates for a work that has been modified or installed by the
recipient, or for the User Product in which it has been modified or
installed. Access to a network may be denied when the modification
itself materially and adversely affects the operation of the network
or violates the rules and protocols for communication across the
network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
### 7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders
of that material) supplement the terms of this License with terms:
- a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
- b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
- c) Prohibiting misrepresentation of the origin of that material,
or requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
- d) Limiting the use for publicity purposes of names of licensors
or authors of the material; or
- e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
- f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions
of it) with contractual assumptions of liability to the recipient,
for any liability that these contractual assumptions directly
impose on those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions; the
above requirements apply either way.
### 8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your license
from a particular copyright holder is reinstated (a) provisionally,
unless and until the copyright holder explicitly and finally
terminates your license, and (b) permanently, if the copyright holder
fails to notify you of the violation by some reasonable means prior to
60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
### 9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or run
a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
### 10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
### 11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims owned
or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within the
scope of its coverage, prohibits the exercise of, or is conditioned on
the non-exercise of one or more of the rights that are specifically
granted under this License. You may not convey a covered work if you
are a party to an arrangement with a third party that is in the
business of distributing software, under which you make payment to the
third party based on the extent of your activity of conveying the
work, and under which the third party grants, to any of the parties
who would receive the covered work from you, a discriminatory patent
license (a) in connection with copies of the covered work conveyed by
you (or copies made from those copies), or (b) primarily for and in
connection with specific products or compilations that contain the
covered work, unless you entered into that arrangement, or that patent
license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
### 12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under
this License and any other pertinent obligations, then as a
consequence you may not convey it at all. For example, if you agree to
terms that obligate you to collect a royalty for further conveying
from those to whom you convey the Program, the only way you could
satisfy both those terms and this License would be to refrain entirely
from conveying the Program.
### 13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your
version supports such interaction) an opportunity to receive the
Corresponding Source of your version by providing access to the
Corresponding Source from a network server at no charge, through some
standard or customary means of facilitating copying of software. This
Corresponding Source shall include the Corresponding Source for any
work covered by version 3 of the GNU General Public License that is
incorporated pursuant to the following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
### 14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions
of the GNU Affero General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever
published by the Free Software Foundation.
If the Program specifies that a proxy can decide which future versions
of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
### 15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT
WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND
PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE
DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR
CORRECTION.
### 16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR
CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES
ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT
NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR
LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM
TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER
PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
### 17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
## How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these
terms.
To do so, attach the following notices to the program. It is safest to
attach them to the start of each source file to most effectively state
the exclusion of warranty; and each file should have at least the
"copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as
published by the Free Software Foundation, either version 3 of the
License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper
mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for
the specific requirements.
You should also get your employer (if you work as a programmer) or
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. For more information on this, and how to apply and follow
the GNU AGPL, see <https://www.gnu.org/licenses/>.

26
Makefile Normal file

@@ -0,0 +1,26 @@
.DEFAULT_GOAL := all
build: clean
bee pack
run:
./oc-discovery
clean:
rm -rf oc-discovery
docker:
DOCKER_BUILDKIT=1 docker build -t oc-discovery -f Dockerfile .
docker tag oc-discovery opencloudregistry/oc-discovery:latest
publish-kind:
kind load docker-image opencloudregistry/oc-discovery:latest --name opencloud
publish-registry:
docker push opencloudregistry/oc-discovery:latest
all: docker publish-kind
ci: docker publish-registry
.PHONY: build run clean docker publish-kind publish-registry


@@ -14,3 +14,34 @@ If default Swagger page is displayed instead of your api, change url in swagger
url: "swagger.json"
sequenceDiagram
autonumber
participant Dev as Developer / Owner
participant IPFS as IPFS Network
participant CID as CID (file hash)
participant Argo as Argo Orchestrator
participant CU as Compute Unit
participant MinIO as MinIO Storage
%% 1. Add the file to IPFS
Dev->>IPFS: Encrypts and adds file (algo/dataset)
IPFS-->>CID: Generates unique CID (file hash)
Dev->>Dev: Stores CID for future reference
%% 2. Orchestration by Argo
Argo->>CID: Requests CID for job
CID-->>Argo: Provides the file (verified via hash)
%% 3. Execution on the Compute Unit
Argo->>CU: Deploys job with retrieved file
CU->>CU: Verifies hash (CID) for integrity
CU->>CU: Runs the algo on the dataset
%% 4. Result storage
CU->>MinIO: Stores output (results) or logs
CU->>IPFS: Optional: adds output to IPFS (new CID)
%% 5. Verification and traceability
Dev->>IPFS: Verifies output CID if needed
CU->>Dev: Provides result and hash log

36
conf/config.go Normal file

@@ -0,0 +1,36 @@
package conf
import "sync"
type Config struct {
Name string
Hostname string
PSKPath string
PublicKeyPath string
PrivateKeyPath string
NodeEndpointPort int64
IndexerAddresses string
NativeIndexerAddresses string // multiaddrs of native indexers, comma-separated; bypasses IndexerAddresses when set
PeerIDS string // TO REMOVE
NodeMode string
MinIndexer int
MaxIndexer int
// ConsensusQuorum is the minimum fraction of natives that must agree for a
// candidate indexer to be considered confirmed. Range (0, 1]. Default 0.5
// (strict majority). Raise to 0.67 for stronger Byzantine resistance.
ConsensusQuorum float64
}
var instance *Config
var once sync.Once
func GetConfig() *Config {
once.Do(func() {
instance = &Config{}
})
return instance
}
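The ConsensusQuorum comment above can be made concrete. A sketch of one consistent reading (votesNeeded is a hypothetical helper, not part of the codebase): a candidate is confirmed once strictly more than total×quorum natives agree, so the default 0.5 yields a strict majority and 1.0 requires every native.

```go
package main

import (
	"fmt"
	"math"
)

// votesNeeded returns how many agreeing natives are required to confirm a
// candidate indexer under quorum q: strictly more than total*q, capped at total.
func votesNeeded(total int, quorum float64) int {
	n := int(math.Floor(float64(total)*quorum)) + 1
	if n > total {
		n = total
	}
	return n
}

func main() {
	fmt.Println(votesNeeded(4, 0.5))  // strict majority of 4
	fmt.Println(votesNeeded(3, 0.67)) // stronger Byzantine resistance: all 3
	fmt.Println(votesNeeded(4, 1.0))  // unanimity
}
```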


@@ -1,41 +0,0 @@
package controllers
import (
"encoding/json"
"oc-discovery/models"
beego "github.com/beego/beego/v2/server/web"
)
// Operations about Identitys
type IdentityController struct {
beego.Controller
}
// @Title CreateIdentity
// @Description create identitys
// @Param body body models.Identity true "body for identity content"
// @Success 200 {result} "ok" or error
// @Failure 403 body is empty
// @router / [post]
func (u *IdentityController) Post() {
var identity models.Identity
json.Unmarshal(u.Ctx.Input.RequestBody, &identity)
err := models.UpdateIdentity(&identity)
if err != nil {
u.Data["json"] = err.Error()
} else {
u.Data["json"] = "ok"
}
u.ServeJSON()
}
// @Title Get
// @Description get Identity
// @Success 200 {object} models.Identity
// @router / [get]
func (u *IdentityController) GetAll() {
identity := models.GetIdentity()
u.Data["json"] = identity
u.ServeJSON()
}


@@ -1,80 +0,0 @@
package controllers
import (
"encoding/json"
"oc-discovery/models"
beego "github.com/beego/beego/v2/server/web"
)
// Operations about peer
type PeerController struct {
beego.Controller
}
// @Title Create
// @Description create peers
// @Param body body []models.Peer true "The peer content"
// @Success 200 {string} models.Peer.Id
// @Failure 403 body is empty
// @router / [post]
func (o *PeerController) Post() {
var ob []models.Peer
json.Unmarshal(o.Ctx.Input.RequestBody, &ob)
models.AddPeers(ob)
o.Data["json"] = map[string]string{"Added": "OK"}
o.ServeJSON()
}
// @Title Get
// @Description find peer by peerid
// @Param peerId path string true "the peerid you want to get"
// @Success 200 {peer} models.Peer
// @Failure 403 :peerId is empty
// @router /:peerId [get]
func (o *PeerController) Get() {
peerId := o.Ctx.Input.Param(":peerId")
peer, err := models.GetPeer(peerId)
if err != nil {
o.Data["json"] = err.Error()
} else {
o.Data["json"] = peer
}
o.ServeJSON()
}
// @Title Find
// @Description find peers with query
// @Param query path string true "the keywords you need"
// @Success 200 {peers} []models.Peer
// @Failure 403
// @router /find/:query [get]
func (o *PeerController) Find() {
query := o.Ctx.Input.Param(":query")
peers, err := models.FindPeers(query)
if err != nil {
o.Data["json"] = err.Error()
} else {
o.Data["json"] = peers
}
o.ServeJSON()
}
// @Title Delete
// @Description delete the peer
// @Param peerId path string true "The peerId you want to delete"
// @Success 200 {string} delete success!
// @Failure 403 peerId is empty
// @router /:peerId [delete]
func (o *PeerController) Delete() {
peerId := o.Ctx.Input.Param(":peerId")
err := models.Delete(peerId)
if err != nil {
o.Data["json"] = err.Error()
} else {
o.Data["json"] = "delete success!"
}
o.ServeJSON()
}


@@ -1,19 +0,0 @@
package controllers
import (
beego "github.com/beego/beego/v2/server/web"
)
// VersionController operations for Version
type VersionController struct {
beego.Controller
}
// @Title GetAll
// @Description get version
// @Success 200
// @router / [get]
func (c *VersionController) GetAll() {
c.Data["json"] = map[string]string{"version": "1"}
c.ServeJSON()
}


@@ -0,0 +1,200 @@
package common
import (
"context"
"encoding/json"
"errors"
"fmt"
"sync"
"time"
"cloud.o-forge.io/core/oc-lib/models/peer"
"cloud.o-forge.io/core/oc-lib/tools"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/host"
pp "github.com/libp2p/go-libp2p/core/peer"
)
type Event struct {
Type string `json:"type"`
From string `json:"from"` // peerID
User string `json:"user"`
DataType int64 `json:"datatype"`
Timestamp int64 `json:"ts"`
Payload []byte `json:"payload"`
Signature []byte `json:"sig"`
}
func NewEvent(name string, from string, dt *tools.DataType, user string, payload []byte) *Event {
priv, err := tools.LoadKeyFromFilePrivate() // your node private key
if err != nil {
return nil
}
evt := &Event{
Type: name,
From: from,
User: user,
Timestamp: time.Now().UTC().Unix(),
Payload: payload,
}
if dt != nil {
evt.DataType = int64(dt.EnumIndex())
} else {
evt.DataType = -1
}
body, _ := json.Marshal(evt)
sig, err := priv.Sign(body)
if err != nil {
return nil
}
evt.Signature = sig
return evt
}
func (e *Event) RawEvent() *Event {
return &Event{
Type: e.Type,
From: e.From,
User: e.User,
DataType: e.DataType,
Timestamp: e.Timestamp,
Payload: e.Payload,
}
}
func (e *Event) toRawByte() ([]byte, error) {
return json.Marshal(e.RawEvent())
}
func (event *Event) Verify(p *peer.Peer) error {
if p == nil {
return errors.New("no peer found")
}
if p.Relation == peer.BLACKLIST { // if peer is blacklisted... quit...
return errors.New("peer is blacklisted")
}
return event.VerifySignature(p.PublicKey)
}
func (event *Event) VerifySignature(pk string) error {
pubKey, err := PubKeyFromString(pk) // extract pubkey from pubkey str
if err != nil {
return errors.New("pubkey is malformed")
}
data, err := event.toRawByte()
if err != nil {
return err
} // raw event bytes, excluding the signature.
if ok, _ := pubKey.Verify(data, event.Signature); !ok { // verify the message was signed by this public key
return errors.New("check signature failed")
}
return nil
}
type TopicNodeActivityPub struct {
NodeActivity int `json:"node_activity"`
Disposer string `json:"disposer_address"`
Name string `json:"name"`
DID string `json:"did"` // real PEER ID
PeerID string `json:"peer_id"`
}
type LongLivedPubSubService struct {
Host host.Host
LongLivedPubSubs map[string]*pubsub.Topic
PubsubMu sync.RWMutex
}
func NewLongLivedPubSubService(h host.Host) *LongLivedPubSubService {
return &LongLivedPubSubService{
Host: h,
LongLivedPubSubs: map[string]*pubsub.Topic{},
}
}
func (s *LongLivedPubSubService) GetPubSub(topicName string) *pubsub.Topic {
s.PubsubMu.RLock() // read-only access, a read lock is enough
defer s.PubsubMu.RUnlock()
return s.LongLivedPubSubs[topicName]
}
func (s *LongLivedPubSubService) processEvent(
ctx context.Context,
p *peer.Peer,
event *Event,
topicName string, handler func(context.Context, string, *Event) error) error {
if err := event.Verify(p); err != nil {
return err
}
return handler(ctx, topicName, event)
}
const TopicPubSubSearch = "oc-node-search"
func (s *LongLivedPubSubService) SubscribeToSearch(ps *pubsub.PubSub, f *func(context.Context, Event, string)) error {
ps.RegisterTopicValidator(TopicPubSubSearch, func(ctx context.Context, p pp.ID, m *pubsub.Message) bool {
return true
})
if topic, err := ps.Join(TopicPubSubSearch); err != nil {
return err
} else {
s.PubsubMu.Lock()
s.LongLivedPubSubs[TopicPubSubSearch] = topic
s.PubsubMu.Unlock()
}
if f != nil {
return SubscribeEvents(s, context.Background(), TopicPubSubSearch, -1, *f)
}
return nil
}
func SubscribeEvents[T interface{}](s *LongLivedPubSubService,
ctx context.Context, proto string, timeout int, f func(context.Context, T, string),
) error {
s.PubsubMu.RLock()
topic := s.LongLivedPubSubs[proto]
s.PubsubMu.RUnlock()
if topic == nil {
return errors.New("no protocol subscribed in pubsub")
}
sub, err := topic.Subscribe() // then subscribe to it
if err != nil {
return err
}
// launch loop waiting for results.
go waitResults(s, ctx, sub, proto, timeout, f)
return nil
}
func waitResults[T interface{}](s *LongLivedPubSubService, ctx context.Context, sub *pubsub.Subscription, proto string, timeout int, f func(context.Context, T, string)) {
for {
s.PubsubMu.RLock() // safely check that we are still subscribed to the topic
if s.LongLivedPubSubs[proto] == nil { // if not, kill the loop.
s.PubsubMu.RUnlock()
break
}
s.PubsubMu.RUnlock()
// if still subscribed -> wait for new message, bounded by timeout if set.
msgCtx := ctx
var cancel context.CancelFunc
if timeout != -1 {
msgCtx, cancel = context.WithTimeout(ctx, time.Duration(timeout)*time.Second)
}
msg, err := sub.Next(msgCtx)
if cancel != nil {
cancel() // release the timeout context now instead of deferring inside the loop
}
if err != nil {
if errors.Is(err, context.DeadlineExceeded) {
// timeout hit, no message before deadline: kill the subscription.
s.PubsubMu.Lock()
delete(s.LongLivedPubSubs, proto)
s.PubsubMu.Unlock()
return
}
continue
}
var evt T
if err := json.Unmarshal(msg.Data, &evt); err != nil { // map to event
continue
}
f(ctx, evt, fmt.Sprintf("%v", proto))
}
}

File diff suppressed because it is too large


@@ -0,0 +1,66 @@
package common
import (
"bytes"
"encoding/base64"
"errors"
"oc-discovery/conf"
"oc-discovery/models"
"os"
"cloud.o-forge.io/core/oc-lib/models/peer"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/pnet"
)
func VerifyPeer(peers []*peer.Peer, event models.Event) error {
if len(peers) == 0 {
return errors.New("no peer found")
}
p := peers[0]
if p.Relation == peer.BLACKLIST { // if peer is blacklisted... quit...
return errors.New("peer is blacklisted")
}
pubKey, err := PubKeyFromString(p.PublicKey) // extract pubkey from pubkey str
if err != nil {
return errors.New("pubkey is malformed")
}
data, err := event.ToRawByte()
if err != nil {
return err
} // raw event bytes, excluding the signature.
if ok, _ := pubKey.Verify(data, event.Signature); !ok { // verify the message was signed by this public key
return errors.New("check signature failed")
}
return nil
}
func Sign(priv crypto.PrivKey, data []byte) ([]byte, error) {
return priv.Sign(data)
}
func Verify(pub crypto.PubKey, data, sig []byte) (bool, error) {
return pub.Verify(data, sig)
}
func LoadPSKFromFile() (pnet.PSK, error) {
path := conf.GetConfig().PSKPath
data, err := os.ReadFile(path)
if err != nil {
return nil, err
}
psk, err := pnet.DecodeV1PSK(bytes.NewReader(data))
if err != nil {
return nil, err
}
return psk, nil
}
func PubKeyFromString(s string) (crypto.PubKey, error) {
data, err := base64.StdEncoding.DecodeString(s)
if err != nil {
return nil, err
}
return crypto.UnmarshalPublicKey(data)
}


@@ -0,0 +1,17 @@
package common
import (
"context"
"cloud.o-forge.io/core/oc-lib/models/peer"
pubsub "github.com/libp2p/go-libp2p-pubsub"
)
type HeartBeatStreamed interface {
GetUptimeTracker() *UptimeTracker
}
type DiscoveryPeer interface {
GetPeerRecord(ctx context.Context, key string, search bool) ([]*peer.Peer, error)
GetPubSub(topicName string) *pubsub.Topic
}

File diff suppressed because it is too large


@@ -0,0 +1,39 @@
package common
import (
"context"
"fmt"
"net"
"time"
"github.com/libp2p/go-libp2p/core/host"
pp "github.com/libp2p/go-libp2p/core/peer"
"github.com/multiformats/go-multiaddr"
)
func PeerIsAlive(h host.Host, ad pp.AddrInfo) bool {
ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
defer cancel()
err := h.Connect(ctx, ad)
return err == nil
}
func ExtractIP(addr string) (net.IP, error) {
ma, err := multiaddr.NewMultiaddr(addr)
if err != nil {
return nil, err
}
ipStr, err := ma.ValueForProtocol(multiaddr.P_IP4)
if err != nil {
ipStr, err = ma.ValueForProtocol(multiaddr.P_IP6)
if err != nil {
return nil, err
}
}
ip := net.ParseIP(ipStr)
if ip == nil {
return nil, fmt.Errorf("invalid IP: %s", ipStr)
}
return ip, nil
}
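ExtractIP above prefers the ip4 component of a multiaddr and falls back to ip6. A dependency-free sketch of the same fallback over the textual multiaddr form (extractIPSketch is an illustrative stand-in, not the multiaddr library API):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// extractIPSketch walks the /proto/value pairs of a textual multiaddr such as
// "/ip4/1.2.3.4/tcp/9000" and returns the first ip4 or ip6 value it finds.
func extractIPSketch(addr string) (net.IP, error) {
	parts := strings.Split(strings.TrimPrefix(addr, "/"), "/")
	for i := 0; i+1 < len(parts); i += 2 {
		if parts[i] == "ip4" || parts[i] == "ip6" {
			if ip := net.ParseIP(parts[i+1]); ip != nil {
				return ip, nil
			}
			return nil, fmt.Errorf("invalid IP: %s", parts[i+1])
		}
	}
	return nil, fmt.Errorf("no ip component in %s", addr)
}

func main() {
	ip, _ := extractIPSketch("/ip4/192.168.1.10/tcp/9000")
	fmt.Println(ip) // 192.168.1.10
}
```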


@@ -0,0 +1,409 @@
package indexer
import (
"context"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"io"
"oc-discovery/conf"
"oc-discovery/daemons/node/common"
"strings"
"time"
oclib "cloud.o-forge.io/core/oc-lib"
pp "cloud.o-forge.io/core/oc-lib/models/peer"
"cloud.o-forge.io/core/oc-lib/models/utils"
"cloud.o-forge.io/core/oc-lib/tools"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
)
type PeerRecordPayload struct {
Name string `json:"name"`
DID string `json:"did"`
PubKey []byte `json:"pub_key"`
ExpiryDate time.Time `json:"expiry_date"`
}
type PeerRecord struct {
PeerRecordPayload
PeerID string `json:"peer_id"`
APIUrl string `json:"api_url"`
StreamAddress string `json:"stream_address"`
NATSAddress string `json:"nats_address"`
WalletAddress string `json:"wallet_address"`
Signature []byte `json:"signature"`
}
func (p *PeerRecord) Sign() error {
priv, err := tools.LoadKeyFromFilePrivate()
if err != nil {
return err
}
payload, _ := json.Marshal(p.PeerRecordPayload)
b, err := common.Sign(priv, payload)
p.Signature = b
return err
}
func (p *PeerRecord) Verify() (crypto.PubKey, error) {
pubKey, err := crypto.UnmarshalPublicKey(p.PubKey) // retrieve pub key in message
if err != nil {
return pubKey, err
}
payload, _ := json.Marshal(p.PeerRecordPayload)
if ok, _ := pubKey.Verify(payload, p.Signature); !ok { // verify the payload was signed by this pubKey
return pubKey, errors.New("invalid signature")
}
return pubKey, nil
}
func (pr *PeerRecord) ExtractPeer(ourkey string, key string, pubKey crypto.PubKey) (bool, *pp.Peer, error) {
pubBytes, err := crypto.MarshalPublicKey(pubKey)
if err != nil {
return false, nil, err
}
rel := pp.NONE
if ourkey == key { // the key matches our own: this record describes our peer
rel = pp.SELF
}
p := &pp.Peer{
AbstractObject: utils.AbstractObject{
UUID: pr.DID,
Name: pr.Name,
},
Relation: rel, // VERIFY: ensure this does not overwrite an existing relation
PeerID: pr.PeerID,
PublicKey: base64.StdEncoding.EncodeToString(pubBytes),
APIUrl: pr.APIUrl,
StreamAddress: pr.StreamAddress,
NATSAddress: pr.NATSAddress,
WalletAddress: pr.WalletAddress,
}
if time.Now().UTC().After(pr.ExpiryDate) {
return pp.SELF == p.Relation, nil, errors.New("peer " + key + " is offline")
}
return pp.SELF == p.Relation, p, nil
}
type GetValue struct {
Key string `json:"key"`
PeerID string `json:"peer_id,omitempty"`
Name string `json:"name,omitempty"`
Search bool `json:"search,omitempty"`
}
type GetResponse struct {
Found bool `json:"found"`
Records map[string]PeerRecord `json:"records,omitempty"`
}
func (ix *IndexerService) genKey(did string) string {
return "/node/" + did
}
func (ix *IndexerService) genNameKey(name string) string {
return "/name/" + name
}
func (ix *IndexerService) genPIDKey(peerID string) string {
return "/pid/" + peerID
}
func (ix *IndexerService) initNodeHandler() {
logger := oclib.GetLogger()
logger.Info().Msg("Init Node Handler")
// Each heartbeat from a node carries a freshly signed PeerRecord.
// Republish it to the DHT so the record never expires as long as the node
// is alive — no separate publish stream needed from the node side.
ix.AfterHeartbeat = func(hb *common.Heartbeat) {
// Priority 1: use the fresh signed PeerRecord embedded in the heartbeat.
// Each heartbeat tick, the node re-signs with ExpiryDate = now+2min, so
// this record is always fresh. Fetching from DHT would give a stale expiry.
var rec PeerRecord
if len(hb.Record) > 0 {
if err := json.Unmarshal(hb.Record, &rec); err != nil {
logger.Warn().Err(err).Msg("indexer: heartbeat embedded record unmarshal failed")
return
}
} else {
// Fallback: node didn't embed a record yet (first heartbeat before claimInfo).
// Fetch from DHT using the DID resolved by HandleHeartbeat.
ctx2, cancel2 := context.WithTimeout(context.Background(), 10*time.Second)
res, err := ix.DHT.GetValue(ctx2, ix.genKey(hb.DID))
cancel2()
if err != nil {
logger.Warn().Err(err).Str("did", hb.DID).Msg("indexer: DHT fetch for refresh failed")
return
}
if err := json.Unmarshal(res, &rec); err != nil {
logger.Warn().Err(err).Str("did", hb.DID).Msg("indexer: heartbeat record unmarshal failed")
return
}
}
if _, err := rec.Verify(); err != nil {
logger.Warn().Err(err).Str("did", rec.DID).Msg("indexer: heartbeat record signature invalid")
return
}
data, err := json.Marshal(rec)
if err != nil {
return
}
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
logger.Info().Msg("REFRESH PutValue " + ix.genKey(rec.DID))
if err := ix.DHT.PutValue(ctx, ix.genKey(rec.DID), data); err != nil {
logger.Warn().Err(err).Str("did", rec.DID).Msg("indexer: DHT refresh failed")
cancel()
return
}
cancel()
ix.publishNameEvent(NameIndexAdd, rec.Name, rec.PeerID, rec.DID)
if rec.Name != "" {
ctx2, cancel2 := context.WithTimeout(context.Background(), 10*time.Second)
ix.DHT.PutValue(ctx2, ix.genNameKey(rec.Name), []byte(rec.DID))
cancel2()
}
if rec.PeerID != "" {
ctx3, cancel3 := context.WithTimeout(context.Background(), 10*time.Second)
ix.DHT.PutValue(ctx3, ix.genPIDKey(rec.PeerID), []byte(rec.DID))
cancel3()
}
}
ix.Host.SetStreamHandler(common.ProtocolHeartbeat, ix.HandleHeartbeat)
ix.Host.SetStreamHandler(common.ProtocolPublish, ix.handleNodePublish)
ix.Host.SetStreamHandler(common.ProtocolGet, ix.handleNodeGet)
ix.Host.SetStreamHandler(common.ProtocolIndexerGetNatives, ix.handleGetNatives)
ix.Host.SetStreamHandler(common.ProtocolIndexerConsensus, ix.handleIndexerConsensus)
}
// handleIndexerConsensus implements Phase 2 liveness voting (ProtocolIndexerConsensus).
// The caller sends a list of candidate multiaddrs; this indexer replies with the
// subset it considers currently alive (recent heartbeat in StreamRecords).
func (ix *IndexerService) handleIndexerConsensus(stream network.Stream) {
defer stream.Close() // Close (not Reset) so the encoded response is delivered before teardown
var req common.IndexerConsensusRequest
if err := json.NewDecoder(stream).Decode(&req); err != nil {
return
}
ix.StreamMU.RLock()
streams := ix.StreamRecords[common.ProtocolHeartbeat]
ix.StreamMU.RUnlock()
alive := make([]string, 0, len(req.Candidates))
for _, addr := range req.Candidates {
ad, err := peer.AddrInfoFromString(addr)
if err != nil {
continue
}
ix.StreamMU.RLock()
rec, ok := streams[ad.ID]
ix.StreamMU.RUnlock()
if !ok || rec.HeartbeatStream == nil || rec.HeartbeatStream.UptimeTracker == nil {
continue
}
// Consider a candidate alive only if it has a recent heartbeat AND its score is above the minimum quality bar.
if time.Since(rec.HeartbeatStream.UptimeTracker.LastSeen) <= 2*common.RecommendedHeartbeatInterval &&
rec.LastScore >= 30.0 {
alive = append(alive, addr)
}
}
json.NewEncoder(stream).Encode(common.IndexerConsensusResponse{Alive: alive})
}
func (ix *IndexerService) handleNodePublish(s network.Stream) {
defer s.Close()
logger := oclib.GetLogger()
for {
var rec PeerRecord
if err := json.NewDecoder(s).Decode(&rec); err != nil {
logger.Err(err)
if errors.Is(err, io.EOF) || errors.Is(err, io.ErrUnexpectedEOF) ||
strings.Contains(err.Error(), "reset") ||
strings.Contains(err.Error(), "closed") ||
strings.Contains(err.Error(), "too many connections") {
return
}
continue
}
if _, err := rec.Verify(); err != nil {
logger.Err(err)
return
}
if rec.PeerID == "" || rec.ExpiryDate.Before(time.Now().UTC()) {
logger.Err(errors.New("record " + rec.DID + " has no PeerID or is expired"))
return
}
pid, err := peer.Decode(rec.PeerID)
if err != nil {
return
}
ix.StreamMU.Lock()
defer ix.StreamMU.Unlock()
if ix.StreamRecords[common.ProtocolHeartbeat] == nil {
ix.StreamRecords[common.ProtocolHeartbeat] = map[peer.ID]*common.StreamRecord[PeerRecord]{}
}
streams := ix.StreamRecords[common.ProtocolHeartbeat]
if srec, ok := streams[pid]; ok {
srec.DID = rec.DID
srec.Record = rec
srec.HeartbeatStream.UptimeTracker.LastSeen = time.Now().UTC()
}
key := ix.genKey(rec.DID)
data, err := json.Marshal(rec)
if err != nil {
logger.Err(err)
return
}
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
if err := ix.DHT.PutValue(ctx, key, data); err != nil {
logger.Err(err)
cancel()
return
}
cancel()
logger.Debug().Str("name", rec.Name).Msg("indexer: publishing name index event")
ix.publishNameEvent(NameIndexAdd, rec.Name, rec.PeerID, rec.DID)
// Secondary index: /name/<name> → DID, so peers can resolve by human-readable name.
if rec.Name != "" {
ctx2, cancel2 := context.WithTimeout(context.Background(), 10*time.Second)
if err := ix.DHT.PutValue(ctx2, ix.genNameKey(rec.Name), []byte(rec.DID)); err != nil {
logger.Err(err).Str("name", rec.Name).Msg("indexer: failed to write name index")
}
cancel2()
}
// Secondary index: /pid/<peerID> → DID, so peers can resolve by libp2p PeerID.
if rec.PeerID != "" {
ctx3, cancel3 := context.WithTimeout(context.Background(), 10*time.Second)
if err := ix.DHT.PutValue(ctx3, ix.genPIDKey(rec.PeerID), []byte(rec.DID)); err != nil {
logger.Err(err).Str("pid", rec.PeerID).Msg("indexer: failed to write pid index")
}
cancel3()
}
return
}
}
func (ix *IndexerService) handleNodeGet(s network.Stream) {
defer s.Close()
logger := oclib.GetLogger()
for {
var req GetValue
if err := json.NewDecoder(s).Decode(&req); err != nil {
if errors.Is(err, io.EOF) || errors.Is(err, io.ErrUnexpectedEOF) ||
strings.Contains(err.Error(), "reset") ||
strings.Contains(err.Error(), "closed") ||
strings.Contains(err.Error(), "too many connections") {
return
}
logger.Err(err)
continue
}
resp := GetResponse{Found: false, Records: map[string]PeerRecord{}}
logger.Debug().Msg(fmt.Sprintf("indexer: handleNodeGet search=%v name=%q", req.Search, req.Name))
keys := []string{}
// Name substring search — scan in-memory connected nodes first, then DHT exact match.
if req.Name != "" {
if req.Search {
for _, did := range ix.LookupNameIndex(strings.ToLower(req.Name)) {
keys = append(keys, did)
}
} else {
// 2. DHT exact-name lookup: covers nodes that published but aren't currently connected.
nameCtx, nameCancel := context.WithTimeout(context.Background(), 5*time.Second)
if ch, err := ix.DHT.SearchValue(nameCtx, ix.genNameKey(req.Name)); err == nil {
for did := range ch {
keys = append(keys, string(did))
break
}
}
nameCancel()
}
} else if req.PeerID != "" {
pidCtx, pidCancel := context.WithTimeout(context.Background(), 5*time.Second)
if did, err := ix.DHT.GetValue(pidCtx, ix.genPIDKey(req.PeerID)); err == nil {
keys = append(keys, string(did))
}
pidCancel()
} else {
keys = append(keys, req.Key)
}
// DHT record fetch by DID key (covers exact-name and PeerID paths).
if len(keys) > 0 {
for _, k := range keys {
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
c, err := ix.DHT.GetValue(ctx, ix.genKey(k))
cancel()
if err == nil {
var rec PeerRecord
if json.Unmarshal(c, &rec) == nil {
resp.Records[rec.PeerID] = rec
}
} else if req.Name == "" && req.PeerID == "" {
logger.Err(err).Msg("Failed to fetch PeerRecord from DHT " + req.Key)
}
}
}
resp.Found = len(resp.Records) > 0
_ = json.NewEncoder(s).Encode(resp)
break
}
}
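handleNodeGet resolves through the secondary indexes written in handleNodePublish: /name/&lt;name&gt; → DID and /pid/&lt;peerID&gt; → DID, then /node/&lt;DID&gt; → PeerRecord. A sketch of that two-hop resolution with a plain map standing in for the DHT (store and resolveByName are illustrative, not the real DHT interface):

```go
package main

import "fmt"

// store is a toy stand-in for the DHT key/value space.
type store map[string]string

// resolveByName follows /name/<name> -> DID, then /node/<DID> -> record,
// mirroring the exact-name lookup path of handleNodeGet.
func resolveByName(dht store, name string) (string, bool) {
	did, ok := dht["/name/"+name]
	if !ok {
		return "", false
	}
	rec, ok := dht["/node/"+did]
	return rec, ok
}

func main() {
	dht := store{
		"/name/alice":      "did:oc:123",
		"/pid/QmPeerA":     "did:oc:123",
		"/node/did:oc:123": `{"name":"alice","peer_id":"QmPeerA"}`,
	}
	rec, ok := resolveByName(dht, "alice")
	fmt.Println(ok, rec)
}
```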
// handleGetNatives returns this indexer's configured native addresses,
// excluding any in the request's Exclude list.
func (ix *IndexerService) handleGetNatives(s network.Stream) {
defer s.Close()
logger := oclib.GetLogger()
for {
var req common.GetIndexerNativesRequest
if err := json.NewDecoder(s).Decode(&req); err != nil {
logger.Err(err).Msg("indexer get natives: decode")
if errors.Is(err, io.EOF) || errors.Is(err, io.ErrUnexpectedEOF) ||
strings.Contains(err.Error(), "reset") ||
strings.Contains(err.Error(), "closed") ||
strings.Contains(err.Error(), "too many connections") {
return
}
continue
}
excludeSet := make(map[string]struct{}, len(req.Exclude))
for _, e := range req.Exclude {
excludeSet[e] = struct{}{}
}
resp := common.GetIndexerNativesResponse{}
for _, addr := range strings.Split(conf.GetConfig().NativeIndexerAddresses, ",") {
addr = strings.TrimSpace(addr)
if addr == "" {
continue
}
if _, excluded := excludeSet[addr]; !excluded {
resp.Natives = append(resp.Natives, addr)
}
}
if err := json.NewEncoder(s).Encode(resp); err != nil {
logger.Err(err).Msg("indexer get natives: encode response")
}
break
}
}


@@ -0,0 +1,171 @@
package indexer
import (
"context"
"encoding/json"
"fmt"
"strings"
"sync"
"time"
"oc-discovery/daemons/node/common"
oclib "cloud.o-forge.io/core/oc-lib"
pubsub "github.com/libp2p/go-libp2p-pubsub"
pp "github.com/libp2p/go-libp2p/core/peer"
)
// TopicNameIndex is the GossipSub topic shared by regular indexers to exchange
// add/delete events for the distributed name→peerID mapping.
const TopicNameIndex = "oc-name-index"
// nameIndexDedupWindow suppresses re-emission of the same (action, name, peerID)
// tuple within this window, reducing duplicate events when a node is registered
// with multiple indexers simultaneously.
const nameIndexDedupWindow = 30 * time.Second
// NameIndexAction indicates whether a name mapping is being added or removed.
type NameIndexAction string
const (
NameIndexAdd NameIndexAction = "add"
NameIndexDelete NameIndexAction = "delete"
)
// NameIndexEvent is published on TopicNameIndex by each indexer when a node
// registers (add) or is evicted by the GC (delete).
type NameIndexEvent struct {
Action NameIndexAction `json:"action"`
Name string `json:"name"`
PeerID string `json:"peer_id"`
DID string `json:"did"`
}
// nameIndexState holds the local in-memory name index and the sender-side
// deduplication tracker.
type nameIndexState struct {
// index: name → peerID → DID, built from events received from all indexers.
index map[string]map[string]string
indexMu sync.RWMutex
// emitted tracks the last emission time for each (action, name, peerID) key
// to suppress duplicates within nameIndexDedupWindow.
emitted map[string]time.Time
emittedMu sync.Mutex
}
// shouldEmit returns true if the (action, name, peerID) tuple has not been
// emitted within nameIndexDedupWindow, updating the tracker if so.
func (s *nameIndexState) shouldEmit(action NameIndexAction, name, peerID string) bool {
key := string(action) + ":" + name + ":" + peerID
s.emittedMu.Lock()
defer s.emittedMu.Unlock()
if t, ok := s.emitted[key]; ok && time.Since(t) < nameIndexDedupWindow {
return false
}
s.emitted[key] = time.Now()
return true
}
// onEvent applies a received NameIndexEvent to the local index.
// "add" inserts/updates the mapping; "delete" removes it.
// Operations are idempotent — duplicate events from multiple indexers are harmless.
func (s *nameIndexState) onEvent(evt NameIndexEvent) {
if evt.Name == "" || evt.PeerID == "" {
return
}
s.indexMu.Lock()
defer s.indexMu.Unlock()
switch evt.Action {
case NameIndexAdd:
if s.index[evt.Name] == nil {
s.index[evt.Name] = map[string]string{}
}
s.index[evt.Name][evt.PeerID] = evt.DID
case NameIndexDelete:
if s.index[evt.Name] != nil {
delete(s.index[evt.Name], evt.PeerID)
if len(s.index[evt.Name]) == 0 {
delete(s.index, evt.Name)
}
}
}
}
// initNameIndex joins TopicNameIndex and starts consuming events.
// Must be called after ix.PS is ready.
func (ix *IndexerService) initNameIndex(ps *pubsub.PubSub) {
logger := oclib.GetLogger()
ix.nameIndex = &nameIndexState{
index: map[string]map[string]string{},
emitted: map[string]time.Time{},
}
ps.RegisterTopicValidator(TopicNameIndex, func(_ context.Context, _ pp.ID, _ *pubsub.Message) bool {
return true
})
topic, err := ps.Join(TopicNameIndex)
if err != nil {
logger.Err(err).Msg("name index: failed to join topic")
return
}
ix.LongLivedStreamRecordedService.LongLivedPubSubService.PubsubMu.Lock()
ix.LongLivedStreamRecordedService.LongLivedPubSubService.LongLivedPubSubs[TopicNameIndex] = topic
ix.LongLivedStreamRecordedService.LongLivedPubSubService.PubsubMu.Unlock()
common.SubscribeEvents(
ix.LongLivedStreamRecordedService.LongLivedPubSubService,
context.Background(),
TopicNameIndex,
-1,
func(_ context.Context, evt NameIndexEvent, _ string) {
ix.nameIndex.onEvent(evt)
},
)
}
// publishNameEvent emits a NameIndexEvent on TopicNameIndex, subject to the
// sender-side deduplication window.
func (ix *IndexerService) publishNameEvent(action NameIndexAction, name, peerID, did string) {
if ix.nameIndex == nil || name == "" || peerID == "" {
return
}
if !ix.nameIndex.shouldEmit(action, name, peerID) {
return
}
ix.LongLivedStreamRecordedService.LongLivedPubSubService.PubsubMu.RLock()
topic := ix.LongLivedStreamRecordedService.LongLivedPubSubService.LongLivedPubSubs[TopicNameIndex]
ix.LongLivedStreamRecordedService.LongLivedPubSubService.PubsubMu.RUnlock()
if topic == nil {
return
}
evt := NameIndexEvent{Action: action, Name: name, PeerID: peerID, DID: did}
b, err := json.Marshal(evt)
if err != nil {
return
}
_ = topic.Publish(context.Background(), b)
}
// LookupNameIndex searches the distributed name index for peers whose name
// contains needle (case-insensitive). Returns peerID → DID for matched peers.
// Returns nil if the name index is not initialised (e.g. native indexers).
func (ix *IndexerService) LookupNameIndex(needle string) map[string]string {
if ix.nameIndex == nil {
return nil
}
result := map[string]string{}
needleLow := strings.ToLower(needle)
ix.nameIndex.indexMu.RLock()
defer ix.nameIndex.indexMu.RUnlock()
for name, peers := range ix.nameIndex.index {
if strings.Contains(strings.ToLower(name), needleLow) {
for peerID, did := range peers {
result[peerID] = did
}
}
}
oclib.GetLogger().Debug().Msg(fmt.Sprintf("name index lookup %q matched %d peer(s)", needle, len(result)))
return result
}


@@ -0,0 +1,783 @@
package indexer
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"math/rand"
"slices"
"strings"
"sync"
"time"
"oc-discovery/daemons/node/common"
oclib "cloud.o-forge.io/core/oc-lib"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/network"
pp "github.com/libp2p/go-libp2p/core/peer"
)
const (
// IndexerTTL is the lifetime of a live-indexer cache entry. Set to 50% above
// the recommended 60s heartbeat interval so a single delayed renewal does not
// evict a healthy indexer from the native's cache.
IndexerTTL = 90 * time.Second
// offloadInterval is how often the native checks if it can release responsible peers.
offloadInterval = 30 * time.Second
// dhtRefreshInterval is how often the background goroutine queries the DHT for
// known-but-expired indexer entries (written by neighbouring natives).
dhtRefreshInterval = 30 * time.Second
// maxFallbackPeers caps how many peers the native will accept in self-delegation
// mode. Beyond this limit the native refuses to act as a fallback indexer so it
// is not overwhelmed during prolonged indexer outages.
maxFallbackPeers = 50
)
// liveIndexerEntry tracks a registered indexer in the native's in-memory cache and DHT.
// PubKey and Signature are forwarded from the IndexerRegistration so the DHT validator
// can verify that the entry was produced by the peer owning the declared PeerID.
// FillRate is the fraction of capacity used (0=empty, 1=full) at last registration.
type liveIndexerEntry struct {
PeerID string `json:"peer_id"`
Addr string `json:"addr"`
ExpiresAt time.Time `json:"expires_at"`
RegTimestamp int64 `json:"reg_ts,omitempty"` // Timestamp from the original IndexerRegistration
PubKey []byte `json:"pub_key,omitempty"`
Signature []byte `json:"sig,omitempty"`
FillRate float64 `json:"fill_rate,omitempty"`
}
// NativeState holds runtime state specific to native indexer operation.
type NativeState struct {
liveIndexers map[string]*liveIndexerEntry // keyed by PeerID, local cache with TTL
liveIndexersMu sync.RWMutex
responsiblePeers map[pp.ID]struct{} // peers for which the native is fallback indexer
responsibleMu sync.RWMutex
// knownPeerIDs accumulates all indexer PeerIDs ever seen (local stream or gossip).
// Used by refreshIndexersFromDHT to re-hydrate expired entries from the shared DHT,
// including entries written by other natives.
knownPeerIDs map[string]string
knownMu sync.RWMutex
// cancel stops background goroutines (runOffloadLoop, refreshIndexersFromDHT)
// when the native shuts down.
cancel context.CancelFunc
}
func newNativeState(cancel context.CancelFunc) *NativeState {
return &NativeState{
liveIndexers: map[string]*liveIndexerEntry{},
responsiblePeers: map[pp.ID]struct{}{},
knownPeerIDs: map[string]string{},
cancel: cancel,
}
}
// IndexerRecordValidator validates indexer DHT entries under the "indexer" namespace.
type IndexerRecordValidator struct{}
func (v IndexerRecordValidator) Validate(_ string, value []byte) error {
var e liveIndexerEntry
if err := json.Unmarshal(value, &e); err != nil {
return err
}
if e.Addr == "" {
return errors.New("missing addr")
}
if e.ExpiresAt.Before(time.Now().UTC()) {
return errors.New("expired indexer record")
}
// Verify self-signature when present — rejects entries forged by a
// compromised native that does not control the declared PeerID.
if len(e.Signature) > 0 && len(e.PubKey) > 0 {
pub, err := crypto.UnmarshalPublicKey(e.PubKey)
if err != nil {
return fmt.Errorf("indexer entry: invalid public key: %w", err)
}
payload := []byte(fmt.Sprintf("%s|%s|%d", e.PeerID, e.Addr, e.RegTimestamp))
if ok, err := pub.Verify(payload, e.Signature); err != nil || !ok {
return errors.New("indexer entry: invalid signature")
}
}
return nil
}
func (v IndexerRecordValidator) Select(_ string, values [][]byte) (int, error) {
var newest time.Time
index := 0
for i, val := range values {
var e liveIndexerEntry
if err := json.Unmarshal(val, &e); err != nil {
continue
}
if e.ExpiresAt.After(newest) {
newest = e.ExpiresAt
index = i
}
}
return index, nil
}
// InitNative registers native-specific stream handlers and starts background loops.
// Must be called after DHT is initialized.
func (ix *IndexerService) InitNative() {
ctx, cancel := context.WithCancel(context.Background())
ix.Native = newNativeState(cancel)
ix.Host.SetStreamHandler(common.ProtocolHeartbeat, ix.HandleHeartbeat) // specific heartbeat for Indexer.
ix.Host.SetStreamHandler(common.ProtocolNativeSubscription, ix.handleNativeSubscription)
ix.Host.SetStreamHandler(common.ProtocolNativeUnsubscribe, ix.handleNativeUnsubscribe)
ix.Host.SetStreamHandler(common.ProtocolNativeGetIndexers, ix.handleNativeGetIndexers)
ix.Host.SetStreamHandler(common.ProtocolNativeConsensus, ix.handleNativeConsensus)
ix.Host.SetStreamHandler(common.ProtocolNativeGetPeers, ix.handleNativeGetPeers)
ix.Host.SetStreamHandler(common.ProtocolIndexerGetNatives, ix.handleGetNatives)
ix.subscribeIndexerRegistry()
// Ensure long connections to other configured natives (native-to-native mesh).
common.EnsureNativePeers(ix.Host)
go ix.runOffloadLoop(ctx)
go ix.refreshIndexersFromDHT(ctx)
}
// subscribeIndexerRegistry joins the PubSub topic used by natives to gossip newly
// registered indexer PeerIDs to one another, enabling cross-native DHT discovery.
func (ix *IndexerService) subscribeIndexerRegistry() {
logger := oclib.GetLogger()
ix.PS.RegisterTopicValidator(common.TopicIndexerRegistry, func(_ context.Context, _ pp.ID, msg *pubsub.Message) bool {
// Parse as a signed IndexerRegistration.
var reg common.IndexerRegistration
if err := json.Unmarshal(msg.Data, &reg); err != nil {
return false
}
if reg.Addr == "" {
return false
}
if _, err := pp.AddrInfoFromString(reg.Addr); err != nil {
return false
}
// Verify the self-signature when present (rejects forged gossip from a
// compromised native that does not control the announced PeerID).
if ok, _ := reg.Verify(); !ok {
return false
}
// Accept only messages from known native peers or from this host itself.
// This prevents external PSK participants from injecting registry entries.
from := msg.GetFrom()
if from == ix.Host.ID() {
return true
}
common.StreamNativeMu.RLock()
_, knownNative := common.StaticNatives[from.String()]
if !knownNative {
for _, ad := range common.StaticNatives {
if ad.ID == from {
knownNative = true
break
}
}
}
common.StreamNativeMu.RUnlock()
return knownNative
})
topic, err := ix.PS.Join(common.TopicIndexerRegistry)
if err != nil {
logger.Err(err).Msg("native: failed to join indexer registry topic")
return
}
sub, err := topic.Subscribe()
if err != nil {
logger.Err(err).Msg("native: failed to subscribe to indexer registry topic")
return
}
ix.PubsubMu.Lock()
ix.LongLivedPubSubs[common.TopicIndexerRegistry] = topic
ix.PubsubMu.Unlock()
go func() {
for {
msg, err := sub.Next(context.Background())
if err != nil {
return
}
// The gossip payload is a JSON-encoded IndexerRegistration (signed).
var gossipReg common.IndexerRegistration
if jsonErr := json.Unmarshal(msg.Data, &gossipReg); jsonErr != nil {
continue
}
if gossipReg.Addr == "" || gossipReg.PeerID == "" {
continue
}
// A neighbouring native registered this PeerID; add to known set for DHT refresh.
ix.Native.knownMu.Lock()
ix.Native.knownPeerIDs[gossipReg.PeerID] = gossipReg.Addr
ix.Native.knownMu.Unlock()
}
}()
}
// handleNativeSubscription stores an indexer's alive registration in the local cache
// immediately, then persists it to the DHT asynchronously.
// The stream is temporary: indexer sends one IndexerRegistration and closes.
func (ix *IndexerService) handleNativeSubscription(s network.Stream) {
defer s.Close()
logger := oclib.GetLogger()
logger.Debug().Msg("native subscription: stream opened")
for {
var reg common.IndexerRegistration
if err := json.NewDecoder(s).Decode(&reg); err != nil {
logger.Err(err).Msg("native subscription: decode")
if errors.Is(err, io.EOF) || errors.Is(err, io.ErrUnexpectedEOF) ||
strings.Contains(err.Error(), "reset") ||
strings.Contains(err.Error(), "closed") ||
strings.Contains(err.Error(), "too many connections") {
return
}
continue
}
logger.Debug().Str("addr", reg.Addr).Msg("native subscription: registration received")
if reg.Addr == "" {
logger.Error().Msg("native subscription: missing addr")
return
}
if reg.PeerID == "" {
ad, err := pp.AddrInfoFromString(reg.Addr)
if err != nil {
logger.Err(err).Msg("native subscription: invalid addr")
return
}
reg.PeerID = ad.ID.String()
}
// Reject registrations with an invalid self-signature.
if ok, err := reg.Verify(); !ok {
logger.Warn().Str("peer", reg.PeerID).Err(err).Msg("native subscription: invalid signature, rejecting")
return
}
// Build entry with a fresh TTL — must happen before the cache write so the
// TTL window is not consumed by DHT retries.
entry := &liveIndexerEntry{
PeerID: reg.PeerID,
Addr: reg.Addr,
ExpiresAt: time.Now().UTC().Add(IndexerTTL),
RegTimestamp: reg.Timestamp,
PubKey: reg.PubKey,
Signature: reg.Signature,
FillRate: reg.FillRate,
}
// Verify that the declared address is actually reachable before admitting
// the registration. This async dial runs in the background; the indexer is
// tentatively admitted immediately (so heartbeats don't get stuck) but is
// evicted from the cache if the dial fails within 5 s.
go func(e *liveIndexerEntry) {
ad, err := pp.AddrInfoFromString(e.Addr)
if err != nil {
logger.Warn().Str("addr", e.Addr).Msg("native subscription: invalid addr during validation, rejecting")
ix.Native.liveIndexersMu.Lock()
if cur := ix.Native.liveIndexers[e.PeerID]; cur == e {
delete(ix.Native.liveIndexers, e.PeerID)
}
ix.Native.liveIndexersMu.Unlock()
return
}
dialCtx, dialCancel := context.WithTimeout(context.Background(), 5*time.Second)
defer dialCancel()
if err := ix.Host.Connect(dialCtx, *ad); err != nil {
logger.Warn().Str("addr", e.Addr).Err(err).Msg("native subscription: declared address unreachable, rejecting")
ix.Native.liveIndexersMu.Lock()
if cur := ix.Native.liveIndexers[e.PeerID]; cur == e {
delete(ix.Native.liveIndexers, e.PeerID)
}
ix.Native.liveIndexersMu.Unlock()
}
}(entry)
// Update local cache and known set immediately so concurrent GetIndexers calls
// can already see this indexer without waiting for the DHT write to complete.
ix.Native.liveIndexersMu.Lock()
_, isRenewal := ix.Native.liveIndexers[reg.PeerID]
ix.Native.liveIndexers[reg.PeerID] = entry
ix.Native.liveIndexersMu.Unlock()
ix.Native.knownMu.Lock()
ix.Native.knownPeerIDs[reg.PeerID] = reg.Addr
ix.Native.knownMu.Unlock()
// Gossip the signed registration to neighbouring natives.
// The payload is JSON-encoded so the receiver can verify the self-signature.
ix.PubsubMu.RLock()
topic := ix.LongLivedPubSubs[common.TopicIndexerRegistry]
ix.PubsubMu.RUnlock()
if topic != nil {
if gossipData, marshalErr := json.Marshal(reg); marshalErr == nil {
if err := topic.Publish(context.Background(), gossipData); err != nil {
logger.Err(err).Msg("native subscription: registry gossip publish")
}
}
}
if !isRenewal {
ix.Native.liveIndexersMu.RLock()
count := len(ix.Native.liveIndexers)
ix.Native.liveIndexersMu.RUnlock()
logger.Info().Str("peer", reg.PeerID).Int("count", count).Msg("native: indexer registered")
}
// Persist in DHT asynchronously with bounded retry.
// Max retry window = IndexerTTL (90 s) — retrying past entry expiry is pointless.
// Backoff: 10 s → 20 s → 40 s, then repeats at 40 s until deadline.
key := ix.genIndexerKey(reg.PeerID)
data, err := json.Marshal(entry)
if err != nil {
logger.Err(err).Msg("native subscription: marshal entry")
return
}
go func() {
deadline := time.Now().Add(IndexerTTL)
backoff := 10 * time.Second
for {
if time.Now().After(deadline) {
logger.Warn().Str("key", key).Msg("native subscription: DHT put abandoned, entry TTL exceeded")
return
}
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
err := ix.DHT.PutValue(ctx, key, data)
cancel()
if err == nil {
return
}
logger.Err(err).Msg("native subscription: DHT put " + key)
if !strings.Contains(err.Error(), "failed to find any peer in table") {
return // non-retryable error
}
remaining := time.Until(deadline)
if backoff > remaining {
backoff = remaining
}
if backoff <= 0 {
return
}
time.Sleep(backoff)
if backoff < 40*time.Second {
backoff *= 2
}
}
}()
break
}
}
// handleNativeUnsubscribe removes a departing indexer from the local cache and
// known set immediately, without waiting for TTL expiry.
func (ix *IndexerService) handleNativeUnsubscribe(s network.Stream) {
defer s.Close()
logger := oclib.GetLogger()
var reg common.IndexerRegistration
if err := json.NewDecoder(s).Decode(&reg); err != nil {
logger.Err(err).Msg("native unsubscribe: decode")
return
}
if reg.PeerID == "" {
logger.Warn().Msg("native unsubscribe: missing peer_id")
return
}
ix.Native.liveIndexersMu.Lock()
delete(ix.Native.liveIndexers, reg.PeerID)
ix.Native.liveIndexersMu.Unlock()
ix.Native.knownMu.Lock()
delete(ix.Native.knownPeerIDs, reg.PeerID)
ix.Native.knownMu.Unlock()
logger.Info().Str("peer", reg.PeerID).Msg("native: indexer explicitly unregistered")
}
// handleNativeGetIndexers returns this native's own list of reachable indexers.
// Self-delegation (native acting as temporary fallback indexer) is only permitted
// for nodes — never for peers that are themselves registered indexers in knownPeerIDs.
// The consensus across natives is the responsibility of the requesting node/indexer.
func (ix *IndexerService) handleNativeGetIndexers(s network.Stream) {
defer s.Close()
logger := oclib.GetLogger()
for {
var req common.GetIndexersRequest
if err := json.NewDecoder(s).Decode(&req); err != nil {
logger.Err(err).Msg("native get indexers: decode")
if errors.Is(err, io.EOF) || errors.Is(err, io.ErrUnexpectedEOF) ||
strings.Contains(err.Error(), "reset") ||
strings.Contains(err.Error(), "closed") ||
strings.Contains(err.Error(), "too many connections") {
return
}
continue
}
if req.Count <= 0 {
req.Count = 3
}
callerPeerID := s.Conn().RemotePeer().String()
reachable := ix.reachableLiveIndexers(req.Count, callerPeerID)
var resp common.GetIndexersResponse
if len(reachable) == 0 {
// No live indexers reachable — try to self-delegate.
if ix.selfDelegate(s.Conn().RemotePeer(), &resp) {
logger.Info().Str("peer", callerPeerID).Msg("native: no indexers, acting as fallback for node")
} else {
// Fallback pool saturated: return empty so the caller retries another
// native instead of piling more load onto this one.
logger.Warn().Str("peer", callerPeerID).Int("pool", maxFallbackPeers).Msg(
"native: fallback pool saturated, refusing self-delegation")
}
} else {
// Sort by fill rate ascending so less-full indexers are preferred for routing.
ix.Native.liveIndexersMu.RLock()
fillRates := make(map[string]float64, len(reachable))
for _, addr := range reachable {
ad, err := pp.AddrInfoFromString(addr)
if err != nil {
continue
}
for _, e := range ix.Native.liveIndexers {
if e.PeerID == ad.ID.String() {
fillRates[addr] = e.FillRate
break
}
}
}
ix.Native.liveIndexersMu.RUnlock()
// Sort by routing weight descending: weight = fillRate × (1 − fillRate).
// This prefers indexers in the "trust sweet spot" — proven popular (fillRate > 0)
// but not saturated (fillRate < 1). Peak at fillRate ≈ 0.5.
routingWeight := func(addr string) float64 {
f := fillRates[addr]
return f * (1 - f)
}
slices.SortFunc(reachable, func(a, b string) int {
wa, wb := routingWeight(a), routingWeight(b)
switch {
case wa > wb:
return -1
case wa < wb:
return 1
}
return 0
})
if req.Count > len(reachable) {
req.Count = len(reachable)
}
resp.Indexers = reachable[:req.Count]
resp.FillRates = fillRates
}
if err := json.NewEncoder(s).Encode(resp); err != nil {
logger.Err(err).Msg("native get indexers: encode response")
}
break
}
}
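The fillRate × (1 − fillRate) weighting used above can be illustrated standalone. The parabola peaks at 0.5 and is symmetric, so an empty indexer (unproven) and a full one (saturated) are equally deprioritised:

```go
package main

import "fmt"

func main() {
	// Same weighting as handleNativeGetIndexers' routingWeight closure.
	routingWeight := func(f float64) float64 { return f * (1 - f) }
	for _, f := range []float64{0, 0.25, 0.5, 0.75, 1} {
		fmt.Printf("fill=%.2f weight=%.4f\n", f, routingWeight(f))
	}
	// fill=0.50 yields the maximum weight 0.25; fill=0 and fill=1 both yield 0.
}
```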
// handleNativeConsensus answers a consensus challenge from a node/indexer.
// It returns:
// - Trusted: which of the candidates it considers alive.
// - Suggestions: extras it knows and trusts that were not in the candidate list.
func (ix *IndexerService) handleNativeConsensus(s network.Stream) {
defer s.Close()
logger := oclib.GetLogger()
for {
var req common.ConsensusRequest
if err := json.NewDecoder(s).Decode(&req); err != nil {
logger.Err(err).Msg("native consensus: decode")
if errors.Is(err, io.EOF) || errors.Is(err, io.ErrUnexpectedEOF) ||
strings.Contains(err.Error(), "reset") ||
strings.Contains(err.Error(), "closed") ||
strings.Contains(err.Error(), "too many connections") {
return
}
continue
}
myList := ix.reachableLiveIndexers(-1, s.Conn().RemotePeer().String())
mySet := make(map[string]struct{}, len(myList))
for _, addr := range myList {
mySet[addr] = struct{}{}
}
trusted := []string{}
candidateSet := make(map[string]struct{}, len(req.Candidates))
for _, addr := range req.Candidates {
candidateSet[addr] = struct{}{}
if _, ok := mySet[addr]; ok {
trusted = append(trusted, addr) // candidate we also confirm as reachable
}
}
// Extras we trust but that the requester didn't include → suggestions.
suggestions := []string{}
for _, addr := range myList {
if _, inCandidates := candidateSet[addr]; !inCandidates {
suggestions = append(suggestions, addr)
}
}
resp := common.ConsensusResponse{Trusted: trusted, Suggestions: suggestions}
if err := json.NewEncoder(s).Encode(resp); err != nil {
logger.Err(err).Msg("native consensus: encode response")
}
break
}
}
// selfDelegate marks the caller as a responsible peer and exposes this native's own
// address as its temporary indexer. Returns false when the fallback pool is saturated
// (maxFallbackPeers reached) — the caller must return an empty response so the node
// retries later instead of pinning indefinitely to an overloaded native.
func (ix *IndexerService) selfDelegate(remotePeer pp.ID, resp *common.GetIndexersResponse) bool {
ix.Native.responsibleMu.Lock()
defer ix.Native.responsibleMu.Unlock()
if len(ix.Native.responsiblePeers) >= maxFallbackPeers {
return false
}
addrs := ix.Host.Addrs()
if len(addrs) == 0 {
return false // no listen address to advertise as a fallback
}
ix.Native.responsiblePeers[remotePeer] = struct{}{}
resp.IsSelfFallback = true
resp.Indexers = []string{addrs[len(addrs)-1].String() + "/p2p/" + ix.Host.ID().String()}
return true
}
// reachableLiveIndexers returns the multiaddrs of non-expired, pingable indexers
// from the local cache (kept fresh by refreshIndexersFromDHT in background).
func (ix *IndexerService) reachableLiveIndexers(count int, from ...string) []string {
ix.Native.liveIndexersMu.RLock()
now := time.Now().UTC()
candidates := []*liveIndexerEntry{}
for _, e := range ix.Native.liveIndexers {
if e.ExpiresAt.After(now) && !slices.Contains(from, e.PeerID) {
candidates = append(candidates, e)
}
}
ix.Native.liveIndexersMu.RUnlock()
if (count > 0 && len(candidates) < count) || count < 0 {
ix.Native.knownMu.RLock()
for k, v := range ix.Native.knownPeerIDs {
// Include peers whose liveIndexers entry is absent OR expired.
// A non-nil but expired entry means the peer was once known but
// has since timed out — PeerIsAlive below will decide if it's back.
if !slices.Contains(from, k) {
candidates = append(candidates, &liveIndexerEntry{
PeerID: k,
Addr: v,
})
}
}
ix.Native.knownMu.RUnlock()
}
reachable := []string{}
for _, e := range candidates {
ad, err := pp.AddrInfoFromString(e.Addr)
if err != nil {
continue
}
if common.PeerIsAlive(ix.Host, *ad) {
reachable = append(reachable, e.Addr)
}
}
return reachable
}
// refreshIndexersFromDHT runs in background and queries the shared DHT for every known
// indexer PeerID whose local cache entry is missing or expired. This supplements the
// local cache with entries written by neighbouring natives.
func (ix *IndexerService) refreshIndexersFromDHT(ctx context.Context) {
t := time.NewTicker(dhtRefreshInterval)
defer t.Stop()
logger := oclib.GetLogger()
for {
select {
case <-ctx.Done():
return
case <-t.C:
}
ix.Native.knownMu.RLock()
peerIDs := make([]string, 0, len(ix.Native.knownPeerIDs))
for pid := range ix.Native.knownPeerIDs {
peerIDs = append(peerIDs, pid)
}
ix.Native.knownMu.RUnlock()
now := time.Now().UTC()
for _, pid := range peerIDs {
ix.Native.liveIndexersMu.RLock()
existing := ix.Native.liveIndexers[pid]
ix.Native.liveIndexersMu.RUnlock()
if existing != nil && existing.ExpiresAt.After(now) {
continue // still fresh in local cache
}
key := ix.genIndexerKey(pid)
dhtCtx, dhtCancel := context.WithTimeout(ctx, 5*time.Second) // inherit ctx so shutdown cancels in-flight queries
ch, err := ix.DHT.SearchValue(dhtCtx, key)
if err != nil {
dhtCancel()
continue
}
var best *liveIndexerEntry
for b := range ch {
var e liveIndexerEntry
if err := json.Unmarshal(b, &e); err != nil {
continue
}
if e.ExpiresAt.After(time.Now().UTC()) {
if best == nil || e.ExpiresAt.After(best.ExpiresAt) {
best = &e
}
}
}
dhtCancel()
if best != nil {
ix.Native.liveIndexersMu.Lock()
ix.Native.liveIndexers[best.PeerID] = best
ix.Native.liveIndexersMu.Unlock()
logger.Info().Str("peer", best.PeerID).Msg("native: refreshed indexer from DHT")
} else {
// DHT has no fresh entry — peer is gone, prune from known set.
ix.Native.knownMu.Lock()
delete(ix.Native.knownPeerIDs, pid)
ix.Native.knownMu.Unlock()
logger.Info().Str("peer", pid).Msg("native: pruned stale peer from knownPeerIDs")
}
}
}
}
func (ix *IndexerService) genIndexerKey(peerID string) string {
return "/indexer/" + peerID
}
// runOffloadLoop periodically checks if real indexers are available and releases
// responsible peers so they can reconnect to actual indexers on their next attempt.
func (ix *IndexerService) runOffloadLoop(ctx context.Context) {
t := time.NewTicker(offloadInterval)
defer t.Stop()
logger := oclib.GetLogger()
for {
select {
case <-ctx.Done():
return
case <-t.C:
}
ix.Native.responsibleMu.RLock()
count := len(ix.Native.responsiblePeers)
ix.Native.responsibleMu.RUnlock()
if count == 0 {
continue
}
ix.Native.responsibleMu.RLock()
peerIDS := []string{}
for p := range ix.Native.responsiblePeers {
peerIDS = append(peerIDS, p.String())
}
ix.Native.responsibleMu.RUnlock()
if len(ix.reachableLiveIndexers(-1, peerIDS...)) > 0 {
// Snapshot the responsible-peer set so the range below does not race with
// concurrent selfDelegate writes on the live map.
ix.Native.responsibleMu.RLock()
released := make(map[pp.ID]struct{}, len(ix.Native.responsiblePeers))
for p := range ix.Native.responsiblePeers {
released[p] = struct{}{}
}
ix.Native.responsibleMu.RUnlock()
// Reset (not Close) heartbeat streams of released peers.
// Close() only half-closes the native's write direction — the peer's write
// direction stays open and sendHeartbeat never sees an error.
// Reset() abruptly terminates both directions, making the peer's next
// json.Encode return an error which triggers replenishIndexersFromNative.
ix.StreamMU.Lock()
if streams := ix.StreamRecords[common.ProtocolHeartbeat]; streams != nil {
for pid := range released {
if rec, ok := streams[pid]; ok {
if rec.HeartbeatStream != nil && rec.HeartbeatStream.Stream != nil {
rec.HeartbeatStream.Stream.Reset()
}
ix.Native.responsibleMu.Lock()
delete(ix.Native.responsiblePeers, pid)
ix.Native.responsibleMu.Unlock()
delete(streams, pid)
logger.Info().Str("peer", pid.String()).Str("proto", string(common.ProtocolHeartbeat)).Msg(
"native: offload — stream reset, peer will reconnect to real indexer")
} else {
// No recorded heartbeat stream for this peer: either it never
// passed the score check (new peer, uptime=0 → score<75) or the
// stream was GC'd. We cannot send a Reset signal, so close the
// whole connection instead — this makes the peer's sendHeartbeat
// return an error, which triggers replenishIndexersFromNative and
// migrates it to a real indexer.
ix.Native.responsibleMu.Lock()
delete(ix.Native.responsiblePeers, pid)
ix.Native.responsibleMu.Unlock()
go ix.Host.Network().ClosePeer(pid)
logger.Info().Str("peer", pid.String()).Msg(
"native: offload — no heartbeat stream, closing connection so peer re-requests real indexers")
}
}
}
ix.StreamMU.Unlock()
logger.Info().Int("released", count).Msg("native: offloaded responsible peers to real indexers")
}
}
}
// handleNativeGetPeers returns a random selection of this native's known native
// contacts, excluding any in the request's Exclude list.
func (ix *IndexerService) handleNativeGetPeers(s network.Stream) {
defer s.Close()
logger := oclib.GetLogger()
for {
var req common.GetNativePeersRequest
if err := json.NewDecoder(s).Decode(&req); err != nil {
logger.Err(err).Msg("native get peers: decode")
if errors.Is(err, io.EOF) || errors.Is(err, io.ErrUnexpectedEOF) ||
strings.Contains(err.Error(), "reset") ||
strings.Contains(err.Error(), "closed") ||
strings.Contains(err.Error(), "too many connections") {
return
}
continue
}
if req.Count <= 0 {
req.Count = 1
}
excludeSet := make(map[string]struct{}, len(req.Exclude))
for _, e := range req.Exclude {
excludeSet[e] = struct{}{}
}
common.StreamNativeMu.RLock()
candidates := make([]string, 0, len(common.StaticNatives))
for addr := range common.StaticNatives {
if _, excluded := excludeSet[addr]; !excluded {
candidates = append(candidates, addr)
}
}
common.StreamNativeMu.RUnlock()
rand.Shuffle(len(candidates), func(i, j int) { candidates[i], candidates[j] = candidates[j], candidates[i] })
if req.Count > len(candidates) {
req.Count = len(candidates)
}
resp := common.GetNativePeersResponse{Peers: candidates[:req.Count]}
if err := json.NewEncoder(s).Encode(resp); err != nil {
logger.Err(err).Msg("native get peers: encode response")
}
break
}
}
// StartNativeRegistration starts a goroutine that periodically registers this
// indexer with all configured native indexers (every RecommendedHeartbeatInterval).

View File

@@ -0,0 +1,143 @@
package indexer
import (
"context"
"oc-discovery/conf"
"oc-discovery/daemons/node/common"
"strings"
"sync"
oclib "cloud.o-forge.io/core/oc-lib"
dht "github.com/libp2p/go-libp2p-kad-dht"
pubsub "github.com/libp2p/go-libp2p-pubsub"
record "github.com/libp2p/go-libp2p-record"
"github.com/libp2p/go-libp2p/core/host"
pp "github.com/libp2p/go-libp2p/core/peer"
)
// IndexerService manages the indexer node's state: stream records, DHT, pubsub.
type IndexerService struct {
*common.LongLivedStreamRecordedService[PeerRecord]
PS *pubsub.PubSub
DHT *dht.IpfsDHT
isStrictIndexer bool
mu sync.RWMutex
IsNative bool
Native *NativeState // non-nil when IsNative == true
nameIndex *nameIndexState
}
// NewIndexerService creates an IndexerService.
// If ps is nil, this is a strict indexer (no pre-existing gossip sub from a node).
func NewIndexerService(h host.Host, ps *pubsub.PubSub, maxNode int, isNative bool) *IndexerService {
logger := oclib.GetLogger()
logger.Info().Msg("open indexer mode...")
var err error
ix := &IndexerService{
LongLivedStreamRecordedService: common.NewStreamRecordedService[PeerRecord](h, maxNode),
isStrictIndexer: ps == nil,
IsNative: isNative,
}
if ps == nil {
ps, err = pubsub.NewGossipSub(context.Background(), ix.Host)
if err != nil {
panic(err) // cannot run an indexer without a propagation pubsub
}
}
ix.PS = ps
if ix.isStrictIndexer && !isNative {
logger.Info().Msg("connect to indexers as strict indexer...")
common.ConnectToIndexers(h, conf.GetConfig().MinIndexer, conf.GetConfig().MaxIndexer, ix.Host.ID())
logger.Info().Msg("subscribe to decentralized search flow as strict indexer...")
go ix.SubscribeToSearch(ix.PS, nil)
}
if !isNative {
logger.Info().Msg("init distributed name index...")
ix.initNameIndex(ps)
ix.LongLivedStreamRecordedService.AfterDelete = func(pid pp.ID, name, did string) {
ix.publishNameEvent(NameIndexDelete, name, pid.String(), did)
}
}
// Parse bootstrap peers from configured native/indexer addresses so that the
// DHT can find its routing table entries even in a fresh deployment.
var bootstrapPeers []pp.AddrInfo
for _, addrStr := range strings.Split(conf.GetConfig().NativeIndexerAddresses+","+conf.GetConfig().IndexerAddresses, ",") {
addrStr = strings.TrimSpace(addrStr)
if addrStr == "" {
continue
}
if ad, err := pp.AddrInfoFromString(addrStr); err == nil {
bootstrapPeers = append(bootstrapPeers, *ad)
}
}
dhtOpts := []dht.Option{
dht.Mode(dht.ModeServer),
dht.ProtocolPrefix("oc"), // private network: isolates this DHT from the public IPFS DHT
dht.Validator(record.NamespacedValidator{
"node": PeerRecordValidator{},
"indexer": IndexerRecordValidator{}, // for native indexer registry
"name": DefaultValidator{},
"pid": DefaultValidator{},
}),
}
if len(bootstrapPeers) > 0 {
dhtOpts = append(dhtOpts, dht.BootstrapPeers(bootstrapPeers...))
}
if ix.DHT, err = dht.New(context.Background(), ix.Host, dhtOpts...); err != nil {
logger.Err(err).Msg("failed to create DHT")
return nil
}
// InitNative must happen after DHT is ready
if isNative {
ix.InitNative()
} else {
ix.initNodeHandler()
// Register with configured natives so this indexer appears in their cache.
// Pass a fill rate provider so the native can route new nodes to less-loaded indexers.
if nativeAddrs := conf.GetConfig().NativeIndexerAddresses; nativeAddrs != "" {
fillRateFn := func() float64 {
ix.StreamMU.RLock()
n := len(ix.StreamRecords[common.ProtocolHeartbeat])
ix.StreamMU.RUnlock()
maxN := ix.MaxNodesConn()
if maxN <= 0 {
return 0
}
rate := float64(n) / float64(maxN)
if rate > 1 {
rate = 1
}
return rate
}
common.StartNativeRegistration(ix.Host, nativeAddrs, fillRateFn)
}
}
return ix
}
func (ix *IndexerService) Close() {
if ix.Native != nil && ix.Native.cancel != nil {
ix.Native.cancel()
}
// Explicitly deregister from natives on clean shutdown so they evict this
// indexer immediately rather than waiting for TTL expiry (~90 s).
if !ix.IsNative {
if nativeAddrs := conf.GetConfig().NativeIndexerAddresses; nativeAddrs != "" {
common.UnregisterFromNative(ix.Host, nativeAddrs)
}
}
ix.DHT.Close()
ix.PS.UnregisterTopicValidator(common.TopicPubSubSearch)
if ix.nameIndex != nil {
ix.PS.UnregisterTopicValidator(TopicNameIndex)
}
ix.StreamMU.RLock()
for _, s := range ix.StreamRecords {
for _, ss := range s {
if ss.HeartbeatStream != nil && ss.HeartbeatStream.Stream != nil {
ss.HeartbeatStream.Stream.Close()
}
}
}
ix.StreamMU.RUnlock()
}

View File

@@ -0,0 +1,64 @@
package indexer
import (
"encoding/json"
"errors"
"time"
)
type DefaultValidator struct{}
func (v DefaultValidator) Validate(key string, value []byte) error {
return nil
}
func (v DefaultValidator) Select(key string, values [][]byte) (int, error) {
return 0, nil
}
type PeerRecordValidator struct{}
func (v PeerRecordValidator) Validate(key string, value []byte) error {
var rec PeerRecord
if err := json.Unmarshal(value, &rec); err != nil {
return errors.New("invalid json")
}
// PeerID must exist
if rec.PeerID == "" {
return errors.New("missing peerID")
}
// Expiry check
if rec.ExpiryDate.Before(time.Now().UTC()) {
return errors.New("record expired")
}
// Signature verification
if _, err := rec.Verify(); err != nil {
return errors.New("invalid signature")
}
return nil
}
func (v PeerRecordValidator) Select(key string, values [][]byte) (int, error) {
var newest time.Time
index := 0
for i, val := range values {
var rec PeerRecord
if err := json.Unmarshal(val, &rec); err != nil {
continue
}
if rec.ExpiryDate.After(newest) {
newest = rec.ExpiryDate
index = i
}
}
return index, nil
}

225
daemons/node/nats.go Normal file
View File

@@ -0,0 +1,225 @@
package node
import (
"context"
"encoding/json"
"fmt"
"oc-discovery/daemons/node/common"
"oc-discovery/daemons/node/stream"
oclib "cloud.o-forge.io/core/oc-lib"
"cloud.o-forge.io/core/oc-lib/config"
"cloud.o-forge.io/core/oc-lib/models/peer"
"cloud.o-forge.io/core/oc-lib/tools"
pp "github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/core/protocol"
)
type configPayload struct {
PeerID string `json:"source_peer_id"`
}
type executionConsidersPayload struct {
PeerIDs []string `json:"peer_ids"`
}
func ListenNATS(n *Node) {
tools.NewNATSCaller().ListenNats(map[tools.NATSMethod]func(tools.NATSResponse){
/*tools.VERIFY_RESOURCE: func(resp tools.NATSResponse) {
if resp.FromApp == config.GetAppName() {
return
}
if res, err := resources.ToResource(resp.Datatype.EnumIndex(), resp.Payload); err == nil {
access := oclib.NewRequestAdmin(oclib.LibDataEnum(oclib.PEER), nil)
p := access.LoadOne(res.GetCreatorID())
realP := p.ToPeer()
if realP == nil {
return
} else if realP.Relation == peer.SELF {
pubKey, err := common.PubKeyFromString(realP.PublicKey) // extract pubkey from pubkey str
if err != nil {
return
}
ok, _ := pubKey.Verify(resp.Payload, res.GetSignature())
if b, err := json.Marshal(stream.Verify{
IsVerified: ok,
}); err == nil {
tools.NewNATSCaller().SetNATSPub(tools.VERIFY_RESOURCE, tools.NATSResponse{
FromApp: "oc-discovery",
Method: int(tools.VERIFY_RESOURCE),
Payload: b,
})
}
} else if realP.Relation != peer.BLACKLIST {
n.StreamService.PublishVerifyResources(&resp.Datatype, resp.User, realP.PeerID, resp.Payload)
}
}
},*/
tools.CREATE_RESOURCE: func(resp tools.NATSResponse) {
if resp.FromApp == config.GetAppName() && resp.Datatype != tools.PEER && resp.Datatype != tools.WORKFLOW {
return
}
logger := oclib.GetLogger()
m := map[string]interface{}{}
err := json.Unmarshal(resp.Payload, &m)
if err != nil {
logger.Err(err)
return
}
p := &peer.Peer{}
p = p.Deserialize(m, p).(*peer.Peer)
ad, err := pp.AddrInfoFromString(p.StreamAddress)
if err != nil {
return
}
n.StreamService.Mu.Lock()
defer n.StreamService.Mu.Unlock()
if p.Relation == peer.PARTNER {
n.StreamService.ConnectToPartner(p.StreamAddress)
} else {
ps := common.ProtocolStream{}
for proto, s := range n.StreamService.Streams {
m := map[pp.ID]*common.Stream{}
for k := range s {
if ad.ID != k {
m[k] = s[k]
} else {
s[k].Stream.Close()
}
}
ps[proto] = m
}
n.StreamService.Streams = ps
}
},
tools.PROPALGATION_EVENT: func(resp tools.NATSResponse) {
fmt.Println("PROPALGATION")
if resp.FromApp == config.GetAppName() {
return
}
var propalgation tools.PropalgationMessage
err := json.Unmarshal(resp.Payload, &propalgation)
var dt *tools.DataType
if propalgation.DataType > 0 {
dtt := tools.DataType(propalgation.DataType)
dt = &dtt
}
fmt.Println("PROPALGATION ACT", propalgation.Action, propalgation.Action == tools.PB_CREATE, err)
if err == nil {
switch propalgation.Action {
case tools.PB_ADMIRALTY_CONFIG, tools.PB_MINIO_CONFIG:
var m configPayload
var proto protocol.ID = stream.ProtocolAdmiraltyConfigResource
if propalgation.Action == tools.PB_MINIO_CONFIG {
proto = stream.ProtocolMinioConfigResource
}
if err := json.Unmarshal(resp.Payload, &m); err == nil {
peers, _ := n.GetPeerRecord(context.Background(), m.PeerID, false)
for _, p := range peers {
n.StreamService.PublishCommon(&resp.Datatype, resp.User,
p.PeerID, proto, resp.Payload)
}
}
case tools.PB_CREATE, tools.PB_UPDATE, tools.PB_DELETE:
fmt.Println(propalgation.Action, dt, resp.User, propalgation.Payload)
fmt.Println(n.StreamService.ToPartnerPublishEvent(
context.Background(),
propalgation.Action,
dt, resp.User,
propalgation.Payload,
))
case tools.PB_CONSIDERS:
switch resp.Datatype {
case tools.BOOKING, tools.PURCHASE_RESOURCE, tools.WORKFLOW_EXECUTION:
var m executionConsidersPayload
if err := json.Unmarshal(resp.Payload, &m); err == nil {
for _, p := range m.PeerIDs {
peers, _ := n.GetPeerRecord(context.Background(), p, false)
for _, pp := range peers {
n.StreamService.PublishCommon(&resp.Datatype, resp.User,
pp.PeerID, stream.ProtocolConsidersResource, resp.Payload)
}
}
}
default:
// minio / admiralty config considers — route back to OriginID.
var m struct {
OriginID string `json:"origin_id"`
}
if err := json.Unmarshal(propalgation.Payload, &m); err == nil && m.OriginID != "" {
peers, _ := n.GetPeerRecord(context.Background(), m.OriginID, false)
for _, p := range peers {
n.StreamService.PublishCommon(nil, resp.User,
p.PeerID, stream.ProtocolConsidersResource, propalgation.Payload)
}
}
}
case tools.PB_PLANNER:
m := map[string]interface{}{}
if err := json.Unmarshal(resp.Payload, &m); err == nil {
b := []byte{}
if len(m) > 1 {
b = resp.Payload
}
if m["peer_id"] == nil { // send to every active stream
n.StreamService.Mu.Lock()
if n.StreamService.Streams[stream.ProtocolSendPlanner] != nil {
for pid := range n.StreamService.Streams[stream.ProtocolSendPlanner] {
n.StreamService.PublishCommon(nil, resp.User, pid.String(), stream.ProtocolSendPlanner, b)
}
}
n.StreamService.Mu.Unlock() // unlock only on the path that locked
} else {
n.StreamService.PublishCommon(nil, resp.User, fmt.Sprintf("%v", m["peer_id"]), stream.ProtocolSendPlanner, b)
}
}
case tools.PB_CLOSE_PLANNER:
m := map[string]interface{}{}
if err := json.Unmarshal(resp.Payload, &m); err == nil {
n.StreamService.Mu.Lock()
if pid, err := pp.Decode(fmt.Sprintf("%v", m["peer_id"])); err == nil {
if n.StreamService.Streams[stream.ProtocolSendPlanner] != nil && n.StreamService.Streams[stream.ProtocolSendPlanner][pid] != nil {
n.StreamService.Streams[stream.ProtocolSendPlanner][pid].Stream.Close()
delete(n.StreamService.Streams[stream.ProtocolSendPlanner], pid)
}
}
n.StreamService.Mu.Unlock()
}
case tools.PB_SEARCH:
if propalgation.DataType == int(tools.PEER) {
m := map[string]interface{}{}
if err := json.Unmarshal(propalgation.Payload, &m); err == nil {
if peers, err := n.GetPeerRecord(context.Background(), fmt.Sprintf("%v", m["search"]), true); err == nil {
for _, p := range peers {
if b, err := json.Marshal(p); err == nil {
go tools.NewNATSCaller().SetNATSPub(tools.SEARCH_EVENT, tools.NATSResponse{
FromApp: "oc-discovery",
Datatype: tools.DataType(tools.PEER),
Method: int(tools.SEARCH_EVENT),
Payload: b,
})
}
}
}
}
} else {
m := map[string]interface{}{}
if err := json.Unmarshal(propalgation.Payload, &m); err == nil {
n.PubSubService.SearchPublishEvent(
context.Background(),
dt,
fmt.Sprintf("%v", m["type"]),
resp.User,
fmt.Sprintf("%v", m["search"]),
)
}
}
}
}
},
})
}

daemons/node/node.go (new file, 340 lines)

@@ -0,0 +1,340 @@
package node
import (
"context"
"encoding/json"
"errors"
"fmt"
"oc-discovery/conf"
"oc-discovery/daemons/node/common"
"oc-discovery/daemons/node/indexer"
"oc-discovery/daemons/node/pubsub"
"oc-discovery/daemons/node/stream"
"sync"
"time"
oclib "cloud.o-forge.io/core/oc-lib"
"cloud.o-forge.io/core/oc-lib/dbs"
"cloud.o-forge.io/core/oc-lib/models/peer"
"cloud.o-forge.io/core/oc-lib/tools"
"github.com/google/uuid"
"github.com/libp2p/go-libp2p"
pubsubs "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/crypto"
pp "github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/core/protocol"
)
type Node struct {
*common.LongLivedStreamRecordedService[interface{}] // change type of stream
PS *pubsubs.PubSub
IndexerService *indexer.IndexerService
PubSubService *pubsub.PubSubService
StreamService *stream.StreamService
PeerID pp.ID
isIndexer bool
peerRecord *indexer.PeerRecord
Mu sync.RWMutex
}
func InitNode(isNode bool, isIndexer bool, isNativeIndexer bool) (*Node, error) {
if !isNode && !isIndexer {
return nil, errors.New("node must run as at least a node or an indexer")
}
logger := oclib.GetLogger()
logger.Info().Msg("retrieving private key...")
priv, err := tools.LoadKeyFromFilePrivate() // your node private key
if err != nil {
return nil, err
}
logger.Info().Msg("retrieving psk file...")
psk, err := common.LoadPSKFromFile() // pre-shared key of the private network; the public OC PSK defines the public network
if err != nil {
return nil, err
}
logger.Info().Msg("open a host...")
h, err := libp2p.New(
libp2p.PrivateNetwork(psk),
libp2p.Identity(priv),
libp2p.ListenAddrStrings(
fmt.Sprintf("/ip4/0.0.0.0/tcp/%d", conf.GetConfig().NodeEndpointPort),
),
)
if err != nil {
return nil, errors.New("no host no node")
}
logger.Info().Msg("Host open on " + h.ID().String())
node := &Node{
PeerID: h.ID(),
isIndexer: isIndexer,
LongLivedStreamRecordedService: common.NewStreamRecordedService[interface{}](h, 1000),
}
// Register the bandwidth probe handler so any peer measuring this node's
// throughput can open a dedicated probe stream and read the echo.
h.SetStreamHandler(common.ProtocolBandwidthProbe, common.HandleBandwidthProbe)
var ps *pubsubs.PubSub
if isNode {
logger.Info().Msg("generate opencloud node...")
ps, err = pubsubs.NewGossipSub(context.Background(), node.Host)
if err != nil {
panic(err) // the node cannot run without the propagation pubsub that carries node state
}
node.PS = ps
// buildRecord returns a fresh signed PeerRecord as JSON, embedded in each
// heartbeat so the receiving indexer can republish it to the DHT directly.
// peerRecord is nil until claimInfo runs, so the first ~20s heartbeats carry
// no record — that's fine, claimInfo publishes once synchronously at startup.
buildRecord := func() json.RawMessage {
if node.peerRecord == nil {
return nil
}
priv, err := tools.LoadKeyFromFilePrivate()
if err != nil {
return nil
}
fresh := *node.peerRecord
fresh.PeerRecordPayload.ExpiryDate = time.Now().UTC().Add(2 * time.Minute)
payload, _ := json.Marshal(fresh.PeerRecordPayload)
fresh.Signature, err = priv.Sign(payload)
if err != nil {
return nil
}
b, _ := json.Marshal(fresh)
return json.RawMessage(b)
}
logger.Info().Msg("connect to indexers...")
common.ConnectToIndexers(node.Host, conf.GetConfig().MinIndexer, conf.GetConfig().MaxIndexer, node.PeerID, buildRecord)
logger.Info().Msg("claims my node...")
if _, err := node.claimInfo(conf.GetConfig().Name, conf.GetConfig().Hostname); err != nil {
panic(err)
}
logger.Info().Msg("run garbage collector...")
node.StartGC(30 * time.Second)
if node.StreamService, err = stream.InitStream(context.Background(), node.Host, node.PeerID, 1000, node); err != nil {
panic(err)
}
if node.PubSubService, err = pubsub.InitPubSub(context.Background(), node.Host, node.PS, node, node.StreamService); err != nil {
panic(err)
}
f := func(ctx context.Context, evt common.Event, topic string) {
m := map[string]interface{}{}
err := json.Unmarshal(evt.Payload, &m)
if err != nil || evt.From == node.PeerID.String() {
return
}
if p, err := node.GetPeerRecord(ctx, evt.From, false); err == nil && len(p) > 0 && m["search"] != nil {
node.StreamService.SendResponse(p[0], &evt, fmt.Sprintf("%v", m["search"]))
}
}
logger.Info().Msg("subscribe to decentralized search flow...")
node.SubscribeToSearch(node.PS, &f)
logger.Info().Msg("connect to NATS")
go ListenNATS(node)
logger.Info().Msg("Node is now running.")
}
if isIndexer {
logger.Info().Msg("generate opencloud indexer...")
node.IndexerService = indexer.NewIndexerService(node.Host, ps, 500, isNativeIndexer)
}
return node, nil
}
func (d *Node) Close() {
if d.isIndexer && d.IndexerService != nil {
d.IndexerService.Close()
}
d.PubSubService.Close()
d.StreamService.Close()
d.Host.Close()
}
func (d *Node) publishPeerRecord(
rec *indexer.PeerRecord,
) error {
priv, err := tools.LoadKeyFromFilePrivate() // your node private key
if err != nil {
return err
}
common.StreamMuIndexes.RLock()
indexerSnapshot := make([]*pp.AddrInfo, 0, len(common.StaticIndexers))
for _, ad := range common.StaticIndexers {
indexerSnapshot = append(indexerSnapshot, ad)
}
common.StreamMuIndexes.RUnlock()
for _, ad := range indexerSnapshot {
var err error
if common.StreamIndexers, err = common.TempStream(d.Host, *ad, common.ProtocolPublish, "", common.StreamIndexers, map[protocol.ID]*common.ProtocolInfo{},
&common.StreamMuIndexes); err != nil {
continue
}
stream := common.StreamIndexers[common.ProtocolPublish][ad.ID]
base := indexer.PeerRecordPayload{
Name: rec.Name,
DID: rec.DID,
PubKey: rec.PubKey,
ExpiryDate: time.Now().UTC().Add(2 * time.Minute),
}
payload, _ := json.Marshal(base)
rec.PeerRecordPayload = base
rec.Signature, err = priv.Sign(payload)
if err != nil {
return err
}
if err := json.NewEncoder(stream.Stream).Encode(&rec); err != nil { // then publish on stream
return err
}
}
return nil
}
func (d *Node) GetPeerRecord(
ctx context.Context,
pidOrdid string,
search bool,
) ([]*peer.Peer, error) {
var err error
var info map[string]indexer.PeerRecord
common.StreamMuIndexes.RLock()
indexerSnapshot2 := make([]*pp.AddrInfo, 0, len(common.StaticIndexers))
for _, ad := range common.StaticIndexers {
indexerSnapshot2 = append(indexerSnapshot2, ad)
}
common.StreamMuIndexes.RUnlock()
// Build the GetValue request: if pidOrdid is neither a UUID DID nor a libp2p
// PeerID, treat it as a human-readable name and let the indexer resolve it.
getReq := indexer.GetValue{Key: pidOrdid}
if pidR, pidErr := pp.Decode(pidOrdid); pidErr == nil {
getReq.PeerID = pidR.String()
} else if _, uuidErr := uuid.Parse(pidOrdid); uuidErr != nil {
// Not a UUID DID → treat pidOrdid as a name substring search.
getReq.Name = pidOrdid
getReq.Key = ""
}
getReq.Search = search
for _, ad := range indexerSnapshot2 {
if common.StreamIndexers, err = common.TempStream(d.Host, *ad, common.ProtocolGet, "",
common.StreamIndexers, map[protocol.ID]*common.ProtocolInfo{}, &common.StreamMuIndexes); err != nil {
continue
}
stream := common.StreamIndexers[common.ProtocolGet][ad.ID]
if err := json.NewEncoder(stream.Stream).Encode(getReq); err != nil {
continue
}
var resp indexer.GetResponse
if err := json.NewDecoder(stream.Stream).Decode(&resp); err != nil {
continue
}
if resp.Found {
info = resp.Records
}
break
}
var ps []*peer.Peer
for _, pr := range info {
if pk, err := pr.Verify(); err != nil {
return nil, err
} else if _, p, err := pr.ExtractPeer(d.PeerID.String(), pr.PeerID, pk); err != nil {
return nil, err
} else {
ps = append(ps, p)
}
}
return ps, err
}
func (d *Node) claimInfo(
name string,
endPoint string, // TODO: the endpoint is not necessarily the StreamAddress
) (*peer.Peer, error) {
if endPoint == "" {
return nil, errors.New("no endpoint found for peer")
}
did := uuid.New().String()
peers := oclib.NewRequestAdmin(oclib.LibDataEnum(oclib.PEER), nil).Search(&dbs.Filters{
And: map[string][]dbs.Filter{ // look up an existing peer record by its libp2p peer ID
"peer_id": {{Operator: dbs.EQUAL.String(), Value: d.Host.ID().String()}},
},
}, "", false)
if len(peers.Data) > 0 {
did = peers.Data[0].GetID() // if already existing set up did as made
}
priv, err := tools.LoadKeyFromFilePrivate()
if err != nil {
return nil, err
}
pub, err := tools.LoadKeyFromFilePublic()
if err != nil {
return nil, err
}
pubBytes, err := crypto.MarshalPublicKey(pub)
if err != nil {
return nil, err
}
now := time.Now().UTC()
expiry := now.Add(150 * time.Second)
pRec := indexer.PeerRecordPayload{
Name: name,
DID: did, // REAL PEER ID
PubKey: pubBytes,
ExpiryDate: expiry,
}
d.PeerID = d.Host.ID()
payload, _ := json.Marshal(pRec)
rec := &indexer.PeerRecord{
PeerRecordPayload: pRec,
}
rec.Signature, err = priv.Sign(payload)
if err != nil {
return nil, err
}
rec.PeerID = d.Host.ID().String()
rec.APIUrl = endPoint
rec.StreamAddress = "/ip4/" + conf.GetConfig().Hostname + "/tcp/" + fmt.Sprintf("%v", conf.GetConfig().NodeEndpointPort) + "/p2p/" + rec.PeerID
rec.NATSAddress = oclib.GetConfig().NATSUrl
rec.WalletAddress = "my-wallet"
if err := d.publishPeerRecord(rec); err != nil {
return nil, err
}
d.peerRecord = rec
if _, err := rec.Verify(); err != nil {
return nil, err
} else {
_, p, err := rec.ExtractPeer(did, did, pub)
if err != nil {
return nil, err
}
b, err := json.Marshal(p)
if err != nil {
return p, err
}
go tools.NewNATSCaller().SetNATSPub(tools.CREATE_RESOURCE, tools.NATSResponse{
FromApp: "oc-discovery",
Datatype: tools.PEER,
Method: int(tools.CREATE_RESOURCE),
SearchAttr: "peer_id",
Payload: b,
})
return p, err
}
}
/*
TODO:
- Booking is a new decentralized flow:
  we check, wait for a response, validate; it goes through discovery, and we relay.
- The shared workspace is a decentralization concern:
  movements are communicated to the sharing parties.
- A share replaces the notion of partnership at the partnershipping scale:
  -> when a workspace is shared, peers become temporary partners,
     whether they were partners originally or not.
  -> they then get the same privileges.
- Admiralty orchestrations work the same way:
  an event triggers the creation of a service key.
  We must be able to CRUD a DBObject with signature verification.
*/


@@ -0,0 +1,61 @@
package pubsub
import (
"context"
"encoding/json"
"errors"
"oc-discovery/daemons/node/common"
"oc-discovery/daemons/node/stream"
"oc-discovery/models"
"cloud.o-forge.io/core/oc-lib/dbs"
"cloud.o-forge.io/core/oc-lib/models/peer"
"cloud.o-forge.io/core/oc-lib/tools"
)
func (ps *PubSubService) SearchPublishEvent(
ctx context.Context, dt *tools.DataType, typ string, user string, search string) error {
b, err := json.Marshal(map[string]string{"search": search})
if err != nil {
return err
}
switch typ {
case "known": // known peers: fan out to every stored peer
return ps.StreamService.PublishesCommon(dt, user, nil, b, stream.ProtocolSearchResource)
case "partner": // restrict the fan-out to partner peers
return ps.StreamService.PublishesCommon(dt, user, &dbs.Filters{
And: map[string][]dbs.Filter{
"relation": {{Operator: dbs.EQUAL.String(), Value: peer.PARTNER}},
},
}, b, stream.ProtocolSearchResource)
case "all": // broadcast through gossip pubsub
return ps.publishEvent(ctx, dt, tools.PB_SEARCH, common.TopicPubSubSearch, user, b)
default:
return errors.New("unknown search strategy: " + typ)
}
}
func (ps *PubSubService) publishEvent(
ctx context.Context, dt *tools.DataType, action tools.PubSubAction, topicName string, user string, payload []byte,
) error {
priv, err := tools.LoadKeyFromFilePrivate()
if err != nil {
return err
}
msg, _ := json.Marshal(models.NewEvent(action.String(), ps.Host.ID().String(), dt, user, payload, priv))
topic := ps.Node.GetPubSub(topicName)
if topic == nil {
topic, err = ps.PS.Join(topicName)
if err != nil {
return err
}
}
return topic.Publish(ctx, msg)
}
// TODO REVIEW PUBLISHING + ADD SEARCH ON PUBLIC : YES
// TODO : Search should verify DataType


@@ -0,0 +1,32 @@
package pubsub
import (
"context"
"oc-discovery/daemons/node/common"
"oc-discovery/daemons/node/stream"
"sync"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/host"
)
type PubSubService struct {
Node common.DiscoveryPeer
Host host.Host
PS *pubsub.PubSub
StreamService *stream.StreamService
Subscription []string
mutex sync.RWMutex
}
func InitPubSub(ctx context.Context, h host.Host, ps *pubsub.PubSub, node common.DiscoveryPeer, streamService *stream.StreamService) (*PubSubService, error) {
return &PubSubService{
Host: h,
Node: node,
StreamService: streamService,
PS: ps,
}, nil
}
func (ix *PubSubService) Close() {
}


@@ -0,0 +1,234 @@
package stream
import (
"context"
"crypto/subtle"
"encoding/json"
"errors"
"fmt"
"oc-discovery/daemons/node/common"
oclib "cloud.o-forge.io/core/oc-lib"
"cloud.o-forge.io/core/oc-lib/dbs"
"cloud.o-forge.io/core/oc-lib/models/booking/planner"
"cloud.o-forge.io/core/oc-lib/models/peer"
"cloud.o-forge.io/core/oc-lib/models/resources"
"cloud.o-forge.io/core/oc-lib/tools"
)
type Verify struct {
IsVerified bool `json:"is_verified"`
}
func (ps *StreamService) handleEvent(protocol string, evt *common.Event) error {
fmt.Println("handleEvent")
// handleEventFromPartner returns nil when it recognized the protocol; track
// that so a handled event no longer falls through to the error below.
handled := ps.handleEventFromPartner(evt, protocol) == nil
/*if protocol == ProtocolVerifyResource {
if evt.DataType == -1 {
tools.NewNATSCaller().SetNATSPub(tools.VERIFY_RESOURCE, tools.NATSResponse{
FromApp: "oc-discovery",
Method: int(tools.VERIFY_RESOURCE),
Payload: evt.Payload,
})
} else if err := ps.verifyResponse(evt); err != nil {
return err
}
}*/
if protocol == ProtocolSendPlanner {
return ps.sendPlanner(evt)
}
if protocol == ProtocolSearchResource && evt.DataType > -1 {
return ps.retrieveResponse(evt)
}
if protocol == ProtocolConsidersResource {
return ps.pass(evt, tools.PB_CONSIDERS)
}
if protocol == ProtocolAdmiraltyConfigResource {
return ps.pass(evt, tools.PB_ADMIRALTY_CONFIG)
}
if protocol == ProtocolMinioConfigResource {
return ps.pass(evt, tools.PB_MINIO_CONFIG)
}
if handled {
return nil
}
return errors.New("no action authorized available: " + protocol)
}
func (abs *StreamService) verifyResponse(event *common.Event) error { //
res, err := resources.ToResource(int(event.DataType), event.Payload)
if err != nil || res == nil {
return nil
}
verify := Verify{
IsVerified: false,
}
access := oclib.NewRequestAdmin(oclib.LibDataEnum(event.DataType), nil)
data := access.LoadOne(res.GetID())
if data.Err == "" && data.Data != nil {
if b, err := json.Marshal(data.Data); err == nil {
if res2, err := resources.ToResource(int(event.DataType), b); err == nil {
verify.IsVerified = subtle.ConstantTimeCompare(res.GetSignature(), res2.GetSignature()) == 1
}
}
}
if b, err := json.Marshal(verify); err == nil {
abs.PublishCommon(nil, "", event.From, ProtocolVerifyResource, b)
}
return nil
}
func (abs *StreamService) sendPlanner(event *common.Event) error {
if len(event.Payload) == 0 { // empty payload: the peer asks for our planner
if plan, err := planner.GenerateShallow(&tools.APIRequest{Admin: true}); err == nil {
if b, err := json.Marshal(plan); err == nil {
abs.PublishCommon(nil, event.User, event.From, ProtocolSendPlanner, b)
} else {
return err
}
} else {
return err
}
} else { // non-empty payload: forward it to the local planner flow over NATS
m := map[string]interface{}{}
if err := json.Unmarshal(event.Payload, &m); err == nil {
m["peer_id"] = event.From
if pl, err := json.Marshal(m); err == nil {
if b, err := json.Marshal(tools.PropalgationMessage{
DataType: -1,
Action: tools.PB_PLANNER,
Payload: pl,
}); err == nil {
go tools.NewNATSCaller().SetNATSPub(tools.PROPALGATION_EVENT, tools.NATSResponse{
FromApp: "oc-discovery",
Datatype: tools.DataType(oclib.BOOKING),
Method: int(tools.PROPALGATION_EVENT),
Payload: b,
})
}
}
}
}
return nil
}
func (abs *StreamService) retrieveResponse(event *common.Event) error { //
res, err := resources.ToResource(int(event.DataType), event.Payload)
if err != nil || res == nil {
return nil
}
b, err := json.Marshal(res.Serialize(res))
if err != nil {
return err
}
go tools.NewNATSCaller().SetNATSPub(tools.SEARCH_EVENT, tools.NATSResponse{
FromApp: "oc-discovery",
Datatype: tools.DataType(event.DataType),
Method: int(tools.SEARCH_EVENT),
Payload: b,
})
return nil
}
func (abs *StreamService) pass(event *common.Event, action tools.PubSubAction) error { //
if b, err := json.Marshal(&tools.PropalgationMessage{
Action: action,
DataType: int(event.DataType),
Payload: event.Payload,
}); err == nil {
go tools.NewNATSCaller().SetNATSPub(tools.PROPALGATION_EVENT, tools.NATSResponse{
FromApp: "oc-discovery",
Datatype: tools.DataType(event.DataType),
Method: int(tools.PROPALGATION_EVENT),
Payload: b,
})
}
return nil
}
func (ps *StreamService) handleEventFromPartner(evt *common.Event, protocol string) error {
switch protocol {
case ProtocolSearchResource:
m := map[string]interface{}{}
err := json.Unmarshal(evt.Payload, &m)
if err != nil {
return err
}
if search, ok := m["search"]; ok {
access := oclib.NewRequestAdmin(oclib.LibDataEnum(oclib.PEER), nil)
peers := access.Search(&dbs.Filters{
And: map[string][]dbs.Filter{
"peer_id": {{Operator: dbs.EQUAL.String(), Value: evt.From}},
},
}, evt.From, false)
if len(peers.Data) > 0 {
p := peers.Data[0].(*peer.Peer)
fmt.Println(evt.From, p.GetID(), peers.Data)
ps.SendResponse(p, evt, fmt.Sprintf("%v", search))
} else if p, err := ps.Node.GetPeerRecord(context.Background(), evt.From, false); err == nil && len(p) > 0 { // peer from is peerID
ps.SendResponse(p[0], evt, fmt.Sprintf("%v", search))
}
} else {
fmt.Println("SEND SEARCH_EVENT SetNATSPub", m)
go tools.NewNATSCaller().SetNATSPub(tools.SEARCH_EVENT, tools.NATSResponse{
FromApp: "oc-discovery",
Datatype: tools.DataType(evt.DataType),
Method: int(tools.SEARCH_EVENT),
Payload: evt.Payload,
})
}
case ProtocolCreateResource, ProtocolUpdateResource:
fmt.Println("RECEIVED Protocol.Update")
go tools.NewNATSCaller().SetNATSPub(tools.CREATE_RESOURCE, tools.NATSResponse{
FromApp: "oc-discovery",
Datatype: tools.DataType(evt.DataType),
Method: int(tools.CREATE_RESOURCE),
Payload: evt.Payload,
})
case ProtocolDeleteResource:
go tools.NewNATSCaller().SetNATSPub(tools.REMOVE_RESOURCE, tools.NATSResponse{
FromApp: "oc-discovery",
Datatype: tools.DataType(evt.DataType),
Method: int(tools.REMOVE_RESOURCE),
Payload: evt.Payload,
})
default:
return errors.New("no action authorized available : " + protocol)
}
return nil
}
func (abs *StreamService) SendResponse(p *peer.Peer, event *common.Event, search string) error {
dts := []tools.DataType{tools.DataType(event.DataType)}
if event.DataType == -1 { // expect all resources
dts = []tools.DataType{
tools.COMPUTE_RESOURCE,
tools.STORAGE_RESOURCE,
tools.PROCESSING_RESOURCE,
tools.DATA_RESOURCE,
tools.WORKFLOW_RESOURCE,
}
}
if self, err := oclib.GetMySelf(); err != nil {
return err
} else {
for _, dt := range dts {
access := oclib.NewRequestAdmin(oclib.LibDataEnum(dt), nil)
peerID := p.GetID()
searched := access.Search(abs.FilterPeer(self.GetID(), search), "", false)
fmt.Println("SEND SEARCH_EVENT", self.GetID(), dt, len(searched.Data), peerID)
for _, ss := range searched.Data {
if j, err := json.Marshal(ss); err == nil {
_, err := abs.PublishCommon(&dt, event.User, p.PeerID, ProtocolSearchResource, j)
fmt.Println("Publish ERR", err)
}
}
}
}
return nil
}


@@ -0,0 +1,164 @@
package stream
import (
"context"
"encoding/json"
"errors"
"fmt"
"oc-discovery/daemons/node/common"
oclib "cloud.o-forge.io/core/oc-lib"
"cloud.o-forge.io/core/oc-lib/dbs"
"cloud.o-forge.io/core/oc-lib/models/peer"
"cloud.o-forge.io/core/oc-lib/tools"
pp "github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/core/protocol"
)
func (ps *StreamService) PublishesCommon(dt *tools.DataType, user string, filter *dbs.Filters, resource []byte, protos ...protocol.ID) error {
access := oclib.NewRequestAdmin(oclib.LibDataEnum(oclib.PEER), nil)
var p oclib.LibDataShallow
if filter == nil {
p = access.LoadAll(false)
} else {
p = access.Search(filter, "", false)
}
for _, pes := range p.Data {
for _, proto := range protos {
if _, err := ps.PublishCommon(dt, user, pes.(*peer.Peer).PeerID, proto, resource); err != nil {
return err
}
}
}
return nil
}
func (ps *StreamService) PublishCommon(dt *tools.DataType, user string, toPeerID string, proto protocol.ID, resource []byte) (*common.Stream, error) {
fmt.Println("PublishCommon")
if toPeerID == ps.Key.String() {
fmt.Println("cannot send to ourselves")
return nil, errors.New("cannot send to ourselves")
}
access := oclib.NewRequestAdmin(oclib.LibDataEnum(oclib.PEER), nil)
p := access.Search(&dbs.Filters{
And: map[string][]dbs.Filter{ // look up the destination peer by its peer_id
"peer_id": {{Operator: dbs.EQUAL.String(), Value: toPeerID}},
},
}, toPeerID, false)
var pe *peer.Peer
if len(p.Data) > 0 && p.Data[0].(*peer.Peer).Relation != peer.BLACKLIST {
pe = p.Data[0].(*peer.Peer)
} else if pps, err := ps.Node.GetPeerRecord(context.Background(), toPeerID, false); err == nil && len(pps) > 0 {
pe = pps[0]
}
if pe != nil {
ad, err := pp.AddrInfoFromString(pe.StreamAddress)
if err != nil {
return nil, err
}
fmt.Println("WRITE")
return ps.write(toPeerID, ad, dt, user, resource, proto)
}
return nil, errors.New("invalid peer " + toPeerID)
}
func (ps *StreamService) ToPartnerPublishEvent(
ctx context.Context, action tools.PubSubAction, dt *tools.DataType, user string, payload []byte) error {
if dt != nil && *dt == tools.PEER {
var p peer.Peer
if err := json.Unmarshal(payload, &p); err != nil {
return err
}
pid, err := pp.Decode(p.PeerID)
if err != nil {
return err
}
if pe, err := oclib.GetMySelf(); err != nil {
return err
} else if pe.GetID() == p.GetID() {
return fmt.Errorf("can't send to ourself")
} else {
pe.Relation = p.Relation
pe.Verify = false
if b2, err := json.Marshal(pe); err == nil {
if _, err := ps.PublishCommon(dt, user, p.PeerID, ProtocolUpdateResource, b2); err != nil {
return err
}
if p.Relation == peer.PARTNER {
if ps.Streams[ProtocolHeartbeatPartner] == nil {
ps.Streams[ProtocolHeartbeatPartner] = map[pp.ID]*common.Stream{}
}
fmt.Println("SHOULD CONNECT")
ps.ConnectToPartner(p.StreamAddress)
} else if ps.Streams[ProtocolHeartbeatPartner] != nil && ps.Streams[ProtocolHeartbeatPartner][pid] != nil {
for _, pids := range ps.Streams {
if pids[pid] != nil {
delete(pids, pid)
}
}
}
}
}
return nil
}
var proto protocol.ID = ProtocolCreateResource
switch action {
case tools.PB_DELETE:
proto = ProtocolDeleteResource
case tools.PB_UPDATE:
proto = ProtocolUpdateResource
}
ps.PublishesCommon(dt, user, &dbs.Filters{ // fan out only to peers whose relation is PARTNER
And: map[string][]dbs.Filter{
"relation": {{Operator: dbs.EQUAL.String(), Value: peer.PARTNER}},
},
}, payload, proto)
return nil
}
func (s *StreamService) write(
did string,
peerID *pp.AddrInfo,
dt *tools.DataType,
user string,
payload []byte,
proto protocol.ID) (*common.Stream, error) {
logger := oclib.GetLogger()
var err error
pts := map[protocol.ID]*common.ProtocolInfo{}
for k, v := range protocols {
pts[k] = v
}
for k, v := range protocolsPartners {
pts[k] = v
}
// should create a very temp stream
if s.Streams, err = common.TempStream(s.Host, *peerID, proto, did, s.Streams, pts, &s.Mu); err != nil {
return nil, errors.New("no stream available for protocol " + fmt.Sprintf("%v", proto) + " from PID " + peerID.ID.String())
}
if self, err := oclib.GetMySelf(); err != nil {
return nil, err
} else {
stream := s.Streams[proto][peerID.ID]
evt := common.NewEvent(string(proto), self.PeerID, dt, user, payload)
fmt.Println("SEND EVENT ", peerID, proto, evt.From, evt.DataType, evt.Timestamp)
if err := json.NewEncoder(stream.Stream).Encode(evt); err != nil {
stream.Stream.Close()
logger.Err(err)
return nil, err
}
if protocolInfo, ok := protocols[proto]; ok && protocolInfo.WaitResponse {
go s.readLoop(stream, peerID.ID, proto, &common.ProtocolInfo{PersistantStream: true})
}
return stream, nil
}
}


@@ -0,0 +1,312 @@
package stream
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"oc-discovery/conf"
"oc-discovery/daemons/node/common"
"strings"
"sync"
"time"
oclib "cloud.o-forge.io/core/oc-lib"
"cloud.o-forge.io/core/oc-lib/dbs"
"cloud.o-forge.io/core/oc-lib/models/peer"
"cloud.o-forge.io/core/oc-lib/models/utils"
"github.com/google/uuid"
"github.com/libp2p/go-libp2p/core/host"
"github.com/libp2p/go-libp2p/core/network"
pp "github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/core/protocol"
ma "github.com/multiformats/go-multiaddr"
)
const ProtocolConsidersResource = "/opencloud/resource/considers/1.0"
const ProtocolMinioConfigResource = "/opencloud/minio/config/1.0"
const ProtocolAdmiraltyConfigResource = "/opencloud/admiralty/config/1.0"
const ProtocolSearchResource = "/opencloud/resource/search/1.0"
const ProtocolCreateResource = "/opencloud/resource/create/1.0"
const ProtocolUpdateResource = "/opencloud/resource/update/1.0"
const ProtocolDeleteResource = "/opencloud/resource/delete/1.0"
const ProtocolSendPlanner = "/opencloud/resource/planner/1.0"
const ProtocolVerifyResource = "/opencloud/resource/verify/1.0"
const ProtocolHeartbeatPartner = "/opencloud/resource/heartbeat/partner/1.0"
var protocols = map[protocol.ID]*common.ProtocolInfo{
ProtocolConsidersResource: {WaitResponse: false, TTL: 3 * time.Second},
ProtocolSendPlanner: {WaitResponse: true, TTL: 24 * time.Hour},
ProtocolSearchResource: {WaitResponse: true, TTL: 1 * time.Minute},
ProtocolVerifyResource: {WaitResponse: true, TTL: 1 * time.Minute},
ProtocolMinioConfigResource: {WaitResponse: true, TTL: 1 * time.Minute},
ProtocolAdmiraltyConfigResource: {WaitResponse: true, TTL: 1 * time.Minute},
}
var protocolsPartners = map[protocol.ID]*common.ProtocolInfo{
ProtocolCreateResource: {TTL: 3 * time.Second},
ProtocolUpdateResource: {TTL: 3 * time.Second},
ProtocolDeleteResource: {TTL: 3 * time.Second},
}
type StreamService struct {
Key pp.ID
Host host.Host
Node common.DiscoveryPeer
Streams common.ProtocolStream
maxNodesConn int
Mu sync.RWMutex
// Stream map[protocol.ID]map[pp.ID]*daemons.Stream
}
func InitStream(ctx context.Context, h host.Host, key pp.ID, maxNode int, node common.DiscoveryPeer) (*StreamService, error) {
logger := oclib.GetLogger()
service := &StreamService{
Key: key,
Node: node,
Host: h,
Streams: common.ProtocolStream{},
maxNodesConn: maxNode,
}
logger.Info().Msg("handle to partner heartbeat protocol...")
service.Host.SetStreamHandler(ProtocolHeartbeatPartner, service.HandlePartnerHeartbeat)
for proto := range protocols {
service.Host.SetStreamHandler(proto, service.HandleResponse)
}
logger.Info().Msg("connect to partners...")
service.connectToPartners() // we set up a stream
go service.StartGC(8 * time.Second)
return service, nil
}
func (s *StreamService) HandleResponse(stream network.Stream) {
s.Mu.Lock()
defer s.Mu.Unlock()
if s.Streams[stream.Protocol()] == nil {
s.Streams[stream.Protocol()] = map[pp.ID]*common.Stream{}
}
expiry := 1 * time.Minute
if protocols[stream.Protocol()] != nil {
expiry = protocols[stream.Protocol()].TTL
} else if protocolsPartners[stream.Protocol()] != nil {
expiry = protocolsPartners[stream.Protocol()].TTL
}
s.Streams[stream.Protocol()][stream.Conn().RemotePeer()] = &common.Stream{
Stream: stream,
Expiry: time.Now().UTC().Add(expiry + 1*time.Minute),
}
go s.readLoop(s.Streams[stream.Protocol()][stream.Conn().RemotePeer()],
stream.Conn().RemotePeer(),
stream.Protocol(), protocols[stream.Protocol()])
}
func (s *StreamService) HandlePartnerHeartbeat(stream network.Stream) {
s.Mu.Lock()
if s.Streams[ProtocolHeartbeatPartner] == nil {
s.Streams[ProtocolHeartbeatPartner] = map[pp.ID]*common.Stream{}
}
streams := s.Streams[ProtocolHeartbeatPartner]
streamsAnonym := map[pp.ID]common.HeartBeatStreamed{}
for k, v := range streams {
streamsAnonym[k] = v
}
s.Mu.Unlock()
pid, hb, err := common.CheckHeartbeat(s.Host, stream, json.NewDecoder(stream), streamsAnonym, &s.Mu, s.maxNodesConn)
if err != nil {
return
}
s.Mu.Lock()
defer s.Mu.Unlock()
// if record already seen update last seen
if rec, ok := streams[*pid]; ok {
rec.DID = hb.DID
rec.Expiry = time.Now().UTC().Add(10 * time.Second)
} else { // unknown peer: dial back to establish the partner stream
val, err := stream.Conn().RemoteMultiaddr().ValueForProtocol(ma.P_IP4)
if err == nil {
s.ConnectToPartner(val)
}
}
// GC is already running via InitStream — starting a new ticker goroutine on
// every heartbeat would leak an unbounded number of goroutines.
}
func (s *StreamService) connectToPartners() error {
logger := oclib.GetLogger()
for proto, info := range protocolsPartners {
f := func(ss network.Stream) {
if s.Streams[proto] == nil {
s.Streams[proto] = map[pp.ID]*common.Stream{}
}
s.Streams[proto][ss.Conn().RemotePeer()] = &common.Stream{
Stream: ss,
Expiry: time.Now().UTC().Add(10 * time.Second),
}
go s.readLoop(s.Streams[proto][ss.Conn().RemotePeer()], ss.Conn().RemotePeer(), proto, info)
}
logger.Info().Msg("SetStreamHandler " + string(proto))
s.Host.SetStreamHandler(proto, f)
}
peers, err := s.searchPeer(fmt.Sprintf("%v", peer.PARTNER.EnumIndex()))
if err != nil {
logger.Err(err).Msg("failed to search partner peers")
return err
}
for _, p := range peers {
s.ConnectToPartner(p.StreamAddress)
}
return nil
}
func (s *StreamService) ConnectToPartner(address string) {
logger := oclib.GetLogger()
if ad, err := pp.AddrInfoFromString(address); err == nil {
logger.Info().Msg("Connect to Partner " + ProtocolHeartbeatPartner + " " + address)
common.SendHeartbeat(context.Background(), ProtocolHeartbeatPartner, conf.GetConfig().Name,
s.Host, s.Streams, map[string]*pp.AddrInfo{address: ad}, nil, 20*time.Second)
}
}
func (s *StreamService) searchPeer(search string) ([]*peer.Peer, error) {
ps := []*peer.Peer{}
if conf.GetConfig().PeerIDS != "" {
for _, peerID := range strings.Split(conf.GetConfig().PeerIDS, ",") {
ppID := strings.Split(peerID, "/")
ps = append(ps, &peer.Peer{
AbstractObject: utils.AbstractObject{
UUID: uuid.New().String(),
Name: ppID[1],
},
PeerID: ppID[len(ppID)-1],
StreamAddress: peerID,
Relation: peer.PARTNER,
})
}
}
access := oclib.NewRequestAdmin(oclib.LibDataEnum(oclib.PEER), nil)
peers := access.Search(nil, search, false)
for _, p := range peers.Data {
ps = append(ps, p.(*peer.Peer))
}
return ps, nil
}
func (ix *StreamService) Close() {
for _, s := range ix.Streams {
for _, ss := range s {
ss.Stream.Close()
}
}
}
func (s *StreamService) StartGC(interval time.Duration) {
go func() {
t := time.NewTicker(interval)
defer t.Stop()
for range t.C {
s.gc()
}
}()
}
func (s *StreamService) gc() {
s.Mu.Lock()
defer s.Mu.Unlock()
now := time.Now().UTC()
if s.Streams[ProtocolHeartbeatPartner] == nil {
s.Streams[ProtocolHeartbeatPartner] = map[pp.ID]*common.Stream{}
}
streams := s.Streams[ProtocolHeartbeatPartner]
for pid, rec := range streams {
if now.After(rec.Expiry) {
for _, sstreams := range s.Streams {
if sstreams[pid] != nil {
sstreams[pid].Stream.Close()
delete(sstreams, pid)
}
}
}
}
}
func (ps *StreamService) readLoop(s *common.Stream, id pp.ID, proto protocol.ID, protocolInfo *common.ProtocolInfo) {
defer s.Stream.Close()
defer func() {
ps.Mu.Lock()
defer ps.Mu.Unlock()
delete(ps.Streams[proto], id)
}()
// loop is flipped from an AfterFunc goroutine, so it must be read and
// written atomically (requires the "sync/atomic" import).
var loop atomic.Bool
loop.Store(true)
if !protocolInfo.PersistantStream && !protocolInfo.WaitResponse {
// Fire-and-forget protocols: ~2s is enough to wait for a response.
time.AfterFunc(2*time.Second, func() {
loop.Store(false)
})
}
for loop.Load() {
var evt common.Event
if err := json.NewDecoder(s.Stream).Decode(&evt); err != nil {
// Any decode error (EOF, reset, malformed JSON) terminates the loop;
// continuing on a dead/closed stream creates an infinite spin.
if errors.Is(err, io.EOF) || errors.Is(err, io.ErrUnexpectedEOF) ||
strings.Contains(err.Error(), "reset") ||
strings.Contains(err.Error(), "closed") ||
strings.Contains(err.Error(), "too many connections") {
return
}
continue
}
ps.handleEvent(evt.Type, &evt)
if protocolInfo.WaitResponse && !protocolInfo.PersistantStream {
break
}
}
}
func (abs *StreamService) FilterPeer(peerID string, search string) *dbs.Filters {
p, err := oclib.GetMySelf()
if err != nil {
return nil
}
filter := map[string][]dbs.Filter{
"abstractinstanciatedresource.abstractresource.abstractobject.creator_id": {{Operator: dbs.EQUAL.String(), Value: p.GetID()}}, // is my resource...
"": {{Operator: dbs.OR.String(), Value: &dbs.Filters{
Or: map[string][]dbs.Filter{
"abstractinstanciatedresource.abstractresource.abstractobject.access_mode": {{Operator: dbs.EQUAL.String(), Value: 1}}, // if public
"abstractinstanciatedresource.instances": {{Operator: dbs.ELEMMATCH.String(), Value: &dbs.Filters{ // or got a partners instances
And: map[string][]dbs.Filter{
"resourceinstance.partnerships": {{Operator: dbs.ELEMMATCH.String(), Value: &dbs.Filters{
And: map[string][]dbs.Filter{
"resourcepartnership.peer_groups." + peerID: {{Operator: dbs.EXISTS.String(), Value: true}},
},
}}},
},
}}},
},
}}},
}
if search != "" {
filter[" "] = []dbs.Filter{{Operator: dbs.OR.String(), Value: &dbs.Filters{
Or: map[string][]dbs.Filter{ // filter by like name, short_description, description, owner, url if no filters are provided
"abstractinstanciatedresource.abstractresource.abstractobject.name": {{Operator: dbs.LIKE.String(), Value: search}},
"abstractinstanciatedresource.abstractresource.type": {{Operator: dbs.LIKE.String(), Value: search}},
"abstractinstanciatedresource.abstractresource.short_description": {{Operator: dbs.LIKE.String(), Value: search}},
"abstractinstanciatedresource.abstractresource.description": {{Operator: dbs.LIKE.String(), Value: search}},
"abstractinstanciatedresource.abstractresource.owners.name": {{Operator: dbs.LIKE.String(), Value: search}},
},
}}}
}
return &dbs.Filters{
And: filter,
}
}

demo-discovery.sh Executable file

@@ -0,0 +1,33 @@
#!/bin/bash
IMAGE_BASE_NAME="oc-discovery"
DOCKERFILE_PATH="."
docker network create \
--subnet=172.40.0.0/24 \
discovery
for i in $(seq ${1:-0} ${2:-3}); do
NUM=$((i + 1))
PORT=$((4000 + $NUM))
IMAGE_NAME="${IMAGE_BASE_NAME}:${NUM}"
echo "▶ Building image ${IMAGE_NAME} with CONF_NUM=${NUM}"
docker build \
--build-arg CONF_NUM=${NUM} \
-t "${IMAGE_BASE_NAME}_${NUM}" \
${DOCKERFILE_PATH}
docker kill "${IMAGE_BASE_NAME}_${NUM}" || true
docker rm "${IMAGE_BASE_NAME}_${NUM}" || true
echo "▶ Running container ${IMAGE_NAME} on port ${PORT}:${PORT}"
docker run -d \
--network="${3:-oc}" \
-p ${PORT}:${PORT} \
--name "${IMAGE_BASE_NAME}_${NUM}" \
"${IMAGE_BASE_NAME}_${NUM}"
docker network connect --ip "172.40.0.${NUM}" discovery "${IMAGE_BASE_NAME}_${NUM}"
done


@@ -1,10 +0,0 @@
{
"port": 8080,
"redisurl":"localhost:6379",
"redispassword":"",
"zincurl":"http://localhost:4080",
"zinclogin":"admin",
"zincpassword":"admin",
"identityfile":"/app/identity.json",
"defaultpeers":"/app/peers.json"
}


@@ -1,10 +1,14 @@
version: '3.4'
services:
  oc-schedulerd:
    image: 'oc-discovery:latest'
    ports:
      - 9002:8080
    container_name: oc-discovery
    networks:
      - oc
networks:
  oc:
    external: true


@@ -1,10 +0,0 @@
{
"port": 8080,
"redisurl":"localhost:6379",
"redispassword":"",
"zincurl":"http://localhost:4080",
"zinclogin":"admin",
"zincpassword":"admin",
"identityfile":"/app/identity.json",
"defaultpeers":"/app/peers.json"
}

docker_discovery1.json Normal file

@@ -0,0 +1,6 @@
{
"MONGO_URL":"mongodb://mongo:27017/",
"MONGO_DATABASE":"DC_myDC",
"NATS_URL": "nats://nats:4222",
"NODE_MODE": "indexer"
}

docker_discovery10.json Normal file

@@ -0,0 +1,10 @@
{
"MONGO_URL":"mongodb://mongo:27017/",
"MONGO_DATABASE":"DC_myDC",
"NATS_URL": "nats://nats:4222",
"NODE_MODE": "node",
"NODE_ENDPOINT_PORT": 4010,
"NATIVE_INDEXER_ADDRESSES": "/ip4/172.40.0.5/tcp/4005/p2p/12D3KooWGn3j4XqTSrjJDGGpTQERdDV5TPZdhQp87rAUnvQssvQu",
"MIN_INDEXER": 2,
"PEER_IDS": "/ip4/172.40.0.9/tcp/4009/p2p/12D3KooWGnQfKwX9E4umCPE8dUKZuig4vw5BndDowRLEbGmcZyta"
}

docker_discovery2.json Normal file

@@ -0,0 +1,8 @@
{
"MONGO_URL":"mongodb://mongo:27017/",
"MONGO_DATABASE":"DC_myDC",
"NATS_URL": "nats://nats:4222",
"NODE_MODE": "indexer",
"NODE_ENDPOINT_PORT": 4002,
"INDEXER_ADDRESSES": "/ip4/172.40.0.1/tcp/4001/p2p/12D3KooWGn3j4XqTSrjJDGGpTQERdDV5TPZdhQp87rAUnvQssvQu"
}

docker_discovery3.json Normal file

@@ -0,0 +1,8 @@
{
"MONGO_URL":"mongodb://mongo:27017/",
"MONGO_DATABASE":"DC_myDC",
"NATS_URL": "nats://nats:4222",
"NODE_MODE": "node",
"NODE_ENDPOINT_PORT": 4003,
"INDEXER_ADDRESSES": "/ip4/172.40.0.2/tcp/4002/p2p/12D3KooWC3GNStak8KCYtJq11Dxiq45EJV53z1ZvKetMcZBeBX6u"
}

docker_discovery4.json Normal file

@@ -0,0 +1,9 @@
{
"MONGO_URL":"mongodb://mongo:27017/",
"MONGO_DATABASE":"DC_myDC",
"NATS_URL": "nats://nats:4222",
"NODE_MODE": "node",
"NODE_ENDPOINT_PORT": 4004,
"INDEXER_ADDRESSES": "/ip4/172.40.0.1/tcp/4001/p2p/12D3KooWGn3j4XqTSrjJDGGpTQERdDV5TPZdhQp87rAUnvQssvQu",
"PEER_IDS": "/ip4/172.40.0.3/tcp/4003/p2p/12D3KooWBh9kZrekBAE5G33q4jCLNRAzygem3gP1mMdK8mhoCTaw"
}

docker_discovery5.json Normal file

@@ -0,0 +1,7 @@
{
"MONGO_URL":"mongodb://mongo:27017/",
"MONGO_DATABASE":"DC_myDC",
"NATS_URL": "nats://nats:4222",
"NODE_MODE": "native-indexer",
"NODE_ENDPOINT_PORT": 4005
}

docker_discovery6.json Normal file

@@ -0,0 +1,8 @@
{
"MONGO_URL":"mongodb://mongo:27017/",
"MONGO_DATABASE":"DC_myDC",
"NATS_URL": "nats://nats:4222",
"NODE_MODE": "native-indexer",
"NODE_ENDPOINT_PORT": 4006,
"NATIVE_INDEXER_ADDRESSES": "/ip4/172.40.0.5/tcp/4005/p2p/12D3KooWGn3j4XqTSrjJDGGpTQERdDV5TPZdhQp87rAUnvQssvQu"
}

docker_discovery7.json Normal file

@@ -0,0 +1,8 @@
{
"MONGO_URL":"mongodb://mongo:27017/",
"MONGO_DATABASE":"DC_myDC",
"NATS_URL": "nats://nats:4222",
"NODE_MODE": "indexer",
"NODE_ENDPOINT_PORT": 4007,
"NATIVE_INDEXER_ADDRESSES": "/ip4/172.40.0.6/tcp/4006/p2p/12D3KooWC3GNStak8KCYtJq11Dxiq45EJV53z1ZvKetMcZBeBX6u"
}

docker_discovery8.json Normal file

@@ -0,0 +1,8 @@
{
"MONGO_URL":"mongodb://mongo:27017/",
"MONGO_DATABASE":"DC_myDC",
"NATS_URL": "nats://nats:4222",
"NODE_MODE": "indexer",
"NODE_ENDPOINT_PORT": 4008,
"NATIVE_INDEXER_ADDRESSES": "/ip4/172.40.0.5/tcp/4005/p2p/12D3KooWGn3j4XqTSrjJDGGpTQERdDV5TPZdhQp87rAUnvQssvQu"
}

docker_discovery9.json Normal file

@@ -0,0 +1,8 @@
{
"MONGO_URL":"mongodb://mongo:27017/",
"MONGO_DATABASE":"DC_myDC",
"NATS_URL": "nats://nats:4222",
"NODE_MODE": "node",
"NODE_ENDPOINT_PORT": 4009,
"NATIVE_INDEXER_ADDRESSES": "/ip4/172.40.0.6/tcp/4006/p2p/12D3KooWC3GNStak8KCYtJq11Dxiq45EJV53z1ZvKetMcZBeBX6u,/ip4/172.40.0.5/tcp/4005/p2p/12D3KooWGn3j4XqTSrjJDGGpTQERdDV5TPZdhQp87rAUnvQssvQu"
}

File diff suppressed because it is too large


@@ -0,0 +1,362 @@
================================================================================
OC-DISCOVERY : TARGET ARCHITECTURE — DHT NETWORK WITHOUT NATIVES
Long-term evolution vision, based on a comparative analysis
================================================================================
Written from the analysis of the current architecture and from the comparative
discussion with Tapestry, Kademlia, EigenTrust, and distributed reputation
systems.
Reference: DECENTRALIZED_SYSTEMS_COMPARISON.txt §9
================================================================================
1. MOTIVATION
================================================================================
The current architecture (node → indexer → native indexer) is robust and well
suited to an early phase of the network. Its limits at scale are:
- Static native pool at startup → dependency on configuration
- Local cache of natives = single point of failure (loss = empty pool)
- Blocking inter-native consensus (~7s) triggered on every node bootstrap
- O(N indexers) state per native → grows linearly with the network
- Structurally privileged nodes → relative SPOFs
The target described here removes the notion of native indexer as an
architectural tier. The network becomes flat: indexers and nodes are actors of
the same nature, differentiated only by their voluntary role.
================================================================================
2. FUNDAMENTAL PRINCIPLES
================================================================================
P1. No node is structurally privileged.
P2. Trust is a product of time and verification, not of an arbiter.
P3. An actor's claims are independently verifiable by any peer.
P4. Reputation emerges from collective behavior, not from central reporting.
P5. The DHT is neutral infrastructure — it stores facts, not judgments.
P6. Static configuration no longer exists at runtime — only at bootstrap.
================================================================================
3. ROLES
================================================================================
3.1 Node
--------
Consumer of the network. Starts up, selects a pool of indexers via the DHT,
heartbeats its indexers, accumulates scores locally. Publishes nothing in
routine operation. Takes part in consensus challenges on demand.
3.2 Indexer
-----------
Voluntary actor. Registers in the DHT at birth, maintains its record, serves
node traffic (heartbeat, Publish, Get). Declares its metrics in every
heartbeat response. Maintains a score aggregated from its connected nodes.
Difference from today: the indexer no longer has any link to a native.
It is autonomous. Its existence in the network is proven by its DHT record
and by the nodes that contact it directly.
3.3 DHT infrastructure node (ex-native)
----------------------------------------
Any sufficiently stable node can maintain the DHT without being an indexer.
It is a configuration, not an architectural type: `dht_mode: server`.
These nodes maintain the Kademlia k-buckets and store the indexer records.
They do not see node↔indexer traffic and do not orchestrate it.
================================================================================
4. NODE BOOTSTRAP
================================================================================
4.1 Joining the network
-------------------------
The node starts with 1 to 3 addresses of known DHT nodes (bootstrap peers).
These are the only static pieces of information needed. These peers have no
semantic role — they serve only to enter the DHT overlay.
4.2 Discovering the indexer pool
----------------------------------
Node → DHT.FindProviders(hash("/opencloud/indexers"))
→ receives a list of N candidates with their records
Initial pool selection:
1. Latency filter: ping < threshold → real network proximity
2. Fill-rate filter: prefer the less loaded indexers
3. Weighted draw: probability ∝ (1 - fill_rate), curve w(F) = F×(1-F)
an indexer at 20% load → very likely
an indexer at 80% load → unlikely
4. Diversity filter: a different /24 subnet for each pool entry
No consensus is needed at this stage. The node starts with a low tolerance
(see §7) — it accepts imperfect indexers and evaluates them over time.
================================================================================
5. INDEXER REGISTRATION IN THE DHT
================================================================================
At birth, the indexer publishes its DHT record:
key : hash("/opencloud/indexers") ← fixed key, known to all
value: {
multiaddr : <network address>,
region : <subnet /24>,
capacity : <maxNodesConn>,
fill_rate : <float 0-1>, ← self-declared, verifiable
peer_count : <int>, ← self-declared, verifiable
peers : [hash(nodeID1), ...], ← hashed list of connected nodes
born_at : <timestamp>,
sig : <indexer key signature>, ← non-forgeable (PSK context)
}
The record is refreshed every ~60s (before the TTL expires).
If the indexer goes down: the TTL expires → it disappears from the DHT automatically.
The peer list is hashed for confidentiality but remains verifiable:
a challenger can ask a node directly whether it is connected to this indexer.
================================================================================
6. HEARTBEAT PROTOCOL — QUESTION AND ANSWER
================================================================================
The heartbeat becomes bidirectional: the node asks questions, the indexer
answers with its current declarations.
6.1 Structure
-------------
Node → Indexer:
{
ts : now,
challenge : <optional, see §8>
}
Indexer → Node:
{
ts : now,
fill_rate : 0.42,
peer_count : 87,
cached_score : 0.74, ← score aggregated from all its connected nodes
challenge_response : {...} ← if a challenge was present in the request
}
A normal heartbeat (without challenge) weighs almost the same as today's.
The indexer's cached_score is updated progressively from the feedback it receives.
6.2 The indexer's cached_score
---------------------------------
The indexer aggregates the scores its connected nodes communicate to it
(implicitly by the fact that they stay connected, or explicitly during a
consensus). This score gives it a view of its own network quality.
A node can compare its local score for the indexer with the declared cached_score.
A strong divergence is a warning signal.
Local node score : 0.40 ← this indexer is mediocre for me
Cached score : 0.91 ← it claims to be excellent globally
→ triggers a verification challenge
================================================================================
7. PROGRESSIVE TRUST MODEL
================================================================================
7.1 Node lifecycle
---------------------------
Birth
→ low tolerance: accepts almost any indexer from the DHT
→ low switching cost: little accumulated context
→ minScore ≈ 20% (the existing dynamicMinScore, kept)
A few hours in
→ uptime accumulates for each known indexer
→ scores stabilize
→ the replacement threshold rises progressively
Long term (days)
→ stable pool, high trust in the known indexers
→ switching is costly but triggered on a clear disappointment
→ minScore ≈ 80% (maturity)
7.2 Underlying model: implicit beta distribution
------------------------------------------------------
α = cumulative successes (heartbeats OK, probes OK, challenges passed)
β = cumulative failures (timeouts, failed probes, failed challenges)
trust = α / (α + β)
New indexer : α=0, β=0 → neutral prior, low tolerance
After 10 days : high α → stable trust, high switch threshold
Clear disappointment: β rises → trust drops → switch triggered
7.3 What "disappointing" means
--------------------------------
Heartbeat rate → too many timeouts → declining reliability
Bandwidth probe → drops below the declared value → degradation or lying
Actual fill rate → higher than declared → overloaded or dishonest indexer
Failed challenge → a declared peer is absent from the network → invalid claim
Latency → progressive drift → degraded network quality
Inflated cached_score → strong divergence from the local score → suspicion
================================================================================
8. CLAIM VERIFICATION — THREE LAYERS
================================================================================
8.1 Layer 1: passive (every heartbeat, 60s)
-----------------------------------------------
Automatic measurements, zero extra cost.
- Heartbeat RTT → direct latency
- declared fill_rate → tiny payload in the response
- declared peer_count → tiny payload
- indexer cached_score → compared to the local score
8.2 Layer 2: active sampling (1 heartbeat out of N)
--------------------------------------------------
Periodic, asynchronous, lightweight verifications.
Every 5 HB (~5min) : spot-check of 1 random peer (see §8.4)
Every 10 HB (~10min): subnet-diversity check (light DHT lookups)
Every 15 HB (~15min): bandwidth probe (real transfer, dedicated protocol)
8.3 Layer 3: consensus (event-driven)
-----------------------------------------
Triggered on: admission of a new indexer into the pool, or detected suspicion.
The node selects a verifiable claim of the target indexer X
The node verifies it itself
The node asks its trusted indexers: "verify this claim on X"
Each indexer verifies independently
Results converge → X is honest → admission
Results diverge → X is suspect → rejection or probation
The consensus is lightweight: a few out-of-band contacts, no blocking round.
It is not continuous — it is event-driven.
8.4 Out-of-band verification (no DHT writes by nodes)
----------------------------------------------------------------
Nodes do NOT publish continuous contact records in the DHT.
That would mean N×M records to refresh (high DHT cost at scale).
Instead, during a challenge:
The challenger selects 2-3 peers from the peer list declared by X
→ contacts those peers directly: "are you connected to indexer X?"
→ direct answer (out-of-band, not via the DHT)
→ verification with no DHT write
The indexer cannot make peers that are not connected to it answer "yes".
The verification is non-falsifiable and costs nothing on the DHT.
8.5 Why X cannot cheat
------------------------------------
X cannot coordinate different answers to simultaneous challengers.
Each challenger independently contacts the same peers.
If X lies about its peer list:
- Challenger A contacts peer P → "no, not connected to X"
- Challenger B contacts peer P → "no, not connected to X"
- Consensus: X is lying → its score drops with every challenger
- Network effect: X progressively loses its connections
- Its DHT peer list empties → future claims even less credible
================================================================================
9. NETWORK EFFECT WITHOUT CENTRAL REPORTING
================================================================================
A node that penalizes an indexer sends no "report" to anyone.
Its local actions produce the network effect by aggregation:
The node lowers X's score → X receives less traffic from that node
The node switches to Y → X loses a client
The node refuses X's challenges → X can no longer take part in consensus
If 200 nodes do the same:
X loses most of its connections
Its DHT peer list empties (directly contacted peers say "no")
Its cached_score collapses (few nodes remain)
New nodes that see X in the DHT get failed challenges
X is naturally excluded without any central decision
Conversely, an honest indexer sees its scores rise on all its connected
nodes, its peer list densify, its challenges succeed systematically.
Its reputation is an observable and verifiable product.
================================================================================
10. ARCHITECTURE SUMMARY
================================================================================
DHT → neutral directory, source of truth for indexer records,
maintained by any stable node (dht_mode: server)
Indexer → voluntary actor, registers itself, maintains its claims,
serves traffic, accumulates its own aggregated score
Node → consumer, passive scoring + sampling + light consensus,
progressive trust, adaptive switching
Heartbeat → 60s metronome + vector of light declarations + optional challenge
Consensus → event-driven, independent multi-challengers,
out-of-band verification of DHT claims
Trust → implicit beta, progressive, switching cost growing with age
Reputation → emerges from collective behavior, no central arbiter
Bootstrap → 1-3 known DHT peers → the only static configuration needed
================================================================================
11. MIGRATION PATH
================================================================================
Phase 1 (current)
Static natives, dynamic indexer pool, inter-native consensus
→ robust, suited to the early phase
Phase 2 (intermediate)
Dynamic native pool via DHT (bootstrap + gossip)
Same native protocol, only discovery becomes dynamic
→ removes the dependency on static native configuration
→ see DECENTRALIZED_SYSTEMS_COMPARISON.txt §9.2
Phase 3 (target)
The architecture described in this document
Natives disappear as an architectural tier
DHT = infrastructure, indexers = autonomous actors
Scoring and consensus entirely on the node side
→ no privileged node, O(log N) scalability
The Phase 2 → Phase 3 migration is a control-plane redesign.
The data plane (node↔indexer heartbeat, Publish, Get) is unchanged.
The libp2p primitives (Kademlia DHT, GossipSub) are already in place.
================================================================================
12. PROPERTIES OF THE TARGET SYSTEM
================================================================================
Scalability O(log N) — Kademlia DHT routing
Resilience No structural SPOF, TTL = the only source of truth
Trust Progressive, verifiable, emergent
Sybil resistance PSK — only nodes holding the key can publish
Cold start Low initial tolerance, progressive ramp-up (existing)
Honesty Claims verifiable out-of-band, non-falsifiable
Decentralization No node knows the complete global state
================================================================================


@@ -0,0 +1,58 @@
@startuml
title Node Initialization — Peer A (InitNode)
participant "main (Peer A)" as MainA
participant "Node A" as NodeA
participant "libp2p (Peer A)" as libp2pA
participant "DB Peer A (oc-lib)" as DBA
participant "NATS A" as NATSA
participant "Indexer (shared)" as IndexerA
participant "StreamService A" as StreamA
participant "PubSubService A" as PubSubA
MainA -> NodeA: InitNode(isNode, isIndexer, isNativeIndexer)
NodeA -> NodeA: LoadKeyFromFilePrivate() → priv
NodeA -> NodeA: LoadPSKFromFile() → psk
NodeA -> libp2pA: New(PrivateNetwork(psk), Identity(priv), ListenAddr:4001)
libp2pA --> NodeA: host A (PeerID_A)
note over NodeA: isNode == true
NodeA -> libp2pA: NewGossipSub(ctx, host)
libp2pA --> NodeA: ps (GossipSub)
NodeA -> IndexerA: ConnectToIndexers → SendHeartbeat /opencloud/heartbeat/1.0
note over IndexerA: Heartbeat long-lived established\nQuality Score evaluated (bw + uptime + diversity)
IndexerA --> NodeA: OK
NodeA -> NodeA: claimInfo(name, hostname)
NodeA -> IndexerA: TempStream /opencloud/record/publish/1.0
NodeA -> IndexerA: stream.Encode(signed PeerRecord A)
IndexerA -> IndexerA: DHT.PutValue("/node/"+DID_A, record)
NodeA -> DBA: DB(PEER).Search(SELF)
DBA --> NodeA: local peer A (or new generated UUID)
NodeA -> NodeA: StartGC(30s) — GarbageCollector on StreamRecords
NodeA -> StreamA: InitStream(ctx, host, PeerID_A, 1000, nodeA)
StreamA -> StreamA: SetStreamHandler(heartbeat/partner, search, planner, ...)
StreamA -> DBA: Search(PEER, PARTNER) → partner list
DBA --> StreamA: partner list → long-lived heartbeats established
StreamA --> NodeA: StreamService A
NodeA -> PubSubA: InitPubSub(ctx, host, ps, nodeA, streamA)
PubSubA -> PubSubA: subscribeEvents(PB_SEARCH, timeout=-1)
PubSubA --> NodeA: PubSubService A
NodeA -> NodeA: SubscribeToSearch(ps, callback) (search global topic for resources)
note over NodeA: callback: GetPeerRecord(evt.From)\n→ StreamService.SendResponse
NodeA -> NATSA: ListenNATS(nodeA)
note over NATSA: Subscribes handlers:\nCREATE_RESOURCE, PROPALGATION_EVENT
NodeA --> MainA: *Node A is ready
@enduml


@@ -0,0 +1,38 @@
@startuml
title Node Claim — Peer A publish its PeerRecord (claimInfo + publishPeerRecord)
participant "DB Peer A (oc-lib)" as DBA
participant "Node A" as NodeA
participant "Indexer (shared)" as IndexerA
participant "DHT Kademlia" as DHT
participant "NATS A" as NATSA
NodeA -> DBA: DB(PEER).Search(SELF)
DBA --> NodeA: existing peer (DID_A) or new UUID
NodeA -> NodeA: LoadKeyFromFilePrivate() → priv A
NodeA -> NodeA: LoadKeyFromFilePublic() → pub A
NodeA -> NodeA: Build PeerRecord A {\n Name, DID, PubKey,\n PeerID: PeerID_A,\n APIUrl: hostname,\n StreamAddress: /ip4/.../tcp/4001/p2p/PeerID_A,\n NATSAddress, WalletAddress\n}
NodeA -> NodeA: priv.Sign(rec) → signature
NodeA -> NodeA: rec.ExpiryDate = now + 150s
loop For every Node Binded Indexer (Indexer A, B, ...)
NodeA -> IndexerA: TempStream /opencloud/record/publish/1.0
NodeA -> IndexerA: stream.Encode(signed PeerRecord A)
IndexerA -> IndexerA: Verify signature
IndexerA -> IndexerA: Check PeerID_A heartbeat stream
IndexerA -> DHT: PutValue("/node/"+DID_A, PeerRecord A)
DHT --> IndexerA: ok
end
NodeA -> NodeA: rec.ExtractPeer(DID_A, DID_A, pub A)
NodeA -> NATSA: SetNATSPub(CREATE_RESOURCE, {PEER, Peer A JSON})
NATSA -> DBA: Upsert Peer A (SearchAttr: peer_id)
DBA --> NATSA: ok
NodeA --> NodeA: *peer.Peer A (SELF)
@enduml


@@ -0,0 +1,49 @@
@startuml indexer_heartbeat
title Indexer — Heartbeat node → indexer (score on 5 metrics)
participant "Node A" as NodeA
participant "Node B" as NodeB
participant "IndexerService" as Indexer
note over NodeA,NodeB: Every node tick every 20s (SendHeartbeat)
par Node A heartbeat
NodeA -> Indexer: NewStream /opencloud/heartbeat/1.0
NodeA -> Indexer: stream.Encode(Heartbeat{Name, PeerID_A, IndexersBinded, Record})
Indexer -> Indexer: CheckHeartbeat(host, stream, dec, streams, mu, maxNodes)
note over Indexer: len(h.Network().Peers()) >= maxNodes → reject
Indexer -> Indexer: getBandwidthChallengeRate(host, remotePeer, 512-2048B)
Indexer -> Indexer: getOwnDiversityRate(host)\nh.Network().Peers() + Peerstore.Addrs()\n→ ratio of distinct /24 subnets
Indexer -> Indexer: fillRate = len(h.Network().Peers()) / maxNodes
Indexer -> Indexer: Retrieve existing UptimeTracker\noldTracker.RecordHeartbeat()\n→ TotalOnline += gap if gap ≤ 120s\nuptimeRatio = TotalOnline / time.Since(FirstSeen)
Indexer -> Indexer: ComputeIndexerScore(\n uptimeRatio, bpms, diversity,\n latencyScore, fillRate\n)\nScore = (0.20×U + 0.20×B + 0.20×D + 0.15×L + 0.25×F) × 100
Indexer -> Indexer: dynamicMinScore(age)\n= 20 + 60×(hours/24), max 80
alt Score A < dynamicMinScore(age)
Indexer -> NodeA: (close stream — "not enough trusting value")
else Score A >= dynamicMinScore(age)
Indexer -> Indexer: streams[PeerID_A].HeartbeatStream = hb.Stream\nstreams[PeerID_A].HeartbeatStream.UptimeTracker = oldTracker\nstreams[PeerID_A].LastScore = hb.Score
note over Indexer: AfterHeartbeat → republish PeerRecord on DHT
end
else Node B heartbeat
NodeB -> Indexer: NewStream /opencloud/heartbeat/1.0
NodeB -> Indexer: stream.Encode(Heartbeat{Name, PeerID_B, IndexersBinded, Record})
Indexer -> Indexer: CheckHeartbeat → getBandwidthChallengeRate\n→ getOwnDiversityRate → ComputeIndexerScore(5 components)
alt Score B >= dynamicMinScore(age)
Indexer -> Indexer: streams[PeerID_B] subscribed + LastScore updated
end
end par
note over Indexer: GC ticker 30s — gc()\\nnow.After(Expiry) where Expiry = lastHBTime + 2min\\n→ AfterDelete(pid, name, did)
@enduml

View File

@@ -0,0 +1,47 @@
@startuml
title Indexer — Peer A publishing, Peer B publishing (handleNodePublish → DHT)
participant "Node A" as NodeA
participant "Node B" as NodeB
participant "IndexerService (shared)" as Indexer
participant "DHT Kademlia" as DHT
note over NodeA: Start after claimInfo or refresh TTL
par Peer A publish its PeerRecord
NodeA -> Indexer: TempStream /opencloud/record/publish/1.0
NodeA -> Indexer: stream.Encode(PeerRecord A {DID_A, PeerID_A, PubKey_A, Expiry, Sig_A})
Indexer -> Indexer: Verify sig_A (rebuilds a minimal rec, pubKey_A.Verify)
Indexer -> Indexer: Check StreamRecords[Heartbeat][PeerID_A] exists
alt A has an active heartbeat
Indexer -> Indexer: StreamRecord A → DID_A, Record=PeerRecord A, LastSeen=now
Indexer -> DHT: PutValue("/node/"+DID_A, PeerRecord A JSON)
Indexer -> DHT: PutValue("/name/"+name_A, DID_A)
Indexer -> DHT: PutValue("/peer/"+peer_id_A, DID_A)
DHT --> Indexer: ok
else No heartbeat
Indexer -> NodeA: (error "no heartbeat", stream closed)
end
else Peer B publish its PeerRecord
NodeB -> Indexer: TempStream /opencloud/record/publish/1.0
NodeB -> Indexer: stream.Encode(PeerRecord B {DID_B, PeerID_B, PubKey_B, Expiry, Sig_B})
Indexer -> Indexer: Verify sig_B
Indexer -> Indexer: Check StreamRecords[Heartbeat][PeerID_B] exists
alt B has an active heartbeat
Indexer -> Indexer: StreamRecord B → DID_B, Record=PeerRecord B, LastSeen=now
Indexer -> DHT: PutValue("/node/"+DID_B, PeerRecord B JSON)
Indexer -> DHT: PutValue("/name/"+name_B, DID_B)
Indexer -> DHT: PutValue("/peer/"+peer_id_B, DID_B)
DHT --> Indexer: ok
else No heartbeat
Indexer -> NodeB: (error "no heartbeat", stream closed)
end
end par
note over DHT: DHT now holds \n"/node/DID_A" and "/node/DID_B"
@enduml

View File

@@ -0,0 +1,51 @@
@startuml
title Indexer — Peer A discovers Peer B (GetPeerRecord + handleNodeGet)
participant "NATS A" as NATSA
participant "DB Peer A (oc-lib)" as DBA
participant "Node A" as NodeA
participant "IndexerService (shared)" as Indexer
participant "DHT Kademlia" as DHT
participant "NATS A (return)" as NATSA2
note over NodeA: Trigger : NATS PB_SEARCH on PEER\nor the SubscribeToSearch callback
NodeA -> DBA: (PEER).Search(DID_B or PeerID_B)
DBA --> NodeA: Local Peer B (if known) → resolve DID_B + PeerID_B\nor use the search value
loop For every indexer bound to Peer A
NodeA -> Indexer: TempStream /opencloud/record/get/1.0 -> streamAI
NodeA -> Indexer: streamAI.Encode(GetValue{Key: DID_B, PeerID: PeerID_B})
Indexer -> Indexer: key = "/node/" + DID_B
Indexer -> DHT: SearchValue(ctx 10s, "/node/"+DID_B)
DHT --> Indexer: channel of bytes (PeerRecord B)
loop For every DHT result
Indexer -> Indexer: read → PeerRecord B
alt PeerRecord.PeerID == PeerID_B
Indexer -> Indexer: resp.Found=true, resp.Records[PeerID_B]=PeerRecord B
Indexer -> Indexer: StreamRecord B.LastSeen = now (if active heartbeat)
end
end
Indexer -> NodeA: streamAI.Encode(GetResponse{Found:true, Records:{PeerID_B: PeerRecord B}})
end
loop For every PeerRecord found
NodeA -> NodeA: rec.Verify() → valid B signature
NodeA -> NodeA: rec.ExtractPeer(ourDID_A, DID_B, pubKey_B)
alt ourDID_A == DID_B (it's our own entry)
note over NodeA: Republish to refresh TTL
NodeA -> Indexer: publishPeerRecord(rec) [refresh 2 min]
end
NodeA -> NATSA2: SetNATSPub(CREATE_RESOURCE, {PEER, Peer B JSON,\nSearchAttr:"peer_id"})
NATSA2 -> DBA: Upsert Peer B in DB A
DBA --> NATSA2: ok
end
NodeA --> NodeA: []*peer.Peer → [Peer B]
@enduml
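The inner loop above reads every record coming back from the DHT and keeps only the one whose PeerID matches the peer being resolved. A minimal sketch, assuming a pared-down PeerRecord type (field names are illustrative, not the real oc-lib struct):

```go
package main

import "fmt"

// PeerRecord is a pared-down stand-in for the record stored under "/node/"+DID.
type PeerRecord struct {
	DID    string
	PeerID string
}

// pickRecord mirrors the diagram's loop over DHT SearchValue results:
// keep only the record whose PeerID matches the peer we asked about.
func pickRecord(results <-chan PeerRecord, wantPeerID string) (PeerRecord, bool) {
	for rec := range results {
		if rec.PeerID == wantPeerID {
			return rec, true
		}
	}
	return PeerRecord{}, false
}

func main() {
	// Simulate the SearchValue result channel: a stale record, then the match.
	ch := make(chan PeerRecord, 2)
	ch <- PeerRecord{DID: "did:stale", PeerID: "QmOld"}
	ch <- PeerRecord{DID: "did:B", PeerID: "QmB"}
	close(ch)
	rec, found := pickRecord(ch, "QmB")
	fmt.Println(found, rec.DID)
}
```

The PeerID check matters because SearchValue can return several historical values for the same key; only the one bound to the expected libp2p identity is trusted.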

View File

@@ -0,0 +1,49 @@
@startuml native_registration
title Native Indexer — Indexer Subscription (StartNativeRegistration)
participant "Indexer A" as IndexerA
participant "Indexer B" as IndexerB
participant "Native Indexer" as Native
participant "DHT Kademlia" as DHT
participant "GossipSub (oc-indexer-registry)" as PubSub
note over IndexerA,IndexerB: At start + every 60s (RecommendedHeartbeatInterval)\\nStartNativeRegistration → RegisterWithNative
par Indexer A subscribe
IndexerA -> IndexerA: fillRateFn()\\n= len(StreamRecords[HB]) / maxNodes
IndexerA -> IndexerA: Build IndexerRegistration{\\n PeerID_A, Addr_A,\\n Timestamp=now.UnixNano(),\\n FillRate=fillRateFn(),\\n PubKey, Signature\\n}\\nreg.Sign(h)
IndexerA -> Native: NewStream /opencloud/native/subscribe/1.0
IndexerA -> Native: stream.Encode(IndexerRegistration A)
Native -> Native: reg.Verify() — verify signature
Native -> Native: liveIndexerEntry{\\n PeerID_A, Addr_A,\\n ExpiresAt = now + IndexerTTL (90s),\\n FillRate = reg.FillRate,\\n PubKey, Signature\\n}
Native -> Native: liveIndexers[PeerID_A] = entry A
Native -> Native: knownPeerIDs[PeerID_A] = Addr_A
Native -> DHT: PutValue("/indexer/"+PeerID_A, entry A)
DHT --> Native: ok
Native -> PubSub: topic.Publish([]byte(PeerID_A))
note over PubSub: Gossip to the other natives\\n→ they add PeerID_A to knownPeerIDs\\n→ refresh DHT on next tick (30s)
IndexerA -> Native: stream.Close()
else Indexer B subscribe
IndexerB -> IndexerB: fillRateFn() + reg.Sign(h)
IndexerB -> Native: NewStream /opencloud/native/subscribe/1.0
IndexerB -> Native: stream.Encode(IndexerRegistration B)
Native -> Native: reg.Verify() + liveIndexerEntry{FillRate=reg.FillRate, ExpiresAt=now+90s}
Native -> Native: liveIndexers[PeerID_B] = entry B
Native -> DHT: PutValue("/indexer/"+PeerID_B, entry B)
Native -> PubSub: topic.Publish([]byte(PeerID_B))
IndexerB -> Native: stream.Close()
end par
note over Native: liveIndexers = {PeerID_A: {FillRate:0.3}, PeerID_B: {FillRate:0.6}}\\nTTL 90s — IndexerTTL
note over Native: Explicit unsubscribe on stop :\\nUnregisterFromNative → /opencloud/native/unsubscribe/1.0\\nThe native closes everything immediately.
@enduml

View File

@@ -0,0 +1,70 @@
@startuml native_get_consensus
title Native — ConnectToNatives : fetch pool + Phase 1 + Phase 2
participant "Node / Indexer\\n(caller)" as Caller
participant "Native A" as NA
participant "Native B" as NB
participant "Indexer A\\n(stable voter)" as IA
note over Caller: NativeIndexerAddresses configured\\nConnectToNatives() called from ConnectToIndexers
== Step 1 : heartbeat to the native mesh (nativeHeartbeatOnce) ==
Caller -> NA: SendHeartbeat /opencloud/heartbeat/1.0
Caller -> NB: SendHeartbeat /opencloud/heartbeat/1.0
== Step 2 : parallel fetch of the pool (timeout 6s) ==
par fetchIndexersFromNative — parallel
Caller -> NA: NewStream /opencloud/native/indexers/1.0\\nGetIndexersRequest{Count: maxIndexer, From: PeerID}
NA -> NA: reachableLiveIndexers()\\nsorted by w(F) = fillRate×(1−fillRate) desc
NA --> Caller: GetIndexersResponse{Indexers:[IA,IB], FillRates:{IA:0.3,IB:0.6}}
else
Caller -> NB: NewStream /opencloud/native/indexers/1.0
NB -> NB: reachableLiveIndexers()
NB --> Caller: GetIndexersResponse{Indexers:[IA,IB], FillRates:{IA:0.3,IB:0.6}}
end par
note over Caller: Merge → candidates=[IA,IB]\\nisFallback=false
alt isFallback=true (natives offer themselves as fallback indexers)
note over Caller: resolvePool : consensus skipped\\nadmittedAt = zero\\nStaticIndexers = {native_addr}
else isFallback=false → Phase 1 + Phase 2
== Phase 1 — clientSideConsensus (timeout 3s per native, 4s total) ==
par Parallel consensus
Caller -> NA: NewStream /opencloud/native/consensus/1.0\\nConsensusRequest{Candidates:[IA,IB]}
NA -> NA: cross-check against liveIndexers
NA --> Caller: ConsensusResponse{Trusted:[IA,IB], Suggestions:[]}
else
Caller -> NB: NewStream /opencloud/native/consensus/1.0
NB --> Caller: ConsensusResponse{Trusted:[IA], Suggestions:[IC]}
end par
note over Caller: IA → 2/2 votes → confirmed ✓\\nIB → 1/2 votes → refused ✗\\nIC → suggestion → round 2 if confirmed < maxIndexer
alt confirmed < maxIndexer && available suggestions
note over Caller: Round 2 — rechallenge with confirmed + sample(suggestions)\\nclientSideConsensus([IA, IC])
end
note over Caller: admittedAt = time.Now()
== Phase 2 — indexerLivenessVote (timeout 3s per voter, 4s total) ==
note over Caller: Look for stable voters among the subscribed indexers\\nAdmittedAt != zero && age >= MinStableAge (2min)
alt Stable voters available
par Phase 2 parallel
Caller -> IA: NewStream /opencloud/indexer/consensus/1.0\\nIndexerConsensusRequest{Candidates:[IA]}
IA -> IA: StreamRecords[ProtocolHB][candidate]\\ntime.Since(LastSeen) <= 120s && LastScore >= 30.0
IA --> Caller: IndexerConsensusResponse{Alive:[IA]}
end par
note over Caller: IA confirmed alive by quorum > 0.5\\npool = {IA}
else No stable voters (startup)
note over Caller: Phase 1 result kept directly\\n(no indexer has reached MinStableAge)
end
== Replacement pool ==
Caller -> Caller: replaceStaticIndexers(pool, admittedAt)\\nStaticIndexerMeta[IA].AdmittedAt = admittedAt
end
== Step 3 : heartbeat to the indexer pool (ConnectToIndexers) ==
Caller -> Caller: SendHeartbeat /opencloud/heartbeat/1.0\\nto StaticIndexers
@enduml
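Two pieces of logic in this diagram are easy to state precisely: the fill-rate ordering w(F) = fillRate×(1−fillRate) used by reachableLiveIndexers(), and the strict-majority tally applied to the consensus votes. A sketch under those two assumptions (helper names are illustrative):

```go
package main

import (
	"fmt"
	"sort"
)

// fillRateWeight is the sorting weight w(F) = F×(1−F): it peaks at F=0.5,
// preferring indexers that are neither empty nor nearly full.
func fillRateWeight(f float64) float64 { return f * (1 - f) }

// sortByWeight orders candidate indexers by w(F) descending.
func sortByWeight(fillRates map[string]float64) []string {
	ids := make([]string, 0, len(fillRates))
	for id := range fillRates {
		ids = append(ids, id)
	}
	sort.Slice(ids, func(i, j int) bool {
		return fillRateWeight(fillRates[ids[i]]) > fillRateWeight(fillRates[ids[j]])
	})
	return ids
}

// confirmedByQuorum mirrors the tally in the diagram: a candidate is kept
// only if a strict majority (> 0.5) of the queried natives voted it Trusted.
func confirmedByQuorum(votes map[string]int, voters int) []string {
	var out []string
	for cand, n := range votes {
		if float64(n)/float64(voters) > 0.5 {
			out = append(out, cand)
		}
	}
	sort.Strings(out)
	return out
}

func main() {
	// w(0.6)=0.24 outranks w(0.3)=0.21; IA at 2/2 passes, IB at 1/2 does not.
	fmt.Println(sortByWeight(map[string]float64{"IA": 0.3, "IB": 0.6}))
	fmt.Println(confirmedByQuorum(map[string]int{"IA": 2, "IB": 1}, 2))
}
```

Note that 1/2 is exactly 0.5 and therefore fails the strict `> 0.5` test, which matches "IB → 1/2 vote → refused" above.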

View File

@@ -0,0 +1,42 @@
@startuml
title NATS — CREATE_RESOURCE : Peer A creates/updates Peer B & establishes the stream
participant "App Peer A (oc-api)" as AppA
participant "NATS A" as NATSA
participant "Node A" as NodeA
participant "StreamService A" as StreamA
participant "Node B" as NodeB
participant "StreamService B" as StreamB
participant "DB Peer A (oc-lib)" as DBA
note over AppA: Peer B is discovered\n(via an indexer or manually)
AppA -> NATSA: Publish(CREATE_RESOURCE, {\n FromApp:"oc-api",\n Datatype:PEER,\n Payload: Peer B {StreamAddress_B, Relation:PARTNER}\n})
NATSA -> NodeA: ListenNATS callback → CREATE_RESOURCE
NodeA -> NodeA: from itself? → no, continue
NodeA -> NodeA: json.Unmarshal(payload) → peer.Peer B
alt peer B.Relation == PARTNER
NodeA -> StreamA: ConnectToPartner(B.StreamAddress)
StreamA -> NodeB: Connect (libp2p)
StreamA -> NodeB: NewStream /opencloud/resource/heartbeat/partner/1.0
StreamA -> NodeB: json.Encode(Heartbeat{Name_A, DID_A, PeerID_A})
NodeB -> StreamB: HandlePartnerHeartbeat(stream)
StreamB -> StreamB: CheckHeartbeat → bandwidth challenge
StreamB -> StreamB: streams[ProtocolHeartbeatPartner][PeerID_A] = {DID_A, Expiry=now+10s}
StreamA -> StreamA: streams[ProtocolHeartbeatPartner][PeerID_B] = {DID_B, Expiry=now+10s}
note over StreamA,StreamB: Long-lived partner stream established\nbidirectional
else peer B.Relation != PARTNER (revoke / blacklist)
note over NodeA: Close all streams to Peer B
loop For every stream
NodeA -> StreamA: streams[proto][PeerID_B].Stream.Close()
NodeA -> StreamA: delete(streams[proto], PeerID_B)
end
end
NodeA -> DBA: (no write — only the source app adds peers manually)
@enduml
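The revoke branch walks every protocol map, closes the stream held for the revoked peer, and deletes the entry. A minimal sketch with a stand-in stream type (the real maps hold libp2p streams; this only shows the shape of the loop):

```go
package main

import "fmt"

// stream is a minimal stand-in for a libp2p stream handle.
type stream struct{ closed bool }

func (s *stream) Close() { s.closed = true }

// revokePeer mirrors the diagram's revoke branch: for each protocol,
// close the stream held for the peer, then drop the map entry.
func revokePeer(streams map[string]map[string]*stream, peerID string) {
	for proto, byPeer := range streams {
		if st, ok := byPeer[peerID]; ok {
			st.Close()
			delete(streams[proto], peerID)
		}
	}
}

func main() {
	streams := map[string]map[string]*stream{
		"/opencloud/resource/heartbeat/partner/1.0": {"QmB": {}},
		"/opencloud/resource/planner/1.0":           {"QmB": {}, "QmC": {}},
	}
	revokePeer(streams, "QmB")
	// QmB is gone from every protocol; QmC is untouched.
	fmt.Println(len(streams["/opencloud/resource/planner/1.0"]))
}
```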

View File

@@ -0,0 +1,73 @@
@startuml
title NATS — PROPALGATION_EVENT : Peer A propagates to Peer B
participant "App Peer A" as AppA
participant "NATS A" as NATSA
participant "Node A" as NodeA
participant "StreamService A" as StreamA
participant "Node Partner B" as NodeB
participant "Node C" as PeerC
participant "NATS B" as NATSB
participant "DB Peer B (oc-lib)" as DBB
note over AppA: only our own resources (DB data) can be propagated: creator_id==self
AppA -> NATSA: Publish(PROPALGATION_EVENT, {Action, DataType, Payload})
NATSA -> NodeA: ListenNATS callback → PROPALGATION_EVENT
NodeA -> NodeA: propagated from itself? → no, continue
NodeA -> NodeA: json.Unmarshal → PropalgationMessage{Action, DataType, Payload}
alt Action == PB_DELETE
NodeA -> StreamA: ToPartnerPublishEvent(PB_DELETE, dt, user, payload)
StreamA -> StreamA: searchPeer(PARTNER) → [Peer Partner B, ...]
StreamA -> NodeB: write(PeerID_B, addr_B, dt, user, payload, ProtocolDeleteResource)
note over NodeB: /opencloud/resource/delete/1.0
NodeB -> NodeB: handleEventFromPartner(evt, ProtocolDeleteResource)
NodeB -> NATSB: SetNATSPub(REMOVE_RESOURCE, {DataType, resource JSON})
NATSB -> DBB: Delete resource from DB B
else Action == PB_UPDATE (via ProtocolUpdateResource)
NodeA -> StreamA: ToPartnerPublishEvent(PB_UPDATE, dt, user, payload)
StreamA -> StreamA: searchPeer(PARTNER) → [Peer Partner B, ...]
StreamA -> NodeB: write → /opencloud/resource/update/1.0
NodeB -> NATSB: SetNATSPub(CREATE_RESOURCE, {DataType, resource JSON})
NATSB -> DBB: Upsert resource in DB B
else Action == PB_CREATE (via ProtocolCreateResource)
NodeA -> StreamA: ToPartnerPublishEvent(PB_CREATE, dt, user, payload)
StreamA -> StreamA: searchPeer(PARTNER) → [Peer Partner B, ...]
StreamA -> NodeB: write → /opencloud/resource/create/1.0
NodeB -> NATSB: SetNATSPub(CREATE_RESOURCE, {DataType, resource JSON})
NATSB -> DBB: Create resource in DB B
else Action == PB_CONSIDERS (acknowledges a previous action, such as planning or creating a resource)
NodeA -> NodeA: Unmarshal → executionConsidersPayload{PeerIDs:[PeerID_B, ...]}
loop For every peer_id targeted
NodeA -> StreamA: PublishCommon(dt, user, PeerID_B, ProtocolConsidersResource, payload)
StreamA -> NodeB: write → /opencloud/resource/considers/1.0
NodeB -> NodeB: passConsidering(evt)
NodeB -> NATSB: SetNATSPub(PROPALGATION_EVENT, {PB_CONSIDERS, dt, payload})
NATSB -> DBB: (handled by the emitting app of the previous action on NATS B)
end
else Action == PB_CLOSE_PLANNER
NodeA -> NodeA: Unmarshal → {peer_id: PeerID_B}
NodeA -> StreamA: Streams[ProtocolSendPlanner][PeerID_B].Stream.Close()
NodeA -> StreamA: delete(Streams[ProtocolSendPlanner], PeerID_B)
else Action == PB_SEARCH + DataType == PEER
NodeA -> NodeA: read → {search: "..."}
NodeA -> NodeA: GetPeerRecord(ctx, search)
note over NodeA: Resolved via DB A or Indexer + DHT
NodeA -> NATSA: SetNATSPub(SEARCH_EVENT, {PEER, PeerRecord JSON})
NATSA -> NATSA: (AppA retrieves the results)
else Action == PB_SEARCH + other DataType
NodeA -> NodeA: read → {type:"all"|"known"|"partner", search:"..."}
NodeA -> NodeA: PubSubService.SearchPublishEvent(ctx, dt, type, user, search)
note over NodeA: See the pubsub_search & stream_search diagrams
end
@enduml

View File

@@ -0,0 +1,58 @@
@startuml
title PubSub — Global gossip search (type "all") : Peer A searches, Peer B answers
participant "App UI A" as UIA
participant "App Peer A" as AppA
participant "NATS A" as NATSA
participant "Node A" as NodeA
participant "StreamService A" as StreamA
participant "PubSubService A" as PubSubA
participant "GossipSub libp2p (mesh)" as GossipSub
participant "Node B" as NodeB
participant "PubSubService B" as PubSubB
participant "DB Peer B (oc-lib)" as DBB
participant "StreamService B" as StreamB
UIA -> AppA: websocket subscription, sending {type:"all", search:"search"} in query
AppA -> NATSA: Publish(PROPALGATION_EVENT, {PB_SEARCH, type:"all", search:"search"})
NATSA -> NodeA: ListenNATS → PB_SEARCH (type "all")
NodeA -> PubSubA: SearchPublishEvent(ctx, dt, "all", user, "search")
PubSubA -> PubSubA: publishEvent(PB_SEARCH, user, {search:"search"})
PubSubA -> PubSubA: priv_A.Sign(event body) → sig
PubSubA -> PubSubA: Build Event{Type:"search", From:DID_A, Payload:{search:"search"}, Sig}
PubSubA -> GossipSub: topic.Join("search")
PubSubA -> GossipSub: topic.Publish(ctx, json(Event))
GossipSub --> NodeB: Propagate message (gossip mesh)
NodeB -> PubSubB: subscribeEvents listens on topic "search#"
PubSubB -> PubSubB: read → Event{From: DID_A}
PubSubB -> NodeB: GetPeerRecord(ctx, DID_A)
note over NodeB: Resolve Peer A via DB B or ask the Indexer
NodeB --> PubSubB: Peer A {PublicKey_A, Relation, ...}
PubSubB -> PubSubB: event.Verify(Peer A) → valid sig_A
PubSubB -> PubSubB: handleEventSearch(ctx, evt, PB_SEARCH)
PubSubB -> StreamB: SendResponse(Peer A, evt)
StreamB -> DBB: Search(COMPUTE + STORAGE + ..., filters{creator=self, access=PUBLIC OR partnerships[PeerID_A]}, search="search")
DBB --> StreamB: [Resource1, Resource2, ...]
loop For every matching resource (only our own, creator_id=self_did)
StreamB -> StreamB: write(PeerID_A, addr_A, dt, resource JSON, ProtocolSearchResource)
StreamB -> StreamA: NewStream /opencloud/resource/search/1.0
StreamB -> StreamA: stream.Encode(Event{Type:search, From:DID_B, DataType, Payload:resource})
end
StreamA -> StreamA: readLoop → handleEvent(ProtocolSearchResource, evt)
StreamA -> StreamA: retrieveResponse(evt)
StreamA -> NATSA: SetNATSPub(SEARCH_EVENT, {DataType, resource JSON})
NATSA -> AppA: Search results from Peer B
AppA -> UIA: emit on websocket
@enduml

View File

@@ -0,0 +1,54 @@
@startuml
title Stream — Direct search (type "known"/"partner") : Peer A → Peer B
participant "App UI A" as UIA
participant "App Peer A" as AppA
participant "NATS A" as NATSA
participant "Node A" as NodeA
participant "PubSubService A" as PubSubA
participant "StreamService A" as StreamA
participant "DB Peer A (oc-lib)" as DBA
participant "Node B" as NodeB
participant "StreamService B" as StreamB
participant "DB Peer B (oc-lib)" as DBB
UIA -> AppA: websocket subscription, sending {type:"partner", search:"gpu"} in query
AppA -> NATSA: Publish(PROPALGATION_EVENT, {PB_SEARCH, type:"partner", search:"gpu"})
NATSA -> NodeA: ListenNATS → PB_SEARCH (type "partner")
NodeA -> PubSubA: SearchPublishEvent(ctx, dt, "partner", user, "gpu")
PubSubA -> StreamA: SearchPartnersPublishEvent(dt, user, "gpu")
StreamA -> DBA: Search(PEER, PARTNER) + PeerIDS config
DBA --> StreamA: [Peer B, ...]
loop For every partner peer (Peer B)
StreamA -> StreamA: write(PeerID_B, addr_B, dt, user, payload, ProtocolSearchResource)
StreamA -> NodeB: TempStream /opencloud/resource/search/1.0
StreamA -> NodeB: stream.Encode(Event{Type:search, From:DID_A, DataType, Payload:{search:"gpu"}})
NodeB -> StreamB: HandleResponse(stream) → readLoop
StreamB -> StreamB: handleEvent(ProtocolSearchResource, evt)
StreamB -> StreamB: handleEventFromPartner(evt, ProtocolSearchResource)
alt evt.DataType == -1 (all resources)
StreamB -> DBB: Search(PEER, evt.From=DID_A)
note over StreamB: Local resolution (DB) or GetPeerRecord (indexer path)
StreamB -> StreamB: SendResponse(Peer A, evt)
StreamB -> DBB: Search(ALL_RESOURCES, filter{creator=B + public OR partner A + search:"gpu"})
DBB --> StreamB: [Resource1, Resource2, ...]
else evt.DataType specified
StreamB -> DBB: Search(DataType, filter{creator=B + access + search:"gpu"})
DBB --> StreamB: [Resource1, ...]
end
loop For every resource
StreamB -> StreamA: write(PeerID_A, addr_A, dt, resource JSON, ProtocolSearchResource)
StreamA -> StreamA: readLoop → handleEvent(ProtocolSearchResource, evt)
StreamA -> StreamA: retrieveResponse(evt)
StreamA -> NATSA: SetNATSPub(SEARCH_EVENT, {DataType, resource JSON})
NATSA -> AppA: Peer B results
AppA -> UIA: emit on websocket
end
end
@enduml

View File

@@ -0,0 +1,60 @@
@startuml
title Stream — Partner heartbeat and CRUD propagation Peer A ↔ Peer B
participant "DB Peer A (oc-lib)" as DBA
participant "StreamService A" as StreamA
participant "Node A" as NodeA
participant "Node B" as NodeB
participant "StreamService B" as StreamB
participant "NATS B" as NATSB
participant "DB Pair B (oc-lib)" as DBB
participant "NATS A" as NATSA
note over StreamA: Startup → connectToPartners()
StreamA -> DBA: Search(PEER, PARTNER) + PeerIDS config
DBA --> StreamA: [Peer B, ...]
StreamA -> NodeB: Connect (libp2p)
StreamA -> NodeB: NewStream /opencloud/resource/heartbeat/partner/1.0
StreamA -> NodeB: json.Encode(Heartbeat{Name_A, DID_A, PeerID_A, IndexersBinded_A})
NodeB -> StreamB: HandlePartnerHeartbeat(stream)
StreamB -> StreamB: CheckHeartbeat → bandwidth challenge
StreamB -> StreamA: Echo(payload)
StreamB -> StreamB: streams[ProtocolHeartbeatPartner][PeerID_A] = {DID_A, Expiry=now+10s}
StreamA -> StreamA: streams[ProtocolHeartbeatPartner][PeerID_B] = {DID_B, Expiry=now+10s}
note over StreamA,StreamB: Long-lived partner stream established\nGC every 8s (StreamService A)\nGC every 30s (StreamService B)
note over NATSA: Peer A receives PROPALGATION_EVENT{PB_DELETE, dt:"storage", payload:res}
NATSA -> NodeA: ListenNATS → ToPartnerPublishEvent(PB_DELETE, dt, user, payload)
NodeA -> StreamA: ToPartnerPublishEvent(ctx, PB_DELETE, dt_storage, user, payload)
alt dt == PEER (partner relation update)
StreamA -> StreamA: json.Unmarshal → peer.Peer B updated
alt B.Relation == PARTNER
StreamA -> NodeB: ConnectToPartner(B.StreamAddress)
note over StreamA,NodeB: Heartbeat reconnection if the relation is upgraded
else B.Relation != PARTNER
loop All protocols
StreamA -> StreamA: delete(streams[proto][PeerID_B])
StreamA -> NodeB: (streams closed)
end
end
else dt != PEER (ordinary resource)
StreamA -> DBA: Search(PEER, PARTNER) → [Peer B, ...]
loop For each partner protocol (Create/Update/Delete)
StreamA -> NodeB: write(PeerID_B, addr_B, dt, user, payload, ProtocolDeleteResource)
note over NodeB: /opencloud/resource/delete/1.0
NodeB -> StreamB: HandleResponse → readLoop
StreamB -> StreamB: handleEventFromPartner(evt, ProtocolDeleteResource)
StreamB -> NATSB: SetNATSPub(REMOVE_RESOURCE, {DataType, resource JSON})
NATSB -> DBB: Delete resource from DB B
end
end
@enduml

View File

@@ -0,0 +1,51 @@
@startuml
title Stream — Planner session : Peer A requests Peer B's plan
participant "App Peer A (oc-booking)" as AppA
participant "NATS A" as NATSA
participant "Node A" as NodeA
participant "StreamService A" as StreamA
participant "Node B" as NodeB
participant "StreamService B" as StreamB
participant "DB Peer B (oc-lib)" as DBB
participant "NATS B" as NATSB
' Open planner session
AppA -> NATSA: Publish(PROPALGATION_EVENT, {PB_PLANNER, peer_id:PeerID_B, payload:{}})
NATSA -> NodeA: ListenNATS → PB_PLANNER
NodeA -> NodeA: Unmarshal → {peer_id: PeerID_B, payload: {}}
NodeA -> StreamA: PublishCommon(nil, user, PeerID_B, ProtocolSendPlanner, {})
note over StreamA: WaitResponse=true, TTL=24h\nLong-lived stream to Peer B
StreamA -> NodeB: TempStream /opencloud/resource/planner/1.0
StreamA -> NodeB: json.Encode(Event{Type:planner, From:DID_A, Payload:{}})
NodeB -> StreamB: HandleResponse → readLoop(ProtocolSendPlanner)
StreamB -> StreamB: handleEvent(ProtocolSendPlanner, evt)
StreamB -> StreamB: sendPlanner(evt)
alt evt.Payload empty (initial request)
StreamB -> DBB: planner.GenerateShallow(AdminRequest)
DBB --> StreamB: plan (Peer B's shallow booking plan)
StreamB -> StreamA: PublishCommon(nil, user, DID_A, ProtocolSendPlanner, planJSON)
StreamA -> NodeA: json.Encode(Event{B's plan})
NodeA -> NATSA: (forwarded to AppA via a SEARCH_EVENT or PLANNER event)
NATSA -> AppA: Peer B's plan
else evt.Payload not empty (planner update)
StreamB -> StreamB: m["peer_id"] = evt.From (DID_A)
StreamB -> NATSB: SetNATSPub(PROPALGATION_EVENT, {PB_PLANNER, peer_id:DID_A, payload:plan})
NATSB -> DBB: (oc-booking processes the plan on NATS B)
end
' Close planner session
AppA -> NATSA: Publish(PROPALGATION_EVENT, {PB_CLOSE_PLANNER, peer_id:PeerID_B})
NATSA -> NodeA: ListenNATS → PB_CLOSE_PLANNER
NodeA -> NodeA: Unmarshal → {peer_id: PeerID_B}
NodeA -> StreamA: Mu.Lock()
NodeA -> StreamA: Streams[ProtocolSendPlanner][PeerID_B].Stream.Close()
NodeA -> StreamA: delete(Streams[ProtocolSendPlanner], PeerID_B)
NodeA -> StreamA: Mu.Unlock()
note over StreamA,NodeB: Planner stream closed — session ended
@enduml

View File

@@ -0,0 +1,61 @@
@startuml
title Native Indexer — Background loops (offload, DHT refresh, stream GC)
participant "Indexer A (registered)" as IndexerA
participant "Indexer B (registered)" as IndexerB
participant "Native Indexer" as Native
participant "DHT Kademlia" as DHT
participant "Node A (responsible peer)" as NodeA
note over Native: runOffloadLoop — every 30s
loop Every 30s
Native -> Native: len(responsiblePeers) > 0 ?
note over Native: responsiblePeers = peers for which\nthe native self-delegated (no indexer available)
alt Responsible peers exist (e.g. Node A)
Native -> Native: reachableLiveIndexers()
note over Native: Filter liveIndexers by TTL\nping PeerIsAlive for each candidate
alt Indexers A and B now reachable
Native -> Native: responsiblePeers = {} (releases Node A and the others)
note over Native: Node A will reconnect\non the next ConnectToNatives
else Still no indexer
note over Native: Node A stays under the native's responsibility
end
end
end
note over Native: refreshIndexersFromDHT — every 30s
loop Every 30s
Native -> Native: Collect all knownPeerIDs\n= {PeerID_A, PeerID_B, ...}
loop For each known PeerID
Native -> Native: liveIndexers[PeerID] still fresh ?
alt Entry missing or expired
Native -> DHT: SearchValue(ctx 5s, "/indexer/"+PeerID)
DHT --> Native: channel of bytes
loop For each DHT result
Native -> Native: Unmarshal → liveIndexerEntry
Native -> Native: Keep the best entry (most recent valid ExpiresAt)
end
Native -> Native: liveIndexers[PeerID] = best entry
note over Native: "native: refreshed indexer from DHT"
end
end
end
note over Native: LongLivedStreamRecordedService GC — every 30s
loop Every 30s
Native -> Native: gc() — lock StreamRecords[Heartbeat]
loop For each StreamRecord (Indexer A, B, ...)
Native -> Native: now > rec.Expiry ?\nOR timeSince(LastSeen) > 2× remaining TTL ?
alt Peer expired (e.g. Indexer B gone)
Native -> Native: Remove Indexer B from ALL protocol maps
note over Native: Heartbeat stream closed\nliveIndexers[PeerID_B] will expire naturally
end
end
end
note over IndexerA: Indexer A keeps heartbeating normally\nand stays in StreamRecords + liveIndexers
@enduml

View File

@@ -0,0 +1,49 @@
@startuml 15_archi_config_nominale
skinparam componentStyle rectangle
skinparam backgroundColor white
skinparam defaultTextAlignment center
title C1 — Nominal topology\n2 natives · 2 indexers · 2 nodes
package "Layer 1 — Native mesh" #E8F4FD {
component "Native A\n(authoritative hub)" as NA #AED6F1
component "Native B\n(authoritative hub)" as NB #AED6F1
NA <--> NB : heartbeat /opencloud/heartbeat/1.0 (20s)\n+ gossip PubSub oc-indexer-registry
}
package "Layer 2 — Indexers" #E9F7EF {
component "Indexer A\n(DHT server)" as IA #A9DFBF
component "Indexer B\n(DHT server)" as IB #A9DFBF
}
package "Layer 3 — Nodes" #FEFBD8 {
component "Node 1" as N1 #FAF0BE
component "Node 2" as N2 #FAF0BE
}
' Registrations (one-shot, 60s)
IA -[#117A65]--> NA : signed subscribe (60s)\n/opencloud/native/subscribe/1.0
IA -[#117A65]--> NB : signed subscribe (60s)
IB -[#117A65]--> NA : signed subscribe (60s)
IB -[#117A65]--> NB : signed subscribe (60s)
' Indexer → native heartbeats (long-lived, 20s)
IA -[#27AE60]..> NA : heartbeat (20s)
IA -[#27AE60]..> NB : heartbeat (20s)
IB -[#27AE60]..> NA : heartbeat (20s)
IB -[#27AE60]..> NB : heartbeat (20s)
' Node → indexer heartbeats (long-lived, 20s)
N1 -[#E67E22]--> IA : heartbeat long-lived (20s)\n/opencloud/heartbeat/1.0
N1 -[#E67E22]--> IB : heartbeat long-lived (20s)
N2 -[#E67E22]--> IA : heartbeat long-lived (20s)
N2 -[#E67E22]--> IB : heartbeat long-lived (20s)
note as Legend
Legend :
──► one-shot registration (signed)
···► long-lived heartbeat (20s)
──► node → indexer heartbeat (20s)
end note
@enduml

View File

@@ -0,0 +1,38 @@
@startuml 16_archi_config_seed
skinparam componentStyle rectangle
skinparam backgroundColor white
skinparam defaultTextAlignment center
title C2 — Seed mode (no native)\nIndexerAddresses only · AdmittedAt = zero
package "Layer 2 — Seed indexers" #E9F7EF {
component "Indexer A\n(seed, AdmittedAt=0)" as IA #A9DFBF
component "Indexer B\n(seed, AdmittedAt=0)" as IB #A9DFBF
}
package "Layer 3 — Nodes" #FEFBD8 {
component "Node 1" as N1 #FAF0BE
component "Node 2" as N2 #FAF0BE
}
note as NNative #FFDDDD
No native configured.
AdmittedAt = zero → IsStableVoter() = false
Phase 2 without voters: Phase 1 result kept directly.
Risk D20: trust circularity (seeds validate each other).
end note
' Node → seed indexer heartbeats
N1 -[#E67E22]--> IA : heartbeat long-lived (20s)
N1 -[#E67E22]--> IB : heartbeat long-lived (20s)
N2 -[#E67E22]--> IA : heartbeat long-lived (20s)
N2 -[#E67E22]--> IB : heartbeat long-lived (20s)
note bottom of IA
After 2s : async goroutine
fetchNativeFromIndexers → ?
If a native is found → ConnectToNatives (upgrade to C1)
Otherwise → pure indexer mode (D20 active)
end note
@enduml

View File

@@ -0,0 +1,63 @@
@startuml 17_startup_consensus_phase1_phase2
title Startup with natives — Phase 1 (admission) + Phase 2 (liveness)
participant "Node / Indexer\n(caller)" as Caller
participant "Native A" as NA
participant "Native B" as NB
participant "Indexer A" as IA
participant "Indexer B" as IB
note over Caller: ConnectToNatives()\nNativeIndexerAddresses configured
== Step 0 : heartbeat to the native mesh ==
Caller -> NA: SendHeartbeat /opencloud/heartbeat/1.0 (long-lived goroutine)
Caller -> NB: SendHeartbeat /opencloud/heartbeat/1.0 (long-lived goroutine)
== Step 1 : parallel pool fetch ==
par Parallel fetch (timeout 6s)
Caller -> NA: GET /opencloud/native/indexers/1.0\nGetIndexersRequest{Count: max, FillRates requested}
NA -> NA: reachableLiveIndexers()\nsorted by w(F) = fillRate×(1−fillRate)
NA --> Caller: GetIndexersResponse{Indexers:[IA,IB], FillRates:{IA:0.3, IB:0.6}}
else
Caller -> NB: GET /opencloud/native/indexers/1.0
NB -> NB: reachableLiveIndexers()
NB --> Caller: GetIndexersResponse{Indexers:[IA,IB], FillRates:{IA:0.3, IB:0.6}}
end par
note over Caller: Merge + dedup → candidates = [IA, IB]\nisFallback = false
== Step 2a : Phase 1 — Native admission (clientSideConsensus) ==
par Parallel consensus (timeout 3s per native, 4s total)
Caller -> NA: /opencloud/native/consensus/1.0\nConsensusRequest{Candidates:[IA,IB]}
NA -> NA: cross-check against liveIndexers
NA --> Caller: ConsensusResponse{Trusted:[IA,IB], Suggestions:[]}
else
Caller -> NB: /opencloud/native/consensus/1.0\nConsensusRequest{Candidates:[IA,IB]}
NB -> NB: cross-check against liveIndexers
NB --> Caller: ConsensusResponse{Trusted:[IA], Suggestions:[IC]}
end par
note over Caller: IA → 2/2 votes → confirmed ✓\nIB → 1/2 votes → refused ✗\nadmittedAt = time.Now()
== Step 2b : Phase 2 — Liveness vote (indexerLivenessVote) ==
note over Caller: Look for stable voters in StaticIndexerMeta\n(AdmittedAt != zero, age >= MinStableAge=2min)
alt Stable voters available
par Phase 2 parallel (timeout 3s)
Caller -> IA: /opencloud/indexer/consensus/1.0\nIndexerConsensusRequest{Candidates:[IA]}
IA -> IA: check StreamRecords[ProtocolHB][candidate]\nLastSeen ≤ 2×60s && LastScore ≥ 30
IA --> Caller: IndexerConsensusResponse{Alive:[IA]}
end par
note over Caller: IA confirmed alive by quorum > 0.5
else No stable voter (first startup)
note over Caller: Phase 1 result kept directly\n(no voter has reached MinStableAge)
end
== Step 3 : StaticIndexers replacement ==
Caller -> Caller: replaceStaticIndexers(pool={IA}, admittedAt)\nStaticIndexerMeta[IA].AdmittedAt = time.Now()
== Step 4 : long-lived heartbeat to the pool ==
Caller -> IA: SendHeartbeat /opencloud/heartbeat/1.0 (long-lived goroutine)
note over Caller: Pool active. NudgeIndexerHeartbeat()
@enduml

View File

@@ -0,0 +1,51 @@
@startuml 18_startup_seed_discovers_native
title C2 → C1 — Seed discovers a native (async upgrade)
participant "Node / Indexer\\n(seed mode)" as Caller
participant "Indexer A\\n(seed)" as IA
participant "Indexer B\\n(seed)" as IB
participant "Native A\\n(discovered)" as NA
note over Caller: Startup without NativeIndexerAddresses\\nStaticIndexers = [IA, IB] (AdmittedAt=0)
== Initial seed phase ==
Caller -> IA: SendHeartbeat /opencloud/heartbeat/1.0 (long-lived goroutine)
Caller -> IB: SendHeartbeat /opencloud/heartbeat/1.0 (long-lived goroutine)
note over Caller: Pool active in seed mode.\\nIsStableVoter() = false (AdmittedAt=0)\\nPhase 2 without voters → Phase 1 kept.
== Async goroutine after 2s ==
note over Caller: time.Sleep(2s)\\nfetchNativeFromIndexers()
Caller -> IA: GET /opencloud/indexer/natives/1.0
IA --> Caller: GetNativesResponse{Natives:[NA]}
note over Caller: Native discovered : NA\\nCall ConnectToNatives([NA])
== Upgrade to nominal mode (ConnectToNatives) ==
Caller -> NA: SendHeartbeat /opencloud/heartbeat/1.0 (long-lived goroutine)
par Fetch pool from the native (timeout 6s)
Caller -> NA: GET /opencloud/native/indexers/1.0\\nGetIndexersRequest{Count: max}
NA -> NA: reachableLiveIndexers()\\nsorted by w(F) = fillRate×(1−fillRate)
NA --> Caller: GetIndexersResponse{Indexers:[IA,IB], FillRates:{IA:0.4, IB:0.6}}
end par
note over Caller: candidates = [IA, IB], isFallback = false
par Consensus Phase 1 (timeout 3s)
Caller -> NA: /opencloud/native/consensus/1.0\\nConsensusRequest{Candidates:[IA,IB]}
NA -> NA: croiser avec liveIndexers
NA --> Caller: ConsensusResponse{Trusted:[IA,IB], Suggestions:[]}
end par
note over Caller: IA ✓ IB ✓ (1/1 vote)\\nadmittedAt = time.Now()
note over Caller: Aucun votant stable (AdmittedAt vient d'être posé)\\nPhase 2 sautée → Phase 1 conservée directement
== Remplacement pool ==
Caller -> Caller: replaceStaticIndexers(pool={IA,IB}, admittedAt)\\nStaticIndexerMeta[IA].AdmittedAt = time.Now()\\nStaticIndexerMeta[IB].AdmittedAt = time.Now()
note over Caller: Pool upgradé dans la map partagée StaticIndexers.\\nLa goroutine heartbeat existante (démarrée en mode seed)\\ndétecte les nouveaux membres sur le prochain tick (20s).\\nAucune nouvelle goroutine créée.\\nIsStableVoter() deviendra true après MinStableAge (2min).\\nD20 (circularité seeds) éliminé.
@enduml
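The sorting step above, `w(F) = fillRate×(1-fillRate)`, favors indexers that are neither empty nor saturated, peaking at a fill rate of 0.5. A sketch of that ranking follows; `rankIndexers` and the map shape are assumptions for illustration, not the real `reachableLiveIndexers()` code:

```go
package main

import (
	"fmt"
	"sort"
)

// weight implements w(F) = F*(1-F): maximal at F=0.5, zero at F=0 and F=1.
func weight(fillRate float64) float64 { return fillRate * (1 - fillRate) }

// rankIndexers orders indexer IDs by descending w(F).
func rankIndexers(fillRates map[string]float64) []string {
	ids := make([]string, 0, len(fillRates))
	for id := range fillRates {
		ids = append(ids, id)
	}
	sort.Slice(ids, func(i, j int) bool {
		return weight(fillRates[ids[i]]) > weight(fillRates[ids[j]])
	})
	return ids
}

func main() {
	// w(0.5)=0.25 > w(0.8)=0.16 > w(0.95)=0.0475
	fmt.Println(rankIndexers(map[string]float64{"IA": 0.5, "IB": 0.8, "IC": 0.95})) // → [IA IB IC]
}
```

The parabola shape means a nearly full indexer is deprioritized just like a nearly empty one, spreading load toward the middle of the range.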

View File

@@ -0,0 +1,55 @@
@startuml failure_indexer_crash
title Indexer Failure → replenish from a Native
participant "Node" as N
participant "Indexer A (alive)" as IA
participant "Indexer B (crashed)" as IB
participant "Native A" as NA
participant "Native B" as NB
note over N: Active pool: Indexers = [IA, IB]\\nLong-lived heartbeats active to IA & IB
== IB Failure ==
IB ->x N: heartbeat fails (sendHeartbeat err)
note over N: doTick() in SendHeartbeat detects the failure\\n→ delete(Indexers[IB])\\n→ delete(IndexerMeta[IB])\\nThe single heartbeat goroutine continues
N -> N: go replenishIndexersFromNative(need=1)
note over N: Pool reduced to 1 indexer.\\nReplenish runs in its own goroutine.
== Replenish from natives ==
par Fetch pool (timeout 6s)
N -> NA: GET /opencloud/native/indexers/1.0\\nGetIndexersRequest{Count: max}
NA -> NA: reachableLiveIndexers()\\n(IB absent: its heartbeat expired)
NA --> N: GetIndexersResponse{Indexers:[IA,IC], FillRates:{IA:0.4,IC:0.2}}
else
N -> NB: GET /opencloud/native/indexers/1.0
NB --> N: GetIndexersResponse{Indexers:[IA,IC]}
end par
note over N: Merge + dedup → candidates = [IA, IC]\\n(IA already in pool → IC is the only new candidate)
par Consensus Phase 1 (timeout 4s)
N -> NA: /opencloud/native/consensus/1.0\\nConsensusRequest{Candidates:[IA,IC]}
NA --> N: ConsensusResponse{Trusted:[IA,IC]}
else
N -> NB: /opencloud/native/consensus/1.0
NB --> N: ConsensusResponse{Trusted:[IA,IC]}
end par
note over N: IC → 2/2 votes → admit\\nadmittedAt = time.Now()
par Phase 2 — liveness vote (if stable voters exist)
N -> IA: /opencloud/indexer/consensus/1.0\\nIndexerConsensusRequest{Candidates:[IC]}
IA -> IA: StreamRecords[ProtocolHB][IC]\\nLastSeen ≤ 120s && LastScore ≥ 30
IA --> N: IndexerConsensusResponse{Alive:[IC]}
end par
note over N: IC confirmed alive → add to pool
N -> N: replaceStaticIndexers(pool={IA,IC})
N -> IC: SendHeartbeat /opencloud/heartbeat/1.0 (long-lived goroutine)
note over N: Pool restored to 2 indexers.
@enduml

View File

@@ -0,0 +1,51 @@
@startuml failure_indexers_native_falback
title Indexer failures → native IsSelfFallback
participant "Node" as N
participant "Indexer A (crashed)" as IA
participant "Indexer B (crashed)" as IB
participant "Native A" as NA
participant "Native B" as NB
note over N: Active Pool : Indexers = [IA, IB]
== Successive Failures on IA & IB ==
IA ->x N: heartbeat failure (sendHeartbeat err)
IB ->x N: heartbeat failure (sendHeartbeat err)
note over N: doTick() in SendHeartbeat detects the failures\\n→ delete(StaticIndexers[IA]), delete(StaticIndexers[IB])\\n→ delete(StaticIndexerMeta[IA/IB])\\nThe single heartbeat goroutine continues.
N -> N: go replenishIndexersFromNative(need=2)
== Replenish attempt — natives switch to self-fallback mode ==
par Fetch from natives (timeout 6s)
N -> NA: GET /opencloud/native/indexers/1.0
NA -> NA: reachableLiveIndexers() → 0 live indexers\\nFallback: includes itself (IsSelfFallback=true)
NA --> N: GetIndexersResponse{Indexers:[NA_addr], IsSelfFallback:true}
else
N -> NB: GET /opencloud/native/indexers/1.0
NB --> N: GetIndexersResponse{Indexers:[NB_addr], IsSelfFallback:true}
end par
note over N: isFallback=true → resolvePool skips consensus\\nadmittedAt = time.Time{} (zero)\\nStaticIndexers = {NA_addr} (native as fallback)
N -> NA: SendHeartbeat /opencloud/heartbeat/1.0\\n(native as temporary fallback indexer)
note over NA: responsiblePeers[N] registered.\\nrunOffloadLoop watches for real indexers.
== IA comes back → runOffloadLoop on the native side ==
IA -> NA: /opencloud/native/subscribe/1.0\\nIndexerRegistration{FillRate: 0}
note over NA: liveIndexers[IA] updated.\\nrunOffloadLoop detects an available real indexer\\nand migrates N from NA to IA.
== Replenish on next heartbeat tick ==
N -> NA: GET /opencloud/native/indexers/1.0
NA --> N: GetIndexersResponse{Indexers:[IA], IsSelfFallback:false}
note over N: isFallback=false → Classic Phase 1 + Phase 2
N -> N: replaceStaticIndexers(pool={IA}, admittedAt)
N -> IA: SendHeartbeat /opencloud/heartbeat/1.0
note over N: Pool restored. The native steps back out of the indexer role.
@enduml
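The self-fallback decision above can be sketched as a minimal Go function: with no live indexer left, the native offers its own address as a temporary pool. The response type and `getIndexers` name are assumptions for illustration:

```go
package main

import "fmt"

// GetIndexersResponse is a simplified stand-in for the native's answer
// on /opencloud/native/indexers/1.0 (assumption: string addresses only).
type GetIndexersResponse struct {
	Indexers       []string
	IsSelfFallback bool
}

// getIndexers returns the live pool, or the native itself when the pool
// is empty, flagged IsSelfFallback so the caller skips consensus.
func getIndexers(live []string, selfAddr string) GetIndexersResponse {
	if len(live) == 0 {
		return GetIndexersResponse{Indexers: []string{selfAddr}, IsSelfFallback: true}
	}
	return GetIndexersResponse{Indexers: live}
}

func main() {
	fmt.Println(getIndexers(nil, "NA_addr"))           // no indexer alive → self-fallback
	fmt.Println(getIndexers([]string{"IA"}, "NA_addr")) // normal answer
}
```

The flag is what lets the node skip Phase 1/Phase 2 and heartbeat the native directly, as the diagram shows (`isFallback=true → resolvePool` skips consensus).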

View File

@@ -0,0 +1,46 @@
@startuml failure_native_one_down
title Native failure, with one still alive
participant "Indexer A" as IA
participant "Indexer B" as IB
participant "Native A (crashed)" as NA
participant "Native B (alive)" as NB
participant "Node" as N
note over IA, NB: Nominal state: IA and IB heartbeat to NA & NB
== Native A Failure ==
NA ->x IA: stream reset
NA ->x IB: stream reset
NA ->x N: stream reset (heartbeat Node → NA)
== Indexers side : replenishNativesFromPeers ==
note over IA: SendHeartbeat(NA) detects the reset\\nAfterDelete(NA)\\nStaticNatives = [NB] (still 1)
IA -> IA: replenishNativesFromPeers()\\nphase 1: fetchNativeFromNatives
IA -> NB: GET /opencloud/native/peers/1.0
NB --> IA: GetPeersResponse{Peers:[NC]} /' new native, if one is known '/
alt NC available
IA -> NC: SendHeartbeat /opencloud/heartbeat/1.0\\nSubscribe /opencloud/native/subscribe/1.0
note over IA: StaticNatives = [NB, NC]\\nNative pool restored.
else No native peer known
IA -> IA: fetchNativeFromIndexers()\\nAsk every indexer for its known natives
IB --> IA: GetNativesResponse{Natives:[]} /' IB also only knows NB '/
note over IA: No second native found.\\nStaticNatives = [NB] (degraded but alive).
end
== Node side: indexer pool stays alive ==
note over N: Node heartbeats to IA & IB.\\nThe NA failure does not affect the indexer pool.\\nFuture consensus uses NB alone (1/1 vote = quorum OK).
N -> NB: /opencloud/native/consensus/1.0\\nConsensusRequest{Candidates:[IA,IB]}
NB --> N: ConsensusResponse{Trusted:[IA,IB]}
note over N: Consensus: 1/1 live native → admit.\\nThe consensus floor auto-downgrades (majority of live natives).
== NB side : heartbeat to NA fails ==
note over NB: EnsureNativePeers / SendHeartbeat to NA\\nfails (sendHeartbeat err)\\n→ delete(StaticNatives[NA])\\nreplenishNativesFromPeers(NA) triggers
note over NB: Native mesh shrunk to NB alone.\\nDegraded but functional.
@enduml

View File

@@ -0,0 +1,60 @@
@startuml 22_failure_both_natives
title F4 — Both natives down → pre-validated fallback pool
participant "Node" as N
participant "Indexer A\\n(alive)" as IA
participant "Indexer B\\n(alive)" as IB
participant "Native A\\n(crashed)" as NA
participant "Native B\\n(crashed)" as NB
note over N: Active pool: StaticIndexers = [IA, IB]\\nStaticNatives = [NA, NB]\\nAdmittedAt[IA] and AdmittedAt[IB] set (stable)
== NA and NB fail simultaneously ==
NA ->x N: stream reset
NB ->x N: stream reset
N -> N: AfterDelete(NA) + AfterDelete(NB)\\nStaticNatives = {} (empty)
== replenishNativesFromPeers (no result) ==
N -> N: fetchNativeFromNatives() → no live native
N -> IA: GET /opencloud/indexer/natives/1.0
IA --> N: GetNativesResponse{Natives:[NA,NB]}
note over N: NA and NB known but unreachable.\\nNo new native found.
== Fallback: indexer pool kept ==
note over N: isFallback = true\\nStaticIndexers kept as-is [IA, IB]\\n(last validated pool, AdmittedAt != zero)\\nRisk D19 mitigated: native quorum = 0 → fallback accepted
note over N: Heartbeats to IA and IB keep running.\\nIndexer pool operational without natives.
N -> IA: SendHeartbeat /opencloud/heartbeat/1.0 (continues)
N -> IB: SendHeartbeat /opencloud/heartbeat/1.0 (continues)
== retryLostNative (30s ticker) ==
loop every 30s
N -> N: retryLostNative()\\ntries to reconnect NA and NB
N -> NA: dial (fails)
N -> NB: dial (fails)
note over N: Retry without result.\\nIndexer pool kept in fallback mode.
end
== Natives come back ==
NA -> NA: restart
NB -> NB: restart
N -> NA: dial (succeeds)
N -> NA: SendHeartbeat /opencloud/heartbeat/1.0
N -> NB: SendHeartbeat /opencloud/heartbeat/1.0
note over N: StaticNatives = [NA, NB] restored\\nisFallback = false
== Indexer pool re-consensus (optional) ==
par Consensus Phase 1
N -> NA: /opencloud/native/consensus/1.0\\nConsensusRequest{Candidates:[IA,IB]}
NA --> N: ConsensusResponse{Trusted:[IA,IB]}
else
N -> NB: /opencloud/native/consensus/1.0
NB --> N: ConsensusResponse{Trusted:[IA,IB]}
end par
note over N: Pool [IA,IB] reconfirmed.\\nisFallback = false. AdmittedAt[IA,IB] refreshed.
@enduml

View File

@@ -0,0 +1,63 @@
@startuml 23_failure_native_plus_indexer
title F5 — Combined failure: 1 native + 1 indexer
participant "Node" as N
participant "Indexer A\\n(alive)" as IA
participant "Indexer B\\n(crashed)" as IB
participant "Native A\\n(alive)" as NA
participant "Native B\\n(crashed)" as NB
note over N: Nominal pool: StaticIndexers=[IA,IB], StaticNatives=[NA,NB]
== NB and IB fail simultaneously ==
NB ->x N: stream reset
IB ->x N: stream reset
N -> N: AfterDelete(NB) — StaticNatives = [NA]
N -> N: AfterDelete(IB) — StaticIndexers = [IA]
== Native replenish (1 alive) ==
N -> N: replenishNativesFromPeers()
N -> NA: GET /opencloud/native/peers/1.0
NA --> N: GetPeersResponse{Peers:[]} /' NB was the only peer, now gone '/
note over N: No alternative native.\\nStaticNatives = [NA] — degraded.
== Indexer replenish from NA ==
par Fetch pool (timeout 6s)
N -> NA: GET /opencloud/native/indexers/1.0
NA -> NA: reachableLiveIndexers()\\n(IB absent — heartbeat expired)
NA --> N: GetIndexersResponse{Indexers:[IA,IC], FillRates:{IA:0.5,IC:0.3}}
end par
note over N: candidates = [IA, IC]
par Consensus Phase 1 — single live native (timeout 3s)
N -> NA: /opencloud/native/consensus/1.0\\nConsensusRequest{Candidates:[IA,IC]}
NA --> N: ConsensusResponse{Trusted:[IA,IC]}
end par
note over N: IC → 1/1 vote → admitted (quorum over live voters)\\nadmittedAt = time.Now()
par Phase 2 liveness vote
N -> IA: /opencloud/indexer/consensus/1.0\\nIndexerConsensusRequest{Candidates:[IC]}
IA -> IA: StreamRecords[ProtocolHB][IC]\\nLastSeen ≤ 120s && LastScore ≥ 30
IA --> N: IndexerConsensusResponse{Alive:[IC]}
end par
N -> N: replaceStaticIndexers(pool={IA,IC})
N -> IC: SendHeartbeat /opencloud/heartbeat/1.0
note over N: Pool restored to [IA,IC].\\nDegraded mode: only 1 native.\\nretryLostNative(NB) active (30s ticker).
== retryLostNative for NB ==
loop every 30s
N -> NB: dial (fails)
end
NB -> NB: restart
NB -> NA: heartbeat (native mesh rebuilt)
N -> NB: dial (succeeds)
N -> NB: SendHeartbeat /opencloud/heartbeat/1.0
note over N: StaticNatives = [NA,NB] restored.\\nBack to nominal mode.
@enduml

View File

@@ -0,0 +1,45 @@
@startuml 24_failure_retry_lost_native
title F6 — retryLostNative: native reconnection after a network outage
participant "Node / Indexer" as Caller
participant "Native A\\n(alive)" as NA
participant "Native B\\n(unstable network)" as NB
note over Caller: StaticNatives = [NA, NB]\\nActive heartbeats to NA and NB
== Transient network outage toward NB ==
NB ->x Caller: stream reset (network timeout)
Caller -> Caller: AfterDelete(NB)\\nStaticNatives = [NA]\\nlostNatives.Store(NB.addr)
== replenishNativesFromPeers — phase 1 ==
Caller -> NA: GET /opencloud/native/peers/1.0
NA --> Caller: GetPeersResponse{Peers:[NB]}
note over Caller: NB known to NA, direct reconnection attempt
Caller -> NB: dial (fails — network still down)
note over Caller: Connection impossible.\\nFalls back to retryLostNative()
== retryLostNative: 30s ticker ==
loop every 30s while NB is missing
Caller -> Caller: retryLostNative()\\nwalks lostNatives
Caller -> NB: StartNativeRegistration (dial + heartbeat + subscribe)
NB --> Caller: dial fails
note over Caller: Retry logged. Next attempt in 30s.
end
== Network restored ==
note over NB: Network restored\\nNB reachable again
Caller -> NB: StartNativeRegistration\\ndial (succeeds)
Caller -> NB: SendHeartbeat /opencloud/heartbeat/1.0 (long-lived goroutine)
Caller -> NB: /opencloud/native/subscribe/1.0\\nIndexerRegistration{FillRate: fillRateFn()}
NB --> Caller: subscribe ack
Caller -> Caller: lostNatives.Delete(NB.addr)\\nStaticNatives = [NA, NB] restored
note over Caller: Back to nominal mode.\\nnativeHeartbeatOnce not used (a goroutine is already active for NA).\\nA new SendHeartbeat goroutine is started for NB only.
@enduml

View File

@@ -0,0 +1,42 @@
@startuml 25_failure_node_gc
title F7 — Node crash → indexer GC + AfterDelete
participant "Node\\n(crashed)" as N
participant "Indexer A" as IA
participant "Indexer B" as IB
participant "Native A" as NA
note over N, NA: Nominal state: N was heartbeating to IA and IB
== Node crash ==
N ->x IA: stream reset (heartbeat cut)
N ->x IB: stream reset (heartbeat cut)
== GC on Indexer A ==
note over IA: HandleHeartbeat: stream reset detected\\nStreamRecords[ProtocolHB][N].LastSeen frozen
loop GC ticker (30s) — StartGC(30*time.Second)
IA -> IA: gc()\\nnow.After(Expiry) where Expiry = lastHBTime + 2min\\n→ 2min without heartbeat → eviction
IA -> IA: delete(StreamRecords[ProtocolHB][N])\\nAfterDelete(N, name, did) called outside the lock
note over IA: N removed from the live registry.\\nFillRate recomputed (n-1 / maxNodes).
end
== Impact on scoring / fill rate ==
note over IA: FillRate drops\\nThe next subscribe to NA carries the updated FillRate
IA -> NA: /opencloud/native/subscribe/1.0\\nIndexerRegistration{FillRate: 0.3} /' was 0.5 '/
NA -> NA: liveIndexerEntry[IA].FillRate = 0.3\\nRouting priority recomputed: w(0.3) = 0.21
== Impact on Phase 2 (indexerLivenessVote) ==
note over IA: If another node requests consensus,\\nN is no longer in StreamRecords.\\nN is absent from the Alive[] response.
note over IB: The same GC runs on IB.\\nN removed from StreamRecords[ProtocolHB].
== Possible node reconnection ==
N -> N: restart
N -> IA: SendHeartbeat /opencloud/heartbeat/1.0\\nHeartbeat{Score: X, IndexersBinded: 2}
IA -> IA: HandleHeartbeat → new UptimeTracker(FirstSeen=now)\\nStreamRecords[ProtocolHB][N] recreated
note over IA: N is back with a fresh FirstSeen.\\ndynamicMinScore stays high while age < 24h.
@enduml

75
docs/diagrams/README.md Normal file
View File

@@ -0,0 +1,75 @@
# OC-Discovery — Architecture and sequence diagrams
All files are in [PlantUML](https://plantuml.com/) format.
They can be rendered with VS Code (PlantUML extension), IntelliJ, or [plantuml.com/plantuml](https://www.plantuml.com/plantuml/uml/).
## Sequence diagrams (internal flows)
| File | Description |
|---------|-------------|
| `01_node_init.puml` | Full Node initialization (libp2p host, GossipSub, indexers, StreamService, PubSubService, NATS) |
| `02_node_claim.puml` | Node registration with the indexers (`claimInfo` + `publishPeerRecord`) |
| `03_indexer_heartbeat.puml` | Heartbeat protocol with the 5-component score (U/B/D/L/F), UptimeTracker, dynamicMinScore |
| `04_indexer_publish.puml` | Publishing a `PeerRecord` to the indexer → DHT |
| `05_indexer_get.puml` | Resolving a peer through the indexer (`GetPeerRecord` + `handleNodeGet` + DHT) |
| `06_native_registration.puml` | Indexer registration with the Native (FillRate, signature, 90s TTL, unsubscribe) |
| `07_native_get_consensus.puml` | `ConnectToNatives`: fetch pool + Phase 1 (clientSideConsensus) + Phase 2 (indexerLivenessVote) |
| `08_nats_create_resource.puml` | NATS `CREATE_RESOURCE` handler: connecting/disconnecting a partner |
| `09_nats_propagation.puml` | NATS `PROPALGATION_EVENT` handler: delete, considers, planner, search |
| `10_pubsub_search.puml` | Global gossip search (type `"all"`) via GossipSub |
| `11_stream_search.puml` | Direct stream search (type `"known"` or `"partner"`) |
| `12_partner_heartbeat.puml` | Partner heartbeat + CRUD propagation to partners |
| `13_planner_flow.puml` | Planner session (open, exchange, close) |
| `14_native_offload_gc.puml` | Native Indexer background loops (offload, DHT refresh, GC) |
## Topology and failure-flow diagrams
### Network configurations
| File | Description |
|---------|-------------|
| `15_archi_config_nominale.puml` | C1 — Nominal topology: 2 natives · 2 indexers · 2 nodes, all flows |
| `16_archi_config_seed.puml` | C2 — Seed mode without natives: indexers at AdmittedAt=0, risk D20 active |
### Startup flows
| File | Description |
|---------|-------------|
| `17_startup_consensus_phase1_phase2.puml` | Nominal startup: Phase 1 (native admission) + Phase 2 (liveness vote) |
| `18_startup_seed_discovers_native.puml` | Seed → nominal upgrade: an async goroutine discovers a native through the indexer |
### Failure flows
| File | Code | Description |
|---------|------|-------------|
| `19_failure_indexer_crash.puml` | F1 | 1 indexer down → replenish from a native → IC admitted |
| `20_failure_both_indexers_selfdelegate.puml` | F2 | 2 indexers down → native `IsSelfFallback=true`, runOffloadLoop |
| `21_failure_native_one_down.puml` | F3 | 1 native down → 1/1 quorum sufficient, degraded mode |
| `22_failure_both_natives.puml` | F4 | 2 natives down → pre-validated fallback pool, retryLostNative |
| `23_failure_native_plus_indexer.puml` | F5 | Combined failure: 1 native + 1 indexer → double replenish |
| `24_failure_retry_lost_native.puml` | F6 | Transient network outage → retryLostNative (30s ticker) |
| `25_failure_node_gc.puml` | F7 | Node crash → indexer GC (120s), AfterDelete, fill rate recomputed |
## libp2p protocols used (full reference)
| Protocol | Description |
|-----------|-------------|
| `/opencloud/heartbeat/1.0` | Universal heartbeat: node→indexer, indexer→native, native→native (long-lived) |
| `/opencloud/probe/1.0` | Bandwidth probe (echo; measures latency + throughput) |
| `/opencloud/resource/heartbeat/partner/1.0` | Node ↔ partner heartbeat (long-lived) |
| `/opencloud/record/publish/1.0` | `PeerRecord` publication to an indexer |
| `/opencloud/record/get/1.0` | `GetPeerRecord` request to an indexer |
| `/opencloud/native/subscribe/1.0` | Indexer registration with the native (+ FillRate) |
| `/opencloud/native/unsubscribe/1.0` | Explicit indexer → native deregistration |
| `/opencloud/native/indexers/1.0` | Indexer pool request to the native (sorted by w(F)=F×(1-F)) |
| `/opencloud/native/consensus/1.0` | Phase 1: indexer pool validation (native majority vote) |
| `/opencloud/native/peers/1.0` | Request for known native peers (native mesh replenish) |
| `/opencloud/indexer/natives/1.0` | Request for the natives known by an indexer |
| `/opencloud/indexer/consensus/1.0` | Phase 2: liveness vote (LastSeen ≤ 120s && LastScore ≥ 30) |
| `/opencloud/resource/search/1.0` | Resource search between peers |
| `/opencloud/resource/create/1.0` | Resource-creation propagation to a partner |
| `/opencloud/resource/update/1.0` | Resource-update propagation to a partner |
| `/opencloud/resource/delete/1.0` | Resource-deletion propagation to a partner |
| `/opencloud/resource/planner/1.0` | Planner (booking) session |
| `/opencloud/resource/verify/1.0` | Resource signature verification |
| `/opencloud/resource/considers/1.0` | Forwarding an execution "considers" |

191
go.mod
View File

@@ -1,64 +1,177 @@
module oc-discovery
-go 1.22.0
+go 1.25.0
require (
-cloud.o-forge.io/core/oc-lib v0.0.0-20240902132116-fba1608edb70
-github.com/beego/beego v1.12.13
-github.com/beego/beego/v2 v2.3.0
-github.com/go-redis/redis v6.15.9+incompatible
-github.com/goraz/onion v0.1.3
-github.com/smartystreets/goconvey v1.7.2
-github.com/tidwall/gjson v1.17.3
+cloud.o-forge.io/core/oc-lib v0.0.0-20260304145747-e03a0d3dd0aa
+github.com/libp2p/go-libp2p v0.47.0
+github.com/libp2p/go-libp2p-record v0.3.1
+github.com/multiformats/go-multiaddr v0.16.1
)
require (
github.com/beego/beego/v2 v2.3.8 // indirect
github.com/benbjohnson/clock v1.3.5 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/davidlazar/go-crypto v0.0.0-20200604182044-b73af7476f6c // indirect
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 // indirect
github.com/dunglas/httpsfv v1.1.0 // indirect
github.com/emicklei/go-restful/v3 v3.12.2 // indirect
github.com/filecoin-project/go-clock v0.1.0 // indirect
github.com/flynn/noise v1.1.0 // indirect
github.com/fxamacker/cbor/v2 v2.9.0 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-openapi/jsonpointer v0.21.0 // indirect
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/swag v0.23.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/google/gnostic-models v0.7.0 // indirect
github.com/google/gopacket v1.1.19 // indirect
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 // indirect
github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect
github.com/huin/goupnp v1.3.0 // indirect
github.com/ipfs/boxo v0.35.2 // indirect
github.com/ipfs/go-cid v0.6.0 // indirect
github.com/ipfs/go-datastore v0.9.0 // indirect
github.com/ipfs/go-log/v2 v2.9.1 // indirect
github.com/ipld/go-ipld-prime v0.21.0 // indirect
github.com/jackpal/go-nat-pmp v1.0.2 // indirect
github.com/jbenet/go-temp-err-catcher v0.1.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/cpuid/v2 v2.3.0 // indirect
github.com/koron/go-ssdp v0.0.6 // indirect
github.com/libp2p/go-buffer-pool v0.1.0 // indirect
github.com/libp2p/go-cidranger v1.1.0 // indirect
github.com/libp2p/go-flow-metrics v0.3.0 // indirect
github.com/libp2p/go-libp2p-asn-util v0.4.1 // indirect
github.com/libp2p/go-libp2p-kbucket v0.8.0 // indirect
github.com/libp2p/go-libp2p-routing-helpers v0.7.5 // indirect
github.com/libp2p/go-msgio v0.3.0 // indirect
github.com/libp2p/go-netroute v0.4.0 // indirect
github.com/libp2p/go-reuseport v0.4.0 // indirect
github.com/libp2p/go-yamux/v5 v5.0.1 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/marten-seemann/tcp v0.0.0-20210406111302-dfbc87cc63fd // indirect
github.com/miekg/dns v1.1.68 // indirect
github.com/mikioh/tcpinfo v0.0.0-20190314235526-30a79bb1804b // indirect
github.com/mikioh/tcpopt v0.0.0-20190314235656-172688c1accc // indirect
github.com/minio/sha256-simd v1.0.1 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee // indirect
github.com/mr-tron/base58 v1.2.0 // indirect
github.com/multiformats/go-base32 v0.1.0 // indirect
github.com/multiformats/go-base36 v0.2.0 // indirect
github.com/multiformats/go-multiaddr-dns v0.4.1 // indirect
github.com/multiformats/go-multiaddr-fmt v0.1.0 // indirect
github.com/multiformats/go-multibase v0.2.0 // indirect
github.com/multiformats/go-multicodec v0.10.0 // indirect
github.com/multiformats/go-multihash v0.2.3 // indirect
github.com/multiformats/go-multistream v0.6.1 // indirect
github.com/multiformats/go-varint v0.1.0 // indirect
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58 // indirect
github.com/pion/datachannel v1.5.10 // indirect
github.com/pion/dtls/v2 v2.2.12 // indirect
github.com/pion/dtls/v3 v3.0.6 // indirect
github.com/pion/ice/v4 v4.0.10 // indirect
github.com/pion/interceptor v0.1.40 // indirect
github.com/pion/logging v0.2.3 // indirect
github.com/pion/mdns/v2 v2.0.7 // indirect
github.com/pion/randutil v0.1.0 // indirect
github.com/pion/rtcp v1.2.15 // indirect
github.com/pion/rtp v1.8.19 // indirect
github.com/pion/sctp v1.8.39 // indirect
github.com/pion/sdp/v3 v3.0.13 // indirect
github.com/pion/srtp/v3 v3.0.6 // indirect
github.com/pion/stun v0.6.1 // indirect
github.com/pion/stun/v3 v3.0.0 // indirect
github.com/pion/transport/v2 v2.2.10 // indirect
github.com/pion/transport/v3 v3.0.7 // indirect
github.com/pion/turn/v4 v4.0.2 // indirect
github.com/pion/webrtc/v4 v4.1.2 // indirect
github.com/polydawn/refmt v0.89.0 // indirect
github.com/quic-go/qpack v0.6.0 // indirect
github.com/quic-go/quic-go v0.59.0 // indirect
github.com/quic-go/webtransport-go v0.10.0 // indirect
github.com/spaolacci/murmur3 v1.1.0 // indirect
github.com/whyrusleeping/go-keyspace v0.0.0-20160322163242-5b898ac5add1 // indirect
github.com/wlynxg/anet v0.0.5 // indirect
github.com/x448/float16 v0.8.4 // indirect
go.opentelemetry.io/auto/sdk v1.2.1 // indirect
go.opentelemetry.io/otel v1.39.0 // indirect
go.opentelemetry.io/otel/metric v1.39.0 // indirect
go.opentelemetry.io/otel/trace v1.39.0 // indirect
go.uber.org/dig v1.19.0 // indirect
go.uber.org/fx v1.24.0 // indirect
go.uber.org/mock v0.5.2 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.27.1 // indirect
go.yaml.in/yaml/v2 v2.4.3 // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
golang.org/x/exp v0.0.0-20260112195511-716be5621a96 // indirect
golang.org/x/mod v0.32.0 // indirect
golang.org/x/oauth2 v0.32.0 // indirect
golang.org/x/telemetry v0.0.0-20260109210033-bd525da824e2 // indirect
golang.org/x/term v0.39.0 // indirect
golang.org/x/time v0.12.0 // indirect
golang.org/x/tools v0.41.0 // indirect
gonum.org/v1/gonum v0.17.0 // indirect
gopkg.in/evanphx/json-patch.v4 v4.13.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
k8s.io/api v0.35.1 // indirect
k8s.io/apimachinery v0.35.1 // indirect
k8s.io/client-go v0.35.1 // indirect
k8s.io/klog/v2 v2.130.1 // indirect
k8s.io/kube-openapi v0.0.0-20250910181357-589584f1c912 // indirect
k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 // indirect
lukechampine.com/blake3 v1.4.1 // indirect
sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 // indirect
sigs.k8s.io/randfill v1.0.0 // indirect
sigs.k8s.io/structured-merge-diff/v6 v6.3.0 // indirect
sigs.k8s.io/yaml v1.6.0 // indirect
)
require (
github.com/beorn7/perks v1.0.1 // indirect
+github.com/biter777/countries v1.7.5 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
-github.com/gabriel-vasile/mimetype v1.4.5 // indirect
+github.com/gabriel-vasile/mimetype v1.4.10 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
-github.com/go-playground/validator/v10 v10.22.0 // indirect
+github.com/go-playground/validator/v10 v10.27.0 // indirect
-github.com/golang/protobuf v1.5.4 // indirect
-github.com/golang/snappy v0.0.4 // indirect
+github.com/golang/snappy v1.0.0 // indirect
-github.com/google/uuid v1.6.0 // indirect
+github.com/google/uuid v1.6.0
-github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 // indirect
+github.com/goraz/onion v0.1.3 // indirect
github.com/hashicorp/golang-lru v1.0.2 // indirect
-github.com/jtolds/gls v4.20.0+incompatible // indirect
-github.com/klauspost/compress v1.17.9 // indirect
+github.com/klauspost/compress v1.18.0 // indirect
-github.com/kr/pretty v0.3.1 // indirect
github.com/leodido/go-urn v1.4.0 // indirect
+github.com/libp2p/go-libp2p-kad-dht v0.37.1
+github.com/libp2p/go-libp2p-pubsub v0.15.0
-github.com/mattn/go-colorable v0.1.13 // indirect
+github.com/mattn/go-colorable v0.1.14 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
-github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/montanaflynn/stats v0.7.1 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
-github.com/nats-io/nats.go v1.37.0 // indirect
+github.com/nats-io/nats.go v1.43.0 // indirect
-github.com/nats-io/nkeys v0.4.7 // indirect
+github.com/nats-io/nkeys v0.4.11 // indirect
github.com/nats-io/nuid v1.0.1 // indirect
-github.com/pkg/errors v0.9.1 // indirect
-github.com/prometheus/client_golang v1.20.2 // indirect
+github.com/prometheus/client_golang v1.23.2 // indirect
-github.com/prometheus/client_model v0.6.1 // indirect
+github.com/prometheus/client_model v0.6.2 // indirect
-github.com/prometheus/common v0.57.0 // indirect
+github.com/prometheus/common v0.66.1 // indirect
-github.com/prometheus/procfs v0.15.1 // indirect
+github.com/prometheus/procfs v0.17.0 // indirect
-github.com/robfig/cron/v3 v3.0.1 // indirect
-github.com/rs/zerolog v1.33.0 // indirect
+github.com/rs/zerolog v1.34.0 // indirect
github.com/shiena/ansicolor v0.0.0-20230509054315-a9deabde6e02 // indirect
-github.com/smartystreets/assertions v1.2.0 // indirect
-github.com/tidwall/match v1.1.1 // indirect
-github.com/tidwall/pretty v1.2.1 // indirect
github.com/xdg-go/pbkdf2 v1.0.0 // indirect
github.com/xdg-go/scram v1.1.2 // indirect
github.com/xdg-go/stringprep v1.0.4 // indirect
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78 // indirect
-go.mongodb.org/mongo-driver v1.16.1 // indirect
+go.mongodb.org/mongo-driver v1.17.4 // indirect
-golang.org/x/crypto v0.26.0 // indirect
+golang.org/x/crypto v0.47.0 // indirect
-golang.org/x/net v0.28.0 // indirect
+golang.org/x/net v0.49.0 // indirect
-golang.org/x/sync v0.8.0 // indirect
+golang.org/x/sync v0.19.0 // indirect
-golang.org/x/sys v0.24.0 // indirect
+golang.org/x/sys v0.40.0 // indirect
-golang.org/x/text v0.17.0 // indirect
+golang.org/x/text v0.33.0 // indirect
-google.golang.org/protobuf v1.34.2 // indirect
+google.golang.org/protobuf v1.36.11 // indirect
-gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)

667
go.sum
View File

@@ -1,230 +1,343 @@
-cloud.o-forge.io/core/oc-lib v0.0.0-20240830131445-af18dba5563c h1:4ZoM9ONJiaeLHSi0s8gsCe4lHuRHXkfK+eDSnTCspa0=
+cloud.o-forge.io/core/oc-lib v0.0.0-20260224130821-ce8ef70516f7 h1:p9uJjMY+QkE4neA+xRmIRtAm9us94EKZqgajDdLOd0Y=
-cloud.o-forge.io/core/oc-lib v0.0.0-20240830131445-af18dba5563c/go.mod h1:FIJD0taWLJ5pjQLJ6sfE2KlTkvbmk5SMcyrxdjsaVz0=
+cloud.o-forge.io/core/oc-lib v0.0.0-20260224130821-ce8ef70516f7/go.mod h1:+ENuvBfZdESSvecoqGY/wSvRlT3vinEolxKgwbOhUpA=
-cloud.o-forge.io/core/oc-lib v0.0.0-20240902132116-fba1608edb70 h1:xHxxRDtMG2/AAc7immArZfsnVF+KfJqoyUeUENmF6DA=
+cloud.o-forge.io/core/oc-lib v0.0.0-20260226084851-959fce48ef6c h1:FTUu9tdEfib6J+fuc7e5wYTe++EIlB70bVNpOeFjnyU=
-cloud.o-forge.io/core/oc-lib v0.0.0-20240902132116-fba1608edb70/go.mod h1:FIJD0taWLJ5pjQLJ6sfE2KlTkvbmk5SMcyrxdjsaVz0=
+cloud.o-forge.io/core/oc-lib v0.0.0-20260226084851-959fce48ef6c/go.mod h1:+ENuvBfZdESSvecoqGY/wSvRlT3vinEolxKgwbOhUpA=
+cloud.o-forge.io/core/oc-lib v0.0.0-20260226085754-f4e2d8057df0 h1:lvrRF4ToIMl/5k1q4AiPEy6ycjwRtOaDhWnQ/LrW1ZA=
+cloud.o-forge.io/core/oc-lib v0.0.0-20260226085754-f4e2d8057df0/go.mod h1:+ENuvBfZdESSvecoqGY/wSvRlT3vinEolxKgwbOhUpA=
+cloud.o-forge.io/core/oc-lib v0.0.0-20260226091217-cb3771c17a31 h1:hvkvJibS9NmImw73j79Ov5VpIYs4WbP4SYGlK/XO82Q=
+cloud.o-forge.io/core/oc-lib v0.0.0-20260226091217-cb3771c17a31/go.mod h1:+ENuvBfZdESSvecoqGY/wSvRlT3vinEolxKgwbOhUpA=
+cloud.o-forge.io/core/oc-lib v0.0.0-20260302152414-542b0b73aba5 h1:h+Fkyj6cfwAirc0QGCBEkZSSrgcyThXswg7ytOLm948=
+cloud.o-forge.io/core/oc-lib v0.0.0-20260302152414-542b0b73aba5/go.mod h1:+ENuvBfZdESSvecoqGY/wSvRlT3vinEolxKgwbOhUpA=
+cloud.o-forge.io/core/oc-lib v0.0.0-20260304143917-340f2a6301b7 h1:RZGV3ttkfoKIigUb7T+M5Kq+YtqW/td45EmNYeW5u8k=
+cloud.o-forge.io/core/oc-lib v0.0.0-20260304143917-340f2a6301b7/go.mod h1:+ENuvBfZdESSvecoqGY/wSvRlT3vinEolxKgwbOhUpA=
+cloud.o-forge.io/core/oc-lib v0.0.0-20260304145747-e03a0d3dd0aa h1:1wCpI4dwN1pj6MlpJ7/WifhHVHmCE4RU+9klwqgo/bk=
+cloud.o-forge.io/core/oc-lib v0.0.0-20260304145747-e03a0d3dd0aa/go.mod h1:+ENuvBfZdESSvecoqGY/wSvRlT3vinEolxKgwbOhUpA=
 github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
-github.com/Knetic/govaluate v3.0.0+incompatible/go.mod h1:r7JcOSlj0wfOMncg0iLm8Leh48TZaKVeNIfJntJ2wa0=
+github.com/Masterminds/semver/v3 v3.4.0 h1:Zog+i5UMtVoCU8oKka5P7i9q9HgrJeGzI9SA1Xbatp0=
-github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
+github.com/Masterminds/semver/v3 v3.4.0/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM=
-github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
+github.com/beego/beego/v2 v2.3.8 h1:wplhB1pF4TxR+2SS4PUej8eDoH4xGfxuHfS7wAk9VBc=
-github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
+github.com/beego/beego/v2 v2.3.8/go.mod h1:8vl9+RrXqvodrl9C8yivX1e6le6deCK6RWeq8R7gTTg=
-github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
+github.com/benbjohnson/clock v1.3.5 h1:VvXlSJBzZpA/zum6Sj74hxwYI2DIxRWuNIoXAzHZz5o=
-github.com/alicebob/gopher-json v0.0.0-20180125190556-5a6b3ba71ee6/go.mod h1:SGnFV6hVsYE877CKEZ6tDNTjaSXYUk6QqoIK6PrAtcc=
+github.com/benbjohnson/clock v1.3.5/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
-github.com/alicebob/miniredis v2.5.0+incompatible/go.mod h1:8HZjEj4yU0dwhYHky+DxYx+6BMjkBbe5ONFIF1MXffk=
-github.com/beego/beego v1.12.11 h1:MWKcnpavb7iAIS0m6uuEq6pHKkYvGNw/5umIUKqL7jM=
-github.com/beego/beego v1.12.11/go.mod h1:QURFL1HldOcCZAxnc1cZ7wrplsYR5dKPHFjmk6WkLAs=
-github.com/beego/beego v1.12.13 h1:g39O1LGLTiPejWVqQKK/TFGrroW9BCZQz6/pf4S8IRM=
-github.com/beego/beego v1.12.13/go.mod h1:QURFL1HldOcCZAxnc1cZ7wrplsYR5dKPHFjmk6WkLAs=
-github.com/beego/beego/v2 v2.0.7 h1:9KNnUM40tn3pbCOFfe6SJ1oOL0oTi/oBS/C/wCEdAXA=
-github.com/beego/beego/v2 v2.0.7/go.mod h1:f0uOEkmJWgAuDTlTxUdgJzwG3PDSIf3UWF3NpMohbFE=
-github.com/beego/beego/v2 v2.3.0 h1:iECVwzm6egw6iw6tkWrEDqXG4NQtKLQ6QBSYqlM6T/I=
-github.com/beego/beego/v2 v2.3.0/go.mod h1:Ob/5BJ9fIKZLd4s9ZV3o9J6odkkIyL83et+p98gyYXo=
-github.com/beego/goyaml2 v0.0.0-20130207012346-5545475820dd/go.mod h1:1b+Y/CofkYwXMUU0OhQqGvsY2Bvgr4j6jfT699wyZKQ=
-github.com/beego/x2j v0.0.0-20131220205130-a0352aadc542/go.mod h1:kSeGC/p1AbBiEp5kat81+DSQrZenVBZXklMLaELspWU=
-github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
-github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
 github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
 github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
-github.com/bradfitz/gomemcache v0.0.0-20180710155616-bc664df96737/go.mod h1:PmM6Mmwb0LSuEubjR8N7PtNe1KxZLtOUHtbeikc5h60=
+github.com/biter777/countries v1.7.5 h1:MJ+n3+rSxWQdqVJU8eBy9RqcdH6ePPn4PJHocVWUa+Q=
-github.com/casbin/casbin v1.7.0/go.mod h1:c67qKN6Oum3UF5Q1+BByfFxkwKvhwW57ITjqwtzR1KE=
+github.com/biter777/countries v1.7.5/go.mod h1:1HSpZ526mYqKJcpT5Ti1kcGQ0L0SrXWIaptUWjFfv2E=
-github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
-github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
-github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
 github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
 github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
-github.com/cloudflare/golz4 v0.0.0-20150217214814-ef862a3cdc58/go.mod h1:EOBUe0h4xcZ5GoxqC5SDxFQ8gwyZPKQoEzownBlhI80=
 github.com/coreos/etcd v3.3.17+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
 github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
 github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
-github.com/couchbase/go-couchbase v0.0.0-20201216133707-c04035124b17/go.mod h1:+/bddYDxXsf9qt0xpDUtRR47A2GjaXmGGAqQ/k3GJ8A=
+github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
-github.com/couchbase/gomemcached v0.1.2-0.20201224031647-c432ccf49f32/go.mod h1:mxliKQxOv84gQ0bJWbI+w9Wxdpt9HjDvgW9MjCym5Vo=
-github.com/couchbase/goutils v0.0.0-20210118111533-e33d3ffb5401/go.mod h1:BQwMFlJzDjFDG3DJUdU0KORxn88UlsOULuxLExMh3Hs=
 github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
-github.com/cupcake/rdb v0.0.0-20161107195141-43ba34106c76/go.mod h1:vYwsqCOLxGiisLwp9rITslkFNpZD5rz43tf41QFkTWY=
 github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
-github.com/edsrzf/mmap-go v0.0.0-20170320065105-0bce6a688712/go.mod h1:YO35OhQPt3KJa3ryjFM5Bs14WD66h8eGKpfaBNrHW5M=
+github.com/davidlazar/go-crypto v0.0.0-20200604182044-b73af7476f6c h1:pFUpOrbxDR6AkioZ1ySsx5yxlDQZ8stG2b88gTPxgJU=
-github.com/elastic/go-elasticsearch/v6 v6.8.5/go.mod h1:UwaDJsD3rWLM5rKNFzv9hgox93HoX8utj1kxD9aFUcI=
+github.com/davidlazar/go-crypto v0.0.0-20200604182044-b73af7476f6c/go.mod h1:6UhI8N9EjYm1c2odKpFpAYeR8dsBeM7PtzQhRgxRr9U=
-github.com/elazarl/go-bindata-assetfs v1.0.0/go.mod h1:v+YaWX3bdea5J/mo8dSETolEo7R71Vk1u8bnjau5yw4=
+github.com/decred/dcrd/crypto/blake256 v1.1.0 h1:zPMNGQCm0g4QTY27fOCorQW7EryeQ/U0x++OzVrdms8=
+github.com/decred/dcrd/crypto/blake256 v1.1.0/go.mod h1:2OfgNZ5wDpcsFmHmCK5gZTPcCXqlm2ArzUIkw9czNJo=
+github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 h1:NMZiJj8QnKe1LgsbDayM4UoHwbvwDRwnI3hwNaAHRnc=
+github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0/go.mod h1:ZXNYxsqcloTdSy/rNShjYzMhyjf0LaoftYK0p+A3h40=
+github.com/dunglas/httpsfv v1.1.0 h1:Jw76nAyKWKZKFrpMMcL76y35tOpYHqQPzHQiwDvpe54=
+github.com/dunglas/httpsfv v1.1.0/go.mod h1:zID2mqw9mFsnt7YC3vYQ9/cjq30q41W+1AnDwH8TiMg=
 github.com/elazarl/go-bindata-assetfs v1.0.1 h1:m0kkaHRKEu7tUIUFVwhGGGYClXvyl4RE03qmvRTNfbw=
 github.com/elazarl/go-bindata-assetfs v1.0.1/go.mod h1:v+YaWX3bdea5J/mo8dSETolEo7R71Vk1u8bnjau5yw4=
+github.com/emicklei/go-restful/v3 v3.12.2 h1:DhwDP0vY3k8ZzE0RunuJy8GhNpPL6zqLkDf9B/a0/xU=
+github.com/emicklei/go-restful/v3 v3.12.2/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
 github.com/etcd-io/etcd v3.3.17+incompatible/go.mod h1:cdZ77EstHBwVtD6iTgzgvogwcjo9m4iOqoijouPJ4bs=
+github.com/filecoin-project/go-clock v0.1.0 h1:SFbYIM75M8NnFm1yMHhN9Ahy3W5bEZV9gd6MPfXbKVU=
+github.com/filecoin-project/go-clock v0.1.0/go.mod h1:4uB/O4PvOjlx1VCMdZ9MyDZXRm//gkj1ELEbxfI1AZs=
+github.com/flynn/noise v1.1.0 h1:KjPQoQCEFdZDiP03phOvGi11+SVVhBG2wOWAorLsstg=
+github.com/flynn/noise v1.1.0/go.mod h1:xbMo+0i6+IGbYdJhF31t2eR1BIU0CYc12+BNAKwUTag=
+github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
+github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
 github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
-github.com/gabriel-vasile/mimetype v1.4.4 h1:QjV6pZ7/XZ7ryI2KuyeEDE8wnh7fHP9YnQy+R0LnH8I=
+github.com/fxamacker/cbor/v2 v2.9.0 h1:NpKPmjDBgUfBms6tr6JZkTHtfFGcMKsw3eGcmD/sapM=
-github.com/gabriel-vasile/mimetype v1.4.4/go.mod h1:JwLei5XPtWdGiMFB5Pjle1oEeoSeEuJfJE+TtfvdB/s=
+github.com/fxamacker/cbor/v2 v2.9.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ=
-github.com/gabriel-vasile/mimetype v1.4.5 h1:J7wGKdGu33ocBOhGy0z653k/lFKLFDPJMG8Gql0kxn4=
+github.com/gabriel-vasile/mimetype v1.4.10 h1:zyueNbySn/z8mJZHLt6IPw0KoZsiQNszIpU+bX4+ZK0=
-github.com/gabriel-vasile/mimetype v1.4.5/go.mod h1:ibHel+/kbxn9x2407k1izTA1S81ku1z/DlgOW2QE0M4=
+github.com/gabriel-vasile/mimetype v1.4.10/go.mod h1:d+9Oxyo1wTzWdyVUPMmXFvp4F9tea18J8ufA774AB3s=
-github.com/glendc/gopher-json v0.0.0-20170414221815-dc4743023d0c/go.mod h1:Gja1A+xZ9BoviGJNA2E9vFkPjjsl+CoJxSXiQM1UXtw=
+github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
-github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
+github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
-github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
+github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
-github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
+github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
-github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
+github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
+github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs=
+github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ=
+github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY=
+github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE=
+github.com/go-openapi/jsonreference v0.20.2/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En5Ap4rVB5KVcIDZG2k=
+github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14=
+github.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE=
+github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ=
 github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s=
 github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
 github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA=
 github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY=
 github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY=
 github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
-github.com/go-playground/validator/v10 v10.22.0 h1:k6HsTZ0sTnROkhS//R0O+55JgM8C4Bx7ia+JlgcnOao=
+github.com/go-playground/validator/v10 v10.27.0 h1:w8+XrWVMhGkxOaaowyKH35gFydVHOvC0/uWoy2Fzwn4=
-github.com/go-playground/validator/v10 v10.22.0/go.mod h1:dbuPbCMFw/DrkbEynArYaCwl3amGuJotoKCe95atGMM=
+github.com/go-playground/validator/v10 v10.27.0/go.mod h1:I5QpIEbmr8On7W0TktmJAumgzX4CA1XNl4ZmDuVHKKo=
-github.com/go-redis/redis v6.14.2+incompatible h1:UE9pLhzmWf+xHNmZsoccjXosPicuiNaInPgym8nzfg0=
+github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
-github.com/go-redis/redis v6.14.2+incompatible/go.mod h1:NAIEuMOZ/fxfXJIrKDQDz8wamY7mA7PouImQ2Jvg6kA=
+github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
-github.com/go-redis/redis v6.15.9+incompatible h1:K0pv1D7EQUjfyoMql+r/jZqCLizCGKFlFgcHWWmHQjg=
+github.com/go-yaml/yaml v2.1.0+incompatible/go.mod h1:w2MrLa16VYP0jy6N7M5kHaCkaLENm+P+Tv+MfurjSw0=
-github.com/go-redis/redis v6.15.9+incompatible/go.mod h1:NAIEuMOZ/fxfXJIrKDQDz8wamY7mA7PouImQ2Jvg6kA=
-github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=
-github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
 github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
-github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
+github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
-github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
-github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/snappy v1.0.0 h1:Oy607GVXHs7RtbggtPBnr2RmDArIsAefDwvrdWvRhGs=
-github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/snappy v1.0.0/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
-github.com/golang/protobuf v1.3.5/go.mod h1:6O5/vntMXwX2lRkT1hjjk0nAC1IDOTvTlVgjlRvqsdk=
+github.com/google/gnostic-models v0.7.0 h1:qwTtogB15McXDaNqTZdzPJRHvaVJlAl+HVQnLmJEJxo=
-github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
+github.com/google/gnostic-models v0.7.0/go.mod h1:whL5G0m6dmc5cPxKc5bdKdEN3UjI7OUGxBlw57miDrQ=
-github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
+github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
-github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
+github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
-github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
-github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
-github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
-github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
-github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
-github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
-github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
-github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
-github.com/golang/snappy v0.0.0-20170215233205-553a64147049/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
-github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
-github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
-github.com/gomodule/redigo v2.0.0+incompatible/go.mod h1:B4C85qUVwatsJoIUNIfCRsp7qO0iAmpGFZ4EELWSbC4=
-github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
-github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
-github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
-github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
 github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
+github.com/google/gopacket v1.1.19 h1:ves8RnFZPGiFnTS0uPQStjwru6uO6h+nlr9j6fL7kF8=
+github.com/google/gopacket v1.1.19/go.mod h1:iJ8V8n6KS+z2U1A8pUwu8bW5SyEMkXJB8Yo/Vo+TKTo=
+github.com/google/pprof v0.0.0-20250403155104-27863c87afa6 h1:BHT72Gu3keYf3ZEu2J0b1vyeLSOYI8bm5wbJM/8yDe8=
+github.com/google/pprof v0.0.0-20250403155104-27863c87afa6/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
 github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
 github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
-github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 h1:EGx4pi6eqNxGaHF6qqu48+N2wcFQ5qg5FXgOdqsJ5d8=
 github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
+github.com/gopherjs/gopherjs v0.0.0-20190430165422-3e4dfb77656c h1:7lF+Vz0LqiRidnzC1Oq86fpX1q/iEv2KJdrCtttYjT4=
+github.com/gopherjs/gopherjs v0.0.0-20190430165422-3e4dfb77656c/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
 github.com/goraz/onion v0.1.3 h1:KhyvbDA2b70gcz/d5izfwTiOH8SmrvV43AsVzpng3n0=
 github.com/goraz/onion v0.1.3/go.mod h1:XEmz1XoBz+wxTgWB8NwuvRm4RAu3vKxvrmYtzK+XCuQ=
-github.com/hashicorp/golang-lru v0.5.4 h1:YDjusn29QI/Das2iO9M0BHnIbxPeyuCHsjMW+lJfyTc=
+github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 h1:JeSE6pjso5THxAzdVpqr6/geYxZytqFMBCOtn/ujyeo=
-github.com/hashicorp/golang-lru v0.5.4/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
+github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674/go.mod h1:r4w70xmWCQKmi1ONH4KIaBptdivuRPyosB9RmPlGEwA=
 github.com/hashicorp/golang-lru v1.0.2 h1:dV3g9Z/unq5DpblPpw+Oqcv4dU/1omnb4Ok8iPY6p1c=
 github.com/hashicorp/golang-lru v1.0.2/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
-github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
+github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
-github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
+github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
+github.com/huin/goupnp v1.3.0 h1:UvLUlWDNpoUdYzb2TCn+MuTWtcjXKSza2n6CBdQ0xXc=
+github.com/huin/goupnp v1.3.0/go.mod h1:gnGPsThkYa7bFi/KWmEysQRf48l2dvR5bxr2OFckNX8=
 github.com/imdario/mergo v0.3.8/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
-github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
+github.com/ipfs/boxo v0.35.2 h1:0QZJJh6qrak28abENOi5OA8NjBnZM4p52SxeuIDqNf8=
+github.com/ipfs/boxo v0.35.2/go.mod h1:bZn02OFWwJtY8dDW9XLHaki59EC5o+TGDECXEbe1w8U=
+github.com/ipfs/go-block-format v0.2.3 h1:mpCuDaNXJ4wrBJLrtEaGFGXkferrw5eqVvzaHhtFKQk=
+github.com/ipfs/go-block-format v0.2.3/go.mod h1:WJaQmPAKhD3LspLixqlqNFxiZ3BZ3xgqxxoSR/76pnA=
+github.com/ipfs/go-cid v0.6.0 h1:DlOReBV1xhHBhhfy/gBNNTSyfOM6rLiIx9J7A4DGf30=
+github.com/ipfs/go-cid v0.6.0/go.mod h1:NC4kS1LZjzfhK40UGmpXv5/qD2kcMzACYJNntCUiDhQ=
+github.com/ipfs/go-datastore v0.9.0 h1:WocriPOayqalEsueHv6SdD4nPVl4rYMfYGLD4bqCZ+w=
+github.com/ipfs/go-datastore v0.9.0/go.mod h1:uT77w/XEGrvJWwHgdrMr8bqCN6ZTW9gzmi+3uK+ouHg=
+github.com/ipfs/go-detect-race v0.0.1 h1:qX/xay2W3E4Q1U7d9lNs1sU9nvguX0a7319XbyQ6cOk=
+github.com/ipfs/go-detect-race v0.0.1/go.mod h1:8BNT7shDZPo99Q74BpGMK+4D8Mn4j46UU0LZ723meps=
+github.com/ipfs/go-log/v2 v2.9.1 h1:3JXwHWU31dsCpvQ+7asz6/QsFJHqFr4gLgQ0FWteujk=
+github.com/ipfs/go-log/v2 v2.9.1/go.mod h1:evFx7sBiohUN3AG12mXlZBw5hacBQld3ZPHrowlJYoo=
+github.com/ipfs/go-test v0.2.3 h1:Z/jXNAReQFtCYyn7bsv/ZqUwS6E7iIcSpJ2CuzCvnrc=
+github.com/ipfs/go-test v0.2.3/go.mod h1:QW8vSKkwYvWFwIZQLGQXdkt9Ud76eQXRQ9Ao2H+cA1o=
+github.com/ipld/go-ipld-prime v0.21.0 h1:n4JmcpOlPDIxBcY037SVfpd1G+Sj1nKZah0m6QH9C2E=
+github.com/ipld/go-ipld-prime v0.21.0/go.mod h1:3RLqy//ERg/y5oShXXdx5YIp50cFGOanyMctpPjsvxQ=
+github.com/jackpal/go-nat-pmp v1.0.2 h1:KzKSgb7qkJvOUTqYl9/Hg/me3pWgBmERKrTGD7BdWus=
+github.com/jackpal/go-nat-pmp v1.0.2/go.mod h1:QPH045xvCAeXUZOxsnwmrtiCoxIr9eob+4orBN1SBKc=
+github.com/jbenet/go-temp-err-catcher v0.1.0 h1:zpb3ZH6wIE8Shj2sKS+khgRvf7T7RABoLk/+KKHggpk=
+github.com/jbenet/go-temp-err-catcher v0.1.0/go.mod h1:0kJRvmDZXNMIiJirNPEYfhpPwbGVtZVWC34vc5WLsDk=
+github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
+github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
 github.com/json-iterator/go v1.1.8/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
-github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
+github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
+github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
 github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
 github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
-github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
+github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
-github.com/klauspost/compress v1.17.9 h1:6KIumPrER1LHsvBVuDa0r5xaG0Es51mhhB9BQB2qeMA=
+github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
-github.com/klauspost/compress v1.17.9/go.mod h1:Di0epgTjJY877eYKx5yC51cX2A2Vl2ibi7bDH9ttBbw=
+github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
-github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
+github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
-github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
+github.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y=
-github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
+github.com/klauspost/cpuid/v2 v2.3.0/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
+github.com/koron/go-ssdp v0.0.6 h1:Jb0h04599eq/CY7rB5YEqPS83HmRfHP2azkxMN2rFtU=
+github.com/koron/go-ssdp v0.0.6/go.mod h1:0R9LfRJGek1zWTjN3JUNlm5INCDYGpRDfAptnct63fI=
+github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
 github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
 github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
 github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
 github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
 github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
 github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
-github.com/ledisdb/ledisdb v0.0.0-20200510135210-d35789ec47e6/go.mod h1:n931TsDuKuq+uX4v1fulaMbA/7ZLLhjc85h7chZGBCQ=
+github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
+github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
 github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
 github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
-github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
+github.com/libp2p/go-buffer-pool v0.1.0 h1:oK4mSFcQz7cTQIfqbe4MIj9gLW+mnanjyFtc6cdF0Y8=
+github.com/libp2p/go-buffer-pool v0.1.0/go.mod h1:N+vh8gMqimBzdKkSMVuydVDq+UV5QTWy5HSiZacSbPg=
+github.com/libp2p/go-cidranger v1.1.0 h1:ewPN8EZ0dd1LSnrtuwd4709PXVcITVeuwbag38yPW7c=
+github.com/libp2p/go-cidranger v1.1.0/go.mod h1:KWZTfSr+r9qEo9OkI9/SIEeAtw+NNoU0dXIXt15Okic=
+github.com/libp2p/go-flow-metrics v0.3.0 h1:q31zcHUvHnwDO0SHaukewPYgwOBSxtt830uJtUx6784=
+github.com/libp2p/go-flow-metrics v0.3.0/go.mod h1:nuhlreIwEguM1IvHAew3ij7A8BMlyHQJ279ao24eZZo=
+github.com/libp2p/go-libp2p v0.47.0 h1:qQpBjSCWNQFF0hjBbKirMXE9RHLtSuzTDkTfr1rw0yc=
+github.com/libp2p/go-libp2p v0.47.0/go.mod h1:s8HPh7mMV933OtXzONaGFseCg/BE//m1V34p3x4EUOY=
+github.com/libp2p/go-libp2p-asn-util v0.4.1 h1:xqL7++IKD9TBFMgnLPZR6/6iYhawHKHl950SO9L6n94=
+github.com/libp2p/go-libp2p-asn-util v0.4.1/go.mod h1:d/NI6XZ9qxw67b4e+NgpQexCIiFYJjErASrYW4PFDN8=
+github.com/libp2p/go-libp2p-kad-dht v0.37.1 h1:jtX8bQIXVCs6/allskNB4m5n95Xvwav7wHAhopGZfS0=
+github.com/libp2p/go-libp2p-kad-dht v0.37.1/go.mod h1:Uwokdh232k9Y1uMy2yJOK5zb7hpMHn4P8uWS4s9i05Q=
+github.com/libp2p/go-libp2p-kbucket v0.8.0 h1:QAK7RzKJpYe+EuSEATAaaHYMYLkPDGC18m9jxPLnU8s=
+github.com/libp2p/go-libp2p-kbucket v0.8.0/go.mod h1:JMlxqcEyKwO6ox716eyC0hmiduSWZZl6JY93mGaaqc4=
+github.com/libp2p/go-libp2p-pubsub v0.15.0 h1:cG7Cng2BT82WttmPFMi50gDNV+58K626m/wR00vGL1o=
+github.com/libp2p/go-libp2p-pubsub v0.15.0/go.mod h1:lr4oE8bFgQaifRcoc2uWhWWiK6tPdOEKpUuR408GFN4=
+github.com/libp2p/go-libp2p-record v0.3.1 h1:cly48Xi5GjNw5Wq+7gmjfBiG9HCzQVkiZOUZ8kUl+Fg=
+github.com/libp2p/go-libp2p-record v0.3.1/go.mod h1:T8itUkLcWQLCYMqtX7Th6r7SexyUJpIyPgks757td/E=
+github.com/libp2p/go-libp2p-routing-helpers v0.7.5 h1:HdwZj9NKovMx0vqq6YNPTh6aaNzey5zHD7HeLJtq6fI=
+github.com/libp2p/go-libp2p-routing-helpers v0.7.5/go.mod h1:3YaxrwP0OBPDD7my3D0KxfR89FlcX/IEbxDEDfAmj98=
+github.com/libp2p/go-libp2p-testing v0.12.0 h1:EPvBb4kKMWO29qP4mZGyhVzUyR25dvfUIK5WDu6iPUA=
+github.com/libp2p/go-libp2p-testing v0.12.0/go.mod h1:KcGDRXyN7sQCllucn1cOOS+Dmm7ujhfEyXQL5lvkcPg=
+github.com/libp2p/go-msgio v0.3.0 h1:mf3Z8B1xcFN314sWX+2vOTShIE0Mmn2TXn3YCUQGNj0=
+github.com/libp2p/go-msgio v0.3.0/go.mod h1:nyRM819GmVaF9LX3l03RMh10QdOroF++NBbxAb0mmDM=
+github.com/libp2p/go-netroute v0.4.0 h1:sZZx9hyANYUx9PZyqcgE/E1GUG3iEtTZHUEvdtXT7/Q=
+github.com/libp2p/go-netroute v0.4.0/go.mod h1:Nkd5ShYgSMS5MUKy/MU2T57xFoOKvvLR92Lic48LEyA=
+github.com/libp2p/go-reuseport v0.4.0 h1:nR5KU7hD0WxXCJbmw7r2rhRYruNRl2koHw8fQscQm2s=
+github.com/libp2p/go-reuseport v0.4.0/go.mod h1:ZtI03j/wO5hZVDFo2jKywN6bYKWLOy8Se6DrI2E1cLU=
+github.com/libp2p/go-yamux/v5 v5.0.1 h1:f0WoX/bEF2E8SbE4c/k1Mo+/9z0O4oC/hWEA+nfYRSg=
+github.com/libp2p/go-yamux/v5 v5.0.1/go.mod h1:en+3cdX51U0ZslwRdRLrvQsdayFt3TSUKvBGErzpWbU=
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ= github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA= github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/marcopolo/simnet v0.0.4 h1:50Kx4hS9kFGSRIbrt9xUS3NJX33EyPqHVmpXvaKLqrY=
github.com/marcopolo/simnet v0.0.4/go.mod h1:tfQF1u2DmaB6WHODMtQaLtClEf3a296CKQLq5gAsIS0=
github.com/marten-seemann/tcp v0.0.0-20210406111302-dfbc87cc63fd h1:br0buuQ854V8u83wA0rVZ8ttrq5CpaPZdvrK0LP2lOk=
github.com/marten-seemann/tcp v0.0.0-20210406111302-dfbc87cc63fd/go.mod h1:QuCEs1Nt24+FYQEqAAncTDPJIuGs+LxK1MCiFL25pMU=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg= github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE=
github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM= github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y= github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY= github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y= github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-sqlite3 v2.0.3+incompatible/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc= github.com/miekg/dns v1.1.68 h1:jsSRkNozw7G/mnmXULynzMNIsgY2dHC8LO6U6Ij2JEA=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0= github.com/miekg/dns v1.1.68/go.mod h1:fujopn7TB3Pu3JM69XaawiU0wqjpL9/8xGop5UrTPps=
github.com/matttproud/golang_protobuf_extensions v1.0.4 h1:mmDVorXM7PCGKw94cs5zkfA9PSy5pEvNWRP0ET0TIVo= github.com/mikioh/tcp v0.0.0-20190314235350-803a9b46060c h1:bzE/A84HN25pxAuk9Eej1Kz9OUelF97nAc82bDquQI8=
github.com/matttproud/golang_protobuf_extensions v1.0.4/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4= github.com/mikioh/tcp v0.0.0-20190314235350-803a9b46060c/go.mod h1:0SQS9kMwD2VsyFEB++InYyBJroV/FRmBgcydeSUcJms=
github.com/mikioh/tcpinfo v0.0.0-20190314235526-30a79bb1804b h1:z78hV3sbSMAUoyUMM0I83AUIT6Hu17AWfgjzIbtrYFc=
github.com/mikioh/tcpinfo v0.0.0-20190314235526-30a79bb1804b/go.mod h1:lxPUiZwKoFL8DUUmalo2yJJUCxbPKtm8OKfqr2/FTNU=
github.com/mikioh/tcpopt v0.0.0-20190314235656-172688c1accc h1:PTfri+PuQmWDqERdnNMiD9ZejrlswWrCpBEZgWOiTrc=
github.com/mikioh/tcpopt v0.0.0-20190314235656-172688c1accc/go.mod h1:cGKTAVKx4SxOuR/czcZ/E2RSJ3sfHs8FpHhQ5CWMf9s=
github.com/minio/blake2b-simd v0.0.0-20160723061019-3f5f724cb5b1/go.mod h1:pD8RvIylQ358TN4wwqatJ8rNavkEINozVn9DtGI3dfQ=
github.com/minio/sha256-simd v0.1.1-0.20190913151208-6de447530771/go.mod h1:B5e1o+1/KgNmWrSQK08Y6Z1Vb5pwIktudl0J58iy0KM=
github.com/minio/sha256-simd v1.0.1 h1:6kaan5IFmwTNynnKKpDHe6FWHohJOHhCPchzK49dzMM=
github.com/minio/sha256-simd v1.0.1/go.mod h1:Pz6AKMiUdngCLpeTL/RJY1M9rUuPMYujV5xJjtbRSN8=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee h1:W5t00kpgFdJifH4BDsTlE89Zl93FEloxaWZfGcifgq8=
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/montanaflynn/stats v0.7.1 h1:etflOAAHORrCC44V+aR6Ftzort912ZU+YLiSTuV8eaE=
github.com/montanaflynn/stats v0.7.1/go.mod h1:etXPPgVO6n31NxCd9KQUMvCM+ve0ruNzt6R8Bnaayow=
github.com/mr-tron/base58 v1.1.2/go.mod h1:BinMc/sQntlIE1frQmRFPUoPA1Zkr8VRgBdjWI2mNwc=
github.com/mr-tron/base58 v1.2.0 h1:T/HDJBh4ZCPbU39/+c3rRvE0uKBQlU27+QI8LJ4t64o=
github.com/mr-tron/base58 v1.2.0/go.mod h1:BinMc/sQntlIE1frQmRFPUoPA1Zkr8VRgBdjWI2mNwc=
github.com/multiformats/go-base32 v0.1.0 h1:pVx9xoSPqEIQG8o+UbAe7DNi51oej1NtK+aGkbLYxPE=
github.com/multiformats/go-base32 v0.1.0/go.mod h1:Kj3tFY6zNr+ABYMqeUNeGvkIC/UYgtWibDcT0rExnbI=
github.com/multiformats/go-base36 v0.2.0 h1:lFsAbNOGeKtuKozrtBsAkSVhv1p9D0/qedU9rQyccr0=
github.com/multiformats/go-base36 v0.2.0/go.mod h1:qvnKE++v+2MWCfePClUEjE78Z7P2a1UV0xHgWc0hkp4=
github.com/multiformats/go-multiaddr v0.1.1/go.mod h1:aMKBKNEYmzmDmxfX88/vz+J5IU55txyt0p4aiWVohjo=
github.com/multiformats/go-multiaddr v0.16.1 h1:fgJ0Pitow+wWXzN9do+1b8Pyjmo8m5WhGfzpL82MpCw=
github.com/multiformats/go-multiaddr v0.16.1/go.mod h1:JSVUmXDjsVFiW7RjIFMP7+Ev+h1DTbiJgVeTV/tcmP0=
github.com/multiformats/go-multiaddr-dns v0.4.1 h1:whi/uCLbDS3mSEUMb1MsoT4uzUeZB0N32yzufqS0i5M=
github.com/multiformats/go-multiaddr-dns v0.4.1/go.mod h1:7hfthtB4E4pQwirrz+J0CcDUfbWzTqEzVyYKKIKpgkc=
github.com/multiformats/go-multiaddr-fmt v0.1.0 h1:WLEFClPycPkp4fnIzoFoV9FVd49/eQsuaL3/CWe167E=
github.com/multiformats/go-multiaddr-fmt v0.1.0/go.mod h1:hGtDIW4PU4BqJ50gW2quDuPVjyWNZxToGUh/HwTZYJo=
github.com/multiformats/go-multibase v0.2.0 h1:isdYCVLvksgWlMW9OZRYJEa9pZETFivncJHmHnnd87g=
github.com/multiformats/go-multibase v0.2.0/go.mod h1:bFBZX4lKCA/2lyOFSAoKH5SS6oPyjtnzK/XTFDPkNuk=
github.com/multiformats/go-multicodec v0.10.0 h1:UpP223cig/Cx8J76jWt91njpK3GTAO1w02sdcjZDSuc=
github.com/multiformats/go-multicodec v0.10.0/go.mod h1:wg88pM+s2kZJEQfRCKBNU+g32F5aWBEjyFHXvZLTcLI=
github.com/multiformats/go-multihash v0.0.8/go.mod h1:YSLudS+Pi8NHE7o6tb3D8vrpKa63epEDmG8nTduyAew=
github.com/multiformats/go-multihash v0.2.3 h1:7Lyc8XfX/IY2jWb/gI7JP+o7JEq9hOa7BFvVU9RSh+U=
github.com/multiformats/go-multihash v0.2.3/go.mod h1:dXgKXCXjBzdscBLk9JkjINiEsCKRVch90MdaGiKsvSM=
github.com/multiformats/go-multistream v0.6.1 h1:4aoX5v6T+yWmc2raBHsTvzmFhOI8WVOer28DeBBEYdQ=
github.com/multiformats/go-multistream v0.6.1/go.mod h1:ksQf6kqHAb6zIsyw7Zm+gAuVo57Qbq84E27YlYqavqw=
github.com/multiformats/go-varint v0.1.0 h1:i2wqFp4sdl3IcIxfAonHQV9qU5OsZ4Ts9IOoETFs5dI=
github.com/multiformats/go-varint v0.1.0/go.mod h1:5KVAVXegtfmNQQm/lCY+ATvDzvJJhSkUlGQV9wgObdI=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/nats-io/nats.go v1.37.0 h1:07rauXbVnnJvv1gfIyghFEo6lUcYRY0WXc3x7x0vUxE=
github.com/nats-io/nats.go v1.37.0/go.mod h1:Ubdu4Nh9exXdSz0RVWRFBbRfrbSxOYd26oF0wkWclB8=
github.com/nats-io/nats.go v1.43.0 h1:uRFZ2FEoRvP64+UUhaTokyS18XBCR/xM2vQZKO4i8ug=
github.com/nats-io/nats.go v1.43.0/go.mod h1:iRWIPokVIFbVijxuMQq4y9ttaBTMe0SFdlZfMDd+33g=
github.com/nats-io/nkeys v0.4.7 h1:RwNJbbIdYCoClSDNY7QVKZlyb/wfT6ugvFCiKy6vDvI=
github.com/nats-io/nkeys v0.4.7/go.mod h1:kqXRgRDPlGy7nGaEDMuYzmiJCIAAWDK0IMBtDmGD0nc=
github.com/nats-io/nkeys v0.4.11 h1:q44qGV008kYd9W1b1nEBkNzvnWxtRSQ7A8BoqRrcfa0=
github.com/nats-io/nkeys v0.4.11/go.mod h1:szDimtgmfOi9n25JpfIdGw12tZFYXqhGxjhVxsatHVE=
github.com/nats-io/nuid v1.0.1 h1:5iA8DT8V7q8WK2EScv2padNa/rTESc1KdnPw4TC2paw=
github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/ogier/pflag v0.0.1/go.mod h1:zkFki7tvTa0tafRvTBIZTvzYyAu6kQhPZFnshFFPE+g=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.0 h1:Iw5WCbBcaAAd0fpRb1c9r5YCylv4XDoCSigm1zLevwU=
github.com/onsi/ginkgo v1.12.0/go.mod h1:oUhWkIvk5aDxtKvDDuw8gItl8pKl42LzjC9KZE0HfGg=
github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
github.com/onsi/ginkgo/v2 v2.27.2 h1:LzwLj0b89qtIy6SSASkzlNvX6WktqurSHwkk2ipF/Ns=
github.com/onsi/ginkgo/v2 v2.27.2/go.mod h1:ArE1D/XhNXBXCBkKOLkbsb2c81dQHCRcF5zwn/ykDRo=
github.com/onsi/gomega v1.7.1 h1:K0jcRCwNQM3vFGh1ppMtDh/+7ApJrjldlX8fA0jDTLQ=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.38.2 h1:eZCjf2xjZAqe+LeWvKb5weQ+NcPwX84kqJ0cZNxok2A=
github.com/onsi/gomega v1.38.2/go.mod h1:W2MJcYxRGV63b418Ai34Ud0hEdTVXq9NW9+Sx6uXf3k=
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58 h1:onHthvaw9LFnH4t2DcNVpwGmV9E1BkGknEliJkfwQj0=
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58/go.mod h1:DXv8WO4yhMYhSNPKjeNKa5WY9YCIEBRbNzFFPJbWO6Y=
github.com/pelletier/go-toml v1.0.1/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pelletier/go-toml v1.6.0/go.mod h1:5N711Q9dKgbdkxHL+MEfF31hpT7l0S0s/t2kKREewys=
github.com/peterh/liner v1.0.1-0.20171122030339-3681c2a91233/go.mod h1:xIteQHvHuaLYG9IFj6mSxM0fCKrs34IrEQUhOYuGPHc=
github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pion/datachannel v1.5.10 h1:ly0Q26K1i6ZkGf42W7D4hQYR90pZwzFOjTq5AuCKk4o=
github.com/pion/datachannel v1.5.10/go.mod h1:p/jJfC9arb29W7WrxyKbepTU20CFgyx5oLo8Rs4Py/M=
github.com/pion/dtls/v2 v2.2.7/go.mod h1:8WiMkebSHFD0T+dIU+UeBaoV7kDhOW5oDCzZ7WZ/F9s=
github.com/pion/dtls/v2 v2.2.12 h1:KP7H5/c1EiVAAKUmXyCzPiQe5+bCJrpOeKg/L05dunk=
github.com/pion/dtls/v2 v2.2.12/go.mod h1:d9SYc9fch0CqK90mRk1dC7AkzzpwJj6u2GU3u+9pqFE=
github.com/pion/dtls/v3 v3.0.6 h1:7Hkd8WhAJNbRgq9RgdNh1aaWlZlGpYTzdqjy9x9sK2E=
github.com/pion/dtls/v3 v3.0.6/go.mod h1:iJxNQ3Uhn1NZWOMWlLxEEHAN5yX7GyPvvKw04v9bzYU=
github.com/pion/ice/v4 v4.0.10 h1:P59w1iauC/wPk9PdY8Vjl4fOFL5B+USq1+xbDcN6gT4=
github.com/pion/ice/v4 v4.0.10/go.mod h1:y3M18aPhIxLlcO/4dn9X8LzLLSma84cx6emMSu14FGw=
github.com/pion/interceptor v0.1.40 h1:e0BjnPcGpr2CFQgKhrQisBU7V3GXK6wrfYrGYaU6Jq4=
github.com/pion/interceptor v0.1.40/go.mod h1:Z6kqH7M/FYirg3frjGJ21VLSRJGBXB/KqaTIrdqnOic=
github.com/pion/logging v0.2.2/go.mod h1:k0/tDVsRCX2Mb2ZEmTqNa7CWsQPc+YYCB7Q+5pahoms=
github.com/pion/logging v0.2.3 h1:gHuf0zpoh1GW67Nr6Gj4cv5Z9ZscU7g/EaoC/Ke/igI=
github.com/pion/logging v0.2.3/go.mod h1:z8YfknkquMe1csOrxK5kc+5/ZPAzMxbKLX5aXpbpC90=
github.com/pion/mdns/v2 v2.0.7 h1:c9kM8ewCgjslaAmicYMFQIde2H9/lrZpjBkN8VwoVtM=
github.com/pion/mdns/v2 v2.0.7/go.mod h1:vAdSYNAT0Jy3Ru0zl2YiW3Rm/fJCwIeM0nToenfOJKA=
github.com/pion/randutil v0.1.0 h1:CFG1UdESneORglEsnimhUjf33Rwjubwj6xfiOXBa3mA=
github.com/pion/randutil v0.1.0/go.mod h1:XcJrSMMbbMRhASFVOlj/5hQial/Y8oH/HVo7TBZq+j8=
github.com/pion/rtcp v1.2.15 h1:LZQi2JbdipLOj4eBjK4wlVoQWfrZbh3Q6eHtWtJBZBo=
github.com/pion/rtcp v1.2.15/go.mod h1:jlGuAjHMEXwMUHK78RgX0UmEJFV4zUKOFHR7OP+D3D0=
github.com/pion/rtp v1.8.19 h1:jhdO/3XhL/aKm/wARFVmvTfq0lC/CvN1xwYKmduly3c=
github.com/pion/rtp v1.8.19/go.mod h1:bAu2UFKScgzyFqvUKmbvzSdPr+NGbZtv6UB2hesqXBk=
github.com/pion/sctp v1.8.39 h1:PJma40vRHa3UTO3C4MyeJDQ+KIobVYRZQZ0Nt7SjQnE=
github.com/pion/sctp v1.8.39/go.mod h1:cNiLdchXra8fHQwmIoqw0MbLLMs+f7uQ+dGMG2gWebE=
github.com/pion/sdp/v3 v3.0.13 h1:uN3SS2b+QDZnWXgdr69SM8KB4EbcnPnPf2Laxhty/l4=
github.com/pion/sdp/v3 v3.0.13/go.mod h1:88GMahN5xnScv1hIMTqLdu/cOcUkj6a9ytbncwMCq2E=
github.com/pion/srtp/v3 v3.0.6 h1:E2gyj1f5X10sB/qILUGIkL4C2CqK269Xq167PbGCc/4=
github.com/pion/srtp/v3 v3.0.6/go.mod h1:BxvziG3v/armJHAaJ87euvkhHqWe9I7iiOy50K2QkhY=
github.com/pion/stun v0.6.1 h1:8lp6YejULeHBF8NmV8e2787BogQhduZugh5PdhDyyN4=
github.com/pion/stun v0.6.1/go.mod h1:/hO7APkX4hZKu/D0f2lHzNyvdkTGtIy3NDmLR7kSz/8=
github.com/pion/stun/v3 v3.0.0 h1:4h1gwhWLWuZWOJIJR9s2ferRO+W3zA/b6ijOI6mKzUw=
github.com/pion/stun/v3 v3.0.0/go.mod h1:HvCN8txt8mwi4FBvS3EmDghW6aQJ24T+y+1TKjB5jyU=
github.com/pion/transport/v2 v2.2.1/go.mod h1:cXXWavvCnFF6McHTft3DWS9iic2Mftcz1Aq29pGcU5g=
github.com/pion/transport/v2 v2.2.4/go.mod h1:q2U/tf9FEfnSBGSW6w5Qp5PFWRLRj3NjLhCCgpRK4p0=
github.com/pion/transport/v2 v2.2.10 h1:ucLBLE8nuxiHfvkFKnkDQRYWYfp8ejf4YBOPfaQpw6Q=
github.com/pion/transport/v2 v2.2.10/go.mod h1:sq1kSLWs+cHW9E+2fJP95QudkzbK7wscs8yYgQToO5E=
github.com/pion/transport/v3 v3.0.7 h1:iRbMH05BzSNwhILHoBoAPxoB9xQgOaJk+591KC9P1o0=
github.com/pion/transport/v3 v3.0.7/go.mod h1:YleKiTZ4vqNxVwh77Z0zytYi7rXHl7j6uPLGhhz9rwo=
github.com/pion/turn/v4 v4.0.2 h1:ZqgQ3+MjP32ug30xAbD6Mn+/K4Sxi3SdNOTFf+7mpps=
github.com/pion/turn/v4 v4.0.2/go.mod h1:pMMKP/ieNAG/fN5cZiN4SDuyKsXtNTr0ccN7IToA1zs=
github.com/pion/webrtc/v4 v4.1.2 h1:mpuUo/EJ1zMNKGE79fAdYNFZBX790KE7kQQpLMjjR54=
github.com/pion/webrtc/v4 v4.1.2/go.mod h1:xsCXiNAmMEjIdFxAYU0MbB3RwRieJsegSB2JZsGN+8U=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/polydawn/refmt v0.89.0 h1:ADJTApkvkeBZsN0tBTx8QjpD9JkmxbKp0cxfr9qszm4=
github.com/polydawn/refmt v0.89.0/go.mod h1:/zvteZs/GwLtCgZ4BL6CBsk9IKIlexP43ObX9AxTqTw=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.7.0/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.14.0 h1:nJdhIvne2eSX/XRAFV9PcvFFRbrjbcTUj0VP62TMhnw=
github.com/prometheus/client_golang v1.14.0/go.mod h1:8vpkKitgIVNcqrRBWh1C4TIUQgYNtG/XQE4E/Zae36Y=
github.com/prometheus/client_golang v1.20.2 h1:5ctymQzZlyOON1666svgwn3s6IKWgfbjsejTMiXIyjg=
github.com/prometheus/client_golang v1.20.2/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE=
github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o=
github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.3.0 h1:UBgGFHqYdG/TPFD1B1ogZywDqEkwp3fBMvqdiQ7Xew4=
github.com/prometheus/client_model v0.3.0/go.mod h1:LDGWKZIo7rky3hgvBe+caln+Dr3dPggB5dvjtD7w9+w=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/common v0.41.0 h1:npo01n6vUlRViIj5fgwiK8vlNIh8bnoxqh3gypKsyAw=
github.com/prometheus/common v0.41.0/go.mod h1:xBwqVerjNdUDjgODMpudtOMwlOwf2SaTr1yjz4b7Zbc=
github.com/prometheus/common v0.57.0 h1:Ro/rKjwdq9mZn1K5QPctzh+MA4Lp0BuYk5ZZEVhoNcY=
github.com/prometheus/common v0.57.0/go.mod h1:7uRPFSUTbfZWsJ7MHY56sqt7hLQu3bxXHDnNhl8E9qI=
github.com/prometheus/common v0.66.1 h1:h5E0h5/Y8niHc5DlaLlWLArTQI7tMrsfQjHV+d9ZoGs=
github.com/prometheus/common v0.66.1/go.mod h1:gcaUsgf3KfRSwHY4dIMXLPV0K/Wg1oZ8+SbZk/HH/dA=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.9.0 h1:wzCHvIvM5SxWqYvwgVL7yJY8Lz3PKn49KQtpgMYJfhI=
github.com/prometheus/procfs v0.9.0/go.mod h1:+pB4zwohETzFnmlpe6yd2lSc+0/46IYZRB/chUwxUZY=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/prometheus/procfs v0.17.0 h1:FuLQ+05u4ZI+SS/w9+BWEM2TXiHKsUQ9TADiRH7DuK0=
github.com/prometheus/procfs v0.17.0/go.mod h1:oPQLaDAMRbA+u8H5Pbfq+dl3VDAvHxMUOVhe0wYB2zw=
github.com/quic-go/qpack v0.6.0 h1:g7W+BMYynC1LbYLSqRt8PBg5Tgwxn214ZZR34VIOjz8=
github.com/quic-go/qpack v0.6.0/go.mod h1:lUpLKChi8njB4ty2bFLX2x4gzDqXwUpaO1DP9qMDZII=
github.com/quic-go/quic-go v0.59.0 h1:OLJkp1Mlm/aS7dpKgTc6cnpynnD2Xg7C1pwL6vy/SAw=
github.com/quic-go/quic-go v0.59.0/go.mod h1:upnsH4Ju1YkqpLXC305eW3yDZ4NfnNbmQRCMWS58IKU=
github.com/quic-go/webtransport-go v0.10.0 h1:LqXXPOXuETY5Xe8ITdGisBzTYmUOy5eSj+9n4hLTjHI=
github.com/quic-go/webtransport-go v0.10.0/go.mod h1:LeGIXr5BQKE3UsynwVBeQrU1TPrbh73MGoC6jd+V7ow=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8=
github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/rs/xid v1.5.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
github.com/rs/xid v1.6.0/go.mod h1:7XoLgs4eV+QndskICGsho+ADou8ySMSjJKDIan90Nz0=
github.com/rs/zerolog v1.33.0 h1:1cU2KZkvPxNyfgEmhHAz/1A9Bz+llsdYzklWFzgp0r8=
github.com/rs/zerolog v1.33.0/go.mod h1:/7mN4D5sKwJLZQ2b/znpjC3/GQWY/xaDXUM0kKWRHss=
github.com/rs/zerolog v1.34.0 h1:k43nTLIwcTVQAncfCw4KZ2VY6ukYoZaBPNOE8txlOeY=
github.com/rs/zerolog v1.34.0/go.mod h1:bJsvje4Z08ROH4Nhs5iH600c3IkWhwp44iRc54W6wYQ=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/shiena/ansicolor v0.0.0-20151119151921-a422bbe96644/go.mod h1:nkxAfR/5quYxwPZhyDxgasBMnRtBZd0FCEpawpjMUFg=
github.com/shiena/ansicolor v0.0.0-20200904210342-c7312218db18 h1:DAYUYH5869yV94zvCES9F51oYtN5oGlwjxJJz7ZCnik=
github.com/shiena/ansicolor v0.0.0-20200904210342-c7312218db18/go.mod h1:nkxAfR/5quYxwPZhyDxgasBMnRtBZd0FCEpawpjMUFg=
github.com/shiena/ansicolor v0.0.0-20230509054315-a9deabde6e02 h1:v9ezJDHA1XGxViAUSIoO/Id7Fl63u6d0YmsAm+/p2hs=
github.com/shiena/ansicolor v0.0.0-20230509054315-a9deabde6e02/go.mod h1:RF16/A3L0xSa0oSERcnhd8Pu3IXSDZSK2gmGIMsttFE=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/siddontang/go v0.0.0-20170517070808-cb568a3e5cc0/go.mod h1:3yhqj7WBBfRhbBlzyOC3gUxftwsU0u8gqevxwIHQpMw=
github.com/siddontang/goredis v0.0.0-20150324035039-760763f78400/go.mod h1:DDcKzU3qCuvj/tPnimWSsZZzvk9qvkvrIL5naVBPh5s=
github.com/siddontang/rdb v0.0.0-20150307021120-fc89ed2e418d/go.mod h1:AMEsy7v5z92TR1JKMkLLoaOQk++LVnOKL3ScbJ8GNGA=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/skarademir/naturalsort v0.0.0-20150715044055-69a5d87bef62/go.mod h1:oIdVclZaltY1Nf7OQUkg1/2jImBJ+ZfKZuDIRSwk3p0=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/assertions v1.2.0 h1:42S6lae5dvLc7BrLu/0ugRtcFVjoJNMC/N3yZFZkDFs=
github.com/smartystreets/assertions v1.2.0/go.mod h1:tcbTF8ujkAEcZ8TElKY+i30BzYl
github.com/smartystreets/goconvey v0.0.0-20190731233626-505e41936337/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/smartystreets/goconvey v1.7.2 h1:9RBaZCeXEQ3UselpuwUQHltGVXvdwm6cv1hgR6gDIPg=
github.com/smartystreets/goconvey v1.7.2/go.mod h1:Vw0tHAZW6lzCRk3xgdin6fKYcG+G3Pg9vgXWeJpQFMM=
github.com/ssdb/gossdb v0.0.0-20180723034631-88f6b59b84ec/go.mod h1:QBvMkMya+gXctz3kmljlUCu/yB3GZ6oee+dUozsezQE=
github.com/spaolacci/murmur3 v1.1.0 h1:7c1g84S4BPRrfL5Xrdp6fOJ206sU9y293DDHaoy0bLI=
github.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/pflag v1.0.9 h1:9exaQaMOCwffKiiiYk6/BndUBv+iRViNW+4lEMi0PvY=
github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.3/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/syndtr/goleveldb v0.0.0-20160425020131-cfa635847112/go.mod h1:Z4AUp2Km+PwemOoO/VB5AOx9XSsIItzFjoJlOSiYmn0=
github.com/tidwall/gjson v1.14.4 h1:uo0p8EbA09J7RQaflQ1aBRffTR7xedD2bcIVSYxLnkM=
github.com/tidwall/gjson v1.14.4/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/gjson v1.17.3 h1:bwWLZU7icoKRG+C+0PNwIKC6FCJO/Q3p2pZvuP0jN94=
github.com/tidwall/gjson v1.17.3/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/match v1.1.1 h1:+Ho715JplO36QYgwN9PGYNhgZvoUSc9X2c80KVTi+GA=
github.com/tidwall/match v1.1.1/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM=
github.com/tidwall/pretty v1.2.0 h1:RWIZEg2iJ8/g6fDDYzMpobmaoGh5OLl4AXtGUGPcqCs=
github.com/tidwall/pretty v1.2.0/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
github.com/tidwall/pretty v1.2.1 h1:qjsOFOWWQl+N3RsoF5/ssm1pHmJJwhjlSbZ51I6wMl4=
github.com/tidwall/pretty v1.2.1/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
github.com/ugorji/go v0.0.0-20171122102828-84cb69a8af83/go.mod h1:hnLbHMwcvSihnDhEfx2/BzKp2xb0Y+ErdfYcrs9tkJQ=
github.com/urfave/cli v1.22.10/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/warpfork/go-wish v0.0.0-20220906213052-39a1cc7a02d0 h1:GDDkbFiaK8jsSDJfjId/PEGEShv6ugrt4kYsC5UIDaQ=
github.com/warpfork/go-wish v0.0.0-20220906213052-39a1cc7a02d0/go.mod h1:x6AKhvSSexNrVSrViXSHUEbICjmGXhtgABaHIySUSGw=
github.com/wendal/errors v0.0.0-20181209125328-7f31f4b264ec/go.mod h1:Q12BUT7DqIlHRmgv3RskH+UCM/4eqVMgI0EMmlSpAXc=
github.com/whyrusleeping/go-keyspace v0.0.0-20160322163242-5b898ac5add1 h1:EKhdznlJHPMoKr0XTrX+IlJs1LH3lyx2nfr1dOlZ79k=
github.com/whyrusleeping/go-keyspace v0.0.0-20160322163242-5b898ac5add1/go.mod h1:8UvriyWtv5Q5EOgjHaSseUEdkQfvwFv1I/In/O2M9gc=
github.com/wlynxg/anet v0.0.3/go.mod h1:eay5PRQr7fIVAMbTbchTnO9gG65Hg/uYGdc7mguHxoA=
github.com/wlynxg/anet v0.0.5 h1:J3VJGi1gvo0JwZ/P1/Yc/8p63SoW98B5dHkYDmpgvvU=
github.com/wlynxg/anet v0.0.5/go.mod h1:eay5PRQr7fIVAMbTbchTnO9gG65Hg/uYGdc7mguHxoA=
github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
github.com/xdg-go/pbkdf2 v1.0.0 h1:Su7DPu48wXMwC3bs7MCNG+z4FhcyEuz5dlvchbq0B0c=
github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI=
github.com/xdg-go/scram v1.1.2 h1:FHX5I5B4i4hKRVRBCFRxq1iQRej7WO3hhBuJf+UUySY=
github.com/xdg-go/scram v1.1.2/go.mod h1:RT/sEzTbU5y00aCK8UOx6R7YryM0iF1N2MOmC3kKLN4=
github.com/xdg-go/stringprep v1.0.4 h1:XLI/Ng3O1Atzq0oBs3TWm+5ZVgkq2aqdlvP9JtoZ6c8=
github.com/xdg-go/stringprep v1.0.4/go.mod h1:mPGuuIYwz7CmR2bT9j4GbQqutWS1zV24gijq1dTyGkM=
github.com/youmark/pkcs8 v0.0.0-20240424034433-3c2c7870ae76 h1:tBiBTKHnIjovYoLX/TPkcf+OjqqKGQrPtGT3Foz+Pgo=
github.com/youmark/pkcs8 v0.0.0-20240424034433-3c2c7870ae76/go.mod h1:SQliXeA7Dhkt//vS29v3zpbEwoa+zb2Cn5xj5uO4K5U=
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78 h1:ilQV1hzziu+LLM3zUTJ0trRztfwgjqKnBWNtSRkbmwM=
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78/go.mod h1:aL8wCCfTfSfmXjznFBSZNN13rSJjlIOI1fUNAtF7rmI=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
github.com/yuin/gopher-lua v0.0.0-20171031051903-609c9cd26973/go.mod h1:aEV29XrmTYFr3CiRxZeGHpkvbwq+prZduBqMaascyCU=
go.mongodb.org/mongo-driver v1.16.0 h1:tpRsfBJMROVHKpdGyc1BBEzzjDUWjItxbVSZ8Ls4BQ4=
go.mongodb.org/mongo-driver v1.16.0/go.mod h1:oB6AhJQvFQL4LEHyXi6aJzQJtBiTQHiAd83l0GdFaiw=
go.mongodb.org/mongo-driver v1.16.1 h1:rIVLL3q0IHM39dvE+z2ulZLp9ENZKThVfuvN/IiN4l8=
go.mongodb.org/mongo-driver v1.16.1/go.mod h1:oB6AhJQvFQL4LEHyXi6aJzQJtBiTQHiAd83l0GdFaiw=
go.mongodb.org/mongo-driver v1.17.4 h1:jUorfmVzljjr0FLzYQsGP8cgN/qzzxlY9Vh0C9KFXVw=
go.mongodb.org/mongo-driver v1.17.4/go.mod h1:Hy04i7O2kC4RS06ZrhPRqj/u4DTYkFDAAccj+rVKqgQ=
go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
go.opentelemetry.io/otel v1.39.0 h1:8yPrr/S0ND9QEfTfdP9V+SiwT4E0G7Y5MO7p85nis48=
go.opentelemetry.io/otel v1.39.0/go.mod h1:kLlFTywNWrFyEdH0oj2xK0bFYZtHRYUdv1NklR/tgc8=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
go.opentelemetry.io/otel/metric v1.39.0 h1:d1UzonvEZriVfpNKEVmHXbdf909uGTOQjA0HF0Ls5Q0=
go.opentelemetry.io/otel/metric v1.39.0/go.mod h1:jrZSWL33sD7bBxg1xjrqyDjnuzTUB0x1nBERXd7Ftcs=
go.opentelemetry.io/otel/trace v1.39.0 h1:2d2vfpEDmCJ5zVYz7ijaJdOF59xLomrvj7bjt6/qCJI=
go.opentelemetry.io/otel/trace v1.39.0/go.mod h1:88w4/PnZSazkGzz/w84VHpQafiU4EtqqlVdxWy+rNOA=
go.uber.org/dig v1.19.0 h1:BACLhebsYdpQ7IROQ1AGPjrXcP5dF80U3gKoFzbaq/4=
go.uber.org/dig v1.19.0/go.mod h1:Us0rSJiThwCv2GteUN0Q7OKvU7n5J4dxZ9JKUXozFdE=
go.uber.org/fx v1.24.0 h1:wE8mruvpg2kiiL1Vqd0CC+tr0/24XIB10Iwp2lLWzkg=
go.uber.org/fx v1.24.0/go.mod h1:AmDeGyS+ZARGKM4tlH4FY2Jr63VjbEDJHtqXTGP5hbo=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/mock v0.5.2 h1:LbtPTcP8A5k9WPXj54PPPbjcI4Y6lhyOZXn+VS7wNko=
go.uber.org/mock v0.5.2/go.mod h1:wLlUxC2vVTPTaE3UD51E0BGOAElKrILxhVSDYQLld5o=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/zap v1.27.1 h1:08RqriUEv8+ArZRYSTXy1LeBScaMpVSTBhCeaZYfMYc=
go.uber.org/zap v1.27.1/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
go.yaml.in/yaml/v2 v2.4.3 h1:6gvOSjQoTB3vt1l+CU+tSyi/HOjfOjRLJ4YwYZGwRO0=
go.yaml.in/yaml/v2 v2.4.3/go.mod h1:zSxWcmIDjOzPXpjlTTbAsKokqkDNAVtZO0WOMiT90s8=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191112222119-e1110fd1c708/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200602180216-279210d13fed/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201221181555-eec23a3978ad/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.8.0/go.mod h1:mRqEX+O9/h5TFCrQhkgjo2yKi0yYA+9ecGkdQoHrywE=
golang.org/x/crypto v0.12.0/go.mod h1:NF0Gs7EO5K4qLn+Ylc+fih8BSTeIjAP05siRnAh98yw=
golang.org/x/crypto v0.18.0/go.mod h1:R0j02AL6hcrfOiy9T4ZYp/rcWeMxM3L6QYxlOuEG1mg=
golang.org/x/crypto v0.25.0 h1:ypSNr+bnYL2YhwoMt2zPxHFmbAN1KZs/njMG3hxUp30=
golang.org/x/crypto v0.25.0/go.mod h1:T+wALwcMOSE0kXgUAnPAHqTLW+XHgcELELW8VaDgm/M=
golang.org/x/crypto v0.26.0 h1:RrRspgV4mU+YwB4FYnuBoKsUapNIL5cohGAmSH3azsw=
golang.org/x/crypto v0.26.0/go.mod h1:GY7jblb9wI+FOo5y8/S2oY4zWP07AkOJ4+jxCqdqn54=
golang.org/x/crypto v0.47.0 h1:V6e3FRj+n4dbpw86FJ8Fv7XVOql7TEwpHapKoMJ/GO8=
golang.org/x/crypto v0.47.0/go.mod h1:ff3Y9VzzKbwSSEzWqJsJVBnWmRwRSHt/6Op5n9bQc4A=
golang.org/x/exp v0.0.0-20260112195511-716be5621a96 h1:Z/6YuSHTLOHfNFdb8zVZomZr7cqNgTJvA8+Qz75D8gU=
golang.org/x/exp v0.0.0-20260112195511-716be5621a96/go.mod h1:nzimsREAkjBCIEFtHiYkrJyT+2uy9YZJB7H1k68CXZU=
golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.32.0 h1:9F4d3PHLljb6x//jOyokMv3eX+YDeepZSEo3mFJy93c=
golang.org/x/mod v0.32.0/go.mod h1:SgipZ/3h2Ci89DlEtEXWUk/HteuRin+HHhN+WbNhguU=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.9.0/go.mod h1:d48xBJpPfHeWQsugry2m+kC02ZBRGRgulfHnEXEuWns=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.14.0/go.mod h1:PpSgVXXLK0OxS0F31C1/tv6XNguvCrnXIDrFMspZIUI=
golang.org/x/net v0.20.0/go.mod h1:z8BVo6PvndSri0LbOE3hAn0apkU+1YvI6E70E9jsnvY=
golang.org/x/net v0.27.0 h1:5K3Njcw06/l2y9vpGCSdcxWOYHOUk3dVNGDXN+FvAys=
golang.org/x/net v0.27.0/go.mod h1:dDi0PyhWNoiUOrAS8uXv/vnScO4wnHQO4mj9fn/RytE=
golang.org/x/net v0.28.0 h1:a9JDOJc5GMUJ0+UDqmLT86WiEy7iWyIhz8gz8E4e5hE=
golang.org/x/net v0.28.0/go.mod h1:yqtgsTWOOnlGLG9GFRrK3++bGOUEkNBoHZc8MEDWPNg=
golang.org/x/net v0.49.0 h1:eeHFmOGUTtaaPSGNmjBKpbng9MulQsJURQUAfUwY++o=
golang.org/x/net v0.49.0/go.mod h1:/ysNB2EvaqvesRkuLAyjI1ycPZlQHM3q01F02UY/MV8=
golang.org/x/oauth2 v0.32.0 h1:jsCblLleRMDrxMN29H3z/k1KliIvpLgCkE6R8FXXNgY=
golang.org/x/oauth2 v0.32.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.7.0 h1:YsImfSBoP9QPYL0xyKJPq0gcaJdG3rInoqxTWbfQu9M=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.8.0 h1:3NFvSEYkUoMifnESzZl15y791HH1qU2xm6eCJU5ZPXQ=
golang.org/x/sync v0.8.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191115151921-52ab43148777/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200602225109-6fdc65e7d980/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.7.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.16.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.22.0 h1:RI27ohtqKCnwULzJLqkv897zojh5/DwS/ENaMzUOaWI=
golang.org/x/sys v0.22.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.24.0 h1:Twjiwq9dn6R1fQcyiK+wQyHWfaz/BJB+YIpzU/Cv3Xg=
golang.org/x/sys v0.24.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/telemetry v0.0.0-20260109210033-bd525da824e2 h1:O1cMQHRfwNpDfDJerqRoE2oD+AFlyid87D40L/OkkJo=
golang.org/x/telemetry v0.0.0-20260109210033-bd525da824e2/go.mod h1:b7fPSJ0pKZ3ccUh8gnTONJxhn3c/PS6tyzQvyqw4iA8=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.7.0/go.mod h1:P32HKFT3hSsZrRxla30E9HqToFYAQPCMs/zFMBUFqPY=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/term v0.11.0/go.mod h1:zC9APTIj3jG3FdV/Ons+XE1riIZXG4aZ4GTHiPZJPIU=
golang.org/x/term v0.16.0/go.mod h1:yn7UURbUtPyrVJPGPq404EukNFxcm/foM+bV/bfcDsY=
golang.org/x/term v0.39.0 h1:RclSuaJf32jOqZz74CkPA9qFuVTX7vhLlpfj/IGWlqY=
golang.org/x/term v0.39.0/go.mod h1:yxzUCTP/U+FzoxfdKmLaA0RV1WgE0VY7hXBwKtY/4ww=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.12.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.16.0 h1:a94ExnEXNtEwYLGJSIUxnWoxoRz/ZcCsV63ROupILh4=
golang.org/x/text v0.16.0/go.mod h1:GhwF1Be+LQoKShO3cGOHzqOgRrGaYc9AvblQOmPVHnI=
golang.org/x/text v0.17.0 h1:XtiM5bkSOt+ewxlOE/aE/AKEHibwj/6gvWMl9Rsh0Qc=
golang.org/x/text v0.17.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY=
golang.org/x/text v0.33.0 h1:B3njUFyqtHDUI5jMn1YIr5B0IE2U0qck04r6d4KPAxE=
golang.org/x/text v0.33.0/go.mod h1:LuMebE6+rBincTi9+xWTY8TztLzKHc/9C1uBCG27+q8=
golang.org/x/time v0.12.0 h1:ScB/8o8olJvc+CQPWrK3fPZNfh7qgwCrY0zJmoEQLSE=
golang.org/x/time v0.12.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.41.0 h1:a9b8iMweWG+S0OBnlU36rzLp20z1Rp10w+IY2czHTQc=
golang.org/x/tools v0.41.0/go.mod h1:XSY6eDqxVNiYgezAVqqCeihT4j1U2CCsqvH3WhQpnlg=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4=
gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.28.1 h1:d0NfwRgPtno5B1Wa6L2DAG+KivqkdutMf1UhdNx175w=
google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.34.2 h1:6xV6lTsCfpGD21XK49h7MhtcApnLqkfYgPcdHftf6hg=
google.golang.org/protobuf v1.34.2/go.mod h1:qYOHts0dSfpeUzUFpOMr/WGzszTmLH+DiWniOlNbLDw=
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/evanphx/json-patch.v4 v4.13.0 h1:czT3CmqEaQ1aanPc5SdlgQrrEIb8w/wwCvWWnfEbYzo=
gopkg.in/evanphx/json-patch.v4 v4.13.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M=
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/mgo.v2 v2.0.0-20190816093944-a6b53ec6cb22/go.mod h1:yeKp02qBN3iKW1OzL3MGk2IdtZzaj7SFntXj72NppTA=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
k8s.io/api v0.35.1 h1:0PO/1FhlK/EQNVK5+txc4FuhQibV25VLSdLMmGpDE/Q=
k8s.io/api v0.35.1/go.mod h1:28uR9xlXWml9eT0uaGo6y71xK86JBELShLy4wR1XtxM=
k8s.io/apimachinery v0.35.1 h1:yxO6gV555P1YV0SANtnTjXYfiivaTPvCTKX6w6qdDsU=
k8s.io/apimachinery v0.35.1/go.mod h1:jQCgFZFR1F4Ik7hvr2g84RTJSZegBc8yHgFWKn//hns=
k8s.io/client-go v0.35.1 h1:+eSfZHwuo/I19PaSxqumjqZ9l5XiTEKbIaJ+j1wLcLM=
k8s.io/client-go v0.35.1/go.mod h1:1p1KxDt3a0ruRfc/pG4qT/3oHmUj1AhSHEcxNSGg+OA=
k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk=
k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE=
k8s.io/kube-openapi v0.0.0-20250910181357-589584f1c912 h1:Y3gxNAuB0OBLImH611+UDZcmKS3g6CthxToOb37KgwE=
k8s.io/kube-openapi v0.0.0-20250910181357-589584f1c912/go.mod h1:kdmbQkyfwUagLfXIad1y2TdrjPFWp2Q89B3qkRwf/pQ=
k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 h1:SjGebBtkBqHFOli+05xYbK8YF1Dzkbzn+gDM4X9T4Ck=
k8s.io/utils v0.0.0-20251002143259-bc988d571ff4/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
lukechampine.com/blake3 v1.4.1 h1:I3Smz7gso8w4/TunLKec6K2fn+kyKtDxr/xcQEN84Wg=
lukechampine.com/blake3 v1.4.1/go.mod h1:QFosUxmjB8mnrWFSNwKmvxHpfY72bmD2tQ0kBMM3kwo=
sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 h1:IpInykpT6ceI+QxKBbEflcR5EXP7sU1kvOlxwZh5txg=
sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730/go.mod h1:mdzfpAEoE6DHQEN0uh9ZbOCuHbLK5wOm7dK4ctXE9Tg=
sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU=
sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
sigs.k8s.io/structured-merge-diff/v6 v6.3.0 h1:jTijUJbW353oVOd9oTlifJqOGEkUw2jB/fXCbTiQEco=
sigs.k8s.io/structured-merge-diff/v6 v6.3.0/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE=
sigs.k8s.io/yaml v1.6.0 h1:G8fkbMSAFqgEFgh4b1wmtzDnioxFCUgTZhlbj5P9QYs=
sigs.k8s.io/yaml v1.6.0/go.mod h1:796bPqUfzR/0jLAl6XjHl3Ck7MiyVv8dbTdyT3/pMf4=

66
main.go
View File

@@ -1,44 +1,56 @@
 package main
 
 import (
-	"oc-discovery/models"
-	_ "oc-discovery/routers"
+	"context"
+	"log"
+	"oc-discovery/conf"
+	"oc-discovery/daemons/node"
+	"os"
+	"os/signal"
+	"strings"
+	"syscall"
 	oclib "cloud.o-forge.io/core/oc-lib"
-	"cloud.o-forge.io/core/oc-lib/logs"
-	"cloud.o-forge.io/core/oc-lib/tools"
-	beego "github.com/beego/beego/v2/server/web"
 )
 
 const appname = "oc-discovery"
 
 func main() {
 	// Init the oc-lib
-	oclib.Init(appname, "", "")
+	oclib.InitDaemon(appname)
 	// get the right config file
-	o := tools.GetConfLoader()
-	models.GetConfig().Port = o.GetIntDefault("port", 8080)
-	models.GetConfig().LokiUrl = o.GetStringDefault("lokiurl", "")
-	models.GetConfig().RedisUrl = o.GetStringDefault("redisurl", "localhost:6379")
-	models.GetConfig().RedisPassword = o.GetStringDefault("redispassword", "")
-	models.GetConfig().ZincUrl = o.GetStringDefault("zincurl", "http://localhost:4080")
-	models.GetConfig().ZincLogin = o.GetStringDefault("zinclogin", "admin")
-	models.GetConfig().ZincPassword = o.GetStringDefault("zincpassword", "admin")
-	models.GetConfig().IdentityFile = o.GetStringDefault("identityfile", "./identity.json")
-	models.GetConfig().Defaultpeers = o.GetStringDefault("defaultpeers", "./peers.json")
-	// set oc-lib logger
-	if models.GetConfig().LokiUrl != "" {
-		logs.CreateLogger(appname, models.GetConfig().LokiUrl)
-	}
-	// Normal beego init
-	beego.BConfig.AppName = appname
-	beego.BConfig.Listen.HTTPPort = models.GetConfig().Port
-	beego.BConfig.WebConfig.DirectoryIndex = true
-	beego.BConfig.WebConfig.StaticDir["/swagger"] = "swagger"
-	beego.Run()
+	o := oclib.GetConfLoader(appname)
+	conf.GetConfig().Name = o.GetStringDefault("NAME", "opencloud-demo")
+	conf.GetConfig().Hostname = o.GetStringDefault("HOSTNAME", "127.0.0.1")
+	conf.GetConfig().PSKPath = o.GetStringDefault("PSK_PATH", "./psk/psk.key")
+	conf.GetConfig().NodeEndpointPort = o.GetInt64Default("NODE_ENDPOINT_PORT", 4001)
+	conf.GetConfig().IndexerAddresses = o.GetStringDefault("INDEXER_ADDRESSES", "")
+	conf.GetConfig().NativeIndexerAddresses = o.GetStringDefault("NATIVE_INDEXER_ADDRESSES", "")
+	conf.GetConfig().PeerIDS = o.GetStringDefault("PEER_IDS", "")
+	conf.GetConfig().NodeMode = o.GetStringDefault("NODE_MODE", "node")
+	conf.GetConfig().MinIndexer = o.GetIntDefault("MIN_INDEXER", 1)
+	conf.GetConfig().MaxIndexer = o.GetIntDefault("MAX_INDEXER", 5)
+	ctx, stop := signal.NotifyContext(
+		context.Background(),
+		os.Interrupt,
+		syscall.SIGTERM,
+	)
+	defer stop()
+	isNode := strings.Contains(conf.GetConfig().NodeMode, "node")
+	isIndexer := strings.Contains(conf.GetConfig().NodeMode, "indexer")
+	isNativeIndexer := strings.Contains(conf.GetConfig().NodeMode, "native-indexer")
+	if n, err := node.InitNode(isNode, isIndexer, isNativeIndexer); err != nil {
+		panic(err)
+	} else {
+		<-ctx.Done() // the only blocking point
+		log.Println("shutting down")
+		n.Close()
+	}
 }
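Note that the `NODE_MODE` flags in the new `main.go` are derived with `strings.Contains`, so a mode of `"native-indexer"` also satisfies the plain `"indexer"` check (substring match), while it does not satisfy `"node"`. A small standalone sketch (`deriveModes` is a hypothetical helper, not part of the repo) makes this behavior explicit:

```go
package main

import (
	"fmt"
	"strings"
)

// deriveModes mirrors the substring checks in main.go. Because
// strings.Contains matches substrings, "native-indexer" also sets the
// plain indexer flag, but never the node flag.
func deriveModes(nodeMode string) (isNode, isIndexer, isNativeIndexer bool) {
	isNode = strings.Contains(nodeMode, "node")
	isIndexer = strings.Contains(nodeMode, "indexer")
	isNativeIndexer = strings.Contains(nodeMode, "native-indexer")
	return
}

func main() {
	for _, mode := range []string{"node", "indexer", "native-indexer"} {
		n, i, ni := deriveModes(mode)
		fmt.Printf("%s => node=%v indexer=%v native=%v\n", mode, n, i, ni)
	}
	// native-indexer => node=false indexer=true native=true
}
```

This also means composite values such as `"node,indexer"` work without extra parsing, which appears to be the intent of the substring approach.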

View File

@@ -1,25 +0,0 @@
package models
import "sync"
type Config struct {
Port int
LokiUrl string
ZincUrl string
ZincLogin string
ZincPassword string
RedisUrl string
RedisPassword string
IdentityFile string
Defaultpeers string
}
var instance *Config
var once sync.Once
func GetConfig() *Config {
once.Do(func() {
instance = &Config{}
})
return instance
}

56
models/event.go Normal file
View File

@@ -0,0 +1,56 @@
package models
import (
"encoding/json"
"time"
"cloud.o-forge.io/core/oc-lib/tools"
"github.com/libp2p/go-libp2p/core/crypto"
)
type Event struct {
Type string `json:"type"`
From string `json:"from"` // peerID
User string
DataType int64 `json:"datatype"`
Timestamp int64 `json:"ts"`
Payload []byte `json:"payload"`
Signature []byte `json:"sig"`
}
func NewEvent(name string, from string, dt *tools.DataType, user string, payload []byte, priv crypto.PrivKey) *Event {
evt := &Event{
Type: name,
From: from,
User: user,
Timestamp: time.Now().UTC().Unix(),
Payload: payload,
}
if dt != nil {
evt.DataType = int64(dt.EnumIndex())
} else {
evt.DataType = -1
}
body, _ := json.Marshal(evt)
sig, _ := priv.Sign(body)
evt.Signature = sig
return evt
}
func (e *Event) RawEvent() *Event {
return &Event{
Type: e.Type,
From: e.From,
User: e.User,
DataType: e.DataType,
Timestamp: e.Timestamp,
Payload: e.Payload,
}
}
func (e *Event) ToRawByte() ([]byte, error) {
return json.Marshal(e.RawEvent())
}

View File

@@ -1,44 +0,0 @@
package models
import (
"encoding/json"
"os"
"github.com/beego/beego/logs"
)
var (
Me Identity
)
func init() {
content, err := os.ReadFile("./identity.json")
if err != nil {
logs.Error("Error when opening file: ", err)
}
err = json.Unmarshal(content, &Me)
if err != nil {
logs.Error("Error during Unmarshal(): ", err)
}
}
type Identity struct {
Id string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
PrivateKey string `json:"private_key,omitempty"`
PublicAttributes Peer `json:"public_attributes,omitempty"`
}
func GetIdentity() (u *Identity) {
return &Me
}
func UpdateIdentity(uu *Identity) error {
Me = *uu
jsonBytes, err := json.Marshal(uu)
if err != nil {
return err
}
os.WriteFile("./identity.json", jsonBytes, 0600)
return nil
}

View File

@@ -1,88 +0,0 @@
package models
import (
"encoding/json"
"io/ioutil"
"time"
"github.com/beego/beego/logs"
)
var (
Peers []Peer
Store Storage
)
type Peer struct {
PeerId string `json:"peer_id,omitempty"`
Name string `json:"name,omitempty"`
EntityName string `json:"entity_name,omitempty"`
EntityType string `json:"entity_type,omitempty"`
Description string `json:"description,omitempty"`
Website string `json:"website,omitempty"`
Address string `json:"address,omitempty"`
Postcode string `json:"postcode,omitempty"`
City string `json:"city,omitempty"`
Country string `json:"country,omitempty"`
Phone string `json:"phone,omitempty"`
Email string `json:"email,omitempty"`
Activity string `json:"activity,omitempty"`
Keywords []string `json:"keywords,omitempty"`
ApiUrl string `json:"api_url,omitempty"`
PublicKey string `json:"public_key,omitempty"`
// internal use
Score int64 `json:"score,omitempty"`
LastSeenOnline time.Time `json:"last_seen_online,omitempty"`
ApiVersion string `json:"api_version,omitempty"`
}
func init() {
c := GetConfig()
Store = Storage{c.ZincUrl, c.ZincLogin, c.ZincPassword, c.RedisUrl, c.RedisPassword}
Store = Storage{"http://localhost:4080", "admin", "admin", "localhost:6379", ""}
//p := Peer{uuid.New().String(), 0, []string{"car", "highway", "images", "video"}, time.Now(), "1", "asf", ""}
// pa := []Peer{p}
// byteArray, err := json.Marshal(pa)
// if err != nil {
// log.Fatal(err)
// }
// ioutil.WriteFile("./peers.json", byteArray, 0644)
content, err := ioutil.ReadFile("./peers.json")
if err != nil {
logs.Error("Error when opening file: ", err)
}
err = json.Unmarshal(content, &Peers)
if err != nil {
logs.Error("Error during Unmarshal(): ", err)
}
Store.ImportData(LoadPeersJson("./peers.json"))
}
func AddPeers(peers []Peer) (status string) {
err := Store.ImportData(peers)
if err != nil {
logs.Error("Error during Unmarshal(): ", err)
return "error"
}
return "ok"
}
func FindPeers(query string) (peers []Peer, err error) {
result, err := Store.FindPeers(query)
if err != nil {
return nil, err
}
return result, nil
}
func GetPeer(uid string) (*Peer, error) {
return Store.GetPeer(uid)
}
func Delete(PeerId string) error {
err := Store.DeletePeer(PeerId)
if err != nil {
return err
}
return nil
}

View File

@@ -1,206 +0,0 @@
package models
import (
"encoding/json"
"fmt"
"io"
"log"
"net/http"
"os"
"strings"
"github.com/beego/beego/logs"
"github.com/go-redis/redis"
"github.com/tidwall/gjson"
)
type Storage struct {
ZincUrl string
ZincLogin string
ZincPassword string
RedisUrl string
RedisPassword string
}
func LoadPeersJson(filename string) []Peer {
var peers []Peer
content, err := os.ReadFile("./peers.json")
if err != nil {
logs.Error("Error when opening file: ", err)
}
err = json.Unmarshal(content, &peers)
if err != nil {
logs.Error("Error during Unmarshal(): ", err)
}
return peers
}
func (s *Storage) ImportData(peers []Peer) error {
rdb := redis.NewClient(&redis.Options{
Addr: s.RedisUrl,
Password: s.RedisPassword, // no password set
DB: 0, // use default DB
})
var indexedPeers []map[string]interface{}
for _, p := range peers {
// Creating data block for indexing
indexedPeer := make(map[string]interface{})
indexedPeer["_id"] = p.PeerId
indexedPeer["name"] = p.Name
indexedPeer["keywords"] = p.Keywords
indexedPeer["name"] = p.Name
indexedPeer["entityname"] = p.EntityName
indexedPeer["entitytype"] = p.EntityType
indexedPeer["activity"] = p.Activity
indexedPeer["address"] = p.Address
indexedPeer["postcode"] = p.Postcode
indexedPeer["city"] = p.City
indexedPeer["country"] = p.Country
indexedPeer["description"] = p.Description
indexedPeer["apiurl"] = p.ApiUrl
indexedPeer["website"] = p.Website
indexedPeers = append(indexedPeers, indexedPeer)
// Adding peer to Redis (fast retieval and status updates)
jsonp, err := json.Marshal(p)
if err != nil {
return err
}
err = rdb.Set("peer:"+p.PeerId, jsonp, 0).Err()
if err != nil {
return err
}
}
bulk := map[string]interface{}{"index": "peers", "records": indexedPeers}
raw, err := json.Marshal(bulk)
if err != nil {
return err
}
req, err := http.NewRequest("POST", s.ZincUrl+"/api/_bulkv2", strings.NewReader(string(raw)))
if err != nil {
return err
}
req.SetBasicAuth(s.ZincLogin, s.ZincPassword)
req.Header.Set("Content-Type", "application/json")
req.Header.Set("User-Agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36")
resp, err := http.DefaultClient.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()
log.Println(resp.StatusCode)
body, err := io.ReadAll(resp.Body)
if err != nil {
return err
}
fmt.Println(string(body))
return nil
}
func (s *Storage) FindPeers(queryString string) ([]Peer, error) {
var peers []Peer
query := `{
"search_type": "match",
"query":
{
"term": "` + queryString + `",
"start_time": "2020-06-02T14:28:31.894Z",
"end_time": "2029-12-02T15:28:31.894Z"
},
"from": 0,
"max_results": 100,
"_source": []
}`
req, err := http.NewRequest("POST", s.ZincUrl+"/api/peers/_search", strings.NewReader(query))
if err != nil {
log.Fatal(err)
}
req.SetBasicAuth(s.ZincLogin, s.ZincPassword)
req.Header.Set("Content-Type", "application/json")
req.Header.Set("User-Agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36")
resp, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
log.Println(resp.StatusCode)
body, err := io.ReadAll(resp.Body)
if err != nil {
return nil, err
}
value := gjson.Get(string(body), "hits.hits")
rdb := redis.NewClient(&redis.Options{
Addr: s.RedisUrl,
Password: s.RedisPassword, // no password set
DB: 0, // use default DB
})
for _, v := range value.Array() {
peerBytes, err := rdb.Get("peer:" + v.Get("_id").Str).Bytes()
if err != nil {
logs.Error(err)
} else {
var p Peer
err = json.Unmarshal(peerBytes, &p)
if err != nil {
return nil, err
}
peers = append(peers, p)
}
}
return peers, nil
}
func (s *Storage) GetPeer(uid string) (*Peer, error) {
var peer Peer
rdb := redis.NewClient(&redis.Options{
Addr: s.RedisUrl,
Password: s.RedisPassword, // empty string means no password
DB: 0, // use default DB
})
peerBytes, err := rdb.Get("peer:" + uid).Bytes()
if err != nil {
return nil, err
}
err = json.Unmarshal(peerBytes, &peer)
if err != nil {
return nil, err
}
return &peer, nil
}
func (s *Storage) DeletePeer(uid string) error {
// Removing from Redis
rdb := redis.NewClient(&redis.Options{
Addr: s.RedisUrl,
Password: s.RedisPassword, // empty string means no password
DB: 0, // use default DB
})
err := rdb.Unlink("peer:" + uid).Err()
if err != nil {
return err
}
// Removing from Index
req, err := http.NewRequest("DELETE", s.ZincUrl+"/api/peers/_doc/"+uid, nil)
if err != nil {
return err
}
req.SetBasicAuth(s.ZincLogin, s.ZincPassword)
req.Header.Set("Content-Type", "application/json")
req.Header.Set("User-Agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36")
resp, err := http.DefaultClient.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()
log.Println(resp.StatusCode)
body, err := io.ReadAll(resp.Body)
if err != nil {
return err
}
fmt.Println(string(body))
return nil
}
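The per-document endpoints are assembled by string concatenation, which makes a missing or doubled `/` easy to miss. A stdlib-only sketch using `url.JoinPath` (Go 1.19+) to build the Zinc document path; `docURL` and `zincURL` are illustrative names, not part of the original code:

```go
package main

import (
	"fmt"
	"net/url"
)

// docURL joins the Zinc base URL with the per-document path
// segments, so a missing or doubled "/" cannot slip in.
// zincURL stands in for the Storage.ZincUrl field.
func docURL(zincURL, index, id string) (string, error) {
	return url.JoinPath(zincURL, "api", index, "_doc", id)
}

func main() {
	u, err := docURL("http://localhost:4080", "peers", "a50d3697")
	if err != nil {
		panic(err)
	}
	fmt.Println(u)
}
```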

oc-k8s Executable file (binary not shown)


@@ -1,54 +0,0 @@
[
{
"peer_id": "a50d3697-7ede-4fe5-a385-e9d01ebc1002",
"name": "ASF",
"keywords": [
"car",
"highway",
"images",
"video"
],
"last_seen_online": "2023-03-07T11:57:13.378707853+01:00",
"api_version": "1",
"api_url": "http://127.0.0.1:49618/v1"
},
{
"peer_id": "a50d3697-7ede-4fe5-a385-e9d01ebc1003",
"name": "IT",
"keywords": [
"car",
"highway",
"images",
"video"
],
"last_seen_online": "2023-03-07T11:57:13.378707853+01:00",
"api_version": "1",
"api_url": "https://it.irtse.com/oc"
},
{
"peer_id": "a50d3697-7ede-4fe5-a385-e9d01ebc1004",
"name": "Centre de traitement des amendes",
"keywords": [
"car",
"highway",
"images",
"video"
],
"last_seen_online": "2023-03-07T11:57:13.378707853+01:00",
"api_version": "1",
"api_url": "https://impots.irtse.com/oc"
},
{
"peer_id": "a50d3697-7ede-4fe5-a385-e9d01ebc1005",
"name": "Douanes",
"keywords": [
"car",
"highway",
"images",
"video"
],
"last_seen_online": "2023-03-07T11:57:13.378707853+01:00",
"api_version": "1",
"api_url": "https://douanes.irtse.com/oc"
}
]

pem/private1.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIK2oBaOtGNchE09MBRtPd5oEOUcVUQG2ndym5wKExj7R
-----END PRIVATE KEY-----

pem/private10.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIPc7D3Mgb1U2Ipyb/85hA4Ew7dC8zHDEuQYSjqzzRgLK
-----END PRIVATE KEY-----

pem/private2.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIE58GDazCyF1jp796ivSmHiCepbkC8TpzliIaQ7eGEpu
-----END PRIVATE KEY-----

pem/private3.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIAeX4O7ldwehRSnPkbzuE6csyo63vjvqAcNNujENOKUC
-----END PRIVATE KEY-----

pem/private4.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIEkgqINXDLnxIJZs2LEK9O4vdsqk43dwbULGUE25AWuR
-----END PRIVATE KEY-----

pem/private5.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIK2oBaOtGNchE09MBRtPd5oEOUcVUQG2ndym5wKExj7R
-----END PRIVATE KEY-----

pem/private6.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIE58GDazCyF1jp796ivSmHiCepbkC8TpzliIaQ7eGEpu
-----END PRIVATE KEY-----

pem/private7.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIAeX4O7ldwehRSnPkbzuE6csyo63vjvqAcNNujENOKUC
-----END PRIVATE KEY-----

pem/private8.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIEkgqINXDLnxIJZs2LEK9O4vdsqk43dwbULGUE25AWuR
-----END PRIVATE KEY-----

pem/private9.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIBcflxGlZYyUVJoExC94rHZbIyKMwZ+Oh7EDkb0qUlxd
-----END PRIVATE KEY-----

pem/public1.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEAZ2nLJBL8a5opfa8nFeVj0SZToW8pl4+zgcSUkeZFRO4=
-----END PUBLIC KEY-----

pem/public10.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEAEomuEQGmGsYVw35C6DB5tfY8LI8jm359ceAxRX8eQ0o=
-----END PUBLIC KEY-----

pem/public2.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEAIQVeSGwsjPjyepPTnzzYqVxIxviSEjZXU7C7zuNTui4=
-----END PUBLIC KEY-----

pem/public3.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEAG95Ettl3jTi41HM8le1A9WDmOEq0ANEqpLF7zTZrfXA=
-----END PUBLIC KEY-----

pem/public4.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEA/ymOIb0sJ0qCWrf3mKz7ACCvsMXLog/EK533JfNXZTM=
-----END PUBLIC KEY-----

pem/public5.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEAZ2nLJBL8a5opfa8nFeVj0SZToW8pl4+zgcSUkeZFRO4=
-----END PUBLIC KEY-----

pem/public6.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEAIQVeSGwsjPjyepPTnzzYqVxIxviSEjZXU7C7zuNTui4=
-----END PUBLIC KEY-----

pem/public7.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEAG95Ettl3jTi41HM8le1A9WDmOEq0ANEqpLF7zTZrfXA=
-----END PUBLIC KEY-----

pem/public8.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEA/ymOIb0sJ0qCWrf3mKz7ACCvsMXLog/EK533JfNXZTM=
-----END PUBLIC KEY-----

Some files were not shown because too many files have changed in this diff.