Compare commits

52 Commits

| SHA1 |
|---|
| ef3d998ead |
| 79aa3cc2b3 |
| 779e36aaef |
| 572da29fd4 |
| 3eae5791a1 |
| 88fd05066c |
| 0250c3b339 |
| 6a5ffb9a92 |
| fa914958b6 |
| 1c0b2b4312 |
| 631e2846fe |
| d985d8339a |
| ea14ad3933 |
| 2e31df89c2 |
| 425cbdfe7d |
| 8ee5b84e21 |
| 552bb17e2b |
| 88e29073a2 |
| b429ee9816 |
| c716225283 |
| 3bc01c3a04 |
| 1ebbb54dd1 |
| c958d106b7 |
| 5442d625c6 |
| 60ed7048cd |
| 7b68a608dd |
| 1c2ea9ca96 |
| 0ff21c0818 |
| 6ca762abbf |
| 0ffe98045e |
| c3352499fa |
| 562d86125e |
| d50e5d56f7 |
| 38cd862947 |
| 7fd258dc9d |
| 0ed2fc0f15 |
| ea5320d4be |
| 25184deecb |
| 27379cb392 |
| 66d228f143 |
| f1444c8046 |
| 0aef66207f |
| 0d9d7c9931 |
| 90c24a9e05 |
| fdeb933e26 |
| 5aae779e74 |
| 3731d48f81 |
| 4915b3e4cf |
| 50758a7efe |
| 2f2a3bf250 |
| 6dddb43590 |
| af4b62f63b |
`.drone.yml` (47 changed lines)

```diff
@@ -3,18 +3,37 @@ kind: pipeline
 name: unit
 
 steps:
-- name: build
-  image: golang
-  commands:
-  - go test
-  - go build
+# -------------------- tests (host arch only) --------------------
+- name: test
+  image: golang:alpine
+  pull: if-not-exists
+  commands:
+  - go test ./...
 
+# -------------------- build + push multi-arch image --------------------
 - name: publish
-  image: plugins/docker
+  image: plugins/docker:latest
   settings:
     username:
       from_secret: docker-user
     password:
       from_secret: docker-pw
-    repo:
-      from_secret: docker-repo
+    #repo:
+    #  from_secret: docker-repo
+    repo: opencloudregistry/oc-discovery
+
+    # build context & dockerfile
+    context: .
+    dockerfile: Dockerfile
+
+    # enable buildx / multi-arch
+    buildx: true
+    platforms:
+    - linux/amd64
+    - linux/arm64
+    - linux/arm/v7
+
+    # tags to push (all as a single multi-arch manifest)
+    tags:
+    - latest
```
`ARCHITECTURE.md` (new file, 495 lines)
# oc-discovery — Architecture and technical analysis

> **Reading convention**
> Items marked ✅ have been fixed in the code. Items marked ⚠️ remain open.

## Table of contents

1. [Overview](#1-overview)
2. [Role hierarchy](#2-role-hierarchy)
3. [Core mechanisms](#3-core-mechanisms)
   - 3.1 Long-lived heartbeat (node → indexer)
   - 3.2 Trust scoring
   - 3.3 Registration with natives (indexer → native)
   - 3.4 Indexer pool: fetch + consensus
   - 3.5 Self-delegation and offload loop
   - 3.6 Native mesh resilience
   - 3.7 Shared DHT
   - 3.8 PubSub gossip (indexer registry)
   - 3.9 Application streams (node ↔ node)
4. [Summary table](#4-summary-table)
5. [Global risks and limitations](#5-global-risks-and-limitations)
6. [Avenues for improvement](#6-avenues-for-improvement)

---
## 1. Overview

`oc-discovery` is a P2P discovery service for the OpenCloud network. It is built on
**libp2p** (TCP transport + private-network PSK) and a **Kademlia DHT** (prefix `oc`)
used to index peers. The architecture is intentionally hierarchical: stable _natives_
act as authoritative hubs with which _indexers_ register, and ordinary _nodes_
discover indexers through those natives.

```
┌──────────────┐       heartbeat        ┌──────────────────┐
│     Node     │ ─────────────────────► │     Indexer      │
│   (libp2p)   │ ◄───────────────────── │   (DHT server)   │
└──────────────┘   application stream   └────────┬─────────┘
                                                 │ subscribe / heartbeat
                                                 ▼
                                        ┌──────────────────┐
                                        │  Native Indexer  │ ◄──► other natives
                                        │ (authoritative   │      (mesh)
                                        │  hub)            │
                                        └──────────────────┘
```

All participants share a **pre-shared key (PSK)** that isolates the network
from unauthorized external libp2p connections.

---
## 2. Role hierarchy

| Role | Binary | Responsibility |
|---|---|---|
| **Node** | `node_mode=node` | Gets indexed; publishes and looks up DHT records |
| **Indexer** | `node_mode=indexer` | Receives heartbeats, writes to the DHT, registers with natives |
| **Native Indexer** | `node_mode=native` | Hub: keeps the registry of live indexers, evaluates consensus, serves as fallback |

A single process can combine the node+indexer or indexer+native roles.

---

## 3. Core mechanisms
### 3.1 Long-lived heartbeat (node → indexer)

**How it works**

A **persistent** libp2p stream (`/opencloud/heartbeat/1.0`) is opened from the node
to each indexer in its pool (`StaticIndexers`). Every 20 seconds the node sends a
JSON `Heartbeat` on this stream. The indexer responds by recording the peer in
`StreamRecords[ProtocolHeartbeat]` with a 2-minute expiry.

If `sendHeartbeat` fails (stream reset, EOF, timeout), the peer is removed from
`StaticIndexers` and `replenishIndexersFromNative` is triggered.

**Advantages**

- Fast disconnect detection (error on the next encode).
- A single stream per peer reduces pressure on TCP connections.
- The nudge channel (`indexerHeartbeatNudge`) allows an immediate reconnect without
  waiting for the 20 s ticker.

**Limitations / risks**

- ⚠️ A single persistent stream: if the TCP layer stays open but "frozen" (middlebox,
  silent NAT), the error may not surface for several minutes.
- ⚠️ `StaticIndexers` is a globally shared map: if two goroutines call
  `replenishIndexersFromNative` simultaneously (multiple-loss case), unprotected
  concurrent writes can occur outside the critical sections.
---
### 3.2 Trust scoring

**How it works**

Before recording a heartbeat in `StreamRecords`, the indexer checks a **minimum
score** computed by `CheckHeartbeat`:

```
Score = (0.4 × uptime_ratio + 0.4 × bpms + 0.2 × diversity) × 100
```

- `uptime_ratio`: how long the peer has been present / time since the indexer started.
- `bpms`: throughput measured over a dedicated stream (`/opencloud/probe/1.0`), normalized by 50 Mbps.
- `diversity`: ratio of distinct /24 IP prefixes among the indexers the peer declares.

Two thresholds apply depending on the peer's state:

- **First heartbeat** (peer absent from `StreamRecords`, uptime = 0): threshold **40**.
- **Subsequent heartbeats** (accumulated uptime): threshold **75**.

**Advantages**

- Discourages ephemeral or slow peers from cluttering the registry.
- Network diversity reduces the risk of concentration on a single subnet.
- The dedicated probe stream avoids polluting the JSON heartbeat stream with binary data.
- The dual threshold lets new peers be admitted from their very first connection.

**Limitations / risks**

- ✅ **Startup logical deadlock fixed**: with uptime = 0 the maximum score was 60,
  below the 75 threshold, so new peers were silently rejected forever.
  → Threshold lowered to **40** for the first heartbeat (`isFirstHeartbeat`), 75 afterwards.
- ⚠️ The thresholds (40 / 75) are still hard-coded, with no way to configure them.
- ⚠️ The bandwidth measurement sends 512–2048 bytes per heartbeat: at a 20 s interval
  and a 500-node cap, that is ~50 KB/s of continuous probe traffic.
- ⚠️ `diversity` is computed from the addresses the node *declares* — this field is
  self-reported and unverified, hence easily forged.
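A minimal sketch of the scoring rule and its dual threshold, using exactly the weights and limits quoted above. The function names are illustrative, not the project's actual `CheckHeartbeat` signature:

```go
package main

import "fmt"

// score applies the documented formula:
// (0.4 × uptime_ratio + 0.4 × bpms + 0.2 × diversity) × 100,
// with each component already normalized into [0, 1].
func score(uptimeRatio, bpms, diversity float64) float64 {
	return (0.4*uptimeRatio + 0.4*bpms + 0.2*diversity) * 100
}

// admitted mirrors the dual-threshold rule: 40 for a peer's first
// heartbeat (uptime necessarily 0), 75 once uptime has accumulated.
func admitted(s float64, firstHeartbeat bool) bool {
	if firstHeartbeat {
		return s >= 40
	}
	return s >= 75
}

func main() {
	// A brand-new peer with perfect bandwidth and diversity still caps
	// out at 60 — which is why the old single 75 threshold rejected it.
	s := score(0, 1, 1)
	fmt.Println(admitted(s, true), admitted(s, false)) // prints: true false
}
```

Plugging in uptime = 0 makes the deadlock described above concrete: 0.4 + 0.2 of the weight is all a new peer can earn, so 60 is its ceiling.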
---
### 3.3 Registration with natives (indexer → native)

**How it works**

Each (non-native) indexer periodically (every 60 s) sends a JSON
`IndexerRegistration` over a one-shot stream (`/opencloud/native/subscribe/1.0`)
to every configured native. The native then:

1. Stores the entry in its local cache with a **90 s** TTL (`IndexerTTL`).
2. Gossips the `PeerID` to the other natives on the `oc-indexer-registry` PubSub topic.
3. Persists the entry to the DHT asynchronously (retrying until it succeeds).

**Advantages**

- Disposable stream: no long-lived resource on the native side for registrations.
- The local cache is immediately available to `handleNativeGetIndexers` without
  waiting for the DHT.
- PubSub dissemination lets other natives learn about the indexer without it
  having to register with them directly.

**Limitations / risks**

- ✅ **Overly tight TTL fixed**: the 66 s TTL was only 10% above the 60 s interval —
  a slight network delay could expire a healthy indexer between two renewals.
  → `IndexerTTL` raised to **90 s** (+50%).
- ⚠️ If the DHT `PutValue` fails permanently (partitioned network), the native holds
  the entry but natives that missed the PubSub message never learn about it —
  a silent inconsistency.
- ⚠️ `RegisterWithNative` skips `127.0.0.1` addresses but does not handle private
  (RFC 1918) addresses, which would be unroutable from other hosts.
---
### 3.4 Indexer pool: fetch + consensus

**How it works**

During `ConnectToNatives` (at startup or on replenish), the node/indexer:

1. **Fetch**: sends a `GetIndexersRequest` to the first responding native
   (`/opencloud/native/indexers/1.0`) and receives a list of candidates.
2. **Consensus (round 1)**: queries **all** configured natives in parallel
   (`/opencloud/native/consensus/1.0`, 3 s timeout, 4 s collection window).
   An indexer is confirmed if **strictly more than 50%** of the responding
   natives consider it alive.
3. **Consensus (round 2)**: if the pool is still too small, the natives'
   suggestions (indexers they know that were not among the initial candidates)
   go through a second round.

**Advantages**

- The absolute-majority rule prevents a compromised or desynchronized native from
  injecting phantom indexers.
- The second round fills out the pool with alternatives known to the natives
  without sacrificing verification.
- If the fetch returns a **fallback** (a native acting as indexer), consensus is
  skipped — consistent, since there is only one source.

**Limitations / risks**

- ⚠️ With **a single native** configured (very common in dev/test), consensus is
  trivial (100% of a single vote) — the majority rule protects nothing in that case.
- ⚠️ `fetchIndexersFromNative` stops at the **first responding native** (sequentially):
  if that native has a stale or partial cache, the node gets a suboptimal pool
  without consulting the others.
- ⚠️ The global collection timeout (4 s) is fixed: on a slow or geographically
  distributed network, valid natives can be eliminated for not answering in time.
- ⚠️ `replaceStaticIndexers` only ever **adds**, never removing old expired indexers:
  the pool can accumulate dead entries that only the heartbeat later purges.
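The strict-majority rule in round 1 (quoted later in this document as `count*2 > total`) reduces to a pure tally. A sketch with illustrative names:

```go
package main

import "fmt"

// confirmedByMajority implements the strict >50% rule: an indexer is kept
// only if more than half of the natives that answered consider it alive
// (i.e. aliveVotes*2 > responders).
func confirmedByMajority(aliveVotes, responders int) bool {
	return aliveVotes*2 > responders
}

func main() {
	fmt.Println(confirmedByMajority(2, 4)) // exactly 50% → false
	fmt.Println(confirmedByMajority(3, 4)) // strict majority → true
	// The degenerate single-native case flagged above: trivially true.
	fmt.Println(confirmedByMajority(1, 1)) // true
}
```

The last call is the weakness named in the risks: with one configured native, one vote is always a "majority".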
---
### 3.5 Self-delegation and offload loop

**How it works**

If a native has no live indexer when handling `handleNativeGetIndexers`, it
designates itself as a temporary indexer (`selfDelegate`): it returns its own
multiaddr and adds the requester to `responsiblePeers`, up to a limit of
`maxFallbackPeers` (50). Beyond that, the delegation is refused and an empty
response is returned so the node tries another native.

Every 30 s, `runOffloadLoop` checks whether real indexers are available again.
If so, for each responsible peer:

- **Stream present**: `Reset()` on the heartbeat stream — the peer receives an
  error, triggers `replenishIndexersFromNative`, and migrates to real indexers.
- **Stream absent** (peer never admitted by scoring): `ClosePeer()` on the network
  connection — the peer reconnects and asks the native for indexers again.

**Advantages**

- Service continuity: a node is never blocked during a temporary lack of indexers.
- The migration is automatic and transparent to the node.
- `Reset()` (vs `Close()`) tears down both directions of the stream, guaranteeing
  the peer actually receives an error.
- The cap of 50 keeps the native from being overloaded during prolonged shortages.

**Limitations / risks**

- ✅ **Streamless offload fixed**: if the heartbeat had never been recorded in
  `StreamRecords` (score below threshold — a case amplified by the scoring bug),
  the offload failed silently and the peer stayed in `responsiblePeers` forever.
  → `else` branch added: `ClosePeer()` + removal from `responsiblePeers`.
- ✅ **Unbounded `responsiblePeers` fixed**: the native accepted an arbitrary number
  of self-delegated peers, itself becoming an overloaded indexer.
  → `selfDelegate` checks `len(responsiblePeers) >= maxFallbackPeers` and returns
  `false` when saturated.
- ⚠️ Delegation is still uncoordinated across natives: a saturated native refuses
  (returns empty) but does not explicitly redirect to a neighboring native that
  has spare capacity.
---
### 3.6 Native mesh resilience

**How it works**

When the heartbeat to a native fails, `replenishNativesFromPeers` tries to find a
replacement, in this order:

1. `fetchNativeFromNatives`: asks every live native (`/opencloud/native/peers/1.0`)
   for an unknown native address.
2. `fetchNativeFromIndexers`: asks every known indexer
   (`/opencloud/indexer/natives/1.0`) for its configured natives.
3. If no replacement is found and `remaining ≤ 1`: `retryLostNative` starts a 30 s
   ticker that keeps retrying a direct connection to the lost native.

`EnsureNativePeers` maintains native-to-native heartbeats via `ProtocolHeartbeat`,
with a **single goroutine** covering the whole `StaticNatives` map.

**Advantages**

- Multi-hop gossip through indexers makes it possible to rediscover a native even
  when no direct peer knows it.
- `retryLostNative` handles the single-native case (minimal deployment).
- The automatic reconnection (`retryLostNative`) also triggers
  `replenishIndexersIfNeeded` to restore the indexer pool.

**Limitations / risks**

- ✅ **Duplicate heartbeat goroutines fixed**: `EnsureNativePeers` started one
  `SendHeartbeat` goroutine per native address (N natives → N goroutines → N²
  heartbeats per tick). → `nativeMeshHeartbeatOnce` is now used: a single goroutine
  iterates over `StaticNatives`.
- ⚠️ `retryLostNative` runs forever with no stop condition tied to the process's
  lifetime (no `context.Context`). During a graceful shutdown this goroutine can
  block it.
- ⚠️ Transitive discovery (native → indexer → native) is one-way: an indexer only
  knows the natives from its own config, not natives that joined after it started.

---
### 3.7 Shared DHT

**How it works**

All indexers and natives participate in a Kademlia DHT (prefix `oc`, mode
`ModeServer`). Two namespaces are used:

- `/node/<DID>` → signed JSON `PeerRecord` (published by indexers on node heartbeats).
- `/indexer/<PeerID>` → JSON `liveIndexerEntry` with a TTL (published by natives).

Each native runs `refreshIndexersFromDHT` (every 30 s), which rehydrates its local
cache from the DHT for known PeerIDs (`knownPeerIDs`) whose local entry has expired.

**Advantages**

- Decentralized persistence: a record survives the loss of a single native or indexer.
- Entry validation: `PeerRecordValidator` and `IndexerRecordValidator` reject
  malformed or expired records at `PutValue` time.
- The secondary `/name/<name>` index allows resolution by human-readable name.

**Limitations / risks**

- ⚠️ The Kademlia DHT works over the private (PSK) network, but bootstrap nodes are
  not explicitly configured: discovery relies on already-established connections,
  which can slow convergence at startup.
- ⚠️ `PutValue` is retried in an infinite loop on `"failed to find any peer in table"` —
  a prolonged network outage produces blocked goroutines.
- ⚠️ If the PSK is compromised, an attacker can write to the DHT; indexer
  `liveIndexerEntry` records are unsigned, unlike `PeerRecord`s.
- ⚠️ `refreshIndexersFromDHT` prunes `knownPeerIDs` when the DHT has no fresh entry,
  but does not prune `liveIndexers` — an expired entry stays in memory until GC
  or the next refresh.
---
### 3.8 PubSub gossip (indexer registry)

**How it works**

When an indexer registers with a native, the native publishes the address on the
GossipSub topic `oc-indexer-registry`. The other subscribed natives update their
`knownPeerIDs` without waiting for the DHT.

The `TopicValidator` rejects any message whose content is not a valid, parseable
multiaddr before it reaches the processing loop.

**Advantages**

- Near-instant dissemination between connected natives.
- A useful complement to the DHT for recent registrations that have not yet been
  persisted.
- The syntactic filter blocks malformed messages before they propagate in the mesh.

**Limitations / risks**

- ✅ **No-op `TopicValidator` fixed**: the validator unconditionally accepted every
  message (`return true`), letting a compromised native gossip arbitrary data.
  → The validator now checks that the message is a parseable multiaddr
  (`pp.AddrInfoFromString`).
- ⚠️ Validation remains purely syntactic: the origin of the message (is the sender
  a legitimate native?) is not verified.
- ⚠️ If a native restarts, it loses its subscription and misses messages published
  while it was down. DHT rehydration compensates, but with a delay of up to 30 s.
- ⚠️ The gossip carries only the indexer's `Addr`, not its TTL or a signature.

---
### 3.9 Application streams (node ↔ node)

**How it works**

`StreamService` manages streams between partner nodes (`PARTNER` relations stored
in the database) over dedicated protocols (`/opencloud/resource/*`). A partner
heartbeat (`ProtocolHeartbeatPartner`) keeps connections alive. Events are routed
through `handleEvent` and, in parallel, the NATS system.

**Advantages**

- Per-protocol TTLs (`PersistantStream`, `WaitResponse`) tailor behavior to the
  kind of exchange (long-lived for the planner, short for CRUD calls).
- The GC (`gc()` every 8 s, started exactly once in `InitStream`) quickly frees
  expired streams.

**Limitations / risks**

- ✅ **GC goroutine leak fixed**: `HandlePartnerHeartbeat` called `go s.StartGC(30s)`
  on every received heartbeat (~every 20 s), spawning a new infinite ticker
  goroutine each time.
  → Call removed; the GC started by `InitStream` is sufficient.
- ✅ **Infinite loop on EOF fixed**: `readLoop` ran `s.Stream.Close(); continue`
  after a decode error, retrying forever to read a closed stream.
  → Replaced with `return`; the defers (`Close`, `delete`) clean up correctly.
- ⚠️ Partner retrieval from `conf.PeerIDS` is marked `TO REMOVE`: provisional code
  is present in production.

---
## 4. Summary table

| Mechanism | Protocol | Main advantage | Risk status |
|---|---|---|---|
| Heartbeat node→indexer | `/opencloud/heartbeat/1.0` | Fast loss detection | ⚠️ Frozen TCP stream undetected |
| Trust scoring | (inline in heartbeat) | Filters unstable peers | ✅ Deadlock fixed (40/75 thresholds) |
| Native registration | `/opencloud/native/subscribe/1.0` | Ample TTL, immediate cache | ✅ TTL raised to 90 s |
| Indexer pool fetch | `/opencloud/native/indexers/1.0` | Takes the first responding native | ⚠️ Native with a stale cache possible |
| Consensus | `/opencloud/native/consensus/1.0` | Absolute majority | ⚠️ Trivial with a single native |
| Self-delegation + offload | (in-memory) | Availability without indexers | ✅ 50-peer cap + ClosePeer |
| Native mesh | `/opencloud/native/peers/1.0` | Multi-hop gossip | ✅ Goroutines deduplicated |
| DHT | `/oc/kad/1.0.0` | Decentralized persistence | ⚠️ Infinite retry, no bootstrap |
| PubSub registry | `oc-indexer-registry` | Fast dissemination | ✅ Multiaddr validation |
| Application streams | `/opencloud/resource/*` | Per-protocol TTL | ✅ GC leak + EOF fixed |

---
## 5. Global risks and limitations

### Security

- ⚠️ **Unverified self-reported addresses**: the `IndexersBinded` field in the
  heartbeat is self-declared by the node and feeds the diversity score. A malicious
  peer can inflate its score by declaring fake addresses.
- ⚠️ **PSK as the only entry barrier**: if the PSK is compromised (it is static and
  file-based), all network isolation is gone. There is no key rotation and no
  additional per-peer authentication.
- ⚠️ **DHT without ACLs on indexer entries**: `PeerRecord` signatures are verified on
  read, but `liveIndexerEntry` records are unsigned. PubSub validation blocks
  invalid multiaddrs but not spoofed addresses of legitimate indexers.

### Availability

- ⚠️ **Native single point of failure**: with a single native, losing it halts all
  indexer assignment. `retryLostNative` mitigates this, but without indexers the
  nodes cannot publish.
- ⚠️ **DHT bootstrap**: without explicit bootstrap nodes, the DHT converges slowly
  when initial connections are few.

### Consistency

- ⚠️ **`replaceStaticIndexers` never evicts**: old dead indexers remain in
  `StaticIndexers` until their heartbeat fails. A node can hold an inflated pool
  containing unreachable entries.
- ⚠️ **Global `TimeWatcher`**: set once when `ConnectToIndexers` starts. If the
  indexer has been running for a long time, new nodes will have a durably low
  `uptime_ratio`. The threshold lowered to 40 for the first heartbeat softens the
  initial impact, but subsequent heartbeats still have to accumulate enough uptime.

---
## 6. Avenues for improvement

Avenues already implemented are marked ✅. Open avenues remain to be addressed.

### ✅ Scoring: dual threshold for new peers

~~Replace the binary threshold~~ — **Implemented**: threshold of 40 for the first
heartbeat (peer absent from `StreamRecords`), 75 for subsequent ones. A peer can
now be admitted on its first connection without being blocked by zero uptime.

_File: `common/common_stream.go`, `CheckHeartbeat`_

### ✅ Indexer TTL aligned with the renewal interval

~~66 s TTL too close to 60 s~~ — **Implemented**: `IndexerTTL` raised to **90 s**.

_File: `indexer/native.go`_

### ✅ Self-delegation cap

~~Unbounded `responsiblePeers`~~ — **Implemented**: `selfDelegate` returns `false`
when `len(responsiblePeers) >= maxFallbackPeers` (50). The call site returns an
empty response and logs a warning.

_File: `indexer/native.go`_

### ✅ PubSub validation of gossiped addresses

~~`TopicValidator` accepts everything~~ — **Implemented**: the validator checks that
the message is a parseable multiaddr via `pp.AddrInfoFromString`.

_File: `indexer/native.go`, `subscribeIndexerRegistry`_

### ✅ Deduplicated heartbeat goroutines in `EnsureNativePeers`

~~One goroutine per native address~~ — **Implemented**: `nativeMeshHeartbeatOnce`
guarantees that a single `SendHeartbeat` goroutine covers the whole `StaticNatives`
map.

_File: `common/native_stream.go`_

### ✅ GC goroutine leak in `HandlePartnerHeartbeat`

~~`go s.StartGC(30s)` on every heartbeat~~ — **Implemented**: call removed; the
`InitStream` GC is sufficient.

_File: `stream/service.go`_

### ✅ Infinite loop on EOF in `readLoop`

~~`continue` after `Stream.Close()`~~ — **Implemented**: replaced with `return` so
the defers clean up properly.

_File: `stream/service.go`_

---

### ⚠️ Pool fetch: query all natives in parallel

`fetchIndexersFromNative` stops at the first responding native. Querying all
natives in parallel and merging the lists (similarly to `clientSideConsensus`)
would prevent a native with a stale cache from supplying a suboptimal pool.

### ⚠️ Configurable consensus quorum

The confirmation threshold (`count*2 > total`) is hard-coded. Making it
configurable (e.g. `consensus_quorum: 0.67`) would allow tightening the rule on
deployments with 3+ natives without touching the code.
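A hedged sketch of what such a configurable quorum could look like. `consensus_quorum` is the hypothetical setting named above, and the function is illustrative, not existing project code:

```go
package main

import (
	"fmt"
	"math"
)

// confirmedWithQuorum generalizes the hard-coded strict-majority rule
// (count*2 > total, i.e. a quorum just above 0.5) to a configurable ratio.
func confirmedWithQuorum(aliveVotes, responders int, quorum float64) bool {
	if responders == 0 {
		return false
	}
	needed := int(math.Ceil(quorum * float64(responders)))
	return aliveVotes >= needed
}

func main() {
	// With quorum 0.67 and 5 responding natives, 4 votes are required.
	fmt.Println(confirmedWithQuorum(4, 5, 0.67)) // prints: true
	fmt.Println(confirmedWithQuorum(3, 5, 0.67)) // prints: false
}
```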
### ⚠️ Explicit deregistration

Add a `/opencloud/native/unsubscribe/1.0` protocol: when an indexer shuts down
cleanly, it notifies the natives so its TTL is invalidated immediately instead of
waiting 90 s.

### ⚠️ Explicit DHT bootstrap

Configure the natives as DHT bootstrap nodes via `dht.BootstrapPeers` to speed up
Kademlia convergence at startup.

### ⚠️ Context propagation in long-lived goroutines

`retryLostNative`, `refreshIndexersFromDHT`, and `runOffloadLoop` receive no
`context.Context`. Passing one down from `InitNative` would allow a clean stop
during process shutdown.

### ⚠️ Explicit redirection when self-delegation is refused

When a native refuses self-delegation (saturated pool), returning an empty
response forces the node to retry without telling it where to turn. Including a
list of alternative natives in the response (`AlternativeNatives []string`) would
let the node find a less-loaded native directly.
Dockerfile: 81 changed lines

`@@ -1,31 +1,62 @@`

Removed (old single-stage build):

```dockerfile
FROM golang:alpine as builder

WORKDIR /app
COPY . .
RUN apk add git
RUN go get github.com/beego/bee/v2 && go install github.com/beego/bee/v2@master
RUN timeout 15 bee run -gendoc=true -downdoc=true -runmode=dev || :
RUN sed -i 's/http:\/\/127.0.0.1:8080\/swagger\/swagger.json/swagger.json/g' swagger/index.html
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags="-w -s" .
RUN ls /app

FROM scratch
WORKDIR /app
COPY --from=builder /app/oc-discovery /usr/bin/
COPY --from=builder /app/swagger /app/swagger
COPY peers.json /app/
COPY identity.json /app/
COPY docker_discovery.json /etc/oc/discovery.json
EXPOSE 8080
ENTRYPOINT ["oc-discovery"]
```

Added (new multi-stage build):

```dockerfile
# ========================
# Global build arguments
# ========================
ARG CONF_NUM

# ========================
# Dependencies stage
# ========================
FROM golang:alpine AS deps
ARG CONF_NUM

WORKDIR /app
COPY go.mod go.sum ./
RUN sed -i '/replace/d' go.mod
RUN go mod download

# ========================
# Builder stage
# ========================
FROM golang:alpine AS builder
ARG CONF_NUM

WORKDIR /oc-discovery

# Reuse Go cache
COPY --from=deps /go/pkg /go/pkg
COPY --from=deps /app/go.mod /app/go.sum ./

# App sources
COPY . .
# Clean replace directives again (safety)
RUN sed -i '/replace/d' go.mod

# Build package
RUN go install github.com/beego/bee/v2@latest
RUN bee pack

# Extract bundle
RUN mkdir -p /app/extracted \
 && tar -zxvf oc-discovery.tar.gz -C /app/extracted

# ========================
# Runtime stage
# ========================
FROM golang:alpine
ARG CONF_NUM

WORKDIR /app

RUN mkdir ./pem

COPY --from=builder /app/extracted/pem/private${CONF_NUM:-1}.pem ./pem/private.pem
COPY --from=builder /app/extracted/psk ./psk
COPY --from=builder /app/extracted/pem/public${CONF_NUM:-1}.pem ./pem/public.pem

COPY --from=builder /app/extracted/oc-discovery /usr/bin/oc-discovery
COPY --from=builder /app/extracted/docker_discovery${CONF_NUM:-1}.json /etc/oc/discovery.json

EXPOSE 400${CONF_NUM:-1}

ENTRYPOINT ["oc-discovery"]
```
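The runtime stage selects its key pair, config file, and port with `${CONF_NUM:-1}`. Dockerfile variable substitution in `COPY` and `EXPOSE` mirrors the POSIX shell default-value form, so the behavior can be checked in a plain shell:

```shell
# ${VAR:-default}: use $CONF_NUM if it is set and non-empty, else fall back to 1.
CONF_NUM=2
echo "port 400${CONF_NUM:-1}"
unset CONF_NUM
echo "port 400${CONF_NUM:-1}"
```

So a build with `--build-arg CONF_NUM=2` picks `private2.pem`, `docker_discovery2.json`, and port 4002, while a build without the arg falls back to configuration 1.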
LICENSE: 9 changed lines

`@@ -1,9 +0,0 @@`

Removed (the previous MIT license text):

```
MIT License

Copyright (c) <year> <copyright holders>

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
```
LICENSE.md: 660 added lines (new file)

`@@ -0,0 +1,660 @@`

# GNU AFFERO GENERAL PUBLIC LICENSE

Version 3, 19 November 2007

Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>

Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

## Preamble

The GNU Affero General Public License is a free, copyleft license for software and other kinds of works, specifically designed to ensure cooperation with the community in the case of network server software.

The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, our General Public Licenses are intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users.

When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.

Developers that use our General Public Licenses protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License which gives you legal permission to copy, distribute and/or modify the software.

A secondary benefit of defending all users' freedom is that improvements made in alternate versions of the program, if they receive widespread use, become available for other developers to incorporate. Many developers of free software are heartened and encouraged by the resulting cooperation. However, in the case of software used on network servers, this result may fail to come about. The GNU General Public License permits making a modified version and letting the public access it on a server without ever releasing its source code to the public.

The GNU Affero General Public License is designed specifically to ensure that, in such cases, the modified source code becomes available to the community. It requires the operator of a network server to provide the source code of the modified version running there to the users of that server. Therefore, public use of a modified version, on a publicly accessible server, gives the public access to the source code of the modified version.

An older license, called the Affero General Public License and published by Affero, was designed to accomplish similar goals. This is a different license, not a version of the Affero GPL, but Affero has released a new version of the Affero GPL which permits relicensing under this license.

The precise terms and conditions for copying, distribution and modification follow.

## TERMS AND CONDITIONS

### 0. Definitions.

"This License" refers to version 3 of the GNU Affero General Public License.

"Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.

"The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations.

To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work.

A "covered work" means either the unmodified Program or a work based on the Program.

To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.

To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.

An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion.

### 1. Source Code.

The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work.

A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.

The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.

The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work.

The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.

The Corresponding Source for a work in source code form is that same work.

### 2. Basic Permissions.

All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.

You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.

Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary.

### 3. Protecting Users' Legal Rights From Anti-Circumvention Law.

No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.

When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures.

### 4. Conveying Verbatim Copies.

You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.

You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.

### 5. Conveying Modified Source Versions.

You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:

- a) The work must carry prominent notices stating that you modified it, and giving a relevant date.
- b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices".
- c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.
- d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.

A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.

### 6. Conveying Non-Source Forms.

You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:

- a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange.
- b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge.
- c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b.
- d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements.
- e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d.

A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.

A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product.

"Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.

If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).

The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.

Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.

### 7. Additional Terms.

"Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions.

When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission.

Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:

- a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or
- b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or
- c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or
- d) Limiting the use for publicity purposes of names of licensors or authors of the material; or
- e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or
- f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors.

All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.

If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.

Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way.

### 8. Termination.

You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).

However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.

Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.

Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10.

### 9. Acceptance Not Required for Having Copies.

You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so.

### 10. Automatic Licensing of Downstream Recipients.

Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License.

An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts.

You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.

### 11. Patents.

A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version".

A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.

Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.

In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.

If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.

If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties
|
||||||
|
receiving the covered work authorizing them to use, propagate, modify
|
||||||
|
or convey a specific copy of the covered work, then the patent license
|
||||||
|
you grant is automatically extended to all recipients of the covered
|
||||||
|
work and works based on it.
|
||||||
|
|
||||||
|
A patent license is "discriminatory" if it does not include within the
|
||||||
|
scope of its coverage, prohibits the exercise of, or is conditioned on
|
||||||
|
the non-exercise of one or more of the rights that are specifically
|
||||||
|
granted under this License. You may not convey a covered work if you
|
||||||
|
are a party to an arrangement with a third party that is in the
|
||||||
|
business of distributing software, under which you make payment to the
|
||||||
|
third party based on the extent of your activity of conveying the
|
||||||
|
work, and under which the third party grants, to any of the parties
|
||||||
|
who would receive the covered work from you, a discriminatory patent
|
||||||
|
license (a) in connection with copies of the covered work conveyed by
|
||||||
|
you (or copies made from those copies), or (b) primarily for and in
|
||||||
|
connection with specific products or compilations that contain the
|
||||||
|
covered work, unless you entered into that arrangement, or that patent
|
||||||
|
license was granted, prior to 28 March 2007.
|
||||||
|
|
||||||
|
Nothing in this License shall be construed as excluding or limiting
|
||||||
|
any implied license or other defenses to infringement that may
|
||||||
|
otherwise be available to you under applicable patent law.
|
||||||
|
|
||||||
|
### 12. No Surrender of Others' Freedom.
|
||||||
|
|
||||||
|
If conditions are imposed on you (whether by court order, agreement or
|
||||||
|
otherwise) that contradict the conditions of this License, they do not
|
||||||
|
excuse you from the conditions of this License. If you cannot convey a
|
||||||
|
covered work so as to satisfy simultaneously your obligations under
|
||||||
|
this License and any other pertinent obligations, then as a
|
||||||
|
consequence you may not convey it at all. For example, if you agree to
|
||||||
|
terms that obligate you to collect a royalty for further conveying
|
||||||
|
from those to whom you convey the Program, the only way you could
|
||||||
|
satisfy both those terms and this License would be to refrain entirely
|
||||||
|
from conveying the Program.
|
||||||
|
|
||||||
|
### 13. Remote Network Interaction; Use with the GNU General Public License.
|
||||||
|
|
||||||
|
Notwithstanding any other provision of this License, if you modify the
|
||||||
|
Program, your modified version must prominently offer all users
|
||||||
|
interacting with it remotely through a computer network (if your
|
||||||
|
version supports such interaction) an opportunity to receive the
|
||||||
|
Corresponding Source of your version by providing access to the
|
||||||
|
Corresponding Source from a network server at no charge, through some
|
||||||
|
standard or customary means of facilitating copying of software. This
|
||||||
|
Corresponding Source shall include the Corresponding Source for any
|
||||||
|
work covered by version 3 of the GNU General Public License that is
|
||||||
|
incorporated pursuant to the following paragraph.
|
||||||
|
|
||||||
|
Notwithstanding any other provision of this License, you have
|
||||||
|
permission to link or combine any covered work with a work licensed
|
||||||
|
under version 3 of the GNU General Public License into a single
|
||||||
|
combined work, and to convey the resulting work. The terms of this
|
||||||
|
License will continue to apply to the part which is the covered work,
|
||||||
|
but the work with which it is combined will remain governed by version
|
||||||
|
3 of the GNU General Public License.
|
||||||
|
|
||||||
|
### 14. Revised Versions of this License.
|
||||||
|
|
||||||
|
The Free Software Foundation may publish revised and/or new versions
|
||||||
|
of the GNU Affero General Public License from time to time. Such new
|
||||||
|
versions will be similar in spirit to the present version, but may
|
||||||
|
differ in detail to address new problems or concerns.
|
||||||
|
|
||||||
|
Each version is given a distinguishing version number. If the Program
|
||||||
|
specifies that a certain numbered version of the GNU Affero General
|
||||||
|
Public License "or any later version" applies to it, you have the
|
||||||
|
option of following the terms and conditions either of that numbered
|
||||||
|
version or of any later version published by the Free Software
|
||||||
|
Foundation. If the Program does not specify a version number of the
|
||||||
|
GNU Affero General Public License, you may choose any version ever
|
||||||
|
published by the Free Software Foundation.
|
||||||
|
|
||||||
|
If the Program specifies that a proxy can decide which future versions
|
||||||
|
of the GNU Affero General Public License can be used, that proxy's
|
||||||
|
public statement of acceptance of a version permanently authorizes you
|
||||||
|
to choose that version for the Program.
|
||||||
|
|
||||||
|
Later license versions may give you additional or different
|
||||||
|
permissions. However, no additional obligations are imposed on any
|
||||||
|
author or copyright holder as a result of your choosing to follow a
|
||||||
|
later version.
|
||||||
|
|
||||||
|
### 15. Disclaimer of Warranty.
|
||||||
|
|
||||||
|
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
|
||||||
|
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
|
||||||
|
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT
|
||||||
|
WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT
|
||||||
|
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||||
|
A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND
|
||||||
|
PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE
|
||||||
|
DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR
|
||||||
|
CORRECTION.
|
||||||
|
|
||||||
|
### 16. Limitation of Liability.
|
||||||
|
|
||||||
|
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
|
||||||
|
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR
|
||||||
|
CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
|
||||||
|
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES
|
||||||
|
ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT
|
||||||
|
NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR
|
||||||
|
LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM
|
||||||
|
TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER
|
||||||
|
PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
|
||||||
|
|
||||||
|
### 17. Interpretation of Sections 15 and 16.
|
||||||
|
|
||||||
|
If the disclaimer of warranty and limitation of liability provided
|
||||||
|
above cannot be given local legal effect according to their terms,
|
||||||
|
reviewing courts shall apply local law that most closely approximates
|
||||||
|
an absolute waiver of all civil liability in connection with the
|
||||||
|
Program, unless a warranty or assumption of liability accompanies a
|
||||||
|
copy of the Program in return for a fee.
|
||||||
|
|
||||||
|
END OF TERMS AND CONDITIONS
|
||||||
|
|
||||||
|
## How to Apply These Terms to Your New Programs
|
||||||
|
|
||||||
|
If you develop a new program, and you want it to be of the greatest
|
||||||
|
possible use to the public, the best way to achieve this is to make it
|
||||||
|
free software which everyone can redistribute and change under these
|
||||||
|
terms.
|
||||||
|
|
||||||
|
To do so, attach the following notices to the program. It is safest to
|
||||||
|
attach them to the start of each source file to most effectively state
|
||||||
|
the exclusion of warranty; and each file should have at least the
|
||||||
|
"copyright" line and a pointer to where the full notice is found.
|
||||||
|
|
||||||
|
<one line to give the program's name and a brief idea of what it does.>
|
||||||
|
Copyright (C) <year> <name of author>
|
||||||
|
|
||||||
|
This program is free software: you can redistribute it and/or modify
|
||||||
|
it under the terms of the GNU Affero General Public License as
|
||||||
|
published by the Free Software Foundation, either version 3 of the
|
||||||
|
License, or (at your option) any later version.
|
||||||
|
|
||||||
|
This program is distributed in the hope that it will be useful,
|
||||||
|
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||||
|
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||||
|
GNU Affero General Public License for more details.
|
||||||
|
|
||||||
|
You should have received a copy of the GNU Affero General Public License
|
||||||
|
along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||||
|
|
||||||
|
Also add information on how to contact you by electronic and paper
|
||||||
|
mail.
|
||||||
|
|
||||||
|
If your software can interact with users remotely through a computer
|
||||||
|
network, you should also make sure that it provides a way for users to
|
||||||
|
get its source. For example, if your program is a web application, its
|
||||||
|
interface could display a "Source" link that leads users to an archive
|
||||||
|
of the code. There are many ways you could offer source, and different
|
||||||
|
solutions will be better for different programs; see section 13 for
|
||||||
|
the specific requirements.
|
||||||
|
|
||||||
|
You should also get your employer (if you work as a programmer) or
|
||||||
|
school, if any, to sign a "copyright disclaimer" for the program, if
|
||||||
|
necessary. For more information on this, and how to apply and follow
|
||||||
|
the GNU AGPL, see <https://www.gnu.org/licenses/>.
|
||||||
26
Makefile
Normal file
@@ -0,0 +1,26 @@
.DEFAULT_GOAL := all

build: clean
	bee pack

run:
	./oc-discovery

clean:
	rm -rf oc-discovery

docker:
	DOCKER_BUILDKIT=1 docker build -t oc-discovery -f Dockerfile .
	docker tag oc-discovery opencloudregistry/oc-discovery:latest

publish-kind:
	kind load docker-image opencloudregistry/oc-discovery:latest --name opencloud

publish-registry:
	docker push opencloudregistry/oc-discovery:latest

all: docker publish-kind

ci: docker publish-registry

.PHONY: build run clean docker publish-kind publish-registry
31
README.md
@@ -14,3 +14,34 @@ If the default Swagger page is displayed instead of your API, change url in swagger
url: "swagger.json"

sequenceDiagram
    autonumber
    participant Dev as Developer / Owner
    participant IPFS as IPFS network
    participant CID as CID (file hash)
    participant Argo as Argo orchestrator
    participant CU as Compute Unit
    participant MinIO as MinIO storage

    %% 1. Adding the file to IPFS
    Dev->>IPFS: Encrypts and adds file (algo/dataset)
    IPFS-->>CID: Generates unique CID (file hash)
    Dev->>Dev: Stores CID for future reference

    %% 2. Orchestration by Argo
    Argo->>CID: Requests CID for job
    CID-->>Argo: Provides the file (verified via hash)

    %% 3. Execution on the Compute Unit
    Argo->>CU: Deploys job with fetched file
    CU->>CU: Verifies hash (CID) for integrity
    CU->>CU: Runs the algo on the dataset

    %% 4. Storing the results
    CU->>MinIO: Stores output (results) or logs
    CU->>IPFS: Optional: adds output to IPFS (new CID)

    %% 5. Verification and traceability
    Dev->>IPFS: Verifies output CID if needed
    CU->>Dev: Provides result and hash log
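The integrity step in the sequence diagram above (the compute unit re-verifying the file hash before running the job) reduces to recomputing a content digest and comparing it to the identifier it was given. A minimal standalone sketch of that comparison, using plain SHA-256 rather than a real multihash-based CID (the `digest` helper is illustrative, not repo code):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// digest returns the hex SHA-256 of the payload; a real CID wraps a
// multihash plus codec metadata, but the comparison logic is the same.
func digest(payload []byte) string {
	sum := sha256.Sum256(payload)
	return hex.EncodeToString(sum[:])
}

func main() {
	data := []byte("algo/dataset bytes")
	want := digest(data) // the "CID" recorded by the owner

	// The compute unit re-hashes what it actually fetched before executing.
	got := digest(data)
	fmt.Println(got == want) // prints "true"
}
```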
31
conf/config.go
Normal file
@@ -0,0 +1,31 @@
package conf

import "sync"

type Config struct {
	Name                   string
	Hostname               string
	PSKPath                string
	PublicKeyPath          string
	PrivateKeyPath         string
	NodeEndpointPort       int64
	IndexerAddresses       string
	NativeIndexerAddresses string // multiaddrs of native indexers, comma-separated; bypasses IndexerAddresses when set

	PeerIDS string // TO REMOVE

	NodeMode string

	MinIndexer int
	MaxIndexer int
}

var instance *Config
var once sync.Once

func GetConfig() *Config {
	once.Do(func() {
		instance = &Config{}
	})
	return instance
}
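`GetConfig` above relies on `sync.Once` so that concurrent callers all receive the same lazily created `*Config`. A standalone sketch of that singleton pattern (trimmed to a single field; not the repo's actual package):

```go
package main

import (
	"fmt"
	"sync"
)

// Config mirrors the shape used in conf/config.go; only Name is kept here.
type Config struct {
	Name string
}

var (
	instance *Config
	once     sync.Once
)

// GetConfig builds the singleton exactly once, even under concurrent
// callers; every call returns the same pointer.
func GetConfig() *Config {
	once.Do(func() {
		instance = &Config{}
	})
	return instance
}

func main() {
	a := GetConfig()
	a.Name = "oc-discovery"
	b := GetConfig()
	// Both calls return the same underlying struct.
	fmt.Println(a == b, b.Name) // prints "true oc-discovery"
}
```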
@@ -1,41 +0,0 @@
package controllers

import (
	"encoding/json"
	"oc-discovery/models"

	beego "github.com/beego/beego/v2/server/web"
)

// Operations about Identitys
type IdentityController struct {
	beego.Controller
}

// @Title CreateIdentity
// @Description create identitys
// @Param body body models.Identity true "body for identity content"
// @Success 200 {result} "ok" or error
// @Failure 403 body is empty
// @router / [post]
func (u *IdentityController) Post() {
	var identity models.Identity
	json.Unmarshal(u.Ctx.Input.RequestBody, &identity)
	err := models.UpdateIdentity(&identity)
	if err != nil {
		u.Data["json"] = err.Error()
	} else {
		u.Data["json"] = "ok"
	}
	u.ServeJSON()
}

// @Title Get
// @Description get Identity
// @Success 200 {object} models.Identity
// @router / [get]
func (u *IdentityController) GetAll() {
	identity := models.GetIdentity()
	u.Data["json"] = identity
	u.ServeJSON()
}
@@ -1,80 +0,0 @@
package controllers

import (
	"encoding/json"
	"oc-discovery/models"

	beego "github.com/beego/beego/v2/server/web"
)

// Operations about peer
type PeerController struct {
	beego.Controller
}

// @Title Create
// @Description create peers
// @Param body body []models.Peer true "The peer content"
// @Success 200 {string} models.Peer.Id
// @Failure 403 body is empty
// @router / [post]
func (o *PeerController) Post() {
	var ob []models.Peer
	json.Unmarshal(o.Ctx.Input.RequestBody, &ob)
	models.AddPeers(ob)
	o.Data["json"] = map[string]string{"Added": "OK"}
	o.ServeJSON()
}

// @Title Get
// @Description find peer by peerid
// @Param peerId path string true "the peerid you want to get"
// @Success 200 {peer} models.Peer
// @Failure 403 :peerId is empty
// @router /:peerId [get]
func (o *PeerController) Get() {
	peerId := o.Ctx.Input.Param(":peerId")

	peer, err := models.GetPeer(peerId)
	if err != nil {
		o.Data["json"] = err.Error()
	} else {
		o.Data["json"] = peer
	}

	o.ServeJSON()
}

// @Title Find
// @Description find peers with query
// @Param query path string true "the keywords you need"
// @Success 200 {peers} []models.Peer
// @Failure 403
// @router /find/:query [get]
func (o *PeerController) Find() {
	query := o.Ctx.Input.Param(":query")
	peers, err := models.FindPeers(query)
	if err != nil {
		o.Data["json"] = err.Error()
	} else {
		o.Data["json"] = peers
	}
	o.ServeJSON()
}

// @Title Delete
// @Description delete the peer
// @Param peerId path string true "The peerId you want to delete"
// @Success 200 {string} delete success!
// @Failure 403 peerId is empty
// @router /:peerId [delete]
func (o *PeerController) Delete() {
	peerId := o.Ctx.Input.Param(":peerId")
	err := models.Delete(peerId)
	if err != nil {
		o.Data["json"] = err.Error()
	} else {
		o.Data["json"] = "delete success!"
	}
	o.ServeJSON()
}
@@ -1,19 +0,0 @@
package controllers

import (
	beego "github.com/beego/beego/v2/server/web"
)

// VersionController operations for Version
type VersionController struct {
	beego.Controller
}

// @Title GetAll
// @Description get version
// @Success 200
// @router / [get]
func (c *VersionController) GetAll() {
	c.Data["json"] = map[string]string{"version": "1"}
	c.ServeJSON()
}
211
daemons/node/common/common_pubsub.go
Normal file
@@ -0,0 +1,211 @@
package common

import (
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"sync"
	"time"

	"cloud.o-forge.io/core/oc-lib/models/peer"
	"cloud.o-forge.io/core/oc-lib/tools"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
	"github.com/libp2p/go-libp2p/core/host"
	pp "github.com/libp2p/go-libp2p/core/peer"
)

type Event struct {
	Type string `json:"type"`
	From string `json:"from"` // peerID

	User string

	DataType  int64  `json:"datatype"`
	Timestamp int64  `json:"ts"`
	Payload   []byte `json:"payload"`
	Signature []byte `json:"sig"`
}

func NewEvent(name string, from string, dt *tools.DataType, user string, payload []byte) *Event {
	priv, err := tools.LoadKeyFromFilePrivate() // the node's private key
	if err != nil {
		return nil
	}
	evt := &Event{
		Type:      name,
		From:      from,
		User:      user,
		Timestamp: time.Now().UTC().Unix(),
		Payload:   payload,
	}
	if dt != nil {
		evt.DataType = int64(dt.EnumIndex())
	} else {
		evt.DataType = -1
	}

	body, _ := json.Marshal(evt)
	sig, _ := priv.Sign(body)
	evt.Signature = sig
	return evt
}

func (e *Event) RawEvent() *Event {
	return &Event{
		Type:      e.Type,
		From:      e.From,
		User:      e.User,
		DataType:  e.DataType,
		Timestamp: e.Timestamp,
		Payload:   e.Payload,
	}
}

func (e *Event) toRawByte() ([]byte, error) {
	return json.Marshal(e.RawEvent())
}

func (event *Event) Verify(p *peer.Peer) error {
	if p == nil {
		return errors.New("no peer found")
	}
	if p.Relation == peer.BLACKLIST { // if the peer is blacklisted, quit
		return errors.New("peer is blacklisted")
	}
	return event.VerifySignature(p.PublicKey)
}

func (event *Event) VerifySignature(pk string) error {
	pubKey, err := PubKeyFromString(pk) // parse the public key from its string form
	if err != nil {
		return errors.New("pubkey is malformed")
	}
	data, err := event.toRawByte() // serialize the raw event, excluding the signature
	if err != nil {
		return err
	}
	if ok, _ := pubKey.Verify(data, event.Signature); !ok { // then verify that this key signed the message
		return errors.New("check signature failed")
	}
	return nil
}

type TopicNodeActivityPub struct {
	NodeActivity int    `json:"node_activity"`
	Disposer     string `json:"disposer_address"`
	Name         string `json:"name"`
	DID          string `json:"did"` // real PEER ID
	PeerID       string `json:"peer_id"`
}

type LongLivedPubSubService struct {
	Host             host.Host
	LongLivedPubSubs map[string]*pubsub.Topic
	PubsubMu         sync.RWMutex
}

func NewLongLivedPubSubService(h host.Host) *LongLivedPubSubService {
	return &LongLivedPubSubService{
		Host:             h,
		LongLivedPubSubs: map[string]*pubsub.Topic{},
	}
}

func (s *LongLivedPubSubService) processEvent(
	ctx context.Context,
	p *peer.Peer,
	event *Event,
	topicName string, handler func(context.Context, string, *Event) error) error {
	if err := event.Verify(p); err != nil {
		return err
	}
	return handler(ctx, topicName, event)
}

const TopicPubSubNodeActivity = "oc-node-activity"
const TopicPubSubSearch = "oc-node-search"

func (s *LongLivedPubSubService) SubscribeToNodeActivity(ps *pubsub.PubSub, f *func(context.Context, TopicNodeActivityPub, string)) error {
	ps.RegisterTopicValidator(TopicPubSubNodeActivity, func(ctx context.Context, p pp.ID, m *pubsub.Message) bool {
		return true
	})
	if topic, err := ps.Join(TopicPubSubNodeActivity); err != nil {
		return err
	} else {
		s.PubsubMu.Lock()
		defer s.PubsubMu.Unlock()
		s.LongLivedPubSubs[TopicPubSubNodeActivity] = topic
	}
	if f != nil {
		return SubscribeEvents(s, context.Background(), TopicPubSubNodeActivity, -1, *f)
	}
	return nil
}

func (s *LongLivedPubSubService) SubscribeToSearch(ps *pubsub.PubSub, f *func(context.Context, Event, string)) error {
	ps.RegisterTopicValidator(TopicPubSubSearch, func(ctx context.Context, p pp.ID, m *pubsub.Message) bool {
		return true
	})
	if topic, err := ps.Join(TopicPubSubSearch); err != nil {
		return err
	} else {
		s.PubsubMu.Lock()
		defer s.PubsubMu.Unlock()
		s.LongLivedPubSubs[TopicPubSubSearch] = topic
	}
	if f != nil {
		return SubscribeEvents(s, context.Background(), TopicPubSubSearch, -1, *f)
	}
	return nil
}

func SubscribeEvents[T interface{}](s *LongLivedPubSubService,
	ctx context.Context, proto string, timeout int, f func(context.Context, T, string),
) error {
	if s.LongLivedPubSubs[proto] == nil {
		return errors.New("no protocol subscribed in pubsub")
	}
	topic := s.LongLivedPubSubs[proto]
	sub, err := topic.Subscribe() // then subscribe to it
	if err != nil {
		return err
	}
	// launch the loop waiting for results
	go waitResults(s, ctx, sub, proto, timeout, f)

	return nil
}

func waitResults[T interface{}](s *LongLivedPubSubService, ctx context.Context, sub *pubsub.Subscription, proto string, timeout int, f func(context.Context, T, string)) {
	for {
		// safely check whether we are still subscribed to the topic;
		// release the lock before breaking so the loop never exits while holding it
		s.PubsubMu.Lock()
		stillSubscribed := s.LongLivedPubSubs[proto] != nil
		s.PubsubMu.Unlock()
		if !stillSubscribed { // if not, kill the loop
			break
		}
		// still subscribed -> wait for the next message
		var cancel context.CancelFunc
		if timeout != -1 {
			ctx, cancel = context.WithTimeout(ctx, time.Duration(timeout)*time.Second)
		}
		msg, err := sub.Next(ctx)
		if cancel != nil {
			cancel() // release this iteration's timeout context instead of deferring in a loop
		}
		if err != nil {
			if errors.Is(err, context.DeadlineExceeded) {
				// timeout hit: no message before the deadline, kill the subscription
				s.PubsubMu.Lock()
				delete(s.LongLivedPubSubs, proto)
				s.PubsubMu.Unlock()
				return
			}
			continue
		}
		var evt T
		if err := json.Unmarshal(msg.Data, &evt); err != nil { // map to the event type
			continue
		}
		f(ctx, evt, fmt.Sprintf("%v", proto))
	}
}
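The generic `SubscribeEvents`/`waitResults` pair above decodes each pubsub message into the caller's type `T` with `json.Unmarshal`, silently skipping malformed payloads before invoking the handler. That decode-and-dispatch core can be sketched without libp2p (the `dispatch` helper and its message slice are illustrative, not repo code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// TopicNodeActivityPub mirrors the struct used on the node-activity topic.
type TopicNodeActivityPub struct {
	NodeActivity int    `json:"node_activity"`
	Name         string `json:"name"`
}

// dispatch mimics the loop body of waitResults: unmarshal each raw
// message into T, drop malformed ones, hand valid ones to the handler.
func dispatch[T any](raw [][]byte, f func(T)) int {
	handled := 0
	for _, msg := range raw {
		var evt T
		if err := json.Unmarshal(msg, &evt); err != nil {
			continue // malformed message: skip, keep the loop alive
		}
		f(evt)
		handled++
	}
	return handled
}

func main() {
	msgs := [][]byte{
		[]byte(`{"node_activity":1,"name":"node-a"}`),
		[]byte(`not json`),
	}
	n := dispatch(msgs, func(e TopicNodeActivityPub) {
		fmt.Println(e.Name, e.NodeActivity) // prints "node-a 1"
	})
	fmt.Println("handled:", n) // prints "handled: 1"
}
```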
840
daemons/node/common/common_stream.go
Normal file
840
daemons/node/common/common_stream.go
Normal file
@@ -0,0 +1,840 @@
|
|||||||
|
package common
|
||||||
|
|
||||||
|
import (
|
||||||
|
"context"
|
||||||
|
cr "crypto/rand"
|
||||||
|
"encoding/json"
|
||||||
|
"errors"
|
||||||
|
"fmt"
|
||||||
|
"io"
|
||||||
|
"math/rand"
|
||||||
|
"net"
|
||||||
|
"oc-discovery/conf"
|
||||||
|
"slices"
|
||||||
|
"strings"
|
||||||
|
"sync"
|
||||||
|
"time"
|
||||||
|
|
||||||
|
oclib "cloud.o-forge.io/core/oc-lib"
|
||||||
|
"github.com/libp2p/go-libp2p/core/host"
|
||||||
|
"github.com/libp2p/go-libp2p/core/network"
|
||||||
|
pp "github.com/libp2p/go-libp2p/core/peer"
|
||||||
|
"github.com/libp2p/go-libp2p/core/protocol"
|
||||||
|
)
|
||||||
|
|
||||||
|
type LongLivedStreamRecordedService[T interface{}] struct {
|
||||||
|
*LongLivedPubSubService
|
||||||
|
StreamRecords map[protocol.ID]map[pp.ID]*StreamRecord[T]
|
||||||
|
StreamMU sync.RWMutex
|
||||||
|
maxNodesConn int
|
||||||
|
// AfterHeartbeat is an optional hook called after each successful heartbeat update.
|
||||||
|
// The indexer sets it to republish the embedded signed record to the DHT.
|
||||||
|
AfterHeartbeat func(pid pp.ID)
|
||||||
|
// AfterDelete is called after gc() evicts an expired peer, outside the lock.
|
||||||
|
// name and did may be empty if the HeartbeatStream had no metadata.
|
||||||
|
AfterDelete func(pid pp.ID, name string, did string)
|
||||||
|
}
|
||||||
|
|
||||||
|
func NewStreamRecordedService[T interface{}](h host.Host, maxNodesConn int) *LongLivedStreamRecordedService[T] {
|
||||||
|
service := &LongLivedStreamRecordedService[T]{
|
||||||
|
LongLivedPubSubService: NewLongLivedPubSubService(h),
|
||||||
|
StreamRecords: map[protocol.ID]map[pp.ID]*StreamRecord[T]{},
|
||||||
|
maxNodesConn: maxNodesConn,
|
||||||
|
}
|
||||||
|
go service.StartGC(30 * time.Second)
|
||||||
|
// Garbage collection is needed on every Map of Long-Lived Stream... it may be a top level redesigned
|
||||||
|
go service.Snapshot(1 * time.Hour)
|
||||||
|
return service
|
||||||
|
}
|
||||||
|
|
||||||
|
func (ix *LongLivedStreamRecordedService[T]) StartGC(interval time.Duration) {
|
||||||
|
go func() {
|
||||||
|
t := time.NewTicker(interval)
|
||||||
|
defer t.Stop()
|
||||||
|
for range t.C {
|
||||||
|
ix.gc()
|
||||||
|
}
|
||||||
|
}()
|
||||||
|
}
|
||||||
|
|
||||||
|
func (ix *LongLivedStreamRecordedService[T]) gc() {
|
||||||
|
ix.StreamMU.Lock()
|
||||||
|
now := time.Now().UTC()
|
||||||
|
if ix.StreamRecords[ProtocolHeartbeat] == nil {
|
||||||
|
ix.StreamRecords[ProtocolHeartbeat] = map[pp.ID]*StreamRecord[T]{}
|
||||||
|
ix.StreamMU.Unlock()
|
||||||
|
return
|
||||||
|
}
|
||||||
|
streams := ix.StreamRecords[ProtocolHeartbeat]
|
||||||
|
fmt.Println(StaticNatives, StaticIndexers, streams)
|
||||||
|
|
||||||
|
type gcEntry struct {
|
||||||
|
pid pp.ID
|
||||||
|
name string
|
||||||
|
did string
|
||||||
|
}
|
||||||
|
var evicted []gcEntry
|
||||||
|
for pid, rec := range streams {
|
||||||
|
if now.After(rec.HeartbeatStream.Expiry) || now.Sub(rec.HeartbeatStream.UptimeTracker.LastSeen) > 2*rec.HeartbeatStream.Expiry.Sub(now) {
|
||||||
|
name, did := "", ""
|
||||||
|
if rec.HeartbeatStream != nil {
|
||||||
|
name = rec.HeartbeatStream.Name
|
||||||
|
did = rec.HeartbeatStream.DID
|
||||||
|
}
|
||||||
|
evicted = append(evicted, gcEntry{pid, name, did})
|
||||||
|
for _, sstreams := range ix.StreamRecords {
|
||||||
|
if sstreams[pid] != nil {
|
||||||
|
delete(sstreams, pid)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
ix.StreamMU.Unlock()
|
||||||
|
|
||||||
|
if ix.AfterDelete != nil {
|
||||||
|
for _, e := range evicted {
|
||||||
|
ix.AfterDelete(e.pid, e.name, e.did)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func (ix *LongLivedStreamRecordedService[T]) Snapshot(interval time.Duration) {
|
||||||
|
go func() {
|
||||||
|
logger := oclib.GetLogger()
|
||||||
|
t := time.NewTicker(interval)
|
||||||
|
defer t.Stop()
|
||||||
|
for range t.C {
|
||||||
|
infos := ix.snapshot()
|
||||||
|
for _, inf := range infos {
|
||||||
|
logger.Info().Msg(" -> " + inf.DID)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}()
|
||||||
|
}
|
||||||
|
|
||||||
// -------- Snapshot / Query --------
func (ix *LongLivedStreamRecordedService[T]) snapshot() []*StreamRecord[T] {
	ix.StreamMU.Lock()
	defer ix.StreamMU.Unlock()

	out := make([]*StreamRecord[T], 0, len(ix.StreamRecords))
	for _, streams := range ix.StreamRecords {
		for _, stream := range streams {
			out = append(out, stream)
		}
	}
	return out
}

func (ix *LongLivedStreamRecordedService[T]) HandleHeartbeat(s network.Stream) {
	logger := oclib.GetLogger()
	defer s.Close()
	dec := json.NewDecoder(s)
	for {
		ix.StreamMU.Lock()
		if ix.StreamRecords[ProtocolHeartbeat] == nil {
			ix.StreamRecords[ProtocolHeartbeat] = map[pp.ID]*StreamRecord[T]{}
		}
		streams := ix.StreamRecords[ProtocolHeartbeat]
		streamsAnonym := map[pp.ID]HeartBeatStreamed{}
		for k, v := range streams {
			streamsAnonym[k] = v
		}
		ix.StreamMU.Unlock()
		pid, hb, err := CheckHeartbeat(ix.Host, s, dec, streamsAnonym, &ix.StreamMU, ix.maxNodesConn)
		if err != nil {
			// Stream-level errors (EOF, reset, closed) mean the connection is
			// gone — exit so the goroutine doesn't spin forever on a dead stream.
			// "too many connections" is also stream-terminal since the stream
			// carries one session; score failures are retried on the same stream.
			if errors.Is(err, io.EOF) || errors.Is(err, io.ErrUnexpectedEOF) ||
				strings.Contains(err.Error(), "reset") ||
				strings.Contains(err.Error(), "closed") ||
				strings.Contains(err.Error(), "too many connections") {
				logger.Info().Err(err).Msg("heartbeat stream terminated, closing handler")
				return
			}
			logger.Warn().Err(err).Msg("heartbeat check failed, retrying on same stream")
			continue
		}
		ix.StreamMU.Lock()
		if rec, ok := streams[*pid]; ok {
			// Known peer: refresh the record but keep the existing uptime
			// tracker, otherwise FirstSeen (and therefore accumulated uptime)
			// would be reset on every heartbeat.
			rec.DID = hb.DID
			var prev *UptimeTracker
			if rec.HeartbeatStream != nil {
				prev = rec.HeartbeatStream.UptimeTracker
			}
			rec.HeartbeatStream = hb.Stream
			if prev != nil {
				prev.LastSeen = time.Now().UTC()
				rec.HeartbeatStream.UptimeTracker = prev
			} else {
				rec.HeartbeatStream.UptimeTracker = &UptimeTracker{
					FirstSeen: time.Now().UTC(),
					LastSeen:  time.Now().UTC(),
				}
			}
			logger.Info().Msg("node updated: " + pid.String())
		} else {
			hb.Stream.UptimeTracker = &UptimeTracker{
				FirstSeen: time.Now().UTC(),
				LastSeen:  time.Now().UTC(),
			}
			streams[*pid] = &StreamRecord[T]{
				DID:             hb.DID,
				HeartbeatStream: hb.Stream,
			}
			logger.Info().Msg("new node subscribed: " + pid.String())
		}
		ix.StreamMU.Unlock()
		// Let the indexer republish the embedded signed record to the DHT.
		if ix.AfterHeartbeat != nil {
			ix.AfterHeartbeat(*pid)
		}
	}
}

func CheckHeartbeat(h host.Host, s network.Stream, dec *json.Decoder, streams map[pp.ID]HeartBeatStreamed, lock *sync.RWMutex, maxNodes int) (*pp.ID, *Heartbeat, error) {
	if len(h.Network().Peers()) >= maxNodes {
		return nil, nil, fmt.Errorf("too many connections, try another indexer")
	}
	var hb Heartbeat
	if err := dec.Decode(&hb); err != nil {
		return nil, nil, err
	}
	_, bpms, _ := getBandwidthChallengeRate(h, s.Conn().RemotePeer(), MinPayloadChallenge+int(rand.Float64()*(MaxPayloadChallenge-MinPayloadChallenge)))
	pid, err := pp.Decode(hb.PeerID)
	if err != nil {
		return nil, nil, err
	}
	upTime := float64(0)
	isFirstHeartbeat := true
	lock.Lock()
	if rec, ok := streams[pid]; ok && rec.GetUptimeTracker() != nil {
		// Uptime ratio: hours seen alive over hours since this indexer started.
		upTime = rec.GetUptimeTracker().Uptime().Hours() / time.Since(TimeWatcher).Hours()
		isFirstHeartbeat = false
	}
	lock.Unlock()
	diversity := getDiversityRate(h, hb.IndexersBinded)
	fmt.Println(upTime, bpms, diversity)
	hb.ComputeIndexerScore(upTime, bpms, diversity)
	// First heartbeat: uptime is always 0, so the score cannot reach the
	// steady-state threshold. Use a lower admission threshold so new peers can
	// enter and start accumulating uptime. Subsequent heartbeats must meet the
	// full threshold once uptime is tracked.
	minScore := float64(50)
	if isFirstHeartbeat {
		minScore = 40
	}
	fmt.Println(hb.Score, minScore)
	if hb.Score < minScore {
		return nil, nil, errors.New("trust score too low")
	}
	// This becomes the long-lived bidirectional heartbeat stream.
	hb.Stream = &Stream{
		Name:   hb.Name,
		DID:    hb.DID,
		Stream: s,
		Expiry: time.Now().UTC().Add(2 * time.Minute),
	}
	return &pid, &hb, nil
}

func getDiversityRate(h host.Host, peers []string) float64 {
	peers, _ = checkPeers(h, peers)
	diverse := []string{}
	for _, p := range peers {
		ip, err := ExtractIP(p)
		if err != nil {
			fmt.Println("NO IP", p, err)
			continue
		}
		// Collapse the address to its /24 subnet.
		div := ip.Mask(net.CIDRMask(24, 32)).String()
		if !slices.Contains(diverse, div) {
			diverse = append(diverse, div)
		}
	}
	if len(diverse) == 0 || len(peers) == 0 {
		return 1
	}
	// Convert both operands before dividing: integer division would always
	// truncate the ratio to 0 or 1.
	return float64(len(diverse)) / float64(len(peers))
}

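The /24 diversity ratio used above can be sketched standalone. This is a minimal, hypothetical rendering: `diversityRatio` takes plain IP strings, whereas the real code extracts IPs from multiaddrs via `ExtractIP` and filters for live peers first.

```go
package main

import (
	"fmt"
	"net"
)

// diversityRatio returns the fraction of distinct /24 subnets among the peers.
func diversityRatio(ips []string) float64 {
	seen := map[string]struct{}{}
	for _, raw := range ips {
		ip := net.ParseIP(raw)
		if ip == nil {
			continue
		}
		// Collapse each address to its /24 subnet, mirroring CIDRMask(24, 32).
		seen[ip.Mask(net.CIDRMask(24, 32)).String()] = struct{}{}
	}
	if len(ips) == 0 || len(seen) == 0 {
		return 1
	}
	return float64(len(seen)) / float64(len(ips))
}

func main() {
	// Four peers spread over two /24 subnets → ratio 0.5.
	fmt.Println(diversityRatio([]string{"10.0.0.1", "10.0.0.2", "10.0.1.1", "10.0.1.9"})) // 0.5
}
```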
func checkPeers(h host.Host, peers []string) ([]string, []string) {
	concretePeer := []string{}
	ips := []string{}
	for _, p := range peers {
		ad, err := pp.AddrInfoFromString(p)
		if err != nil {
			continue
		}
		if PeerIsAlive(h, *ad) {
			concretePeer = append(concretePeer, p)
			if ip, err := ExtractIP(p); err == nil {
				ips = append(ips, ip.Mask(net.CIDRMask(24, 32)).String())
			}
		}
	}
	return concretePeer, ips
}

const MaxExpectedMbps = 100.0
const MinPayloadChallenge = 512
const MaxPayloadChallenge = 2048
const BaseRoundTrip = 400 * time.Millisecond

// getBandwidthChallengeRate opens a dedicated ProtocolBandwidthProbe stream to
// remotePeer, sends a random payload, reads the echo, and computes throughput.
// Using a separate stream avoids mixing binary data with the JSON heartbeat
// stream and ensures the echo handler is actually running on the remote side.
func getBandwidthChallengeRate(h host.Host, remotePeer pp.ID, payloadSize int) (bool, float64, error) {
	payload := make([]byte, payloadSize)
	if _, err := cr.Read(payload); err != nil {
		return false, 0, err
	}

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	s, err := h.NewStream(ctx, remotePeer, ProtocolBandwidthProbe)
	if err != nil {
		return false, 0, err
	}
	defer s.Reset()
	s.SetDeadline(time.Now().Add(10 * time.Second))
	start := time.Now()
	if _, err = s.Write(payload); err != nil {
		return false, 0, err
	}
	// Half-close the write side so the handler's io.Copy sees EOF and stops.
	s.CloseWrite()
	// Read the echo.
	response := make([]byte, payloadSize)
	if _, err = io.ReadFull(s, response); err != nil {
		return false, 0, err
	}

	duration := time.Since(start)
	maxRoundTrip := BaseRoundTrip + (time.Duration(payloadSize) * (100 * time.Millisecond))
	mbps := float64(payloadSize*8) / duration.Seconds() / 1e6
	if duration > maxRoundTrip || mbps < 5.0 {
		return false, mbps / MaxExpectedMbps, nil
	}
	return true, mbps / MaxExpectedMbps, nil
}

type UptimeTracker struct {
	FirstSeen time.Time
	LastSeen  time.Time
}

func (u *UptimeTracker) Uptime() time.Duration {
	return time.Since(u.FirstSeen)
}

func (u *UptimeTracker) IsEligible(min time.Duration) bool {
	return u.Uptime() >= min
}

type StreamRecord[T any] struct {
	DID             string
	HeartbeatStream *Stream
	Record          T
}

func (s *StreamRecord[T]) GetUptimeTracker() *UptimeTracker {
	if s.HeartbeatStream == nil {
		return nil
	}
	return s.HeartbeatStream.UptimeTracker
}

type Stream struct {
	Name          string `json:"name"`
	DID           string `json:"did"`
	Stream        network.Stream
	Expiry        time.Time `json:"expiry"`
	UptimeTracker *UptimeTracker
}

func (s *Stream) GetUptimeTracker() *UptimeTracker {
	return s.UptimeTracker
}

func NewStream[T any](s network.Stream, did string, record T) *Stream {
	return &Stream{
		DID:    did,
		Stream: s,
		Expiry: time.Now().UTC().Add(2 * time.Minute),
	}
}

type ProtocolStream map[protocol.ID]map[pp.ID]*Stream

func (ps ProtocolStream) Get(protocol protocol.ID) map[pp.ID]*Stream {
	if ps[protocol] == nil {
		ps[protocol] = map[pp.ID]*Stream{}
	}
	return ps[protocol]
}

func (ps ProtocolStream) Add(protocol protocol.ID, peerID *pp.ID, s *Stream) error {
	if ps[protocol] == nil {
		ps[protocol] = map[pp.ID]*Stream{}
	}
	if peerID != nil {
		if s == nil {
			return errors.New("unable to add stream: stream missing")
		}
		ps[protocol][*peerID] = s
	}
	return nil
}

// Delete removes one peer's stream for a protocol, or, when peerID is nil,
// closes and removes every stream registered for that protocol.
func (ps ProtocolStream) Delete(protocol protocol.ID, peerID *pp.ID) {
	if streams, ok := ps[protocol]; ok {
		if peerID != nil && streams[*peerID] != nil {
			streams[*peerID].Stream.Close()
			delete(streams, *peerID)
		} else {
			// Close only this protocol's streams; iterating all of ps here
			// would close streams we are not deleting.
			for _, v := range streams {
				v.Stream.Close()
			}
			delete(ps, protocol)
		}
	}
}

const (
	ProtocolPublish = "/opencloud/record/publish/1.0"
	ProtocolGet     = "/opencloud/record/get/1.0"
)

var TimeWatcher time.Time

var StaticIndexers = map[string]*pp.AddrInfo{}
var StreamMuIndexes sync.RWMutex
var StreamIndexers = ProtocolStream{}

// indexerHeartbeatNudge allows replenishIndexersFromNative to trigger an
// immediate heartbeat tick after adding new entries to StaticIndexers, without
// waiting up to 20s for the regular ticker. Buffered(1) so the sender never blocks.
var indexerHeartbeatNudge = make(chan struct{}, 1)

// NudgeIndexerHeartbeat signals the indexer heartbeat goroutine to fire immediately.
func NudgeIndexerHeartbeat() {
	select {
	case indexerHeartbeatNudge <- struct{}{}:
	default: // nudge already pending, skip
	}
}

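The buffered(1) nudge pattern above guarantees the sender never blocks and that repeated signals coalesce into at most one pending tick. A self-contained sketch (`nudgeOnce` is an illustrative stand-in for the `select`/`default` body of NudgeIndexerHeartbeat):

```go
package main

import "fmt"

// nudgeOnce reports whether the signal was enqueued; a false return means a
// nudge was already pending and the new one coalesced with it.
func nudgeOnce(ch chan struct{}) bool {
	select {
	case ch <- struct{}{}:
		return true
	default: // nudge already pending, skip
		return false
	}
}

func main() {
	nudge := make(chan struct{}, 1)
	fmt.Println(nudgeOnce(nudge)) // true: accepted
	fmt.Println(nudgeOnce(nudge)) // false: coalesced with the pending one
	fmt.Println(len(nudge))       // 1
}
```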
func ConnectToIndexers(h host.Host, minIndexer int, maxIndexer int, myPID pp.ID, recordFn ...func() json.RawMessage) error {
	TimeWatcher = time.Now().UTC()
	logger := oclib.GetLogger()

	// If native addresses are configured, get the indexer pool from the native
	// mesh, then start the long-lived heartbeat goroutine toward those indexers.
	if conf.GetConfig().NativeIndexerAddresses != "" {
		if err := ConnectToNatives(h, minIndexer, maxIndexer, myPID); err != nil {
			return err
		}
		// Step 2: start the long-lived heartbeat goroutine toward the indexer pool.
		// replaceStaticIndexers/replenishIndexersFromNative update the map in-place,
		// so this single goroutine follows all pool changes automatically.
		logger.Info().Msg("[native] step 2 — starting long-lived heartbeat to indexer pool")
		SendHeartbeat(context.Background(), ProtocolHeartbeat, conf.GetConfig().Name,
			h, StreamIndexers, StaticIndexers, &StreamMuIndexes, 20*time.Second, recordFn...)
		return nil
	}

	addresses := strings.Split(conf.GetConfig().IndexerAddresses, ",")
	if len(addresses) > maxIndexer {
		addresses = addresses[0:maxIndexer]
	}

	StreamMuIndexes.Lock()
	for _, indexerAddr := range addresses {
		ad, err := pp.AddrInfoFromString(indexerAddr)
		if err != nil {
			logger.Err(err)
			continue
		}
		StaticIndexers[indexerAddr] = ad
	}
	indexerCount := len(StaticIndexers)
	StreamMuIndexes.Unlock()

	// An indexer heartbeats the next indexer exactly as a node would.
	SendHeartbeat(context.Background(), ProtocolHeartbeat, conf.GetConfig().Name, h, StreamIndexers, StaticIndexers, &StreamMuIndexes, 20*time.Second, recordFn...)
	if indexerCount < minIndexer {
		return errors.New("running a node without indexers: it is going to be isolated")
	}
	return nil
}

func AddStreamProtocol(ctx *context.Context, protoS ProtocolStream, h host.Host, proto protocol.ID, id pp.ID, mypid pp.ID, force bool, onStreamCreated *func(network.Stream)) ProtocolStream {
	logger := oclib.GetLogger()
	if onStreamCreated == nil {
		f := func(s network.Stream) {
			protoS[proto][id] = &Stream{
				Stream: s,
				Expiry: time.Now().UTC().Add(2 * time.Minute),
			}
		}
		onStreamCreated = &f
	}
	f := *onStreamCreated
	if mypid > id || force {
		if ctx == nil {
			c := context.Background()
			ctx = &c
		}
		if protoS[proto] == nil {
			protoS[proto] = map[pp.ID]*Stream{}
		}

		if protoS[proto][id] != nil {
			protoS[proto][id].Expiry = time.Now().Add(2 * time.Minute)
		} else {
			logger.Info().Msg("new stream generated " + fmt.Sprintf("%v", proto) + " " + id.String())
			s, err := h.NewStream(*ctx, id, proto)
			if err != nil {
				panic(err.Error())
			}
			f(s)
		}
	}
	return protoS
}

type Heartbeat struct {
	Name           string   `json:"name"`
	Stream         *Stream  `json:"stream"`
	DID            string   `json:"did"`
	PeerID         string   `json:"peer_id"`
	Timestamp      int64    `json:"timestamp"`
	IndexersBinded []string `json:"indexers_binded"`
	Score          float64
	// Record carries a fresh signed PeerRecord (JSON) so the receiving indexer
	// can republish it to the DHT without an extra round-trip.
	// Only set by nodes (not by indexers heartbeating other indexers).
	Record json.RawMessage `json:"record,omitempty"`
}

// ComputeIndexerScore combines the uptime ratio, bandwidth ratio, and subnet
// diversity into a 0-100 score, weighted 0.3 / 0.3 / 0.4.
func (hb *Heartbeat) ComputeIndexerScore(uptimeHours float64, bpms float64, diversity float64) {
	hb.Score = ((0.3 * uptimeHours) +
		(0.3 * bpms) +
		(0.4 * diversity)) * 100
}

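A worked example of the score weights, to make the first-heartbeat admission threshold concrete. `indexerScore` is an illustrative copy of the ComputeIndexerScore formula; the bandwidth input here (5 Mbit/s over MaxExpectedMbps = 0.05) is an assumed value, not taken from the source:

```go
package main

import "fmt"

// indexerScore mirrors ComputeIndexerScore: weighted sum scaled to 0-100.
func indexerScore(uptime, bpms, diversity float64) float64 {
	return ((0.3 * uptime) + (0.3 * bpms) + (0.4 * diversity)) * 100
}

func main() {
	// First heartbeat: uptime ratio is 0. With a 5 Mbit/s probe
	// (bpms = 5/100 = 0.05) and perfect subnet diversity, the score is
	// (0 + 0.015 + 0.4) * 100 ≈ 41.5, which clears the first-heartbeat
	// threshold of 40 but not the steady-state threshold of 50.
	fmt.Println(indexerScore(0, 0.05, 1))
}
```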
|
type HeartbeatInfo []struct {
|
||||||
|
Info []byte `json:"info"`
|
||||||
|
}
|
||||||
|
|
||||||
|
const ProtocolHeartbeat = "/opencloud/heartbeat/1.0"
|
||||||
|
|
||||||
|
// ProtocolBandwidthProbe is a dedicated short-lived stream used exclusively
|
||||||
|
// for bandwidth/latency measurement. The handler echoes any bytes it receives.
|
||||||
|
// All nodes and indexers register this handler so peers can measure them.
|
||||||
|
const ProtocolBandwidthProbe = "/opencloud/probe/1.0"
|
||||||
|
|
||||||
|
// HandleBandwidthProbe echoes back everything written on the stream, then closes.
|
||||||
|
// It is registered by all participants so the measuring side (the heartbeat receiver)
|
||||||
|
// can open a dedicated probe stream and read the round-trip latency + throughput.
|
||||||
|
func HandleBandwidthProbe(s network.Stream) {
|
||||||
|
defer s.Close()
|
||||||
|
s.SetDeadline(time.Now().Add(10 * time.Second))
|
||||||
|
io.Copy(s, s) // echo every byte back to the sender
|
||||||
|
}
|
||||||
|
|
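The echo handler's `io.Copy(s, s)` loop can be exercised without libp2p by standing an in-memory `net.Pipe` in for the stream. One caveat of the sketch: `net.Pipe` has no half-close, so the client reads a fixed-size echo instead of relying on EOF the way getBandwidthChallengeRate does with CloseWrite:

```go
package main

import (
	"fmt"
	"io"
	"net"
)

// echoOnce writes payload to an echo server and returns what comes back.
func echoOnce(payload []byte) []byte {
	client, server := net.Pipe()
	go io.Copy(server, server) // echo loop, as HandleBandwidthProbe does
	client.Write(payload)
	echo := make([]byte, len(payload))
	io.ReadFull(client, echo)
	client.Close()
	return echo
}

func main() {
	fmt.Println(string(echoOnce([]byte("ping")))) // ping
}
```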
// SendHeartbeat starts a goroutine that sends periodic heartbeats to peers.
// recordFn, when provided, is called on each tick and its output is embedded in
// the heartbeat as a fresh signed PeerRecord so the receiving indexer can
// republish it to the DHT without an extra round-trip.
// Pass no recordFn (or nil) for indexer→indexer / native heartbeats.
func SendHeartbeat(ctx context.Context, proto protocol.ID, name string, h host.Host, ps ProtocolStream, peers map[string]*pp.AddrInfo, mu *sync.RWMutex, interval time.Duration, recordFn ...func() json.RawMessage) {
	logger := oclib.GetLogger()
	// isIndexerHB is true when this goroutine drives the indexer heartbeat;
	// isNativeHB is true when it drives the native heartbeat.
	isIndexerHB := mu == &StreamMuIndexes
	isNativeHB := mu == &StreamNativeMu
	var recFn func() json.RawMessage
	if len(recordFn) > 0 {
		recFn = recordFn[0]
	}
	go func() {
		logger.Info().Str("proto", string(proto)).Int("peers", len(peers)).Msg("heartbeat started")
		t := time.NewTicker(interval)
		defer t.Stop()

		// doTick sends one round of heartbeats to the current peer snapshot.
		doTick := func() {
			// Build the heartbeat payload — snapshot current indexer addresses.
			StreamMuIndexes.RLock()
			addrs := make([]string, 0, len(StaticIndexers))
			for addr := range StaticIndexers {
				addrs = append(addrs, addr)
			}
			StreamMuIndexes.RUnlock()
			hb := Heartbeat{
				Name:           name,
				PeerID:         h.ID().String(),
				Timestamp:      time.Now().UTC().Unix(),
				IndexersBinded: addrs,
			}
			if recFn != nil {
				hb.Record = recFn()
			}

			// Snapshot the peer list under a read lock so we don't hold the
			// write lock during network I/O.
			if mu != nil {
				mu.RLock()
			}
			snapshot := make([]*pp.AddrInfo, 0, len(peers))
			for _, ix := range peers {
				snapshot = append(snapshot, ix)
			}
			if mu != nil {
				mu.RUnlock()
			}

			for _, ix := range snapshot {
				wasConnected := h.Network().Connectedness(ix.ID) == network.Connected
				// interval is already a time.Duration; multiplying it by
				// time.Second again would inflate the send timeout enormously.
				if err := sendHeartbeat(ctx, h, proto, ix, hb, ps, interval); err != nil {
					// Step 3: heartbeat failed — remove from pool and trigger replenish.
					logger.Info().Str("peer", ix.ID.String()).Str("proto", string(proto)).Msg("[native] step 3 — heartbeat failed, removing peer from pool")

					// Remove the dead peer and clean up its stream.
					// mu already covers ps when isIndexerHB (same mutex), so one
					// lock acquisition is sufficient — no re-entrant double-lock.
					if mu != nil {
						mu.Lock()
					}
					if ps[proto] != nil {
						if s, ok := ps[proto][ix.ID]; ok {
							if s.Stream != nil {
								s.Stream.Close()
							}
							delete(ps[proto], ix.ID)
						}
					}
					lostAddr := ""
					for addr, ad := range peers {
						if ad.ID == ix.ID {
							lostAddr = addr
							delete(peers, addr)
							break
						}
					}
					need := conf.GetConfig().MinIndexer - len(peers)
					remaining := len(peers)
					if mu != nil {
						mu.Unlock()
					}
					logger.Info().Int("remaining", remaining).Int("min", conf.GetConfig().MinIndexer).Int("need", need).Msg("[native] step 3 — pool state after removal")

					// Step 4: ask the native for the missing indexer count.
					if isIndexerHB && conf.GetConfig().NativeIndexerAddresses != "" {
						if need < 1 {
							need = 1
						}
						logger.Info().Int("need", need).Msg("[native] step 3→4 — triggering replenish")
						go replenishIndexersFromNative(h, need)
					}

					// Native heartbeat failed — find a replacement native.
					// Case 1: if the dead native was also serving as an indexer,
					// evict it from StaticIndexers immediately without waiting
					// for the indexer HB tick.
					if isNativeHB {
						logger.Info().Str("addr", lostAddr).Msg("[native] step 3 — native heartbeat failed, triggering native replenish")
						if lostAddr != "" && conf.GetConfig().NativeIndexerAddresses != "" {
							StreamMuIndexes.Lock()
							if _, wasIndexer := StaticIndexers[lostAddr]; wasIndexer {
								delete(StaticIndexers, lostAddr)
								if s := StreamIndexers[ProtocolHeartbeat]; s != nil {
									if stream, ok := s[ix.ID]; ok {
										if stream.Stream != nil {
											stream.Stream.Close()
										}
										delete(s, ix.ID)
									}
								}
								idxNeed := conf.GetConfig().MinIndexer - len(StaticIndexers)
								StreamMuIndexes.Unlock()
								if idxNeed < 1 {
									idxNeed = 1
								}
								logger.Info().Str("addr", lostAddr).Msg("[native] dead native evicted from indexer pool, triggering replenish")
								go replenishIndexersFromNative(h, idxNeed)
							} else {
								StreamMuIndexes.Unlock()
							}
						}
						go replenishNativesFromPeers(h, lostAddr, proto)
					}
				} else {
					// Case 2: native-as-indexer reconnected after a restart.
					// If the peer was disconnected before this tick and the
					// heartbeat just succeeded (transparent reconnect), the
					// native may have restarted with blank state
					// (responsiblePeers empty). Evict it from StaticIndexers
					// and re-request an assignment so the native re-tracks us
					// properly and runOffloadLoop can eventually migrate us to
					// real indexers.
					if !wasConnected && isIndexerHB && conf.GetConfig().NativeIndexerAddresses != "" {
						StreamNativeMu.RLock()
						isNativeIndexer := false
						for _, ad := range StaticNatives {
							if ad.ID == ix.ID {
								isNativeIndexer = true
								break
							}
						}
						StreamNativeMu.RUnlock()
						if isNativeIndexer {
							if mu != nil {
								mu.Lock()
							}
							if ps[proto] != nil {
								if s, ok := ps[proto][ix.ID]; ok {
									if s.Stream != nil {
										s.Stream.Close()
									}
									delete(ps[proto], ix.ID)
								}
							}
							reconnectedAddr := ""
							for addr, ad := range peers {
								if ad.ID == ix.ID {
									reconnectedAddr = addr
									delete(peers, addr)
									break
								}
							}
							idxNeed := conf.GetConfig().MinIndexer - len(peers)
							if mu != nil {
								mu.Unlock()
							}
							if idxNeed < 1 {
								idxNeed = 1
							}
							logger.Info().Str("addr", reconnectedAddr).Str("peer", ix.ID.String()).Msg(
								"[native] native-as-indexer reconnected after restart — evicting and re-requesting assignment")
							go replenishIndexersFromNative(h, idxNeed)
						}
					}
					logger.Debug().Str("peer", ix.ID.String()).Str("proto", string(proto)).Msg("[native] step 2 — heartbeat sent ok")
				}
			}
		}

		for {
			select {
			case <-t.C:
				doTick()
			case <-indexerHeartbeatNudge:
				if isIndexerHB {
					logger.Info().Msg("[native] step 2 — nudge received, heartbeating new indexers immediately")
					doTick()
				}
			case <-nativeHeartbeatNudge:
				if isNativeHB {
					logger.Info().Msg("[native] native nudge received, heartbeating replacement native immediately")
					doTick()
				}
			case <-ctx.Done():
				return
			}
		}
	}()
}

type ProtocolInfo struct {
	PersistantStream bool
	WaitResponse     bool
	TTL              time.Duration
}

func TempStream(h host.Host, ad pp.AddrInfo, proto protocol.ID, did string, streams ProtocolStream, pts map[protocol.ID]*ProtocolInfo, mu *sync.RWMutex) (ProtocolStream, error) {
	expiry := 2 * time.Second
	if pts[proto] != nil {
		expiry = pts[proto].TTL
	}
	// Capture cancel so the timeout context is released on return.
	ctxTTL, cancel := context.WithTimeout(context.Background(), expiry)
	defer cancel()
	if h.Network().Connectedness(ad.ID) != network.Connected {
		if err := h.Connect(ctxTTL, ad); err != nil {
			return streams, err
		}
	}
	if streams[proto] != nil && streams[proto][ad.ID] != nil {
		return streams, nil
	} else if s, err := h.NewStream(ctxTTL, ad.ID, proto); err == nil {
		mu.Lock()
		if streams[proto] == nil {
			streams[proto] = map[pp.ID]*Stream{}
		}
		streams[proto][ad.ID] = &Stream{
			DID:    did,
			Stream: s,
			Expiry: time.Now().UTC().Add(expiry),
		}
		mu.Unlock()
		// Drop the entry once its TTL elapses.
		time.AfterFunc(expiry, func() {
			mu.Lock()
			delete(streams[proto], ad.ID)
			mu.Unlock()
		})
		return streams, nil
	} else {
		return streams, err
	}
}

func sendHeartbeat(ctx context.Context, h host.Host, proto protocol.ID, p *pp.AddrInfo,
	hb Heartbeat, ps ProtocolStream, interval time.Duration) error {
	logger := oclib.GetLogger()
	if ps[proto] == nil {
		ps[proto] = map[pp.ID]*Stream{}
	}
	streams := ps[proto]
	pss, exists := streams[p.ID]
	ctxTTL, cancel := context.WithTimeout(ctx, 3*interval)
	defer cancel()
	// Connect if needed.
	if h.Network().Connectedness(p.ID) != network.Connected {
		if err := h.Connect(ctxTTL, *p); err != nil {
			logger.Err(err)
			return err
		}
		exists = false // the stream will have to be recreated
	}
	// Create the stream if it does not exist or was closed.
	if !exists || pss.Stream == nil {
		logger.Info().Msg("new heartbeat stream engaged " + fmt.Sprintf("%v", proto) + " " + p.ID.String())
		s, err := h.NewStream(ctx, p.ID, proto)
		if err != nil {
			logger.Err(err)
			return err
		}
		pss = &Stream{
			Stream: s,
			Expiry: time.Now().UTC().Add(2 * time.Minute),
		}
		streams[p.ID] = pss
	}

	// Send the heartbeat.
	ss := json.NewEncoder(pss.Stream)
	if err := ss.Encode(&hb); err != nil {
		pss.Stream.Close()
		pss.Stream = nil // will be recreated on the next tick
		return err
	}
	pss.Expiry = time.Now().UTC().Add(2 * time.Minute)
	return nil
}

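The heartbeat stream is framed as newline-delimited JSON: the sender holds one `json.Encoder` per stream and the handler one `json.Decoder`, so many heartbeats flow over a single long-lived stream. A minimal sketch of that round-trip over an in-memory pipe standing in for the libp2p stream (`hb` and `roundTrip` are illustrative names, with only two of the real Heartbeat fields):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net"
)

type hb struct {
	Name      string `json:"name"`
	Timestamp int64  `json:"timestamp"`
}

// roundTrip encodes each heartbeat on one end of a pipe and decodes them on
// the other, as the sendHeartbeat / HandleHeartbeat pair does.
func roundTrip(msgs []hb) []hb {
	client, server := net.Pipe()
	go func() {
		enc := json.NewEncoder(client)
		for _, m := range msgs {
			enc.Encode(m) // one JSON document per heartbeat, newline-delimited
		}
		client.Close()
	}()
	dec := json.NewDecoder(server)
	out := []hb{}
	for {
		var m hb
		if err := dec.Decode(&m); err != nil {
			break // io.EOF once the sender closes
		}
		out = append(out, m)
	}
	return out
}

func main() {
	got := roundTrip([]hb{{Name: "node-a", Timestamp: 1}, {Name: "node-a", Timestamp: 2}})
	fmt.Println(len(got), got[1].Timestamp) // 2 2
}
```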
daemons/node/common/crypto.go (Normal file, 66 lines)
@@ -0,0 +1,66 @@
package common

import (
	"bytes"
	"encoding/base64"
	"errors"
	"oc-discovery/conf"
	"oc-discovery/models"
	"os"

	"cloud.o-forge.io/core/oc-lib/models/peer"
	"github.com/libp2p/go-libp2p/core/crypto"
	"github.com/libp2p/go-libp2p/core/pnet"
)

func VerifyPeer(peers []*peer.Peer, event models.Event) error {
	if len(peers) == 0 {
		return errors.New("no peer found")
	}
	p := peers[0]
	if p.Relation == peer.BLACKLIST { // if the peer is blacklisted, quit
		return errors.New("peer is blacklisted")
	}
	pubKey, err := PubKeyFromString(p.PublicKey) // parse the public key string
	if err != nil {
		return errors.New("pubkey is malformed")
	}
	// Extract the raw event bytes, excluding the signature.
	data, err := event.ToRawByte()
	if err != nil {
		return err
	}
	// Verify that this public key actually signed the message.
	if ok, _ := pubKey.Verify(data, event.Signature); !ok {
		return errors.New("check signature failed")
	}
	return nil
}

func Sign(priv crypto.PrivKey, data []byte) ([]byte, error) {
	return priv.Sign(data)
}

func Verify(pub crypto.PubKey, data, sig []byte) (bool, error) {
	return pub.Verify(data, sig)
}

func LoadPSKFromFile() (pnet.PSK, error) {
	path := conf.GetConfig().PSKPath
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	psk, err := pnet.DecodeV1PSK(bytes.NewReader(data))
	if err != nil {
		return nil, err
	}
	return psk, nil
}

func PubKeyFromString(s string) (crypto.PubKey, error) {
	data, err := base64.StdEncoding.DecodeString(s)
	if err != nil {
		return nil, err
	}
	return crypto.UnmarshalPublicKey(data)
}
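VerifyPeer's sign-then-verify flow wraps libp2p key types; the same pattern with the standard library's ed25519 looks as follows. This is an illustrative analogue, not the libp2p API (`signAndVerify` is a hypothetical helper):

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
)

// signAndVerify signs data with a fresh keypair and checks the signature,
// mirroring the Sign/Verify round-trip used for peer events.
func signAndVerify(data []byte) bool {
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		return false
	}
	sig := ed25519.Sign(priv, data)
	// As in VerifyPeer, the signed bytes must exclude the signature itself.
	return ed25519.Verify(pub, data, sig)
}

func main() {
	fmt.Println(signAndVerify([]byte("event-bytes-without-signature"))) // true
}
```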
daemons/node/common/interface.go (Normal file, 15 lines)
@@ -0,0 +1,15 @@
package common

import (
	"context"

	"cloud.o-forge.io/core/oc-lib/models/peer"
)

type HeartBeatStreamed interface {
	GetUptimeTracker() *UptimeTracker
}

type DiscoveryPeer interface {
	GetPeerRecord(ctx context.Context, key string) ([]*peer.Peer, error)
}
daemons/node/common/native_stream.go (Normal file, 777 lines)
@@ -0,0 +1,777 @@
package common

import (
	"context"
	"encoding/json"
	"errors"
	"math/rand"
	"strings"
	"sync"
	"time"

	"oc-discovery/conf"

	oclib "cloud.o-forge.io/core/oc-lib"
	"github.com/libp2p/go-libp2p/core/host"
	pp "github.com/libp2p/go-libp2p/core/peer"
	"github.com/libp2p/go-libp2p/core/protocol"
)

const (
	ProtocolNativeSubscription = "/opencloud/native/subscribe/1.0"
	ProtocolNativeGetIndexers  = "/opencloud/native/indexers/1.0"
	// ProtocolNativeConsensus is used by nodes/indexers to cross-validate an indexer
	// pool against all configured native peers.
	ProtocolNativeConsensus      = "/opencloud/native/consensus/1.0"
	RecommendedHeartbeatInterval = 60 * time.Second

	// TopicIndexerRegistry is the PubSub topic used by native indexers to gossip
	// newly registered indexer PeerIDs to neighbouring natives.
	TopicIndexerRegistry = "oc-indexer-registry"

	// consensusQueryTimeout is the per-native timeout for a consensus query.
	consensusQueryTimeout = 3 * time.Second
	// consensusCollectTimeout is the total wait for all native responses.
	consensusCollectTimeout = 4 * time.Second
)

// ConsensusRequest is sent by a node/indexer to a native to validate a candidate
// indexer list. The native replies with what it trusts and what it suggests instead.
type ConsensusRequest struct {
	Candidates []string `json:"candidates"`
}

// ConsensusResponse is returned by a native during a consensus challenge.
// Trusted = candidates the native considers alive.
// Suggestions = extras the native knows and trusts but that were not in the candidate list.
type ConsensusResponse struct {
	Trusted     []string `json:"trusted"`
	Suggestions []string `json:"suggestions,omitempty"`
}

// IndexerRegistration is sent by an indexer to a native to signal its alive state.
// Only Addr is required; PeerID is derived from it if omitted.
type IndexerRegistration struct {
	PeerID string `json:"peer_id,omitempty"`
	Addr   string `json:"addr"`
}

// GetIndexersRequest asks a native for a pool of live indexers.
type GetIndexersRequest struct {
	Count int    `json:"count"`
	From  string `json:"from"`
}

// GetIndexersResponse is returned by the native with live indexer multiaddrs.
type GetIndexersResponse struct {
	Indexers       []string `json:"indexers"`
	IsSelfFallback bool     `json:"is_self_fallback,omitempty"`
}
var StaticNatives = map[string]*pp.AddrInfo{}
var StreamNativeMu sync.RWMutex
var StreamNatives ProtocolStream = ProtocolStream{}

// nativeHeartbeatOnce ensures we start exactly one long-lived heartbeat goroutine
// toward the native mesh, even when ConnectToNatives is called from recovery paths.
var nativeHeartbeatOnce sync.Once

// nativeMeshHeartbeatOnce guards the native-to-native heartbeat goroutine started
// by EnsureNativePeers so only one goroutine covers the whole StaticNatives map.
var nativeMeshHeartbeatOnce sync.Once

// ConnectToNatives is the initial setup for nodes/indexers in native mode:
//  1. Parses native addresses → StaticNatives.
//  2. Starts a single long-lived heartbeat goroutine toward the native mesh.
//  3. Fetches an initial indexer pool from the first responsive native.
//  4. Runs consensus when real (non-fallback) indexers are returned.
//  5. Replaces StaticIndexers with the confirmed pool.
func ConnectToNatives(h host.Host, minIndexer int, maxIndexer int, myPID pp.ID) error {
	logger := oclib.GetLogger()
	logger.Info().Msg("[native] step 1 — parsing native addresses")

	// Parse native addresses — safe to call multiple times.
	StreamNativeMu.Lock()
	orderedAddrs := []string{}
	for _, addr := range strings.Split(conf.GetConfig().NativeIndexerAddresses, ",") {
		addr = strings.TrimSpace(addr)
		if addr == "" {
			continue
		}
		ad, err := pp.AddrInfoFromString(addr)
		if err != nil {
			logger.Err(err).Msg("[native] step 1 — invalid native addr")
			continue
		}
		StaticNatives[addr] = ad
		orderedAddrs = append(orderedAddrs, addr)
		logger.Info().Str("addr", addr).Msg("[native] step 1 — native registered")
	}
	if len(StaticNatives) == 0 {
		StreamNativeMu.Unlock()
		return errors.New("no valid native addresses configured")
	}
	StreamNativeMu.Unlock()
	logger.Info().Int("count", len(orderedAddrs)).Msg("[native] step 1 — natives parsed")

	// Step 1: one long-lived heartbeat to each native.
	nativeHeartbeatOnce.Do(func() {
		logger.Info().Msg("[native] step 1 — starting long-lived heartbeat to native mesh")
		SendHeartbeat(context.Background(), ProtocolHeartbeat,
			conf.GetConfig().Name, h, StreamNatives, StaticNatives, &StreamNativeMu, 20*time.Second)
	})

	// Fetch initial pool from the first responsive native.
	logger.Info().Int("want", maxIndexer).Msg("[native] step 1 — fetching indexer pool from native")
	candidates, isFallback := fetchIndexersFromNative(h, orderedAddrs, maxIndexer)
	if len(candidates) == 0 {
		logger.Warn().Msg("[native] step 1 — no candidates returned by any native")
		if minIndexer > 0 {
			return errors.New("ConnectToNatives: no indexers available from any native")
		}
		return nil
	}
	logger.Info().Int("candidates", len(candidates)).Bool("fallback", isFallback).Msg("[native] step 1 — pool received")

	// Step 2: populate StaticIndexers — consensus for real indexers, direct for fallback.
	pool := resolvePool(h, candidates, isFallback, maxIndexer)
	replaceStaticIndexers(pool)

	StreamMuIndexes.RLock()
	indexerCount := len(StaticIndexers)
	StreamMuIndexes.RUnlock()
	logger.Info().Int("pool_size", indexerCount).Msg("[native] step 2 — StaticIndexers replaced")

	if minIndexer > 0 && indexerCount < minIndexer {
		return errors.New("not enough majority-confirmed indexers available")
	}
	return nil
}
// replenishIndexersFromNative is called when an indexer heartbeat fails (step 3→4).
// It asks the native for exactly `need` replacement indexers, runs consensus when
// real indexers are returned, and adds the results to StaticIndexers without
// clearing the existing pool.
func replenishIndexersFromNative(h host.Host, need int) {
	if need <= 0 {
		return
	}
	logger := oclib.GetLogger()
	logger.Info().Int("need", need).Msg("[native] step 4 — replenishing indexer pool from native")

	StreamNativeMu.RLock()
	addrs := make([]string, 0, len(StaticNatives))
	for addr := range StaticNatives {
		addrs = append(addrs, addr)
	}
	StreamNativeMu.RUnlock()

	candidates, isFallback := fetchIndexersFromNative(h, addrs, need)
	if len(candidates) == 0 {
		logger.Warn().Msg("[native] step 4 — no candidates returned by any native")
		return
	}
	logger.Info().Int("candidates", len(candidates)).Bool("fallback", isFallback).Msg("[native] step 4 — candidates received")

	pool := resolvePool(h, candidates, isFallback, need)
	if len(pool) == 0 {
		logger.Warn().Msg("[native] step 4 — consensus yielded no confirmed indexers")
		return
	}

	// Add new indexers to the pool — do NOT clear existing ones.
	StreamMuIndexes.Lock()
	for addr, ad := range pool {
		StaticIndexers[addr] = ad
	}
	total := len(StaticIndexers)
	StreamMuIndexes.Unlock()
	logger.Info().Int("added", len(pool)).Int("total", total).Msg("[native] step 4 — pool replenished")

	// Nudge the heartbeat goroutine to connect immediately instead of waiting
	// for the next 20s tick.
	NudgeIndexerHeartbeat()
	logger.Info().Msg("[native] step 4 — heartbeat goroutine nudged")
}

// fetchIndexersFromNative opens a ProtocolNativeGetIndexers stream to the first
// responsive native and returns the candidate list and fallback flag.
func fetchIndexersFromNative(h host.Host, nativeAddrs []string, count int) (candidates []string, isFallback bool) {
	logger := oclib.GetLogger()
	for _, addr := range nativeAddrs {
		ad, err := pp.AddrInfoFromString(addr)
		if err != nil {
			logger.Warn().Str("addr", addr).Msg("[native] fetch — skipping invalid addr")
			continue
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		if err := h.Connect(ctx, *ad); err != nil {
			cancel()
			logger.Warn().Str("addr", addr).Err(err).Msg("[native] fetch — connect failed")
			continue
		}
		s, err := h.NewStream(ctx, ad.ID, ProtocolNativeGetIndexers)
		cancel()
		if err != nil {
			logger.Warn().Str("addr", addr).Err(err).Msg("[native] fetch — stream open failed")
			continue
		}
		req := GetIndexersRequest{Count: count, From: h.ID().String()}
		if encErr := json.NewEncoder(s).Encode(req); encErr != nil {
			s.Close()
			logger.Warn().Str("addr", addr).Err(encErr).Msg("[native] fetch — encode request failed")
			continue
		}
		var resp GetIndexersResponse
		if decErr := json.NewDecoder(s).Decode(&resp); decErr != nil {
			s.Close()
			logger.Warn().Str("addr", addr).Err(decErr).Msg("[native] fetch — decode response failed")
			continue
		}
		s.Close()
		logger.Info().Str("native", addr).Int("indexers", len(resp.Indexers)).Bool("fallback", resp.IsSelfFallback).Msg("[native] fetch — response received")
		return resp.Indexers, resp.IsSelfFallback
	}
	logger.Warn().Msg("[native] fetch — no native responded")
	return nil, false
}
// resolvePool converts a candidate list to a validated addr→AddrInfo map.
// When isFallback is true the native itself is the indexer — no consensus needed.
// When isFallback is false, consensus is run before accepting the candidates.
func resolvePool(h host.Host, candidates []string, isFallback bool, maxIndexer int) map[string]*pp.AddrInfo {
	logger := oclib.GetLogger()
	if isFallback {
		logger.Info().Strs("addrs", candidates).Msg("[native] resolve — fallback mode, skipping consensus")
		pool := make(map[string]*pp.AddrInfo, len(candidates))
		for _, addr := range candidates {
			ad, err := pp.AddrInfoFromString(addr)
			if err != nil {
				continue
			}
			pool[addr] = ad
		}
		return pool
	}

	// Round 1.
	logger.Info().Int("candidates", len(candidates)).Msg("[native] resolve — consensus round 1")
	confirmed, suggestions := clientSideConsensus(h, candidates)
	logger.Info().Int("confirmed", len(confirmed)).Int("suggestions", len(suggestions)).Msg("[native] resolve — consensus round 1 done")

	// Round 2: fill gaps from suggestions if below target.
	if len(confirmed) < maxIndexer && len(suggestions) > 0 {
		rand.Shuffle(len(suggestions), func(i, j int) { suggestions[i], suggestions[j] = suggestions[j], suggestions[i] })
		gap := maxIndexer - len(confirmed)
		if gap > len(suggestions) {
			gap = len(suggestions)
		}
		logger.Info().Int("gap", gap).Msg("[native] resolve — consensus round 2 (filling gaps)")
		confirmed2, _ := clientSideConsensus(h, append(confirmed, suggestions[:gap]...))
		if len(confirmed2) > 0 {
			confirmed = confirmed2
		}
		logger.Info().Int("confirmed", len(confirmed)).Msg("[native] resolve — consensus round 2 done")
	}

	pool := make(map[string]*pp.AddrInfo, len(confirmed))
	for _, addr := range confirmed {
		ad, err := pp.AddrInfoFromString(addr)
		if err != nil {
			continue
		}
		pool[addr] = ad
	}
	logger.Info().Int("pool_size", len(pool)).Msg("[native] resolve — pool ready")
	return pool
}
// replaceStaticIndexers merges the confirmed pool into StaticIndexers under the
// write lock. Entries absent from next are kept as-is; callers that need a full
// replacement must clear the map first.
func replaceStaticIndexers(next map[string]*pp.AddrInfo) {
	StreamMuIndexes.Lock()
	defer StreamMuIndexes.Unlock()
	for addr, ad := range next {
		StaticIndexers[addr] = ad
	}
}

// clientSideConsensus challenges a candidate list to ALL configured native peers
// in parallel. Each native replies with the candidates it trusts plus extras it
// recommends. An indexer is confirmed when strictly more than 50% of responding
// natives trust it.
func clientSideConsensus(h host.Host, candidates []string) (confirmed []string, suggestions []string) {
	if len(candidates) == 0 {
		return nil, nil
	}

	StreamNativeMu.RLock()
	peers := make([]*pp.AddrInfo, 0, len(StaticNatives))
	for _, ad := range StaticNatives {
		peers = append(peers, ad)
	}
	StreamNativeMu.RUnlock()

	if len(peers) == 0 {
		return candidates, nil
	}

	type nativeResult struct {
		trusted     []string
		suggestions []string
		responded   bool
	}
	ch := make(chan nativeResult, len(peers))

	for _, ad := range peers {
		go func(ad *pp.AddrInfo) {
			ctx, cancel := context.WithTimeout(context.Background(), consensusQueryTimeout)
			defer cancel()
			if err := h.Connect(ctx, *ad); err != nil {
				ch <- nativeResult{}
				return
			}
			s, err := h.NewStream(ctx, ad.ID, ProtocolNativeConsensus)
			if err != nil {
				ch <- nativeResult{}
				return
			}
			defer s.Close()
			if err := json.NewEncoder(s).Encode(ConsensusRequest{Candidates: candidates}); err != nil {
				ch <- nativeResult{}
				return
			}
			var resp ConsensusResponse
			if err := json.NewDecoder(s).Decode(&resp); err != nil {
				ch <- nativeResult{}
				return
			}
			ch <- nativeResult{trusted: resp.Trusted, suggestions: resp.Suggestions, responded: true}
		}(ad)
	}

	timer := time.NewTimer(consensusCollectTimeout)
	defer timer.Stop()

	trustedCounts := map[string]int{}
	suggestionPool := map[string]struct{}{}
	total := 0
	collected := 0

collect:
	for collected < len(peers) {
		select {
		case r := <-ch:
			collected++
			if !r.responded {
				continue
			}
			total++
			seen := map[string]struct{}{}
			for _, addr := range r.trusted {
				if _, already := seen[addr]; !already {
					trustedCounts[addr]++
					seen[addr] = struct{}{}
				}
			}
			for _, addr := range r.suggestions {
				suggestionPool[addr] = struct{}{}
			}
		case <-timer.C:
			break collect
		}
	}

	if total == 0 {
		return candidates, nil
	}

	confirmedSet := map[string]struct{}{}
	for addr, count := range trustedCounts {
		if count*2 > total {
			confirmed = append(confirmed, addr)
			confirmedSet[addr] = struct{}{}
		}
	}
	for addr := range suggestionPool {
		if _, ok := confirmedSet[addr]; !ok {
			suggestions = append(suggestions, addr)
		}
	}
	return
}
// RegisterWithNative sends a one-shot registration to each configured native indexer.
// Should be called periodically every RecommendedHeartbeatInterval.
func RegisterWithNative(h host.Host, nativeAddressesStr string) {
	logger := oclib.GetLogger()
	myAddr := ""
	// Guard against an empty address list before indexing the last element.
	if addrs := h.Addrs(); len(addrs) > 0 && !strings.Contains(addrs[len(addrs)-1].String(), "127.0.0.1") {
		myAddr = addrs[len(addrs)-1].String() + "/p2p/" + h.ID().String()
	}
	if myAddr == "" {
		logger.Warn().Msg("RegisterWithNative: no routable address yet, skipping")
		return
	}
	reg := IndexerRegistration{
		PeerID: h.ID().String(),
		Addr:   myAddr,
	}
	for _, addr := range strings.Split(nativeAddressesStr, ",") {
		addr = strings.TrimSpace(addr)
		if addr == "" {
			continue
		}
		ad, err := pp.AddrInfoFromString(addr)
		if err != nil {
			logger.Err(err).Msg("RegisterWithNative: invalid addr")
			continue
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		if err := h.Connect(ctx, *ad); err != nil {
			cancel()
			continue
		}
		s, err := h.NewStream(ctx, ad.ID, ProtocolNativeSubscription)
		cancel()
		if err != nil {
			logger.Err(err).Msg("RegisterWithNative: stream open failed")
			continue
		}
		if err := json.NewEncoder(s).Encode(reg); err != nil {
			logger.Err(err).Msg("RegisterWithNative: encode failed")
		}
		s.Close()
	}
}

// EnsureNativePeers populates StaticNatives from config and starts a single
// heartbeat goroutine toward the native mesh. Safe to call multiple times;
// the heartbeat goroutine is started at most once (nativeMeshHeartbeatOnce).
func EnsureNativePeers(h host.Host) {
	logger := oclib.GetLogger()
	nativeAddrs := conf.GetConfig().NativeIndexerAddresses
	if nativeAddrs == "" {
		return
	}
	StreamNativeMu.Lock()
	for _, addr := range strings.Split(nativeAddrs, ",") {
		addr = strings.TrimSpace(addr)
		if addr == "" {
			continue
		}
		ad, err := pp.AddrInfoFromString(addr)
		if err != nil {
			continue
		}
		StaticNatives[addr] = ad
		logger.Info().Str("addr", addr).Msg("native: registered peer in native mesh")
	}
	StreamNativeMu.Unlock()
	// One heartbeat goroutine iterates over all of StaticNatives on each tick;
	// starting one per address would multiply heartbeats by the native count.
	nativeMeshHeartbeatOnce.Do(func() {
		logger.Info().Msg("native: starting mesh heartbeat goroutine")
		SendHeartbeat(context.Background(), ProtocolHeartbeat,
			conf.GetConfig().Name, h, StreamNatives, StaticNatives, &StreamNativeMu, 20*time.Second)
	})
}

func StartNativeRegistration(h host.Host, nativeAddressesStr string) {
	go func() {
		// Poll until a routable (non-loopback) address is available before the first
		// registration attempt. libp2p may not have discovered external addresses yet
		// at startup. Cap at 12 retries (~1 minute) so we don't spin indefinitely.
		for i := 0; i < 12; i++ {
			if addrs := h.Addrs(); len(addrs) > 0 && !strings.Contains(addrs[len(addrs)-1].String(), "127.0.0.1") {
				break // routable address found
			}
			time.Sleep(5 * time.Second)
		}
		RegisterWithNative(h, nativeAddressesStr)
		t := time.NewTicker(RecommendedHeartbeatInterval)
		defer t.Stop()
		for range t.C {
			RegisterWithNative(h, nativeAddressesStr)
		}
	}()
}
// ── Lost-native replacement ───────────────────────────────────────────────────

const (
	// ProtocolNativeGetPeers lets a node/indexer ask a native for a random
	// selection of that native's own native contacts (to replace a dead native).
	ProtocolNativeGetPeers = "/opencloud/native/peers/1.0"
	// ProtocolIndexerGetNatives lets nodes/indexers ask a connected indexer for
	// its configured native addresses (fallback when no alive native responds).
	ProtocolIndexerGetNatives = "/opencloud/indexer/natives/1.0"
	// retryNativeInterval is how often retryLostNative polls a dead native.
	retryNativeInterval = 30 * time.Second
)

// GetNativePeersRequest is sent to a native to ask for its known native contacts.
type GetNativePeersRequest struct {
	Exclude []string `json:"exclude"`
	Count   int      `json:"count"`
}

// GetNativePeersResponse carries native addresses returned by a native's peer list.
type GetNativePeersResponse struct {
	Peers []string `json:"peers"`
}

// GetIndexerNativesRequest is sent to an indexer to ask for its configured native addresses.
type GetIndexerNativesRequest struct {
	Exclude []string `json:"exclude"`
}

// GetIndexerNativesResponse carries native addresses returned by an indexer.
type GetIndexerNativesResponse struct {
	Natives []string `json:"natives"`
}

// nativeHeartbeatNudge allows replenishNativesFromPeers to trigger an immediate
// native heartbeat tick after adding a replacement native to the pool.
var nativeHeartbeatNudge = make(chan struct{}, 1)

// NudgeNativeHeartbeat signals the native heartbeat goroutine to fire immediately.
func NudgeNativeHeartbeat() {
	select {
	case nativeHeartbeatNudge <- struct{}{}:
	default: // nudge already pending, skip
	}
}
// replenishIndexersIfNeeded checks if the indexer pool is below the configured
// minimum (or empty) and, if so, asks the native mesh for replacements.
// Called whenever a native is recovered so the indexer pool is restored.
func replenishIndexersIfNeeded(h host.Host) {
	logger := oclib.GetLogger()
	minIdx := conf.GetConfig().MinIndexer
	if minIdx < 1 {
		minIdx = 1
	}
	StreamMuIndexes.RLock()
	indexerCount := len(StaticIndexers)
	StreamMuIndexes.RUnlock()
	if indexerCount < minIdx {
		need := minIdx - indexerCount
		logger.Info().Int("need", need).Int("current", indexerCount).Msg("[native] native recovered — replenishing indexer pool")
		go replenishIndexersFromNative(h, need)
	}
}

// replenishNativesFromPeers is called when the heartbeat to a native fails.
// Flow:
//  1. Ask other alive natives for one of their native contacts (ProtocolNativeGetPeers).
//  2. If none respond or return a new address, ask connected indexers (ProtocolIndexerGetNatives).
//  3. If no replacement found:
//     - remaining > 1 → ignore (enough natives remain).
//     - remaining ≤ 1 → start periodic retry (retryLostNative).
func replenishNativesFromPeers(h host.Host, lostAddr string, proto protocol.ID) {
	if lostAddr == "" {
		return
	}
	logger := oclib.GetLogger()
	logger.Info().Str("lost", lostAddr).Msg("[native] replenish natives — start")

	// Build exclude list: the lost addr + all currently alive natives.
	// lostAddr has already been removed from StaticNatives by doTick.
	StreamNativeMu.RLock()
	remaining := len(StaticNatives)
	exclude := make([]string, 0, remaining+1)
	exclude = append(exclude, lostAddr)
	for addr := range StaticNatives {
		exclude = append(exclude, addr)
	}
	StreamNativeMu.RUnlock()

	logger.Info().Int("remaining", remaining).Msg("[native] replenish natives — step 1: ask alive natives for a peer")

	// Step 1: ask other alive natives for a replacement.
	newAddr := fetchNativeFromNatives(h, exclude)

	// Step 2: fallback — ask connected indexers for their native addresses.
	if newAddr == "" {
		logger.Info().Msg("[native] replenish natives — step 2: ask indexers for their native addresses")
		newAddr = fetchNativeFromIndexers(h, exclude)
	}

	if newAddr != "" {
		ad, err := pp.AddrInfoFromString(newAddr)
		if err == nil {
			StreamNativeMu.Lock()
			StaticNatives[newAddr] = ad
			StreamNativeMu.Unlock()
			logger.Info().Str("new", newAddr).Msg("[native] replenish natives — replacement added, nudging heartbeat")
			NudgeNativeHeartbeat()
			replenishIndexersIfNeeded(h)
			return
		}
	}

	// Step 3: no replacement found.
	logger.Warn().Int("remaining", remaining).Msg("[native] replenish natives — no replacement found")
	if remaining > 1 {
		logger.Info().Msg("[native] replenish natives — enough natives remain, ignoring loss")
		return
	}
	// Last (or only) native — retry periodically.
	logger.Info().Str("addr", lostAddr).Msg("[native] replenish natives — last native lost, starting periodic retry")
	go retryLostNative(h, lostAddr, proto)
}

// fetchNativeFromNatives asks each alive native for one of its own native contacts
// not in exclude. Returns the first new address found or "" if none.
func fetchNativeFromNatives(h host.Host, exclude []string) string {
	logger := oclib.GetLogger()
	excludeSet := make(map[string]struct{}, len(exclude))
	for _, e := range exclude {
		excludeSet[e] = struct{}{}
	}

	StreamNativeMu.RLock()
	natives := make([]*pp.AddrInfo, 0, len(StaticNatives))
	for _, ad := range StaticNatives {
		natives = append(natives, ad)
	}
	StreamNativeMu.RUnlock()

	rand.Shuffle(len(natives), func(i, j int) { natives[i], natives[j] = natives[j], natives[i] })

	for _, ad := range natives {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		if err := h.Connect(ctx, *ad); err != nil {
			cancel()
			logger.Warn().Str("native", ad.ID.String()).Err(err).Msg("[native] fetch native peers — connect failed")
			continue
		}
		s, err := h.NewStream(ctx, ad.ID, ProtocolNativeGetPeers)
		cancel()
		if err != nil {
			logger.Warn().Str("native", ad.ID.String()).Err(err).Msg("[native] fetch native peers — stream failed")
			continue
		}
		req := GetNativePeersRequest{Exclude: exclude, Count: 1}
		if encErr := json.NewEncoder(s).Encode(req); encErr != nil {
			s.Close()
			continue
		}
		var resp GetNativePeersResponse
		if decErr := json.NewDecoder(s).Decode(&resp); decErr != nil {
			s.Close()
			continue
		}
		s.Close()
		for _, peer := range resp.Peers {
			if _, excluded := excludeSet[peer]; !excluded && peer != "" {
				logger.Info().Str("from", ad.ID.String()).Str("new", peer).Msg("[native] fetch native peers — got replacement")
				return peer
			}
		}
		logger.Debug().Str("native", ad.ID.String()).Msg("[native] fetch native peers — no new native from this peer")
	}
	return ""
}

// fetchNativeFromIndexers asks connected indexers for their configured native addresses,
// returning the first one not in exclude.
func fetchNativeFromIndexers(h host.Host, exclude []string) string {
	logger := oclib.GetLogger()
	excludeSet := make(map[string]struct{}, len(exclude))
	for _, e := range exclude {
		excludeSet[e] = struct{}{}
	}

	StreamMuIndexes.RLock()
	indexers := make([]*pp.AddrInfo, 0, len(StaticIndexers))
	for _, ad := range StaticIndexers {
		indexers = append(indexers, ad)
	}
	StreamMuIndexes.RUnlock()

	rand.Shuffle(len(indexers), func(i, j int) { indexers[i], indexers[j] = indexers[j], indexers[i] })

	for _, ad := range indexers {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		if err := h.Connect(ctx, *ad); err != nil {
			cancel()
			continue
		}
		s, err := h.NewStream(ctx, ad.ID, ProtocolIndexerGetNatives)
		cancel()
		if err != nil {
			logger.Warn().Str("indexer", ad.ID.String()).Err(err).Msg("[native] fetch indexer natives — stream failed")
			continue
		}
		req := GetIndexerNativesRequest{Exclude: exclude}
		if encErr := json.NewEncoder(s).Encode(req); encErr != nil {
			s.Close()
			continue
		}
		var resp GetIndexerNativesResponse
		if decErr := json.NewDecoder(s).Decode(&resp); decErr != nil {
			s.Close()
			continue
		}
		s.Close()
		for _, nativeAddr := range resp.Natives {
			if _, excluded := excludeSet[nativeAddr]; !excluded && nativeAddr != "" {
				logger.Info().Str("indexer", ad.ID.String()).Str("native", nativeAddr).Msg("[native] fetch indexer natives — got native")
				return nativeAddr
			}
		}
	}
	logger.Warn().Msg("[native] fetch indexer natives — no native found from indexers")
	return ""
}

// retryLostNative periodically retries connecting to a lost native address until
// it becomes reachable again or was already restored by another path.
func retryLostNative(h host.Host, addr string, nativeProto protocol.ID) {
	logger := oclib.GetLogger()
	logger.Info().Str("addr", addr).Msg("[native] retry — periodic retry for lost native started")
	t := time.NewTicker(retryNativeInterval)
	defer t.Stop()
	for range t.C {
		StreamNativeMu.RLock()
		_, alreadyRestored := StaticNatives[addr]
		StreamNativeMu.RUnlock()
		if alreadyRestored {
			logger.Info().Str("addr", addr).Msg("[native] retry — native already restored, stopping retry")
			return
		}

		ad, err := pp.AddrInfoFromString(addr)
		if err != nil {
			logger.Warn().Str("addr", addr).Msg("[native] retry — invalid addr, stopping retry")
			return
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		err = h.Connect(ctx, *ad)
		cancel()
		if err != nil {
			logger.Warn().Str("addr", addr).Msg("[native] retry — still unreachable")
			continue
		}
		// Reachable again — add back to pool.
		StreamNativeMu.Lock()
		StaticNatives[addr] = ad
		StreamNativeMu.Unlock()
		logger.Info().Str("addr", addr).Msg("[native] retry — native reconnected and added back to pool")
		NudgeNativeHeartbeat()
		replenishIndexersIfNeeded(h)
		if nativeProto == ProtocolNativeGetIndexers {
			StartNativeRegistration(h, addr) // register back
		}
		return
|
}
|
||||||
|
}
|
||||||
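The shuffle-then-probe loop in fetchNativeFromIndexers randomizes indexer order so load is spread instead of always hitting the first configured indexer. A minimal stdlib-only sketch of that pattern (the `tryEach` helper and the example addresses are illustrative, not part of the codebase):

```go
package main

import (
	"fmt"
	"math/rand"
)

// tryEach shuffles candidates and returns the first one accepted by ok,
// or "" when every candidate is rejected — the same fallback shape as
// fetchNativeFromIndexers.
func tryEach(candidates []string, ok func(string) bool) string {
	shuffled := append([]string(nil), candidates...) // copy; leave the caller's slice intact
	rand.Shuffle(len(shuffled), func(i, j int) { shuffled[i], shuffled[j] = shuffled[j], shuffled[i] })
	for _, c := range shuffled {
		if ok(c) {
			return c
		}
	}
	return ""
}

func main() {
	excluded := map[string]struct{}{"/dns/native-a": {}}
	addrs := []string{"/dns/native-a", "/dns/native-b"}
	got := tryEach(addrs, func(a string) bool {
		_, bad := excluded[a]
		return !bad
	})
	fmt.Println(got) // "/dns/native-b": the only candidate not excluded
}
```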
39 daemons/node/common/utils.go Normal file
@@ -0,0 +1,39 @@
package common

import (
	"context"
	"fmt"
	"net"
	"time"

	"github.com/libp2p/go-libp2p/core/host"
	pp "github.com/libp2p/go-libp2p/core/peer"
	"github.com/multiformats/go-multiaddr"
)

func PeerIsAlive(h host.Host, ad pp.AddrInfo) bool {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	err := h.Connect(ctx, ad)
	return err == nil
}

func ExtractIP(addr string) (net.IP, error) {
	ma, err := multiaddr.NewMultiaddr(addr)
	if err != nil {
		return nil, err
	}
	ipStr, err := ma.ValueForProtocol(multiaddr.P_IP4)
	if err != nil {
		ipStr, err = ma.ValueForProtocol(multiaddr.P_IP6)
		if err != nil {
			return nil, err
		}
	}
	ip := net.ParseIP(ipStr)
	if ip == nil {
		return nil, fmt.Errorf("invalid IP: %s", ipStr)
	}
	return ip, nil
}
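ExtractIP above leans on go-multiaddr's ValueForProtocol with an IP4-then-IP6 fallback. A rough stdlib-only illustration of the same idea, scanning the textual multiaddr segments directly (the helper name and the simplified parsing are assumptions for illustration, not the project's API):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// extractIPFromMultiaddrString pulls the IP component out of a textual
// multiaddr such as "/ip4/127.0.0.1/tcp/4001". Simplified stand-in for
// multiaddr.ValueForProtocol: it finds the first ip4 or ip6 segment and
// validates the value with net.ParseIP.
func extractIPFromMultiaddrString(addr string) (net.IP, error) {
	parts := strings.Split(addr, "/")
	for i := 0; i < len(parts)-1; i++ {
		if parts[i] == "ip4" || parts[i] == "ip6" {
			if ip := net.ParseIP(parts[i+1]); ip != nil {
				return ip, nil
			}
			return nil, fmt.Errorf("invalid IP: %s", parts[i+1])
		}
	}
	return nil, fmt.Errorf("no ip4/ip6 component in %s", addr)
}

func main() {
	ip, err := extractIPFromMultiaddrString("/ip4/192.168.1.10/tcp/4001/p2p/QmPeer")
	fmt.Println(ip, err) // 192.168.1.10 <nil>
}
```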
350 daemons/node/indexer/handler.go Normal file
@@ -0,0 +1,350 @@
package indexer

import (
	"context"
	"encoding/base64"
	"encoding/json"
	"errors"
	"oc-discovery/conf"
	"oc-discovery/daemons/node/common"
	"strings"
	"time"

	oclib "cloud.o-forge.io/core/oc-lib"
	pp "cloud.o-forge.io/core/oc-lib/models/peer"
	"cloud.o-forge.io/core/oc-lib/models/utils"
	"cloud.o-forge.io/core/oc-lib/tools"
	"github.com/libp2p/go-libp2p/core/crypto"
	"github.com/libp2p/go-libp2p/core/network"
	"github.com/libp2p/go-libp2p/core/peer"
)

type PeerRecordPayload struct {
	Name       string    `json:"name"`
	DID        string    `json:"did"`
	PubKey     []byte    `json:"pub_key"`
	ExpiryDate time.Time `json:"expiry_date"`
}

type PeerRecord struct {
	PeerRecordPayload
	PeerID        string `json:"peer_id"`
	APIUrl        string `json:"api_url"`
	StreamAddress string `json:"stream_address"`
	NATSAddress   string `json:"nats_address"`
	WalletAddress string `json:"wallet_address"`
	Signature     []byte `json:"signature"`
}

func (p *PeerRecord) Sign() error {
	priv, err := tools.LoadKeyFromFilePrivate()
	if err != nil {
		return err
	}
	payload, _ := json.Marshal(p.PeerRecordPayload)
	b, err := common.Sign(priv, payload)
	p.Signature = b
	return err
}

func (p *PeerRecord) Verify() (crypto.PubKey, error) {
	pubKey, err := crypto.UnmarshalPublicKey(p.PubKey) // retrieve the pub key embedded in the message
	if err != nil {
		return pubKey, err
	}
	payload, _ := json.Marshal(p.PeerRecordPayload)

	if ok, _ := pubKey.Verify(payload, p.Signature); !ok { // verify the minimal payload was signed by pubKey
		return pubKey, errors.New("invalid signature")
	}
	return pubKey, nil
}

func (pr *PeerRecord) ExtractPeer(ourkey string, key string, pubKey crypto.PubKey) (bool, *pp.Peer, error) {
	pubBytes, err := crypto.MarshalPublicKey(pubKey)
	if err != nil {
		return false, nil, err
	}
	rel := pp.NONE
	if ourkey == key { // the record's key matches ours: this is our own peer info
		rel = pp.SELF
	}

	p := &pp.Peer{
		AbstractObject: utils.AbstractObject{
			UUID: pr.DID,
			Name: pr.Name,
		},
		Relation:      rel, // VERIFY: it crushes nothing
		PeerID:        pr.PeerID,
		PublicKey:     base64.StdEncoding.EncodeToString(pubBytes),
		APIUrl:        pr.APIUrl,
		StreamAddress: pr.StreamAddress,
		NATSAddress:   pr.NATSAddress,
		WalletAddress: pr.WalletAddress,
	}
	b, err := json.Marshal(p)
	if err != nil {
		return pp.SELF == p.Relation, nil, err
	}

	if time.Now().UTC().After(pr.ExpiryDate) {
		return pp.SELF == p.Relation, nil, errors.New("peer " + key + " is offline")
	}
	go tools.NewNATSCaller().SetNATSPub(tools.CREATE_RESOURCE, tools.NATSResponse{
		FromApp:    "oc-discovery",
		Datatype:   tools.PEER,
		Method:     int(tools.CREATE_RESOURCE),
		SearchAttr: "peer_id",
		Payload:    b,
	})

	return pp.SELF == p.Relation, p, nil
}

type GetValue struct {
	Key    string  `json:"key"`
	PeerID peer.ID `json:"peer_id"`
	Name   string  `json:"name,omitempty"`
	Search bool    `json:"search,omitempty"`
}

type GetResponse struct {
	Found   bool                  `json:"found"`
	Records map[string]PeerRecord `json:"records,omitempty"`
}

func (ix *IndexerService) genKey(did string) string {
	return "/node/" + did
}

func (ix *IndexerService) genNameKey(name string) string {
	return "/name/" + name
}

func (ix *IndexerService) genPIDKey(peerID string) string {
	return "/pid/" + peerID
}

func (ix *IndexerService) initNodeHandler() {
	logger := oclib.GetLogger()
	logger.Info().Msg("Init Node Handler")
	// Each heartbeat from a node carries a freshly signed PeerRecord.
	// Republish it to the DHT so the record never expires as long as the node
	// is alive — no separate publish stream needed from the node side.
	ix.AfterHeartbeat = func(pid peer.ID) {
		ctx1, cancel1 := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel1()
		res, err := ix.DHT.GetValue(ctx1, ix.genPIDKey(pid.String()))
		if err != nil {
			logger.Warn().Err(err).Msg("indexer: heartbeat pid lookup failed")
			return
		}
		did := string(res)
		ctx2, cancel2 := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel2()
		res, err = ix.DHT.GetValue(ctx2, ix.genKey(did))
		if err != nil {
			logger.Warn().Err(err).Msg("indexer: heartbeat record lookup failed")
			return
		}
		var rec PeerRecord
		if err := json.Unmarshal(res, &rec); err != nil {
			logger.Warn().Err(err).Str("peer", pid.String()).Msg("indexer: heartbeat record unmarshal failed")
			return
		}
		if _, err := rec.Verify(); err != nil {
			logger.Warn().Err(err).Str("peer", pid.String()).Msg("indexer: heartbeat record signature invalid")
			return
		}
		data, err := json.Marshal(rec)
		if err != nil {
			return
		}
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		logger.Info().Msg("REFRESH PutValue " + ix.genKey(rec.DID))
		if err := ix.DHT.PutValue(ctx, ix.genKey(rec.DID), data); err != nil {
			logger.Warn().Err(err).Str("did", rec.DID).Msg("indexer: DHT refresh failed")
			return
		}
		if rec.Name != "" {
			nameCtx, nameCancel := context.WithTimeout(context.Background(), 10*time.Second)
			ix.DHT.PutValue(nameCtx, ix.genNameKey(rec.Name), []byte(rec.DID))
			nameCancel()
		}
		if rec.PeerID != "" {
			pidCtx, pidCancel := context.WithTimeout(context.Background(), 10*time.Second)
			ix.DHT.PutValue(pidCtx, ix.genPIDKey(rec.PeerID), []byte(rec.DID))
			pidCancel()
		}
	}
	ix.Host.SetStreamHandler(common.ProtocolHeartbeat, ix.HandleHeartbeat)
	ix.Host.SetStreamHandler(common.ProtocolPublish, ix.handleNodePublish)
	ix.Host.SetStreamHandler(common.ProtocolGet, ix.handleNodeGet)
	ix.Host.SetStreamHandler(common.ProtocolIndexerGetNatives, ix.handleGetNatives)
}

func (ix *IndexerService) handleNodePublish(s network.Stream) {
	defer s.Close()
	logger := oclib.GetLogger()

	var rec PeerRecord
	if err := json.NewDecoder(s).Decode(&rec); err != nil {
		logger.Err(err).Msg("indexer publish: decode")
		return
	}
	if _, err := rec.Verify(); err != nil {
		logger.Err(err).Msg("indexer publish: signature invalid")
		return
	}
	if rec.PeerID == "" || rec.ExpiryDate.Before(time.Now().UTC()) {
		logger.Err(errors.New(rec.PeerID + " is expired")).Msg("indexer publish: rejected")
		return
	}
	pid, err := peer.Decode(rec.PeerID)
	if err != nil {
		return
	}

	ix.StreamMU.Lock()
	defer ix.StreamMU.Unlock()
	if ix.StreamRecords[common.ProtocolHeartbeat] == nil {
		ix.StreamRecords[common.ProtocolHeartbeat] = map[peer.ID]*common.StreamRecord[PeerRecord]{}
	}
	streams := ix.StreamRecords[common.ProtocolHeartbeat]
	if srec, ok := streams[pid]; ok {
		srec.DID = rec.DID
		srec.Record = rec
		srec.HeartbeatStream.UptimeTracker.LastSeen = time.Now().UTC()
	}

	key := ix.genKey(rec.DID)
	data, err := json.Marshal(rec)
	if err != nil {
		logger.Err(err).Msg("indexer publish: marshal")
		return
	}
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	if err := ix.DHT.PutValue(ctx, key, data); err != nil {
		logger.Err(err).Msg("indexer publish: DHT put failed")
		cancel()
		return
	}
	cancel()

	// Secondary index: /name/<name> → DID, so peers can resolve by human-readable name.
	if rec.Name != "" {
		ctx2, cancel2 := context.WithTimeout(context.Background(), 10*time.Second)
		if err := ix.DHT.PutValue(ctx2, ix.genNameKey(rec.Name), []byte(rec.DID)); err != nil {
			logger.Err(err).Str("name", rec.Name).Msg("indexer: failed to write name index")
		}
		cancel2()
	}
	// Secondary index: /pid/<peerID> → DID, so peers can resolve by libp2p PeerID.
	if rec.PeerID != "" {
		ctx3, cancel3 := context.WithTimeout(context.Background(), 10*time.Second)
		if err := ix.DHT.PutValue(ctx3, ix.genPIDKey(rec.PeerID), []byte(rec.DID)); err != nil {
			logger.Err(err).Str("pid", rec.PeerID).Msg("indexer: failed to write pid index")
		}
		cancel3()
	}
}

func (ix *IndexerService) handleNodeGet(s network.Stream) {
	defer s.Close()
	logger := oclib.GetLogger()

	var req GetValue
	if err := json.NewDecoder(s).Decode(&req); err != nil {
		logger.Err(err).Msg("indexer get: decode")
		return
	}

	resp := GetResponse{Found: false, Records: map[string]PeerRecord{}}

	keys := []string{}
	// Name lookup: substring search over the in-memory index of connected nodes,
	// or an exact-name DHT lookup.
	if req.Name != "" {
		if req.Search {
			for _, did := range ix.LookupNameIndex(strings.ToLower(req.Name)) {
				keys = append(keys, did)
			}
		} else {
			// DHT exact-name lookup: covers nodes that published but aren't currently connected.
			nameCtx, nameCancel := context.WithTimeout(context.Background(), 5*time.Second)
			if ch, err := ix.DHT.SearchValue(nameCtx, ix.genNameKey(req.Name)); err == nil {
				for did := range ch {
					keys = append(keys, string(did))
					break
				}
			}
			nameCancel()
		}
	} else if req.PeerID != "" {
		pidCtx, pidCancel := context.WithTimeout(context.Background(), 5*time.Second)
		if did, err := ix.DHT.GetValue(pidCtx, ix.genPIDKey(req.PeerID.String())); err == nil {
			keys = append(keys, string(did))
		}
		pidCancel()
	} else {
		keys = append(keys, req.Key)
	}

	// DHT record fetch by DID key (covers exact-name and PeerID paths).
	for _, k := range keys {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		c, err := ix.DHT.GetValue(ctx, ix.genKey(k))
		cancel()
		if err == nil {
			var rec PeerRecord
			if json.Unmarshal(c, &rec) == nil {
				// Filter by PeerID only when one was explicitly specified.
				if req.PeerID == "" || rec.PeerID == req.PeerID.String() {
					resp.Records[rec.PeerID] = rec
				}
			}
		} else if req.Name == "" && req.PeerID == "" {
			logger.Err(err).Msg("Failed to fetch PeerRecord from DHT " + req.Key)
		}
	}

	resp.Found = len(resp.Records) > 0
	_ = json.NewEncoder(s).Encode(resp)
}

// handleGetNatives returns this indexer's configured native addresses,
// excluding any in the request's Exclude list.
func (ix *IndexerService) handleGetNatives(s network.Stream) {
	defer s.Close()
	logger := oclib.GetLogger()

	var req common.GetIndexerNativesRequest
	if err := json.NewDecoder(s).Decode(&req); err != nil {
		logger.Err(err).Msg("indexer get natives: decode")
		return
	}

	excludeSet := make(map[string]struct{}, len(req.Exclude))
	for _, e := range req.Exclude {
		excludeSet[e] = struct{}{}
	}

	resp := common.GetIndexerNativesResponse{}
	for _, addr := range strings.Split(conf.GetConfig().NativeIndexerAddresses, ",") {
		addr = strings.TrimSpace(addr)
		if addr == "" {
			continue
		}
		if _, excluded := excludeSet[addr]; !excluded {
			resp.Natives = append(resp.Natives, addr)
		}
	}

	if err := json.NewEncoder(s).Encode(resp); err != nil {
		logger.Err(err).Msg("indexer get natives: encode response")
	}
}
168 daemons/node/indexer/nameindex.go Normal file
@@ -0,0 +1,168 @@
package indexer

import (
	"context"
	"encoding/json"
	"strings"
	"sync"
	"time"

	"oc-discovery/daemons/node/common"

	oclib "cloud.o-forge.io/core/oc-lib"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
	pp "github.com/libp2p/go-libp2p/core/peer"
)

// TopicNameIndex is the GossipSub topic shared by regular indexers to exchange
// add/delete events for the distributed name→peerID mapping.
const TopicNameIndex = "oc-name-index"

// nameIndexDedupWindow suppresses re-emission of the same (action, name, peerID)
// tuple within this window, reducing duplicate events when a node is registered
// with multiple indexers simultaneously.
const nameIndexDedupWindow = 30 * time.Second

// NameIndexAction indicates whether a name mapping is being added or removed.
type NameIndexAction string

const (
	NameIndexAdd    NameIndexAction = "add"
	NameIndexDelete NameIndexAction = "delete"
)

// NameIndexEvent is published on TopicNameIndex by each indexer when a node
// registers (add) or is evicted by the GC (delete).
type NameIndexEvent struct {
	Action NameIndexAction `json:"action"`
	Name   string          `json:"name"`
	PeerID string          `json:"peer_id"`
	DID    string          `json:"did"`
}

// nameIndexState holds the local in-memory name index and the sender-side
// deduplication tracker.
type nameIndexState struct {
	// index: name → peerID → DID, built from events received from all indexers.
	index   map[string]map[string]string
	indexMu sync.RWMutex

	// emitted tracks the last emission time for each (action, name, peerID) key
	// to suppress duplicates within nameIndexDedupWindow.
	emitted   map[string]time.Time
	emittedMu sync.Mutex
}

// shouldEmit returns true if the (action, name, peerID) tuple has not been
// emitted within nameIndexDedupWindow, updating the tracker if so.
func (s *nameIndexState) shouldEmit(action NameIndexAction, name, peerID string) bool {
	key := string(action) + ":" + name + ":" + peerID
	s.emittedMu.Lock()
	defer s.emittedMu.Unlock()
	if t, ok := s.emitted[key]; ok && time.Since(t) < nameIndexDedupWindow {
		return false
	}
	s.emitted[key] = time.Now()
	return true
}

// onEvent applies a received NameIndexEvent to the local index.
// "add" inserts/updates the mapping; "delete" removes it.
// Operations are idempotent — duplicate events from multiple indexers are harmless.
func (s *nameIndexState) onEvent(evt NameIndexEvent) {
	if evt.Name == "" || evt.PeerID == "" {
		return
	}
	s.indexMu.Lock()
	defer s.indexMu.Unlock()
	switch evt.Action {
	case NameIndexAdd:
		if s.index[evt.Name] == nil {
			s.index[evt.Name] = map[string]string{}
		}
		s.index[evt.Name][evt.PeerID] = evt.DID
	case NameIndexDelete:
		if s.index[evt.Name] != nil {
			delete(s.index[evt.Name], evt.PeerID)
			if len(s.index[evt.Name]) == 0 {
				delete(s.index, evt.Name)
			}
		}
	}
}

// initNameIndex joins TopicNameIndex and starts consuming events.
// Must be called after ix.PS is ready.
func (ix *IndexerService) initNameIndex(ps *pubsub.PubSub) {
	logger := oclib.GetLogger()
	ix.nameIndex = &nameIndexState{
		index:   map[string]map[string]string{},
		emitted: map[string]time.Time{},
	}

	ps.RegisterTopicValidator(TopicNameIndex, func(_ context.Context, _ pp.ID, _ *pubsub.Message) bool {
		return true
	})
	topic, err := ps.Join(TopicNameIndex)
	if err != nil {
		logger.Err(err).Msg("name index: failed to join topic")
		return
	}
	ix.LongLivedStreamRecordedService.LongLivedPubSubService.PubsubMu.Lock()
	ix.LongLivedStreamRecordedService.LongLivedPubSubService.LongLivedPubSubs[TopicNameIndex] = topic
	ix.LongLivedStreamRecordedService.LongLivedPubSubService.PubsubMu.Unlock()

	common.SubscribeEvents(
		ix.LongLivedStreamRecordedService.LongLivedPubSubService,
		context.Background(),
		TopicNameIndex,
		-1,
		func(_ context.Context, evt NameIndexEvent, _ string) {
			ix.nameIndex.onEvent(evt)
		},
	)
}

// publishNameEvent emits a NameIndexEvent on TopicNameIndex, subject to the
// sender-side deduplication window.
func (ix *IndexerService) publishNameEvent(action NameIndexAction, name, peerID, did string) {
	if ix.nameIndex == nil || name == "" || peerID == "" {
		return
	}
	if !ix.nameIndex.shouldEmit(action, name, peerID) {
		return
	}
	ix.LongLivedStreamRecordedService.LongLivedPubSubService.PubsubMu.RLock()
	topic := ix.LongLivedStreamRecordedService.LongLivedPubSubService.LongLivedPubSubs[TopicNameIndex]
	ix.LongLivedStreamRecordedService.LongLivedPubSubService.PubsubMu.RUnlock()
	if topic == nil {
		return
	}
	evt := NameIndexEvent{Action: action, Name: name, PeerID: peerID, DID: did}
	b, err := json.Marshal(evt)
	if err != nil {
		return
	}
	_ = topic.Publish(context.Background(), b)
}

// LookupNameIndex searches the distributed name index for peers whose name
// contains needle (case-insensitive). Returns peerID → DID for matched peers.
// Returns nil if the name index is not initialised (e.g. native indexers).
func (ix *IndexerService) LookupNameIndex(needle string) map[string]string {
	if ix.nameIndex == nil {
		return nil
	}
	result := map[string]string{}
	needleLow := strings.ToLower(needle)
	ix.nameIndex.indexMu.RLock()
	defer ix.nameIndex.indexMu.RUnlock()
	for name, peers := range ix.nameIndex.index {
		if strings.Contains(strings.ToLower(name), needleLow) {
			for peerID, did := range peers {
				result[peerID] = did
			}
		}
	}
	return result
}
579 daemons/node/indexer/native.go Normal file
@@ -0,0 +1,579 @@
|
|||||||
|
package indexer
|
||||||
|
|
||||||
|
import (
|
||||||
|
"context"
|
||||||
|
"encoding/json"
|
||||||
|
"errors"
|
||||||
|
"fmt"
|
||||||
|
"math/rand"
|
||||||
|
"slices"
|
||||||
|
"strings"
|
||||||
|
"sync"
|
||||||
|
"time"
|
||||||
|
|
||||||
|
"oc-discovery/daemons/node/common"
|
||||||
|
|
||||||
|
oclib "cloud.o-forge.io/core/oc-lib"
|
||||||
|
pubsub "github.com/libp2p/go-libp2p-pubsub"
|
||||||
|
"github.com/libp2p/go-libp2p/core/network"
|
||||||
|
pp "github.com/libp2p/go-libp2p/core/peer"
|
||||||
|
)
|
||||||
|
|
||||||
|
const (
|
||||||
|
// IndexerTTL is the lifetime of a live-indexer cache entry. Set to 50% above
|
||||||
|
// the recommended 60s heartbeat interval so a single delayed renewal does not
|
||||||
|
// evict a healthy indexer from the native's cache.
|
||||||
|
IndexerTTL = 90 * time.Second
|
||||||
|
// offloadInterval is how often the native checks if it can release responsible peers.
|
||||||
|
offloadInterval = 30 * time.Second
|
||||||
|
// dhtRefreshInterval is how often the background goroutine queries the DHT for
|
||||||
|
// known-but-expired indexer entries (written by neighbouring natives).
|
||||||
|
dhtRefreshInterval = 30 * time.Second
|
||||||
|
// maxFallbackPeers caps how many peers the native will accept in self-delegation
|
||||||
|
// mode. Beyond this limit the native refuses to act as a fallback indexer so it
|
||||||
|
// is not overwhelmed during prolonged indexer outages.
|
||||||
|
maxFallbackPeers = 50
|
||||||
|
)
|
||||||
|
|
||||||
|
// liveIndexerEntry tracks a registered indexer in the native's in-memory cache and DHT.
|
||||||
|
type liveIndexerEntry struct {
|
||||||
|
PeerID string `json:"peer_id"`
|
||||||
|
Addr string `json:"addr"`
|
||||||
|
ExpiresAt time.Time `json:"expires_at"`
|
||||||
|
}
|
||||||
|
|
||||||
|
// NativeState holds runtime state specific to native indexer operation.
|
||||||
|
type NativeState struct {
|
||||||
|
liveIndexers map[string]*liveIndexerEntry // keyed by PeerID, local cache with TTL
|
||||||
|
liveIndexersMu sync.RWMutex
|
||||||
|
responsiblePeers map[pp.ID]struct{} // peers for which the native is fallback indexer
|
||||||
|
responsibleMu sync.RWMutex
|
||||||
|
// knownPeerIDs accumulates all indexer PeerIDs ever seen (local stream or gossip).
|
||||||
|
// Used by refreshIndexersFromDHT to re-hydrate expired entries from the shared DHT,
|
||||||
|
// including entries written by other natives.
|
||||||
|
knownPeerIDs map[string]string
|
||||||
|
knownMu sync.RWMutex
|
||||||
|
}
|
||||||
|
|
||||||
|
func newNativeState() *NativeState {
|
||||||
|
return &NativeState{
|
||||||
|
liveIndexers: map[string]*liveIndexerEntry{},
|
||||||
|
responsiblePeers: map[pp.ID]struct{}{},
|
||||||
|
knownPeerIDs: map[string]string{},
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// IndexerRecordValidator validates indexer DHT entries under the "indexer" namespace.
|
||||||
|
type IndexerRecordValidator struct{}
|
||||||
|
|
||||||
|
func (v IndexerRecordValidator) Validate(_ string, value []byte) error {
|
||||||
|
var e liveIndexerEntry
|
||||||
|
if err := json.Unmarshal(value, &e); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
if e.Addr == "" {
|
||||||
|
return errors.New("missing addr")
|
||||||
|
}
|
||||||
|
if e.ExpiresAt.Before(time.Now().UTC()) {
|
||||||
|
return errors.New("expired indexer record")
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (v IndexerRecordValidator) Select(_ string, values [][]byte) (int, error) {
|
||||||
|
var newest time.Time
|
||||||
|
index := 0
|
||||||
|
for i, val := range values {
|
||||||
|
var e liveIndexerEntry
|
||||||
|
if err := json.Unmarshal(val, &e); err != nil {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
if e.ExpiresAt.After(newest) {
|
||||||
|
newest = e.ExpiresAt
|
||||||
|
index = i
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return index, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// InitNative registers native-specific stream handlers and starts background loops.
|
||||||
|
// Must be called after DHT is initialized.
|
||||||
|
func (ix *IndexerService) InitNative() {
|
||||||
|
ix.Native = newNativeState()
|
||||||
|
ix.Host.SetStreamHandler(common.ProtocolHeartbeat, ix.HandleHeartbeat) // specific heartbeat for Indexer.
|
||||||
|
ix.Host.SetStreamHandler(common.ProtocolNativeSubscription, ix.handleNativeSubscription)
|
||||||
|
ix.Host.SetStreamHandler(common.ProtocolNativeGetIndexers, ix.handleNativeGetIndexers)
|
||||||
|
ix.Host.SetStreamHandler(common.ProtocolNativeConsensus, ix.handleNativeConsensus)
|
||||||
|
ix.Host.SetStreamHandler(common.ProtocolNativeGetPeers, ix.handleNativeGetPeers)
|
||||||
|
ix.Host.SetStreamHandler(common.ProtocolIndexerGetNatives, ix.handleGetNatives)
|
||||||
|
ix.subscribeIndexerRegistry()
|
||||||
|
// Ensure long connections to other configured natives (native-to-native mesh).
|
||||||
|
common.EnsureNativePeers(ix.Host)
|
||||||
|
go ix.runOffloadLoop()
|
||||||
|
go ix.refreshIndexersFromDHT()
|
||||||
|
}
|
||||||
|
|
||||||
|
// subscribeIndexerRegistry joins the PubSub topic used by natives to gossip newly
|
||||||
|
// registered indexer PeerIDs to one another, enabling cross-native DHT discovery.
|
||||||
|
func (ix *IndexerService) subscribeIndexerRegistry() {
|
||||||
|
logger := oclib.GetLogger()
|
||||||
|
ix.PS.RegisterTopicValidator(common.TopicIndexerRegistry, func(_ context.Context, _ pp.ID, msg *pubsub.Message) bool {
|
||||||
|
// Reject empty or syntactically invalid multiaddrs before they reach the
|
||||||
|
// message loop. A compromised native could otherwise gossip arbitrary data.
|
||||||
|
addr := string(msg.Data)
|
||||||
|
if addr == "" {
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
_, err := pp.AddrInfoFromString(addr)
|
||||||
|
return err == nil
|
||||||
|
})
|
||||||
|
topic, err := ix.PS.Join(common.TopicIndexerRegistry)
|
||||||
|
if err != nil {
|
||||||
|
logger.Err(err).Msg("native: failed to join indexer registry topic")
|
||||||
|
return
|
||||||
|
}
|
||||||
|
sub, err := topic.Subscribe()
|
||||||
|
if err != nil {
|
||||||
|
logger.Err(err).Msg("native: failed to subscribe to indexer registry topic")
|
||||||
|
return
|
||||||
|
}
|
||||||
|
ix.PubsubMu.Lock()
|
||||||
|
ix.LongLivedPubSubs[common.TopicIndexerRegistry] = topic
|
||||||
|
ix.PubsubMu.Unlock()
|
||||||
|
|
||||||
|
go func() {
|
||||||
|
for {
|
||||||
|
msg, err := sub.Next(context.Background())
|
||||||
|
if err != nil {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
addr := string(msg.Data)
|
||||||
|
if addr == "" {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
if peer, err := pp.AddrInfoFromString(addr); err == nil {
|
||||||
|
ix.Native.knownMu.Lock()
|
||||||
|
ix.Native.knownPeerIDs[peer.ID.String()] = addr
|
||||||
|
ix.Native.knownMu.Unlock()
|
||||||
|
|
||||||
|
}
|
||||||
|
// A neighbouring native registered this PeerID; add to known set for DHT refresh.
|
||||||
|
|
||||||
|
}
|
||||||
|
}()
|
||||||
|
}
|
||||||
|

// handleNativeSubscription stores an indexer's alive registration in the local cache
// immediately, then persists it to the DHT asynchronously.
// The stream is temporary: the indexer sends one IndexerRegistration and closes.
func (ix *IndexerService) handleNativeSubscription(s network.Stream) {
	defer s.Close()
	logger := oclib.GetLogger()

	var reg common.IndexerRegistration
	if err := json.NewDecoder(s).Decode(&reg); err != nil {
		logger.Err(err).Msg("native subscription: decode")
		return
	}
	logger.Info().Str("addr", reg.Addr).Msg("native: subscription received")

	if reg.Addr == "" {
		logger.Error().Msg("native subscription: missing addr")
		return
	}
	if reg.PeerID == "" {
		ad, err := pp.AddrInfoFromString(reg.Addr)
		if err != nil {
			logger.Err(err).Msg("native subscription: invalid addr")
			return
		}
		reg.PeerID = ad.ID.String()
	}

	// Build entry with a fresh TTL — must happen before the cache write so the 66s
	// window is not consumed by DHT retries.
	entry := &liveIndexerEntry{
		PeerID:    reg.PeerID,
		Addr:      reg.Addr,
		ExpiresAt: time.Now().UTC().Add(IndexerTTL),
	}

	// Update local cache and known set immediately so concurrent GetIndexers calls
	// can already see this indexer without waiting for the DHT write to complete.
	ix.Native.liveIndexersMu.Lock()
	_, isRenewal := ix.Native.liveIndexers[reg.PeerID]
	ix.Native.liveIndexers[reg.PeerID] = entry
	liveCount := len(ix.Native.liveIndexers) // read under the lock to avoid a race
	ix.Native.liveIndexersMu.Unlock()

	ix.Native.knownMu.Lock()
	ix.Native.knownPeerIDs[reg.PeerID] = reg.Addr
	ix.Native.knownMu.Unlock()

	// Gossip PeerID to neighbouring natives so they discover it via DHT.
	ix.PubsubMu.RLock()
	topic := ix.LongLivedPubSubs[common.TopicIndexerRegistry]
	ix.PubsubMu.RUnlock()
	if topic != nil {
		if err := topic.Publish(context.Background(), []byte(reg.Addr)); err != nil {
			logger.Err(err).Msg("native subscription: registry gossip publish")
		}
	}

	if isRenewal {
		logger.Debug().Str("peer", reg.PeerID).Int("live", liveCount).Msg("native: indexer TTL renewed")
	} else {
		logger.Info().Str("peer", reg.PeerID).Int("live", liveCount).Msg("native: indexer registered")
	}

	// Persist in DHT asynchronously — retries must not block the handler or consume
	// the local cache TTL.
	key := ix.genIndexerKey(reg.PeerID)
	data, err := json.Marshal(entry)
	if err != nil {
		logger.Err(err).Msg("native subscription: marshal entry")
		return
	}
	go func() {
		for {
			ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
			err := ix.DHT.PutValue(ctx, key, data)
			cancel()
			if err == nil {
				return
			}
			logger.Err(err).Msg("native subscription: DHT put " + key)
			if strings.Contains(err.Error(), "failed to find any peer in table") {
				time.Sleep(10 * time.Second)
				continue
			}
			return
		}
	}()
}

// handleNativeGetIndexers returns this native's own list of reachable indexers.
// Self-delegation (native acting as temporary fallback indexer) is only permitted
// for nodes — never for peers that are themselves registered indexers in knownPeerIDs.
// Consensus across natives is the responsibility of the requesting node/indexer.
func (ix *IndexerService) handleNativeGetIndexers(s network.Stream) {
	defer s.Close()
	logger := oclib.GetLogger()

	var req common.GetIndexersRequest
	if err := json.NewDecoder(s).Decode(&req); err != nil {
		logger.Err(err).Msg("native get indexers: decode")
		return
	}
	if req.Count <= 0 {
		req.Count = 3
	}
	callerPeerID := s.Conn().RemotePeer().String()
	reachable := ix.reachableLiveIndexers(req.Count, callerPeerID)
	var resp common.GetIndexersResponse

	if len(reachable) == 0 {
		// No live indexers reachable — try to self-delegate.
		if ix.selfDelegate(s.Conn().RemotePeer(), &resp) {
			logger.Info().Str("peer", callerPeerID).Msg("native: no indexers, acting as fallback for node")
		} else {
			// Fallback pool saturated: return empty so the caller retries another
			// native instead of piling more load onto this one.
			logger.Warn().Str("peer", callerPeerID).Int("pool", maxFallbackPeers).Msg(
				"native: fallback pool saturated, refusing self-delegation")
		}
	} else {
		rand.Shuffle(len(reachable), func(i, j int) { reachable[i], reachable[j] = reachable[j], reachable[i] })
		if req.Count > len(reachable) {
			req.Count = len(reachable)
		}
		resp.Indexers = reachable[:req.Count]
	}

	if err := json.NewEncoder(s).Encode(resp); err != nil {
		logger.Err(err).Msg("native get indexers: encode response")
	}
}

// handleNativeConsensus answers a consensus challenge from a node/indexer.
// It returns:
//   - Trusted: which of the candidates it considers alive.
//   - Suggestions: extras it knows and trusts that were not in the candidate list.
func (ix *IndexerService) handleNativeConsensus(s network.Stream) {
	defer s.Close()
	logger := oclib.GetLogger()

	var req common.ConsensusRequest
	if err := json.NewDecoder(s).Decode(&req); err != nil {
		logger.Err(err).Msg("native consensus: decode")
		return
	}

	myList := ix.reachableLiveIndexers(-1, s.Conn().RemotePeer().String())
	mySet := make(map[string]struct{}, len(myList))
	for _, addr := range myList {
		mySet[addr] = struct{}{}
	}

	trusted := []string{}
	candidateSet := make(map[string]struct{}, len(req.Candidates))
	for _, addr := range req.Candidates {
		candidateSet[addr] = struct{}{}
		if _, ok := mySet[addr]; ok {
			trusted = append(trusted, addr) // candidate we also confirm as reachable
		}
	}

	// Extras we trust but that the requester didn't include → suggestions.
	suggestions := []string{}
	for _, addr := range myList {
		if _, inCandidates := candidateSet[addr]; !inCandidates {
			suggestions = append(suggestions, addr)
		}
	}

	resp := common.ConsensusResponse{Trusted: trusted, Suggestions: suggestions}
	if err := json.NewEncoder(s).Encode(resp); err != nil {
		logger.Err(err).Msg("native consensus: encode response")
	}
}

// selfDelegate marks the caller as a responsible peer and exposes this native's own
// address as its temporary indexer. Returns false when the fallback pool is saturated
// (maxFallbackPeers reached) — the caller must return an empty response so the node
// retries later instead of pinning indefinitely to an overloaded native.
func (ix *IndexerService) selfDelegate(remotePeer pp.ID, resp *common.GetIndexersResponse) bool {
	ix.Native.responsibleMu.Lock()
	defer ix.Native.responsibleMu.Unlock()
	if len(ix.Native.responsiblePeers) >= maxFallbackPeers {
		return false
	}
	ix.Native.responsiblePeers[remotePeer] = struct{}{}
	resp.IsSelfFallback = true
	resp.Indexers = []string{ix.Host.Addrs()[len(ix.Host.Addrs())-1].String() + "/p2p/" + ix.Host.ID().String()}
	return true
}

// reachableLiveIndexers returns the multiaddrs of non-expired, pingable indexers
// from the local cache (kept fresh by refreshIndexersFromDHT in background).
func (ix *IndexerService) reachableLiveIndexers(count int, from ...string) []string {
	ix.Native.liveIndexersMu.RLock()
	now := time.Now().UTC()
	candidates := []*liveIndexerEntry{}
	for _, e := range ix.Native.liveIndexers {
		if e.ExpiresAt.After(now) && !slices.Contains(from, e.PeerID) {
			candidates = append(candidates, e)
		}
	}
	ix.Native.liveIndexersMu.RUnlock()

	if (count > 0 && len(candidates) < count) || count < 0 {
		ix.Native.knownMu.RLock()
		for k, v := range ix.Native.knownPeerIDs {
			if slices.Contains(from, k) {
				continue
			}
			// Include peers whose liveIndexers entry is absent OR expired.
			// A non-nil but expired entry means the peer was once known but
			// has since timed out — PeerIsAlive below will decide if it's back.
			ix.Native.liveIndexersMu.RLock()
			existing := ix.Native.liveIndexers[k]
			ix.Native.liveIndexersMu.RUnlock()
			if existing != nil && existing.ExpiresAt.After(now) {
				continue // already collected from the fresh cache above
			}
			candidates = append(candidates, &liveIndexerEntry{
				PeerID: k,
				Addr:   v,
			})
		}
		ix.Native.knownMu.RUnlock()
	}

	reachable := []string{}
	for _, e := range candidates {
		ad, err := pp.AddrInfoFromString(e.Addr)
		if err != nil {
			continue
		}
		if common.PeerIsAlive(ix.Host, *ad) {
			reachable = append(reachable, e.Addr)
		}
	}
	return reachable
}

// refreshIndexersFromDHT runs in background and queries the shared DHT for every known
// indexer PeerID whose local cache entry is missing or expired. This supplements the
// local cache with entries written by neighbouring natives.
func (ix *IndexerService) refreshIndexersFromDHT() {
	t := time.NewTicker(dhtRefreshInterval)
	defer t.Stop()
	logger := oclib.GetLogger()
	for range t.C {
		ix.Native.knownMu.RLock()
		peerIDs := make([]string, 0, len(ix.Native.knownPeerIDs))
		for pid := range ix.Native.knownPeerIDs {
			peerIDs = append(peerIDs, pid)
		}
		ix.Native.knownMu.RUnlock()

		now := time.Now().UTC()
		for _, pid := range peerIDs {
			ix.Native.liveIndexersMu.RLock()
			existing := ix.Native.liveIndexers[pid]
			ix.Native.liveIndexersMu.RUnlock()
			if existing != nil && existing.ExpiresAt.After(now) {
				continue // still fresh in local cache
			}
			key := ix.genIndexerKey(pid)
			ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
			ch, err := ix.DHT.SearchValue(ctx, key)
			if err != nil {
				cancel()
				continue
			}
			var best *liveIndexerEntry
			for b := range ch {
				var e liveIndexerEntry
				if err := json.Unmarshal(b, &e); err != nil {
					continue
				}
				if e.ExpiresAt.After(time.Now().UTC()) {
					if best == nil || e.ExpiresAt.After(best.ExpiresAt) {
						best = &e
					}
				}
			}
			cancel()
			if best != nil {
				ix.Native.liveIndexersMu.Lock()
				ix.Native.liveIndexers[best.PeerID] = best
				ix.Native.liveIndexersMu.Unlock()
				logger.Info().Str("peer", best.PeerID).Msg("native: refreshed indexer from DHT")
			} else {
				// DHT has no fresh entry — peer is gone, prune from known set.
				ix.Native.knownMu.Lock()
				delete(ix.Native.knownPeerIDs, pid)
				ix.Native.knownMu.Unlock()
				logger.Info().Str("peer", pid).Msg("native: pruned stale peer from knownPeerIDs")
			}
		}
	}
}

func (ix *IndexerService) genIndexerKey(peerID string) string {
	return "/indexer/" + peerID
}

// runOffloadLoop periodically checks if real indexers are available and releases
// responsible peers so they can reconnect to actual indexers on their next attempt.
func (ix *IndexerService) runOffloadLoop() {
	t := time.NewTicker(offloadInterval)
	defer t.Stop()
	logger := oclib.GetLogger()
	for range t.C {
		// Snapshot the responsible set so we never range over the live map
		// while other goroutines (selfDelegate) may mutate it.
		ix.Native.responsibleMu.RLock()
		count := len(ix.Native.responsiblePeers)
		peerIDS := make([]string, 0, count)
		released := make([]pp.ID, 0, count)
		for p := range ix.Native.responsiblePeers {
			peerIDS = append(peerIDS, p.String())
			released = append(released, p)
		}
		ix.Native.responsibleMu.RUnlock()
		if count == 0 {
			continue
		}
		if len(ix.reachableLiveIndexers(-1, peerIDS...)) == 0 {
			continue // still no real indexers to offload onto
		}

		// Reset (not Close) heartbeat streams of released peers.
		// Close() only half-closes the native's write direction — the peer's write
		// direction stays open and sendHeartbeat never sees an error.
		// Reset() abruptly terminates both directions, making the peer's next
		// json.Encode return an error which triggers replenishIndexersFromNative.
		ix.StreamMU.Lock()
		if streams := ix.StreamRecords[common.ProtocolHeartbeat]; streams != nil {
			for _, pid := range released {
				if rec, ok := streams[pid]; ok {
					if rec.HeartbeatStream != nil && rec.HeartbeatStream.Stream != nil {
						rec.HeartbeatStream.Stream.Reset()
					}
					ix.Native.responsibleMu.Lock()
					delete(ix.Native.responsiblePeers, pid)
					ix.Native.responsibleMu.Unlock()

					delete(streams, pid)
					logger.Info().Str("peer", pid.String()).Str("proto", string(common.ProtocolHeartbeat)).Msg(
						"native: offload — stream reset, peer will reconnect to real indexer")
				} else {
					// No recorded heartbeat stream for this peer: either it never
					// passed the score check (new peer, uptime=0 → score<75) or the
					// stream was GC'd. We cannot send a Reset signal, so close the
					// whole connection instead — this makes the peer's sendHeartbeat
					// return an error, which triggers replenishIndexersFromNative and
					// migrates it to a real indexer.
					ix.Native.responsibleMu.Lock()
					delete(ix.Native.responsiblePeers, pid)
					ix.Native.responsibleMu.Unlock()
					go ix.Host.Network().ClosePeer(pid)
					logger.Info().Str("peer", pid.String()).Msg(
						"native: offload — no heartbeat stream, closing connection so peer re-requests real indexers")
				}
			}
		}
		ix.StreamMU.Unlock()

		logger.Info().Int("released", count).Msg("native: offloaded responsible peers to real indexers")
	}
}

// handleNativeGetPeers returns a random selection of this native's known native
// contacts, excluding any in the request's Exclude list.
func (ix *IndexerService) handleNativeGetPeers(s network.Stream) {
	defer s.Close()
	logger := oclib.GetLogger()

	var req common.GetNativePeersRequest
	if err := json.NewDecoder(s).Decode(&req); err != nil {
		logger.Err(err).Msg("native get peers: decode")
		return
	}
	if req.Count <= 0 {
		req.Count = 1
	}

	excludeSet := make(map[string]struct{}, len(req.Exclude))
	for _, e := range req.Exclude {
		excludeSet[e] = struct{}{}
	}

	common.StreamNativeMu.RLock()
	candidates := make([]string, 0, len(common.StaticNatives))
	for addr := range common.StaticNatives {
		if _, excluded := excludeSet[addr]; !excluded {
			candidates = append(candidates, addr)
		}
	}
	common.StreamNativeMu.RUnlock()

	rand.Shuffle(len(candidates), func(i, j int) { candidates[i], candidates[j] = candidates[j], candidates[i] })
	if req.Count > len(candidates) {
		req.Count = len(candidates)
	}

	resp := common.GetNativePeersResponse{Peers: candidates[:req.Count]}
	if err := json.NewEncoder(s).Encode(resp); err != nil {
		logger.Err(err).Msg("native get peers: encode response")
	}
}

// StartNativeRegistration starts a goroutine that periodically registers this
// indexer with all configured native indexers (every RecommendedHeartbeatInterval).
103 daemons/node/indexer/service.go (new file)
@@ -0,0 +1,103 @@
package indexer

import (
	"context"
	"oc-discovery/conf"
	"oc-discovery/daemons/node/common"
	"sync"

	oclib "cloud.o-forge.io/core/oc-lib"
	dht "github.com/libp2p/go-libp2p-kad-dht"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
	record "github.com/libp2p/go-libp2p-record"
	"github.com/libp2p/go-libp2p/core/host"
	pp "github.com/libp2p/go-libp2p/core/peer"
)

// IndexerService manages the indexer node's state: stream records, DHT, pubsub.
type IndexerService struct {
	*common.LongLivedStreamRecordedService[PeerRecord]
	PS              *pubsub.PubSub
	DHT             *dht.IpfsDHT
	isStrictIndexer bool
	mu              sync.RWMutex
	IsNative        bool
	Native          *NativeState // non-nil when IsNative == true
	nameIndex       *nameIndexState
}

// NewIndexerService creates an IndexerService.
// If ps is nil, this is a strict indexer (no pre-existing gossip sub from a node).
func NewIndexerService(h host.Host, ps *pubsub.PubSub, maxNode int, isNative bool) *IndexerService {
	logger := oclib.GetLogger()
	logger.Info().Msg("open indexer mode...")
	var err error
	ix := &IndexerService{
		LongLivedStreamRecordedService: common.NewStreamRecordedService[PeerRecord](h, maxNode),
		isStrictIndexer:                ps == nil,
		IsNative:                       isNative,
	}
	if ps == nil {
		ps, err = pubsub.NewGossipSub(context.Background(), ix.Host)
		if err != nil {
			panic(err) // can't run an indexer without a propagation pubsub
		}
	}
	ix.PS = ps

	if ix.isStrictIndexer && !isNative {
		logger.Info().Msg("connect to indexers as strict indexer...")
		common.ConnectToIndexers(h, conf.GetConfig().MinIndexer, conf.GetConfig().MaxIndexer, ix.Host.ID())
		logger.Info().Msg("subscribe to decentralized search flow as strict indexer...")
		go ix.SubscribeToSearch(ix.PS, nil)
	}

	if !isNative {
		logger.Info().Msg("init distributed name index...")
		ix.initNameIndex(ps)
		ix.LongLivedStreamRecordedService.AfterDelete = func(pid pp.ID, name, did string) {
			ix.publishNameEvent(NameIndexDelete, name, pid.String(), did)
		}
	}

	if ix.DHT, err = dht.New(
		context.Background(),
		ix.Host,
		dht.Mode(dht.ModeServer),
		dht.ProtocolPrefix("oc"), // private network
		dht.Validator(record.NamespacedValidator{
			"node":    PeerRecordValidator{},
			"indexer": IndexerRecordValidator{}, // for native indexer registry
			"name":    DefaultValidator{},
			"pid":     DefaultValidator{},
		}),
	); err != nil {
		logger.Err(err).Msg("failed to create DHT")
		return nil
	}

	// InitNative must happen after the DHT is ready.
	if isNative {
		ix.InitNative()
	} else {
		ix.initNodeHandler()
		// Register with configured natives so this indexer appears in their cache.
		if nativeAddrs := conf.GetConfig().NativeIndexerAddresses; nativeAddrs != "" {
			common.StartNativeRegistration(ix.Host, nativeAddrs)
		}
	}
	return ix
}

func (ix *IndexerService) Close() {
	ix.DHT.Close()
	ix.PS.UnregisterTopicValidator(common.TopicPubSubSearch)
	if ix.nameIndex != nil {
		ix.PS.UnregisterTopicValidator(TopicNameIndex)
	}
	for _, s := range ix.StreamRecords {
		for _, ss := range s {
			ss.HeartbeatStream.Stream.Close()
		}
	}
}

64 daemons/node/indexer/validator.go (new file)
@@ -0,0 +1,64 @@
package indexer

import (
	"encoding/json"
	"errors"
	"time"
)

type DefaultValidator struct{}

func (v DefaultValidator) Validate(key string, value []byte) error {
	return nil
}

func (v DefaultValidator) Select(key string, values [][]byte) (int, error) {
	return 0, nil
}

type PeerRecordValidator struct{}

func (v PeerRecordValidator) Validate(key string, value []byte) error {
	var rec PeerRecord
	if err := json.Unmarshal(value, &rec); err != nil {
		return errors.New("invalid json")
	}

	// PeerID must exist.
	if rec.PeerID == "" {
		return errors.New("missing peerID")
	}

	// Expiry check.
	if rec.ExpiryDate.Before(time.Now().UTC()) {
		return errors.New("record expired")
	}

	// Signature verification.
	if _, err := rec.Verify(); err != nil {
		return errors.New("invalid signature")
	}

	return nil
}

func (v PeerRecordValidator) Select(key string, values [][]byte) (int, error) {
	var newest time.Time
	index := 0

	for i, val := range values {
		var rec PeerRecord
		if err := json.Unmarshal(val, &rec); err != nil {
			continue
		}

		if rec.ExpiryDate.After(newest) {
			newest = rec.ExpiryDate
			index = i
		}
	}

	return index, nil
}

225 daemons/node/nats.go (new file)
@@ -0,0 +1,225 @@
package node

import (
	"context"
	"encoding/json"
	"fmt"
	"oc-discovery/daemons/node/common"
	"oc-discovery/daemons/node/stream"

	oclib "cloud.o-forge.io/core/oc-lib"
	"cloud.o-forge.io/core/oc-lib/config"
	"cloud.o-forge.io/core/oc-lib/models/peer"
	"cloud.o-forge.io/core/oc-lib/tools"
	pp "github.com/libp2p/go-libp2p/core/peer"
	"github.com/libp2p/go-libp2p/core/protocol"
)

type configPayload struct {
	PeerID string `json:"source_peer_id"`
}

type executionConsidersPayload struct {
	PeerIDs []string `json:"peer_ids"`
}

func ListenNATS(n *Node) {
	tools.NewNATSCaller().ListenNats(map[tools.NATSMethod]func(tools.NATSResponse){
		/*tools.VERIFY_RESOURCE: func(resp tools.NATSResponse) {
			if resp.FromApp == config.GetAppName() {
				return
			}
			if res, err := resources.ToResource(resp.Datatype.EnumIndex(), resp.Payload); err == nil {
				access := oclib.NewRequestAdmin(oclib.LibDataEnum(oclib.PEER), nil)
				p := access.LoadOne(res.GetCreatorID())
				realP := p.ToPeer()
				if realP == nil {
					return
				} else if realP.Relation == peer.SELF {
					pubKey, err := common.PubKeyFromString(realP.PublicKey) // extract pubkey from pubkey str
					if err != nil {
						return
					}
					ok, _ := pubKey.Verify(resp.Payload, res.GetSignature())
					if b, err := json.Marshal(stream.Verify{
						IsVerified: ok,
					}); err == nil {
						tools.NewNATSCaller().SetNATSPub(tools.VERIFY_RESOURCE, tools.NATSResponse{
							FromApp: "oc-discovery",
							Method:  int(tools.VERIFY_RESOURCE),
							Payload: b,
						})
					}
				} else if realP.Relation != peer.BLACKLIST {
					n.StreamService.PublishVerifyResources(&resp.Datatype, resp.User, realP.PeerID, resp.Payload)
				}
			}
		},*/
		tools.CREATE_RESOURCE: func(resp tools.NATSResponse) {
			if resp.FromApp == config.GetAppName() && resp.Datatype != tools.PEER && resp.Datatype != tools.WORKFLOW {
				return
			}
			logger := oclib.GetLogger()
			m := map[string]interface{}{}
			if err := json.Unmarshal(resp.Payload, &m); err != nil {
				logger.Err(err).Msg("create resource: unmarshal payload")
				return
			}
			p := &peer.Peer{}
			p = p.Deserialize(m, p).(*peer.Peer)

			ad, err := pp.AddrInfoFromString(p.StreamAddress)
			if err != nil {
				return
			}
			n.StreamService.Mu.Lock()
			defer n.StreamService.Mu.Unlock()

			if p.Relation == peer.PARTNER {
				n.StreamService.ConnectToPartner(p.StreamAddress)
			} else {
				// Drop every stream belonging to the removed peer; keep the rest.
				ps := common.ProtocolStream{}
				for proto, s := range n.StreamService.Streams {
					kept := map[pp.ID]*common.Stream{}
					for k := range s {
						if ad.ID != k {
							kept[k] = s[k]
						} else {
							s[k].Stream.Close()
						}
					}
					ps[proto] = kept
				}
				n.StreamService.Streams = ps
			}
		},
		tools.PROPALGATION_EVENT: func(resp tools.NATSResponse) {
			if resp.FromApp == config.GetAppName() {
				return
			}
			var propalgation tools.PropalgationMessage
			if err := json.Unmarshal(resp.Payload, &propalgation); err != nil {
				return
			}
			var dt *tools.DataType
			if propalgation.DataType > 0 {
				dtt := tools.DataType(propalgation.DataType)
				dt = &dtt
			}
			switch propalgation.Action {
			case tools.PB_ADMIRALTY_CONFIG, tools.PB_MINIO_CONFIG:
				var m configPayload
				var proto protocol.ID = stream.ProtocolAdmiraltyConfigResource
				if propalgation.Action == tools.PB_MINIO_CONFIG {
					proto = stream.ProtocolMinioConfigResource
				}
				if err := json.Unmarshal(resp.Payload, &m); err == nil {
					peers, _ := n.GetPeerRecord(context.Background(), m.PeerID)
					for _, p := range peers {
						n.StreamService.PublishCommon(&resp.Datatype, resp.User,
							p.PeerID, proto, resp.Payload)
					}
				}
			case tools.PB_CREATE, tools.PB_UPDATE, tools.PB_DELETE:
				n.StreamService.ToPartnerPublishEvent(
					context.Background(),
					propalgation.Action,
					dt, resp.User,
					propalgation.Payload,
				)
			case tools.PB_CONSIDERS:
				switch resp.Datatype {
				case tools.BOOKING, tools.PURCHASE_RESOURCE, tools.WORKFLOW_EXECUTION:
					var m executionConsidersPayload
					if err := json.Unmarshal(resp.Payload, &m); err == nil {
						for _, p := range m.PeerIDs {
							peers, _ := n.GetPeerRecord(context.Background(), p)
							for _, pr := range peers {
								n.StreamService.PublishCommon(&resp.Datatype, resp.User,
									pr.PeerID, stream.ProtocolConsidersResource, resp.Payload)
							}
						}
					}
				default:
					// minio / admiralty config considers — route back to OriginID.
					var m struct {
						OriginID string `json:"origin_id"`
					}
					if err := json.Unmarshal(propalgation.Payload, &m); err == nil && m.OriginID != "" {
						peers, _ := n.GetPeerRecord(context.Background(), m.OriginID)
						for _, p := range peers {
							n.StreamService.PublishCommon(nil, resp.User,
								p.PeerID, stream.ProtocolConsidersResource, propalgation.Payload)
						}
					}
				}
			case tools.PB_PLANNER:
				m := map[string]interface{}{}
				if err := json.Unmarshal(resp.Payload, &m); err == nil {
					b := []byte{}
					if len(m) > 1 {
						b = resp.Payload
					}
					n.StreamService.Mu.Lock()
					if m["peer_id"] == nil { // send to every active stream
						if n.StreamService.Streams[stream.ProtocolSendPlanner] != nil {
							for pid := range n.StreamService.Streams[stream.ProtocolSendPlanner] {
								n.StreamService.PublishCommon(nil, resp.User, pid.String(), stream.ProtocolSendPlanner, b)
							}
						}
					} else {
						n.StreamService.PublishCommon(nil, resp.User, fmt.Sprintf("%v", m["peer_id"]), stream.ProtocolSendPlanner, b)
					}
					n.StreamService.Mu.Unlock()
				}
			case tools.PB_CLOSE_PLANNER:
				m := map[string]interface{}{}
				if err := json.Unmarshal(resp.Payload, &m); err == nil {
					n.StreamService.Mu.Lock()
					if pid, err := pp.Decode(fmt.Sprintf("%v", m["peer_id"])); err == nil {
						if n.StreamService.Streams[stream.ProtocolSendPlanner] != nil && n.StreamService.Streams[stream.ProtocolSendPlanner][pid] != nil {
							n.StreamService.Streams[stream.ProtocolSendPlanner][pid].Stream.Close()
							delete(n.StreamService.Streams[stream.ProtocolSendPlanner], pid)
						}
					}
					n.StreamService.Mu.Unlock()
				}
			case tools.PB_SEARCH:
				if propalgation.DataType == int(tools.PEER) {
					m := map[string]interface{}{}
					if err := json.Unmarshal(propalgation.Payload, &m); err == nil {
						if peers, err := n.GetPeerRecord(context.Background(), fmt.Sprintf("%v", m["search"])); err == nil {
							for _, p := range peers {
								if b, err := json.Marshal(p); err == nil {
									go tools.NewNATSCaller().SetNATSPub(tools.SEARCH_EVENT, tools.NATSResponse{
										FromApp:  "oc-discovery",
										Datatype: tools.DataType(tools.PEER),
										Method:   int(tools.SEARCH_EVENT),
										Payload:  b,
									})
								}
							}
						}
					}
				} else {
					m := map[string]interface{}{}
					if err := json.Unmarshal(propalgation.Payload, &m); err == nil {
						n.PubSubService.SearchPublishEvent(
							context.Background(),
							dt,
							fmt.Sprintf("%v", m["type"]),
							resp.User,
							fmt.Sprintf("%v", m["search"]),
						)
					}
				}
			}
		},
	})
}
337 daemons/node/node.go (Normal file)
@@ -0,0 +1,337 @@
package node

import (
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"maps"
	"oc-discovery/conf"
	"oc-discovery/daemons/node/common"
	"oc-discovery/daemons/node/indexer"
	"oc-discovery/daemons/node/pubsub"
	"oc-discovery/daemons/node/stream"
	"sync"
	"time"

	oclib "cloud.o-forge.io/core/oc-lib"
	"cloud.o-forge.io/core/oc-lib/dbs"
	"cloud.o-forge.io/core/oc-lib/models/peer"
	"cloud.o-forge.io/core/oc-lib/tools"
	"github.com/google/uuid"
	"github.com/libp2p/go-libp2p"
	pubsubs "github.com/libp2p/go-libp2p-pubsub"
	"github.com/libp2p/go-libp2p/core/crypto"
	pp "github.com/libp2p/go-libp2p/core/peer"
	"github.com/libp2p/go-libp2p/core/protocol"
)

type Node struct {
	*common.LongLivedStreamRecordedService[interface{}] // change type of stream
	PS             *pubsubs.PubSub
	IndexerService *indexer.IndexerService
	PubSubService  *pubsub.PubSubService
	StreamService  *stream.StreamService
	PeerID         pp.ID
	isIndexer      bool
	peerRecord     *indexer.PeerRecord

	Mu sync.RWMutex
}

func InitNode(isNode bool, isIndexer bool, isNativeIndexer bool) (*Node, error) {
	if !isNode && !isIndexer {
		return nil, errors.New("a node must run as at least a node or an indexer")
	}
	logger := oclib.GetLogger()
	logger.Info().Msg("retrieving private key...")
	priv, err := tools.LoadKeyFromFilePrivate() // your node private key
	if err != nil {
		return nil, err
	}
	logger.Info().Msg("retrieving psk file...")
	psk, err := common.LoadPSKFromFile() // network-wide pre-shared key; the public OC PSK is the public network
	if err != nil {
		return nil, err
	}
	logger.Info().Msg("open a host...")
	h, err := libp2p.New(
		libp2p.PrivateNetwork(psk),
		libp2p.Identity(priv),
		libp2p.ListenAddrStrings(
			fmt.Sprintf("/ip4/0.0.0.0/tcp/%d", conf.GetConfig().NodeEndpointPort),
		),
	)
	if err != nil {
		return nil, errors.New("no host, no node")
	}
	logger.Info().Msg("Host open on " + h.ID().String())
	node := &Node{
		PeerID:                         h.ID(),
		isIndexer:                      isIndexer,
		LongLivedStreamRecordedService: common.NewStreamRecordedService[interface{}](h, 1000),
	}
	// Register the bandwidth probe handler so any peer measuring this node's
	// throughput can open a dedicated probe stream and read the echo.
	h.SetStreamHandler(common.ProtocolBandwidthProbe, common.HandleBandwidthProbe)
	var ps *pubsubs.PubSub
	if isNode {
		logger.Info().Msg("generate opencloud node...")
		ps, err = pubsubs.NewGossipSub(context.Background(), node.Host)
		if err != nil {
			panic(err) // can't run the node without the pubsub that propagates node state
		}
		node.PS = ps
		// buildRecord returns a fresh signed PeerRecord as JSON, embedded in each
		// heartbeat so the receiving indexer can republish it to the DHT directly.
		// peerRecord is nil until claimInfo runs, so the first ~20s heartbeats carry
		// no record; that's fine, claimInfo publishes once synchronously at startup.
		buildRecord := func() json.RawMessage {
			if node.peerRecord == nil {
				return nil
			}
			priv, err := tools.LoadKeyFromFilePrivate()
			if err != nil {
				return nil
			}
			fresh := *node.peerRecord
			fresh.PeerRecordPayload.ExpiryDate = time.Now().UTC().Add(2 * time.Minute)
			payload, _ := json.Marshal(fresh.PeerRecordPayload)
			fresh.Signature, err = priv.Sign(payload)
			if err != nil {
				return nil
			}
			b, _ := json.Marshal(fresh)
			return json.RawMessage(b)
		}
		logger.Info().Msg("connect to indexers...")
		common.ConnectToIndexers(node.Host, conf.GetConfig().MinIndexer, conf.GetConfig().MaxIndexer, node.PeerID, buildRecord)
		logger.Info().Msg("claims my node...")
		if _, err := node.claimInfo(conf.GetConfig().Name, conf.GetConfig().Hostname); err != nil {
			panic(err)
		}
		logger.Info().Msg("subscribe to decentralized search flow...")
		logger.Info().Msg("run garbage collector...")
		node.StartGC(30 * time.Second)

		if node.StreamService, err = stream.InitStream(context.Background(), node.Host, node.PeerID, 1000, node); err != nil {
			panic(err)
		}

		if node.PubSubService, err = pubsub.InitPubSub(context.Background(), node.Host, node.PS, node, node.StreamService); err != nil {
			panic(err)
		}
		f := func(ctx context.Context, evt common.Event, topic string) {
			if p, err := node.GetPeerRecord(ctx, evt.From); err == nil && len(p) > 0 {
				node.StreamService.SendResponse(p[0], &evt)
			}
		}
		node.SubscribeToSearch(node.PS, &f)
		logger.Info().Msg("connect to NATS")
		go ListenNATS(node)
		logger.Info().Msg("Node is actually running.")
	}
	if isIndexer {
		logger.Info().Msg("generate opencloud indexer...")
		node.IndexerService = indexer.NewIndexerService(node.Host, ps, 500, isNativeIndexer)
	}
	return node, nil
}

func (d *Node) Close() {
	if d.isIndexer && d.IndexerService != nil {
		d.IndexerService.Close()
	}
	d.PubSubService.Close()
	d.StreamService.Close()
	d.Host.Close()
}

func (d *Node) publishPeerRecord(
	rec *indexer.PeerRecord,
) error {
	priv, err := tools.LoadKeyFromFilePrivate() // your node private key
	if err != nil {
		return err
	}
	common.StreamMuIndexes.RLock()
	indexerSnapshot := make([]*pp.AddrInfo, 0, len(common.StaticIndexers))
	for _, ad := range common.StaticIndexers {
		indexerSnapshot = append(indexerSnapshot, ad)
	}
	common.StreamMuIndexes.RUnlock()

	for _, ad := range indexerSnapshot {
		var err error
		if common.StreamIndexers, err = common.TempStream(d.Host, *ad, common.ProtocolPublish, "", common.StreamIndexers, map[protocol.ID]*common.ProtocolInfo{},
			&common.StreamMuIndexes); err != nil {
			continue
		}
		stream := common.StreamIndexers[common.ProtocolPublish][ad.ID]
		base := indexer.PeerRecordPayload{
			Name:       rec.Name,
			DID:        rec.DID,
			PubKey:     rec.PubKey,
			ExpiryDate: time.Now().UTC().Add(2 * time.Minute),
		}
		payload, _ := json.Marshal(base)
		rec.PeerRecordPayload = base
		if rec.Signature, err = priv.Sign(payload); err != nil {
			return err
		}
		if err := json.NewEncoder(stream.Stream).Encode(&rec); err != nil { // then publish on stream
			return err
		}
	}
	return nil
}

func (d *Node) GetPeerRecord(
	ctx context.Context,
	pidOrdid string,
) ([]*peer.Peer, error) {
	var err error
	var info map[string]indexer.PeerRecord
	common.StreamMuIndexes.RLock()
	indexerSnapshot2 := make([]*pp.AddrInfo, 0, len(common.StaticIndexers))
	for _, ad := range common.StaticIndexers {
		indexerSnapshot2 = append(indexerSnapshot2, ad)
	}
	common.StreamMuIndexes.RUnlock()

	// Build the GetValue request: if pidOrdid is neither a UUID DID nor a libp2p
	// PeerID, treat it as a human-readable name and let the indexer resolve it.
	getReq := indexer.GetValue{Key: pidOrdid}
	isNameSearch := false
	if pidR, pidErr := pp.Decode(pidOrdid); pidErr == nil {
		getReq.PeerID = pidR
	} else if _, uuidErr := uuid.Parse(pidOrdid); uuidErr != nil {
		// Not a UUID DID, so treat pidOrdid as a name substring search.
		getReq.Name = pidOrdid
		getReq.Key = ""
		isNameSearch = true
	}

	for _, ad := range indexerSnapshot2 {
		if common.StreamIndexers, err = common.TempStream(d.Host, *ad, common.ProtocolGet, "",
			common.StreamIndexers, map[protocol.ID]*common.ProtocolInfo{}, &common.StreamMuIndexes); err != nil {
			continue
		}
		stream := common.StreamIndexers[common.ProtocolGet][ad.ID]
		if err := json.NewEncoder(stream.Stream).Encode(getReq); err != nil {
			continue
		}
		var resp indexer.GetResponse
		if err := json.NewDecoder(stream.Stream).Decode(&resp); err != nil {
			continue
		}
		if resp.Found {
			if info == nil {
				info = resp.Records
			} else {
				// Aggregate results from all indexers for name searches.
				maps.Copy(info, resp.Records)
			}
			// For exact lookups (PeerID / DID) stop at the first hit.
			if !isNameSearch {
				break
			}
		}
	}
	var ps []*peer.Peer
	for _, pr := range info {
		if pk, err := pr.Verify(); err != nil {
			return nil, err
		} else if ok, p, err := pr.ExtractPeer(d.PeerID.String(), pr.PeerID, pk); err != nil {
			return nil, err
		} else {
			if ok {
				d.publishPeerRecord(&pr)
			}
			ps = append(ps, p)
		}
	}

	return ps, err
}

func (d *Node) claimInfo(
	name string,
	endPoint string, // TODO : the endpoint is not necessarily the StreamAddress
) (*peer.Peer, error) {
	if endPoint == "" {
		return nil, errors.New("no endpoint found for peer")
	}
	did := uuid.New().String()

	peers := oclib.NewRequestAdmin(oclib.LibDataEnum(oclib.PEER), nil).Search(&dbs.Filters{
		And: map[string][]dbs.Filter{ // search by name if no filters are provided
			"peer_id": {{Operator: dbs.EQUAL.String(), Value: d.Host.ID().String()}},
		},
	}, "", false)
	if len(peers.Data) > 0 {
		did = peers.Data[0].GetID() // reuse the existing DID if the peer is already registered
	}
	priv, err := tools.LoadKeyFromFilePrivate()
	if err != nil {
		return nil, err
	}
	pub, err := tools.LoadKeyFromFilePublic()
	if err != nil {
		return nil, err
	}
	pubBytes, err := crypto.MarshalPublicKey(pub)
	if err != nil {
		return nil, err
	}

	now := time.Now().UTC()
	expiry := now.Add(150 * time.Second)

	pRec := indexer.PeerRecordPayload{
		Name:       name,
		DID:        did, // REAL PEER ID
		PubKey:     pubBytes,
		ExpiryDate: expiry,
	}
	d.PeerID = d.Host.ID()
	payload, _ := json.Marshal(pRec)

	rec := &indexer.PeerRecord{
		PeerRecordPayload: pRec,
	}
	rec.Signature, err = priv.Sign(payload)
	if err != nil {
		return nil, err
	}
	rec.PeerID = d.Host.ID().String()
	rec.APIUrl = endPoint
	rec.StreamAddress = "/ip4/" + conf.GetConfig().Hostname + "/tcp/" + fmt.Sprintf("%v", conf.GetConfig().NodeEndpointPort) + "/p2p/" + rec.PeerID
	rec.NATSAddress = oclib.GetConfig().NATSUrl
	rec.WalletAddress = "my-wallet"

	if err := d.publishPeerRecord(rec); err != nil {
		return nil, err
	}
	d.peerRecord = rec
	if _, err := rec.Verify(); err != nil {
		return nil, err
	} else {
		_, p, err := rec.ExtractPeer(did, did, pub)
		return p, err
	}
}

/*
TODO:
- Booking is a new decentralized flow:
  we check, wait for a response, validate; it goes through discovery, and we relay it.
- The shared workspace is a decentralization concern:
  movements are communicated to the shared peers.
- A shared workspace replaces the notion of partnership at the partnering scale
  -> when a workspace is shared, peers become temporary partners,
     whether they originally were partners or not.
  -> they then get the same privileges.
- Admiralty orchestrations work the same way.
  An event then triggers the creation of a service key.

We must be able to CRUD a DBObject with signature verification.
*/
40 daemons/node/pubsub/handler.go (Normal file)
@@ -0,0 +1,40 @@
package pubsub

import (
	"context"
	"oc-discovery/daemons/node/common"

	"cloud.o-forge.io/core/oc-lib/tools"
)

func (ps *PubSubService) handleEvent(ctx context.Context, topicName string, evt *common.Event) error {
	action := ps.getTopicName(topicName)
	if err := ps.handleEventSearch(ctx, evt, action); err != nil {
		return err
	}
	return nil
}

func (ps *PubSubService) handleEventSearch( // only on partner followings; 3 channels for every partner.
	ctx context.Context,
	evt *common.Event,
	action tools.PubSubAction,
) error {
	if action != tools.PB_SEARCH {
		return nil
	}
	if p, err := ps.Node.GetPeerRecord(ctx, evt.From); err == nil && len(p) > 0 { // peerFrom is unique
		if err := evt.Verify(p[0]); err != nil {
			return err
		}
		switch action {
		case tools.PB_SEARCH: // when someone asks for a search.
			if err := ps.StreamService.SendResponse(p[0], evt); err != nil {
				return err
			}
		default:
			return nil
		}
	}
	return nil
}
65 daemons/node/pubsub/publish.go (Normal file)
@@ -0,0 +1,65 @@
package pubsub

import (
	"context"
	"encoding/json"
	"errors"
	"oc-discovery/daemons/node/stream"
	"oc-discovery/models"

	"cloud.o-forge.io/core/oc-lib/dbs"
	"cloud.o-forge.io/core/oc-lib/models/peer"
	"cloud.o-forge.io/core/oc-lib/tools"
)

func (ps *PubSubService) SearchPublishEvent(
	ctx context.Context, dt *tools.DataType, typ string, user string, search string) error {
	b, err := json.Marshal(map[string]string{"search": search})
	if err != nil {
		return err
	}
	switch typ {
	case "known": // search strategy: every known peer that is not blacklisted
		return ps.StreamService.PublishesCommon(dt, user, &dbs.Filters{
			And: map[string][]dbs.Filter{
				"": {{Operator: dbs.NOT.String(), Value: dbs.Filters{
					And: map[string][]dbs.Filter{
						"relation": {{Operator: dbs.EQUAL.String(), Value: peer.BLACKLIST}},
					},
				}}},
			},
		}, b, stream.ProtocolSearchResource)
	case "partner": // search strategy: partners only
		return ps.StreamService.PublishesCommon(dt, user, &dbs.Filters{
			And: map[string][]dbs.Filter{
				"relation": {{Operator: dbs.EQUAL.String(), Value: peer.PARTNER}},
			},
		}, b, stream.ProtocolSearchResource)
	case "all": // gossip pubsub broadcast
		return ps.publishEvent(ctx, dt, tools.PB_SEARCH, user, b)
	default:
		return errors.New("unknown search type")
	}
}

func (ps *PubSubService) publishEvent(
	ctx context.Context, dt *tools.DataType, action tools.PubSubAction, user string, payload []byte,
) error {
	priv, err := tools.LoadKeyFromFilePrivate()
	if err != nil {
		return err
	}
	msg, _ := json.Marshal(models.NewEvent(action.String(), ps.Host.ID().String(), dt, user, payload, priv))
	topic, err := ps.PS.Join(action.String())
	if err != nil {
		return err
	}
	return topic.Publish(ctx, msg)
}

// TODO: review publishing + add search on public: yes
// TODO: search should verify DataType
48 daemons/node/pubsub/service.go (Normal file)
@@ -0,0 +1,48 @@
package pubsub

import (
	"context"
	"oc-discovery/daemons/node/common"
	"oc-discovery/daemons/node/stream"
	"strings"
	"sync"

	oclib "cloud.o-forge.io/core/oc-lib"
	"cloud.o-forge.io/core/oc-lib/tools"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
	"github.com/libp2p/go-libp2p/core/host"
)

type PubSubService struct {
	*common.LongLivedPubSubService
	Node          common.DiscoveryPeer
	Host          host.Host
	PS            *pubsub.PubSub
	StreamService *stream.StreamService
	Subscription  []string
	mutex         sync.RWMutex
}

func InitPubSub(ctx context.Context, h host.Host, ps *pubsub.PubSub, node common.DiscoveryPeer, streamService *stream.StreamService) (*PubSubService, error) {
	service := &PubSubService{
		LongLivedPubSubService: common.NewLongLivedPubSubService(h),
		Node:                   node,
		StreamService:          streamService,
		PS:                     ps,
	}
	logger := oclib.GetLogger()
	logger.Info().Msg("subscribe to events...")
	service.initSubscribeEvents(ctx)
	return service, nil
}

func (ps *PubSubService) getTopicName(topicName string) tools.PubSubAction {
	ns := strings.Split(topicName, ".")
	if len(ns) > 0 {
		return tools.GetActionString(ns[0])
	}
	return tools.NONE
}

func (ps *PubSubService) Close() {
}
45 daemons/node/pubsub/subscribe.go (Normal file)
@@ -0,0 +1,45 @@
package pubsub

import (
	"context"
	"oc-discovery/daemons/node/common"

	oclib "cloud.o-forge.io/core/oc-lib"
	"cloud.o-forge.io/core/oc-lib/models/peer"
	"cloud.o-forge.io/core/oc-lib/tools"
)

func (ps *PubSubService) initSubscribeEvents(ctx context.Context) error {
	if err := ps.subscribeEvents(ctx, nil, tools.PB_SEARCH, ""); err != nil {
		return err
	}
	return nil
}

// generic function to subscribe to a DHT flow of events
func (ps *PubSubService) subscribeEvents(
	ctx context.Context, dt *tools.DataType, action tools.PubSubAction, peerID string,
) error {
	logger := oclib.GetLogger()
	// define a topic name: action#peerID
	name := action.String() + "#" + peerID
	if dt != nil { // if a datatype is specified: action.datatype#peerID
		name = action.String() + "." + (*dt).String() + "#" + peerID
	}
	f := func(ctx context.Context, evt common.Event, topicName string) {
		if p, err := ps.Node.GetPeerRecord(ctx, evt.From); err == nil && len(p) > 0 {
			if err := ps.processEvent(ctx, p[0], &evt, topicName); err != nil {
				logger.Err(err)
			}
		}
	}
	return common.SubscribeEvents(ps.LongLivedPubSubService, ctx, name, -1, f)
}

func (ps *PubSubService) processEvent(
	ctx context.Context, p *peer.Peer, event *common.Event, topicName string) error {
	if err := event.Verify(p); err != nil {
		return err
	}
	return ps.handleEvent(ctx, topicName, event)
}
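The topic naming scheme used by subscribeEvents (and parsed back by getTopicName, which splits on "." and reads the action prefix) can be sketched as a pure function; the action/datatype strings below are illustrative placeholders, not values from the tools package.

```go
package main

import "fmt"

// topicName mirrors the scheme in subscribeEvents:
// "action#peerID", or "action.datatype#peerID" when a datatype is given.
func topicName(action string, datatype *string, peerID string) string {
	if datatype != nil {
		return action + "." + *datatype + "#" + peerID
	}
	return action + "#" + peerID
}

// actionOf mirrors getTopicName: everything before the first "." is the action.
func actionOf(topic string) string {
	for i := 0; i < len(topic); i++ {
		if topic[i] == '.' || topic[i] == '#' {
			return topic[:i]
		}
	}
	return topic
}

func main() {
	dt := "peer"
	fmt.Println(topicName("search", nil, "12D3KooExample"))  // search#12D3KooExample
	fmt.Println(topicName("search", &dt, "12D3KooExample")) // search.peer#12D3KooExample
	fmt.Println(actionOf("search.peer#12D3KooExample"))     // search
}
```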
218 daemons/node/stream/handler.go (Normal file)
@@ -0,0 +1,218 @@
package stream

import (
	"context"
	"crypto/subtle"
	"encoding/json"
	"errors"
	"fmt"
	"oc-discovery/daemons/node/common"

	oclib "cloud.o-forge.io/core/oc-lib"
	"cloud.o-forge.io/core/oc-lib/models/booking/planner"
	"cloud.o-forge.io/core/oc-lib/models/peer"
	"cloud.o-forge.io/core/oc-lib/models/resources"
	"cloud.o-forge.io/core/oc-lib/tools"
)

type Verify struct {
	IsVerified bool `json:"is_verified"`
}

func (ps *StreamService) handleEvent(protocol string, evt *common.Event) error {
	fmt.Println("handleEvent")
	ps.handleEventFromPartner(evt, protocol)
	/*if protocol == ProtocolVerifyResource {
		if evt.DataType == -1 {
			tools.NewNATSCaller().SetNATSPub(tools.VERIFY_RESOURCE, tools.NATSResponse{
				FromApp: "oc-discovery",
				Method:  int(tools.VERIFY_RESOURCE),
				Payload: evt.Payload,
			})
		} else if err := ps.verifyResponse(evt); err != nil {
			return err
		}
	}*/
	switch protocol {
	case ProtocolSendPlanner:
		return ps.sendPlanner(evt)
	case ProtocolSearchResource:
		if evt.DataType > -1 {
			return ps.retrieveResponse(evt)
		}
		return nil // DataType == -1 searches are answered by handleEventFromPartner
	case ProtocolConsidersResource:
		return ps.pass(evt, tools.PB_CONSIDERS)
	case ProtocolAdmiraltyConfigResource:
		return ps.pass(evt, tools.PB_ADMIRALTY_CONFIG)
	case ProtocolMinioConfigResource:
		return ps.pass(evt, tools.PB_MINIO_CONFIG)
	case ProtocolCreateResource, ProtocolUpdateResource, ProtocolDeleteResource:
		return nil // already relayed to NATS by handleEventFromPartner
	}
	return errors.New("no action authorized available : " + protocol)
}

func (abs *StreamService) verifyResponse(event *common.Event) error {
	res, err := resources.ToResource(int(event.DataType), event.Payload)
	if err != nil || res == nil {
		return nil
	}
	verify := Verify{
		IsVerified: false,
	}
	access := oclib.NewRequestAdmin(oclib.LibDataEnum(event.DataType), nil)
	data := access.LoadOne(res.GetID())
	if data.Err == "" && data.Data != nil {
		if b, err := json.Marshal(data.Data); err == nil {
			if res2, err := resources.ToResource(int(event.DataType), b); err == nil {
				verify.IsVerified = subtle.ConstantTimeCompare(res.GetSignature(), res2.GetSignature()) == 1
			}
		}
	}
	if b, err := json.Marshal(verify); err == nil {
		abs.PublishCommon(nil, "", event.From, ProtocolVerifyResource, b)
	}
	return nil
}

func (abs *StreamService) sendPlanner(event *common.Event) error {
	if len(event.Payload) == 0 {
		if plan, err := planner.GenerateShallow(&tools.APIRequest{Admin: true}); err == nil {
			if b, err := json.Marshal(plan); err == nil {
				abs.PublishCommon(nil, event.User, event.From, ProtocolSendPlanner, b)
			} else {
				return err
			}
		} else {
			m := map[string]interface{}{}
			if err := json.Unmarshal(event.Payload, &m); err == nil {
				m["peer_id"] = event.From
				if pl, err := json.Marshal(m); err == nil {
					if b, err := json.Marshal(tools.PropalgationMessage{
						DataType: -1,
						Action:   tools.PB_PLANNER,
						Payload:  pl,
					}); err == nil {
						go tools.NewNATSCaller().SetNATSPub(tools.PROPALGATION_EVENT, tools.NATSResponse{
							FromApp:  "oc-discovery",
							Datatype: tools.DataType(oclib.BOOKING),
							Method:   int(tools.PROPALGATION_EVENT),
							Payload:  b,
						})
					}
				}
			}
		}
	}
	return nil
}

func (abs *StreamService) retrieveResponse(event *common.Event) error {
	res, err := resources.ToResource(int(event.DataType), event.Payload)
	if err != nil || res == nil {
		return nil
	}
	b, err := json.Marshal(res.Serialize(res))
	if err != nil {
		return err
	}
	go tools.NewNATSCaller().SetNATSPub(tools.SEARCH_EVENT, tools.NATSResponse{
		FromApp:  "oc-discovery",
		Datatype: tools.DataType(event.DataType),
		Method:   int(tools.SEARCH_EVENT),
		Payload:  b,
	})
	return nil
}

func (abs *StreamService) pass(event *common.Event, action tools.PubSubAction) error {
	if b, err := json.Marshal(&tools.PropalgationMessage{
		Action:   action,
		DataType: int(event.DataType),
		Payload:  event.Payload,
	}); err == nil {
		go tools.NewNATSCaller().SetNATSPub(tools.PROPALGATION_EVENT, tools.NATSResponse{
			FromApp:  "oc-discovery",
			Datatype: tools.DataType(event.DataType),
			Method:   int(tools.PROPALGATION_EVENT),
			Payload:  b,
		})
	}
	return nil
}

func (ps *StreamService) handleEventFromPartner(evt *common.Event, protocol string) error {
	switch protocol {
	case ProtocolSearchResource:
		if evt.DataType < 0 {
			access := oclib.NewRequestAdmin(oclib.LibDataEnum(oclib.PEER), nil)
			peers := access.Search(nil, evt.From, false)
			if len(peers.Data) > 0 {
				p := peers.Data[0].(*peer.Peer)
				// TODO : something if the peer is missing on our side!
				ps.SendResponse(p, evt)
			} else if p, err := ps.Node.GetPeerRecord(context.Background(), evt.From); err == nil && len(p) > 0 { // evt.From is a peerID
				ps.SendResponse(p[0], evt)
			}
		}
	case ProtocolCreateResource, ProtocolUpdateResource:
		fmt.Println("RECEIVED Protocol.Update")
		go tools.NewNATSCaller().SetNATSPub(tools.CREATE_RESOURCE, tools.NATSResponse{
			FromApp:  "oc-discovery",
			Datatype: tools.DataType(evt.DataType),
			Method:   int(tools.CREATE_RESOURCE),
			Payload:  evt.Payload,
		})
	case ProtocolDeleteResource:
		go tools.NewNATSCaller().SetNATSPub(tools.REMOVE_RESOURCE, tools.NATSResponse{
			FromApp:  "oc-discovery",
			Datatype: tools.DataType(evt.DataType),
			Method:   int(tools.REMOVE_RESOURCE),
			Payload:  evt.Payload,
		})
	default:
		return errors.New("no action authorized available : " + protocol)
	}
	return nil
}

func (abs *StreamService) SendResponse(p *peer.Peer, event *common.Event) error {
	dts := []oclib.LibDataEnum{oclib.LibDataEnum(event.DataType)}
	if event.DataType == -1 { // expect all resources
		dts = []oclib.LibDataEnum{
			oclib.LibDataEnum(oclib.COMPUTE_RESOURCE),
			oclib.LibDataEnum(oclib.STORAGE_RESOURCE),
			oclib.LibDataEnum(oclib.PROCESSING_RESOURCE),
			oclib.LibDataEnum(oclib.DATA_RESOURCE),
			oclib.LibDataEnum(oclib.WORKFLOW_RESOURCE),
		}
	}
	var m map[string]string
	err := json.Unmarshal(event.Payload, &m)
	if err != nil {
		return err
	}
	for _, dt := range dts {
		access := oclib.NewRequestAdmin(dt, nil) // query each fanned-out resource type in turn
		peerID := p.GetID()
		searched := access.Search(abs.FilterPeer(peerID, m["search"]), "", false)
		for _, ss := range searched.Data {
			if j, err := json.Marshal(ss); err == nil {
				if event.DataType != -1 {
					ndt := tools.DataType(dt.EnumIndex())
					abs.PublishCommon(&ndt, event.User, peerID, ProtocolSearchResource, j)
				} else {
					abs.PublishCommon(nil, event.User, peerID, ProtocolSearchResource, j)
				}
			}
		}
	}
	return nil
}
141 daemons/node/stream/publish.go (Normal file)
@@ -0,0 +1,141 @@
package stream

import (
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"oc-discovery/daemons/node/common"

	oclib "cloud.o-forge.io/core/oc-lib"
	"cloud.o-forge.io/core/oc-lib/dbs"
	"cloud.o-forge.io/core/oc-lib/models/peer"
	"cloud.o-forge.io/core/oc-lib/tools"
	pp "github.com/libp2p/go-libp2p/core/peer"
	"github.com/libp2p/go-libp2p/core/protocol"
)

func (ps *StreamService) PublishesCommon(dt *tools.DataType, user string, filter *dbs.Filters, resource []byte, protos ...protocol.ID) error {
	access := oclib.NewRequestAdmin(oclib.LibDataEnum(oclib.PEER), nil)
	p := access.Search(filter, "", false)
	for _, pes := range p.Data {
		for _, proto := range protos {
			if _, err := ps.PublishCommon(dt, user, pes.(*peer.Peer).PeerID, proto, resource); err != nil {
				return err
			}
		}
	}
	return nil
}

func (ps *StreamService) PublishCommon(dt *tools.DataType, user string, toPeerID string, proto protocol.ID, resource []byte) (*common.Stream, error) {
	fmt.Println("PublishCommon")
	if toPeerID == ps.Key.String() {
		return nil, errors.New("can't send to ourselves")
	}

	access := oclib.NewRequestAdmin(oclib.LibDataEnum(oclib.PEER), nil)
	p := access.Search(&dbs.Filters{
		And: map[string][]dbs.Filter{ // search by name if no filters are provided
			"peer_id": {{Operator: dbs.EQUAL.String(), Value: toPeerID}},
		},
	}, toPeerID, false)
	var pe *peer.Peer
	if len(p.Data) > 0 && p.Data[0].(*peer.Peer).Relation != peer.BLACKLIST {
		pe = p.Data[0].(*peer.Peer)
	} else if pps, err := ps.Node.GetPeerRecord(context.Background(), toPeerID); err == nil && len(pps) > 0 {
		pe = pps[0]
	}
	if pe != nil {
		// use pe here: p.Data may be empty when the peer came from GetPeerRecord
		ad, err := pp.AddrInfoFromString(pe.StreamAddress)
		if err != nil {
			return nil, err
		}
		return ps.write(toPeerID, ad, dt, user, resource, proto)
	}
	return nil, errors.New("invalid peer " + toPeerID)
}

func (ps *StreamService) ToPartnerPublishEvent(
	ctx context.Context, action tools.PubSubAction, dt *tools.DataType, user string, payload []byte) error {
	if *dt == tools.PEER {
		var p peer.Peer
		if err := json.Unmarshal(payload, &p); err != nil {
			return err
		}
		pid, err := pp.Decode(p.PeerID)
		if err != nil {
			return err
		}

		if pe, err := oclib.GetMySelf(); err != nil {
			return err
		} else if pe.GetID() == p.GetID() {
return fmt.Errorf("can't send to ourself")
|
||||||
|
} else {
|
||||||
|
pe.Relation = p.Relation
|
||||||
|
pe.Verify = false
|
||||||
|
if b2, err := json.Marshal(pe); err == nil {
|
||||||
|
if _, err := ps.PublishCommon(dt, user, p.PeerID, ProtocolUpdateResource, b2); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
if p.Relation == peer.PARTNER {
|
||||||
|
if ps.Streams[ProtocolHeartbeatPartner] == nil {
|
||||||
|
ps.Streams[ProtocolHeartbeatPartner] = map[pp.ID]*common.Stream{}
|
||||||
|
}
|
||||||
|
fmt.Println("SHOULD CONNECT")
|
||||||
|
ps.ConnectToPartner(p.StreamAddress)
|
||||||
|
} else if ps.Streams[ProtocolHeartbeatPartner] != nil && ps.Streams[ProtocolHeartbeatPartner][pid] != nil {
|
||||||
|
for _, pids := range ps.Streams {
|
||||||
|
if pids[pid] != nil {
|
||||||
|
delete(pids, pid)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
ks := []protocol.ID{}
|
||||||
|
for k := range protocolsPartners {
|
||||||
|
ks = append(ks, k)
|
||||||
|
}
|
||||||
|
ps.PublishesCommon(dt, user, &dbs.Filters{ // filter by like name, short_description, description, owner, url if no filters are provided
|
||||||
|
And: map[string][]dbs.Filter{
|
||||||
|
"relation": {{Operator: dbs.EQUAL.String(), Value: peer.PARTNER}},
|
||||||
|
},
|
||||||
|
}, payload, ks...)
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (s *StreamService) write(
|
||||||
|
did string,
|
||||||
|
peerID *pp.AddrInfo,
|
||||||
|
dt *tools.DataType,
|
||||||
|
user string,
|
||||||
|
payload []byte,
|
||||||
|
proto protocol.ID) (*common.Stream, error) {
|
||||||
|
logger := oclib.GetLogger()
|
||||||
|
var err error
|
||||||
|
pts := map[protocol.ID]*common.ProtocolInfo{}
|
||||||
|
for k, v := range protocols {
|
||||||
|
pts[k] = v
|
||||||
|
}
|
||||||
|
for k, v := range protocolsPartners {
|
||||||
|
pts[k] = v
|
||||||
|
}
|
||||||
|
// should create a very temp stream
|
||||||
|
if s.Streams, err = common.TempStream(s.Host, *peerID, proto, did, s.Streams, pts, &s.Mu); err != nil {
|
||||||
|
return nil, errors.New("no stream available for protocol " + fmt.Sprintf("%v", proto) + " from PID " + peerID.ID.String())
|
||||||
|
|
||||||
|
}
|
||||||
|
stream := s.Streams[proto][peerID.ID]
|
||||||
|
evt := common.NewEvent(string(proto), peerID.ID.String(), dt, user, payload)
|
||||||
|
fmt.Println("SEND EVENT ", evt.From, evt.DataType, evt.Timestamp)
|
||||||
|
if err := json.NewEncoder(stream.Stream).Encode(evt); err != nil {
|
||||||
|
stream.Stream.Close()
|
||||||
|
logger.Err(err)
|
||||||
|
return stream, nil
|
||||||
|
}
|
||||||
|
return stream, nil
|
||||||
|
}
|
||||||
daemons/node/stream/service.go (new file, 305 lines)
@@ -0,0 +1,305 @@
package stream

import (
	"context"
	"encoding/json"
	"fmt"
	"oc-discovery/conf"
	"oc-discovery/daemons/node/common"
	"strings"
	"sync"
	"time"

	oclib "cloud.o-forge.io/core/oc-lib"
	"cloud.o-forge.io/core/oc-lib/dbs"
	"cloud.o-forge.io/core/oc-lib/models/peer"
	"cloud.o-forge.io/core/oc-lib/models/utils"
	"github.com/google/uuid"
	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/network"
	pp "github.com/libp2p/go-libp2p/core/peer"
	"github.com/libp2p/go-libp2p/core/protocol"
	ma "github.com/multiformats/go-multiaddr"
)

const ProtocolConsidersResource = "/opencloud/resource/considers/1.0"
const ProtocolMinioConfigResource = "/opencloud/minio/config/1.0"
const ProtocolAdmiraltyConfigResource = "/opencloud/admiralty/config/1.0"

const ProtocolSearchResource = "/opencloud/resource/search/1.0"
const ProtocolCreateResource = "/opencloud/resource/create/1.0"
const ProtocolUpdateResource = "/opencloud/resource/update/1.0"
const ProtocolDeleteResource = "/opencloud/resource/delete/1.0"

const ProtocolSendPlanner = "/opencloud/resource/planner/1.0"
const ProtocolVerifyResource = "/opencloud/resource/verify/1.0"
const ProtocolHeartbeatPartner = "/opencloud/resource/heartbeat/partner/1.0"

var protocols = map[protocol.ID]*common.ProtocolInfo{
	ProtocolConsidersResource:       {WaitResponse: false, TTL: 3 * time.Second},
	ProtocolSendPlanner:             {WaitResponse: true, TTL: 24 * time.Hour},
	ProtocolSearchResource:          {WaitResponse: true, TTL: 1 * time.Minute},
	ProtocolVerifyResource:          {WaitResponse: true, TTL: 1 * time.Minute},
	ProtocolMinioConfigResource:     {WaitResponse: true, TTL: 1 * time.Minute},
	ProtocolAdmiraltyConfigResource: {WaitResponse: true, TTL: 1 * time.Minute},
}

var protocolsPartners = map[protocol.ID]*common.ProtocolInfo{
	ProtocolCreateResource: {TTL: 3 * time.Second},
	ProtocolUpdateResource: {TTL: 3 * time.Second},
	ProtocolDeleteResource: {TTL: 3 * time.Second},
}
type StreamService struct {
	Key          pp.ID
	Host         host.Host
	Node         common.DiscoveryPeer
	Streams      common.ProtocolStream
	maxNodesConn int
	Mu           sync.RWMutex
	// Stream map[protocol.ID]map[pp.ID]*daemons.Stream
}

func InitStream(ctx context.Context, h host.Host, key pp.ID, maxNode int, node common.DiscoveryPeer) (*StreamService, error) {
	logger := oclib.GetLogger()
	service := &StreamService{
		Key:          key,
		Node:         node,
		Host:         h,
		Streams:      common.ProtocolStream{},
		maxNodesConn: maxNode,
	}
	logger.Info().Msg("registering partner heartbeat protocol handler...")
	service.Host.SetStreamHandler(ProtocolHeartbeatPartner, service.HandlePartnerHeartbeat)
	for proto := range protocols {
		service.Host.SetStreamHandler(proto, service.HandleResponse)
	}
	logger.Info().Msg("connecting to partners...")
	service.connectToPartners() // set up a stream per partner
	go service.StartGC(8 * time.Second)
	return service, nil
}

func (s *StreamService) HandleResponse(stream network.Stream) {
	s.Mu.Lock()
	if s.Streams[stream.Protocol()] == nil {
		s.Streams[stream.Protocol()] = map[pp.ID]*common.Stream{}
	}
	expiry := 1 * time.Minute

	if protocols[stream.Protocol()] != nil {
		expiry = protocols[stream.Protocol()].TTL
	} else if protocolsPartners[stream.Protocol()] != nil {
		expiry = protocolsPartners[stream.Protocol()].TTL
	}

	s.Streams[stream.Protocol()][stream.Conn().RemotePeer()] = &common.Stream{
		Stream: stream,
		Expiry: time.Now().UTC().Add(expiry + 1*time.Minute),
	}
	s.Mu.Unlock()

	go s.readLoop(s.Streams[stream.Protocol()][stream.Conn().RemotePeer()],
		stream.Conn().RemotePeer(),
		stream.Protocol(), protocols[stream.Protocol()])
}

func (s *StreamService) HandlePartnerHeartbeat(stream network.Stream) {
	s.Mu.Lock()
	if s.Streams[ProtocolHeartbeatPartner] == nil {
		s.Streams[ProtocolHeartbeatPartner] = map[pp.ID]*common.Stream{}
	}
	streams := s.Streams[ProtocolHeartbeatPartner]
	streamsAnonym := map[pp.ID]common.HeartBeatStreamed{}
	for k, v := range streams {
		streamsAnonym[k] = v
	}
	s.Mu.Unlock()
	pid, hb, err := common.CheckHeartbeat(s.Host, stream, json.NewDecoder(stream), streamsAnonym, &s.Mu, s.maxNodesConn)
	if err != nil {
		return
	}
	s.Mu.Lock()
	defer s.Mu.Unlock()
	// If the record was already seen, refresh its expiry.
	if rec, ok := streams[*pid]; ok {
		rec.DID = hb.DID
		rec.Expiry = time.Now().UTC().Add(10 * time.Second)
	} else { // not streamed yet: dial the partner back
		val, err := stream.Conn().RemoteMultiaddr().ValueForProtocol(ma.P_IP4)
		if err == nil {
			s.ConnectToPartner(val)
		}
	}
	// GC is already running via InitStream; starting a new ticker goroutine
	// on every heartbeat would leak an unbounded number of goroutines.
}

func (s *StreamService) connectToPartners() error {
	logger := oclib.GetLogger()
	for proto, info := range protocolsPartners {
		f := func(ss network.Stream) {
			if s.Streams[proto] == nil {
				s.Streams[proto] = map[pp.ID]*common.Stream{}
			}
			s.Streams[proto][ss.Conn().RemotePeer()] = &common.Stream{
				Stream: ss,
				Expiry: time.Now().UTC().Add(10 * time.Second),
			}
			go s.readLoop(s.Streams[proto][ss.Conn().RemotePeer()], ss.Conn().RemotePeer(), proto, info)
		}
		logger.Info().Msg("SetStreamHandler " + string(proto))
		s.Host.SetStreamHandler(proto, f)
	}
	peers, err := s.searchPeer(fmt.Sprintf("%v", peer.PARTNER.EnumIndex()))
	if err != nil {
		logger.Err(err)
		return err
	}
	for _, p := range peers {
		s.ConnectToPartner(p.StreamAddress)
	}
	return nil
}

func (s *StreamService) ConnectToPartner(address string) {
	logger := oclib.GetLogger()
	if ad, err := pp.AddrInfoFromString(address); err == nil {
		logger.Info().Msg("Connect to Partner " + ProtocolHeartbeatPartner + " " + address)
		common.SendHeartbeat(context.Background(), ProtocolHeartbeatPartner, conf.GetConfig().Name,
			s.Host, s.Streams, map[string]*pp.AddrInfo{address: ad}, nil, 20*time.Second)
	}
}

func (s *StreamService) searchPeer(search string) ([]*peer.Peer, error) {
	ps := []*peer.Peer{}
	if conf.GetConfig().PeerIDS != "" {
		for _, peerID := range strings.Split(conf.GetConfig().PeerIDS, ",") {
			ppID := strings.Split(peerID, "/")
			ps = append(ps, &peer.Peer{
				AbstractObject: utils.AbstractObject{
					UUID: uuid.New().String(),
					Name: ppID[1],
				},
				PeerID:        ppID[len(ppID)-1],
				StreamAddress: peerID,
				Relation:      peer.PARTNER,
			})
		}
	}
	access := oclib.NewRequestAdmin(oclib.LibDataEnum(oclib.PEER), nil)
	peers := access.Search(nil, search, false)
	for _, p := range peers.Data {
		ps = append(ps, p.(*peer.Peer))
	}
	return ps, nil
}

func (ix *StreamService) Close() {
	for _, s := range ix.Streams {
		for _, ss := range s {
			ss.Stream.Close()
		}
	}
}

func (s *StreamService) StartGC(interval time.Duration) {
	go func() {
		t := time.NewTicker(interval)
		defer t.Stop()
		for range t.C {
			s.gc()
		}
	}()
}

func (s *StreamService) gc() {
	s.Mu.Lock()
	defer s.Mu.Unlock()
	now := time.Now().UTC()

	if s.Streams[ProtocolHeartbeatPartner] == nil {
		s.Streams[ProtocolHeartbeatPartner] = map[pp.ID]*common.Stream{}
	}
	streams := s.Streams[ProtocolHeartbeatPartner]
	for pid, rec := range streams {
		if now.After(rec.Expiry) {
			for _, sstreams := range s.Streams {
				if sstreams[pid] != nil {
					sstreams[pid].Stream.Close()
					delete(sstreams, pid)
				}
			}
		}
	}
}

func (ps *StreamService) readLoop(s *common.Stream, id pp.ID, proto protocol.ID, protocolInfo *common.ProtocolInfo) {
	defer s.Stream.Close()
	defer func() {
		ps.Mu.Lock()
		defer ps.Mu.Unlock()
		delete(ps.Streams[proto], id)
	}()
	loop := true
	if !protocolInfo.PersistantStream && !protocolInfo.WaitResponse { // wait at most 2s for a response
		time.AfterFunc(2*time.Second, func() {
			loop = false
		})
	}
	for {
		if !loop {
			break
		}
		var evt common.Event
		if err := json.NewDecoder(s.Stream).Decode(&evt); err != nil {
			// Any decode error (EOF, reset, malformed JSON) terminates the loop;
			// continuing on a dead/closed stream creates an infinite spin.
			return
		}
		ps.handleEvent(evt.Type, &evt)
		if protocolInfo.WaitResponse && !protocolInfo.PersistantStream {
			break
		}
	}
}

func (abs *StreamService) FilterPeer(peerID string, search string) *dbs.Filters {
	id, err := oclib.GetMySelf()
	if err != nil {
		return nil
	}
	filter := map[string][]dbs.Filter{
		"creator_id": {{Operator: dbs.EQUAL.String(), Value: id}}, // my own resources...
		"": {{Operator: dbs.OR.String(), Value: &dbs.Filters{
			Or: map[string][]dbs.Filter{
				"abstractobject.access_mode": {{Operator: dbs.EQUAL.String(), Value: 1}}, // ...or public ones...
				"abstractinstanciatedresource.instances": {{Operator: dbs.ELEMMATCH.String(), Value: &dbs.Filters{ // ...or instances with a partnership for this peer
					And: map[string][]dbs.Filter{
						"resourceinstance.partnerships": {{Operator: dbs.ELEMMATCH.String(), Value: &dbs.Filters{
							And: map[string][]dbs.Filter{
								"resourcepartnership.peer_groups." + peerID: {{Operator: dbs.EXISTS.String(), Value: true}},
							},
						}}},
					},
				}}},
			},
		}}},
	}
	if search != "" {
		filter[" "] = []dbs.Filter{{Operator: dbs.OR.String(), Value: &dbs.Filters{
			Or: map[string][]dbs.Filter{ // match name, type, descriptions, owner or creator
				"abstractintanciatedresource.abstractresource.abstractobject.name":       {{Operator: dbs.LIKE.String(), Value: search}},
				"abstractintanciatedresource.abstractresource.type":                      {{Operator: dbs.LIKE.String(), Value: search}},
				"abstractintanciatedresource.abstractresource.short_description":         {{Operator: dbs.LIKE.String(), Value: search}},
				"abstractintanciatedresource.abstractresource.description":               {{Operator: dbs.LIKE.String(), Value: search}},
				"abstractintanciatedresource.abstractresource.owners.name":               {{Operator: dbs.LIKE.String(), Value: search}},
				"abstractintanciatedresource.abstractresource.abstractobject.creator_id": {{Operator: dbs.EQUAL.String(), Value: search}},
			},
		}}}
	}
	return &dbs.Filters{
		And: filter,
	}
}
demo-discovery.sh (new executable file, 33 lines)
@@ -0,0 +1,33 @@
#!/bin/bash
IMAGE_BASE_NAME="oc-discovery"
DOCKERFILE_PATH="."

docker network create \
  --subnet=172.40.0.0/24 \
  discovery

for i in $(seq ${1:-0} ${2:-3}); do
  NUM=$((i + 1))
  PORT=$((4000 + NUM))

  IMAGE_NAME="${IMAGE_BASE_NAME}:${NUM}"

  echo "▶ Building image ${IMAGE_NAME} with CONF_NUM=${NUM}"
  docker build \
    --build-arg CONF_NUM=${NUM} \
    -t "${IMAGE_BASE_NAME}_${NUM}" \
    ${DOCKERFILE_PATH}

  # "|| true" (not "| true") so the loop keeps going when the container
  # does not exist yet.
  docker kill "${IMAGE_BASE_NAME}_${NUM}" || true
  docker rm "${IMAGE_BASE_NAME}_${NUM}" || true

  echo "▶ Running container ${IMAGE_NAME} on port ${PORT}:${PORT}"
  docker run -d \
    --network="${3:-oc}" \
    -p ${PORT}:${PORT} \
    --name "${IMAGE_BASE_NAME}_${NUM}" \
    "${IMAGE_BASE_NAME}_${NUM}"

  docker network connect --ip "172.40.0.${NUM}" discovery "${IMAGE_BASE_NAME}_${NUM}"
done
@@ -1,10 +0,0 @@
-{
-    "port": 8080,
-    "redisurl":"localhost:6379",
-    "redispassword":"",
-    "zincurl":"http://localhost:4080",
-    "zinclogin":"admin",
-    "zincpassword":"admin",
-    "identityfile":"/app/identity.json",
-    "defaultpeers":"/app/peers.json"
-}
@@ -1,10 +1,14 @@
 version: '3.4'

 services:
-  ocdiscovery:
-    image: 'ocdiscovery:latest'
+  oc-schedulerd:
+    image: 'oc-discovery:latest'
     ports:
-      - 8088:8080
+      - 9002:8080
-    container_name: ocdiscovery
+    container_name: oc-discovery
+    networks:
+      - oc
+
+networks:
+  oc:
+    external: true
@@ -1,10 +0,0 @@
-{
-    "port": 8080,
-    "redisurl":"localhost:6379",
-    "redispassword":"",
-    "zincurl":"http://localhost:4080",
-    "zinclogin":"admin",
-    "zincpassword":"admin",
-    "identityfile":"/app/identity.json",
-    "defaultpeers":"/app/peers.json"
-}
docker_discovery1.json (new file, 6 lines)
@@ -0,0 +1,6 @@
{
    "MONGO_URL":"mongodb://mongo:27017/",
    "MONGO_DATABASE":"DC_myDC",
    "NATS_URL": "nats://nats:4222",
    "NODE_MODE": "indexer"
}

docker_discovery10.json (new file, 10 lines)
@@ -0,0 +1,10 @@
{
    "MONGO_URL":"mongodb://mongo:27017/",
    "MONGO_DATABASE":"DC_myDC",
    "NATS_URL": "nats://nats:4222",
    "NODE_MODE": "node",
    "NODE_ENDPOINT_PORT": 4010,
    "NATIVE_INDEXER_ADDRESSES": "/ip4/172.40.0.5/tcp/4005/p2p/12D3KooWGn3j4XqTSrjJDGGpTQERdDV5TPZdhQp87rAUnvQssvQu",
    "MIN_INDEXER": 2,
    "PEER_IDS": "/ip4/172.40.0.9/tcp/4009/p2p/12D3KooWGnQfKwX9E4umCPE8dUKZuig4vw5BndDowRLEbGmcZyta"
}

docker_discovery2.json (new file, 8 lines)
@@ -0,0 +1,8 @@
{
    "MONGO_URL":"mongodb://mongo:27017/",
    "MONGO_DATABASE":"DC_myDC",
    "NATS_URL": "nats://nats:4222",
    "NODE_MODE": "indexer",
    "NODE_ENDPOINT_PORT": 4002,
    "INDEXER_ADDRESSES": "/ip4/172.40.0.1/tcp/4001/p2p/12D3KooWGn3j4XqTSrjJDGGpTQERdDV5TPZdhQp87rAUnvQssvQu"
}

docker_discovery3.json (new file, 8 lines)
@@ -0,0 +1,8 @@
{
    "MONGO_URL":"mongodb://mongo:27017/",
    "MONGO_DATABASE":"DC_myDC",
    "NATS_URL": "nats://nats:4222",
    "NODE_MODE": "node",
    "NODE_ENDPOINT_PORT": 4003,
    "INDEXER_ADDRESSES": "/ip4/172.40.0.2/tcp/4002/p2p/12D3KooWC3GNStak8KCYtJq11Dxiq45EJV53z1ZvKetMcZBeBX6u"
}

docker_discovery4.json (new file, 9 lines)
@@ -0,0 +1,9 @@
{
    "MONGO_URL":"mongodb://mongo:27017/",
    "MONGO_DATABASE":"DC_myDC",
    "NATS_URL": "nats://nats:4222",
    "NODE_MODE": "node",
    "NODE_ENDPOINT_PORT": 4004,
    "INDEXER_ADDRESSES": "/ip4/172.40.0.1/tcp/4001/p2p/12D3KooWGn3j4XqTSrjJDGGpTQERdDV5TPZdhQp87rAUnvQssvQu",
    "PEER_IDS": "/ip4/172.40.0.3/tcp/4003/p2p/12D3KooWBh9kZrekBAE5G33q4jCLNRAzygem3gP1mMdK8mhoCTaw"
}

docker_discovery5.json (new file, 7 lines)
@@ -0,0 +1,7 @@
{
    "MONGO_URL":"mongodb://mongo:27017/",
    "MONGO_DATABASE":"DC_myDC",
    "NATS_URL": "nats://nats:4222",
    "NODE_MODE": "native-indexer",
    "NODE_ENDPOINT_PORT": 4005
}

docker_discovery6.json (new file, 8 lines)
@@ -0,0 +1,8 @@
{
    "MONGO_URL":"mongodb://mongo:27017/",
    "MONGO_DATABASE":"DC_myDC",
    "NATS_URL": "nats://nats:4222",
    "NODE_MODE": "native-indexer",
    "NODE_ENDPOINT_PORT": 4006,
    "NATIVE_INDEXER_ADDRESSES": "/ip4/172.40.0.5/tcp/4005/p2p/12D3KooWGn3j4XqTSrjJDGGpTQERdDV5TPZdhQp87rAUnvQssvQu"
}

docker_discovery7.json (new file, 8 lines)
@@ -0,0 +1,8 @@
{
    "MONGO_URL":"mongodb://mongo:27017/",
    "MONGO_DATABASE":"DC_myDC",
    "NATS_URL": "nats://nats:4222",
    "NODE_MODE": "indexer",
    "NODE_ENDPOINT_PORT": 4007,
    "NATIVE_INDEXER_ADDRESSES": "/ip4/172.40.0.6/tcp/4006/p2p/12D3KooWC3GNStak8KCYtJq11Dxiq45EJV53z1ZvKetMcZBeBX6u"
}

docker_discovery8.json (new file, 8 lines)
@@ -0,0 +1,8 @@
{
    "MONGO_URL":"mongodb://mongo:27017/",
    "MONGO_DATABASE":"DC_myDC",
    "NATS_URL": "nats://nats:4222",
    "NODE_MODE": "indexer",
    "NODE_ENDPOINT_PORT": 4008,
    "NATIVE_INDEXER_ADDRESSES": "/ip4/172.40.0.5/tcp/4005/p2p/12D3KooWGn3j4XqTSrjJDGGpTQERdDV5TPZdhQp87rAUnvQssvQu"
}

docker_discovery9.json (new file, 8 lines)
@@ -0,0 +1,8 @@
{
    "MONGO_URL":"mongodb://mongo:27017/",
    "MONGO_DATABASE":"DC_myDC",
    "NATS_URL": "nats://nats:4222",
    "NODE_MODE": "node",
    "NODE_ENDPOINT_PORT": 4009,
    "NATIVE_INDEXER_ADDRESSES": "/ip4/172.40.0.6/tcp/4006/p2p/12D3KooWC3GNStak8KCYtJq11Dxiq45EJV53z1ZvKetMcZBeBX6u,/ip4/172.40.0.5/tcp/4005/p2p/12D3KooWGn3j4XqTSrjJDGGpTQERdDV5TPZdhQp87rAUnvQssvQu"
}
docs/diagrams/01_node_init.mmd (new file, 56 lines)
@@ -0,0 +1,56 @@
sequenceDiagram
    title Node Initialization — Peer A (InitNode)

    participant MainA as main (Peer A)
    participant NodeA as Node A
    participant libp2pA as libp2p (Peer A)
    participant DBA as DB Peer A (oc-lib)
    participant NATSA as NATS A
    participant IndexerA as Indexer (shared)
    participant StreamA as StreamService A
    participant PubSubA as PubSubService A

    MainA->>NodeA: InitNode(isNode, isIndexer, isNativeIndexer)

    NodeA->>NodeA: LoadKeyFromFilePrivate() → priv
    NodeA->>NodeA: LoadPSKFromFile() → psk

    NodeA->>libp2pA: New(PrivateNetwork(psk), Identity(priv), ListenAddr:4001)
    libp2pA-->>NodeA: host A (PeerID_A)

    Note over NodeA: isNode == true

    NodeA->>libp2pA: NewGossipSub(ctx, host)
    libp2pA-->>NodeA: ps (GossipSub)

    NodeA->>IndexerA: ConnectToIndexers → SendHeartbeat /opencloud/heartbeat/1.0
    Note over IndexerA: Long-lived heartbeat established<br/>Quality score computed (bw + uptime + diversity)
    IndexerA-->>NodeA: OK

    NodeA->>NodeA: claimInfo(name, hostname)
    NodeA->>IndexerA: TempStream /opencloud/record/publish/1.0
    NodeA->>IndexerA: json.Encode(signed PeerRecord A)
    IndexerA->>IndexerA: DHT.PutValue("/node/"+DID_A, record)

    NodeA->>DBA: NewRequestAdmin(PEER).Search(SELF)
    DBA-->>NodeA: local peer A (or generated UUID)

    NodeA->>NodeA: StartGC(30s) — GC on StreamRecords

    NodeA->>StreamA: InitStream(ctx, host, PeerID_A, 1000, nodeA)
    StreamA->>StreamA: SetStreamHandler(heartbeat/partner, search, planner, ...)
    StreamA->>DBA: Search(PEER, PARTNER) → partner list
    DBA-->>StreamA: [] (no partners at startup)
    StreamA-->>NodeA: StreamService A

    NodeA->>PubSubA: InitPubSub(ctx, host, ps, nodeA, streamA)
    PubSubA->>PubSubA: subscribeEvents(PB_SEARCH, timeout=-1)
    PubSubA-->>NodeA: PubSubService A

    NodeA->>NodeA: SubscribeToSearch(ps, callback)
    Note over NodeA: callback: GetPeerRecord(evt.From)<br/>→ StreamService.SendResponse

    NodeA->>NATSA: ListenNATS(nodeA)
    Note over NATSA: Registers handlers:<br/>CREATE_RESOURCE, PROPALGATION_EVENT

    NodeA-->>MainA: *Node A ready
docs/diagrams/01_node_init.puml (new file, 58 lines)
@@ -0,0 +1,58 @@
@startuml
title Node Initialization — Peer A (InitNode)

participant "main (Peer A)" as MainA
participant "Node A" as NodeA
participant "libp2p (Peer A)" as libp2pA
participant "DB Peer A (oc-lib)" as DBA
participant "NATS A" as NATSA
participant "Indexer (shared)" as IndexerA
participant "StreamService A" as StreamA
participant "PubSubService A" as PubSubA

MainA -> NodeA: InitNode(isNode, isIndexer, isNativeIndexer)

NodeA -> NodeA: LoadKeyFromFilePrivate() → priv
NodeA -> NodeA: LoadPSKFromFile() → psk

NodeA -> libp2pA: New(PrivateNetwork(psk), Identity(priv), ListenAddr:4001)
libp2pA --> NodeA: host A (PeerID_A)

note over NodeA: isNode == true

NodeA -> libp2pA: NewGossipSub(ctx, host)
libp2pA --> NodeA: ps (GossipSub)

NodeA -> IndexerA: ConnectToIndexers → SendHeartbeat /opencloud/heartbeat/1.0
note over IndexerA: Long-lived heartbeat established\nQuality score computed (bw + uptime + diversity)
IndexerA --> NodeA: OK

NodeA -> NodeA: claimInfo(name, hostname)
NodeA -> IndexerA: TempStream /opencloud/record/publish/1.0
NodeA -> IndexerA: json.Encode(signed PeerRecord A)
IndexerA -> IndexerA: DHT.PutValue("/node/"+DID_A, record)

NodeA -> DBA: NewRequestAdmin(PEER).Search(SELF)
DBA --> NodeA: local peer A (or generated UUID)

NodeA -> NodeA: StartGC(30s) — GC on StreamRecords

NodeA -> StreamA: InitStream(ctx, host, PeerID_A, 1000, nodeA)
StreamA -> StreamA: SetStreamHandler(heartbeat/partner, search, planner, ...)
StreamA -> DBA: Search(PEER, PARTNER) → partner list
DBA --> StreamA: [] (no partners at startup)
StreamA --> NodeA: StreamService A

NodeA -> PubSubA: InitPubSub(ctx, host, ps, nodeA, streamA)
PubSubA -> PubSubA: subscribeEvents(PB_SEARCH, timeout=-1)
PubSubA --> NodeA: PubSubService A

NodeA -> NodeA: SubscribeToSearch(ps, callback)
note over NodeA: callback: GetPeerRecord(evt.From)\n→ StreamService.SendResponse

NodeA -> NATSA: ListenNATS(nodeA)
note over NATSA: Registers handlers:\nCREATE_RESOURCE, PROPALGATION_EVENT

NodeA --> MainA: *Node A ready

@enduml
docs/diagrams/02_node_claim.mmd (new file, 38 lines)
@@ -0,0 +1,38 @@
sequenceDiagram
    title Node Claim — Peer A publishes its PeerRecord (claimInfo + publishPeerRecord)

    participant DBA as DB Peer A (oc-lib)
    participant NodeA as Node A
    participant IndexerA as Indexer (shared)
    participant DHT as DHT Kademlia
    participant NATSA as NATS A

    NodeA->>DBA: NewRequestAdmin(PEER).Search(SELF)
    DBA-->>NodeA: existing peer (DID_A) or new UUID

    NodeA->>NodeA: LoadKeyFromFilePrivate() → priv A
    NodeA->>NodeA: LoadKeyFromFilePublic() → pub A
    NodeA->>NodeA: crypto.MarshalPublicKey(pub A) → pubBytes

    NodeA->>NodeA: Build PeerRecord A {<br/> Name, DID, PubKey,<br/> PeerID: PeerID_A,<br/> APIUrl: hostname,<br/> StreamAddress: /ip4/.../tcp/4001/p2p/PeerID_A,<br/> NATSAddress, WalletAddress<br/>}

    NodeA->>NodeA: sha256(json(rec)) → hash
    NodeA->>NodeA: priv.Sign(hash) → signature
    NodeA->>NodeA: rec.ExpiryDate = now + 150s

    loop For each StaticIndexer (Indexer A, B, …)
        NodeA->>IndexerA: TempStream /opencloud/record/publish/1.0
        NodeA->>IndexerA: json.Encode(signed PeerRecord A)

        IndexerA->>IndexerA: Verify signature
        IndexerA->>IndexerA: Check active heartbeat stream for PeerID_A
        IndexerA->>DHT: PutValue("/node/"+DID_A, PeerRecord A)
        DHT-->>IndexerA: ok
    end

    NodeA->>NodeA: rec.ExtractPeer(DID_A, DID_A, pub A)
    NodeA->>NATSA: SetNATSPub(CREATE_RESOURCE, {PEER, Peer A JSON})
    NATSA->>DBA: Upsert Peer A (SearchAttr: peer_id)
    DBA-->>NATSA: ok

    NodeA-->>NodeA: *peer.Peer A (SELF)
docs/diagrams/02_node_claim.puml (new file, 40 lines)
@@ -0,0 +1,40 @@
@startuml
title Node Claim — Peer A publishes its PeerRecord (claimInfo + publishPeerRecord)

participant "DB Peer A (oc-lib)" as DBA
participant "Node A" as NodeA
participant "Indexer (shared)" as IndexerA
participant "DHT Kademlia" as DHT
participant "NATS A" as NATSA

NodeA -> DBA: NewRequestAdmin(PEER).Search(SELF)
DBA --> NodeA: existing peer (DID_A) or new UUID

NodeA -> NodeA: LoadKeyFromFilePrivate() → priv A
NodeA -> NodeA: LoadKeyFromFilePublic() → pub A
NodeA -> NodeA: crypto.MarshalPublicKey(pub A) → pubBytes

NodeA -> NodeA: Build PeerRecord A {\n Name, DID, PubKey,\n PeerID: PeerID_A,\n APIUrl: hostname,\n StreamAddress: /ip4/.../tcp/4001/p2p/PeerID_A,\n NATSAddress, WalletAddress\n}

NodeA -> NodeA: sha256(json(rec)) → hash
NodeA -> NodeA: priv.Sign(hash) → signature
NodeA -> NodeA: rec.ExpiryDate = now + 150s

loop For each StaticIndexer (Indexer A, B, ...)
NodeA -> IndexerA: TempStream /opencloud/record/publish/1.0
NodeA -> IndexerA: json.Encode(signed PeerRecord A)

IndexerA -> IndexerA: Verify signature
IndexerA -> IndexerA: Check active heartbeat stream for PeerID_A
IndexerA -> DHT: PutValue("/node/"+DID_A, PeerRecord A)
DHT --> IndexerA: ok
end

NodeA -> NodeA: rec.ExtractPeer(DID_A, DID_A, pub A)
NodeA -> NATSA: SetNATSPub(CREATE_RESOURCE, {PEER, Peer A JSON})
NATSA -> DBA: Upsert Peer A (SearchAttr: peer_id)
DBA --> NATSA: ok

NodeA --> NodeA: *peer.Peer A (SELF)

@enduml
47
docs/diagrams/03_indexer_heartbeat.mmd
Normal file
@@ -0,0 +1,47 @@
sequenceDiagram
    title Indexer: double heartbeat (Peer A + Peer B → shared Indexer)

    participant NodeA as Node A
    participant NodeB as Node B
    participant Indexer as IndexerService (shared)

    Note over NodeA,NodeB: Each peer ticks every 20s

    par Peer A heartbeat
        NodeA->>Indexer: NewStream /opencloud/heartbeat/1.0
        NodeA->>Indexer: json.Encode(Heartbeat A {Name, DID_A, PeerID_A, IndexersBinded})

        Indexer->>Indexer: CheckHeartbeat(host, stream, streams, mu, maxNodes)
        Note over Indexer: len(peers) < maxNodes ?

        Indexer->>Indexer: getBandwidthChallenge(512–2048 bytes, stream)
        Indexer->>NodeA: Write(random payload)
        NodeA->>Indexer: Echo(same payload)
        Indexer->>Indexer: Measure round-trip → Mbps A

        Indexer->>Indexer: getDiversityRate(host, IndexersBinded_A)
        Note over Indexer: /24 subnet diversity of the bound indexers

        Indexer->>Indexer: ComputeIndexerScore(uptimeA%, MbpsA%, diversityA%)
        Note over Indexer: Score = 0.4×uptime + 0.4×Mbps + 0.2×diversity

        alt Score A < 75
            Indexer->>NodeA: (close stream)
        else Score A ≥ 75
            Indexer->>Indexer: StreamRecord[PeerID_A] = {DID_A, Heartbeat, UptimeTracker}
        end
    and Peer B heartbeat
        NodeB->>Indexer: NewStream /opencloud/heartbeat/1.0
        NodeB->>Indexer: json.Encode(Heartbeat B {Name, DID_B, PeerID_B, IndexersBinded})

        Indexer->>Indexer: CheckHeartbeat → getBandwidthChallenge
        Indexer->>NodeB: Write(random payload)
        NodeB->>Indexer: Echo(same payload)
        Indexer->>Indexer: ComputeIndexerScore(uptimeB%, MbpsB%, diversityB%)

        alt Score B ≥ 75
            Indexer->>Indexer: StreamRecord[PeerID_B] = {DID_B, Heartbeat, UptimeTracker}
        end
    end

    Note over Indexer: Both peers are now<br/>registered with their active streams
49
docs/diagrams/03_indexer_heartbeat.puml
Normal file
@@ -0,0 +1,49 @@
@startuml
title Indexer: double heartbeat (Peer A + Peer B → shared Indexer)

participant "Node A" as NodeA
participant "Node B" as NodeB
participant "IndexerService (shared)" as Indexer

note over NodeA,NodeB: Each peer ticks every 20s

par Peer A heartbeat
    NodeA -> Indexer: NewStream /opencloud/heartbeat/1.0
    NodeA -> Indexer: json.Encode(Heartbeat A {Name, DID_A, PeerID_A, IndexersBinded})

    Indexer -> Indexer: CheckHeartbeat(host, stream, streams, mu, maxNodes)
    note over Indexer: len(peers) < maxNodes ?

    Indexer -> Indexer: getBandwidthChallenge(512-2048 bytes, stream)
    Indexer -> NodeA: Write(random payload)
    NodeA -> Indexer: Echo(same payload)
    Indexer -> Indexer: Measure round-trip → Mbps A

    Indexer -> Indexer: getDiversityRate(host, IndexersBinded_A)
    note over Indexer: /24 subnet diversity of the bound indexers

    Indexer -> Indexer: ComputeIndexerScore(uptimeA%, MbpsA%, diversityA%)
    note over Indexer: Score = 0.4×uptime + 0.4×Mbps + 0.2×diversity

    alt Score A < 75
        Indexer -> NodeA: (close stream)
    else Score A >= 75
        Indexer -> Indexer: StreamRecord[PeerID_A] = {DID_A, Heartbeat, UptimeTracker}
    end
else Peer B heartbeat
    NodeB -> Indexer: NewStream /opencloud/heartbeat/1.0
    NodeB -> Indexer: json.Encode(Heartbeat B {Name, DID_B, PeerID_B, IndexersBinded})

    Indexer -> Indexer: CheckHeartbeat → getBandwidthChallenge
    Indexer -> NodeB: Write(random payload)
    NodeB -> Indexer: Echo(same payload)
    Indexer -> Indexer: ComputeIndexerScore(uptimeB%, MbpsB%, diversityB%)

    alt Score B >= 75
        Indexer -> Indexer: StreamRecord[PeerID_B] = {DID_B, Heartbeat, UptimeTracker}
    end
end

note over Indexer: Both peers are now\nregistered with their active streams
@enduml
41
docs/diagrams/04_indexer_publish.mmd
Normal file
@@ -0,0 +1,41 @@
sequenceDiagram
    title Indexer: Peer A publishes, Peer B publishes (handleNodePublish → DHT)

    participant NodeA as Node A
    participant NodeB as Node B
    participant Indexer as IndexerService (shared)
    participant DHT as DHT Kademlia

    Note over NodeA: After claimInfo or TTL refresh

    par Peer A publishes its PeerRecord
        NodeA->>Indexer: TempStream /opencloud/record/publish/1.0
        NodeA->>Indexer: json.Encode(PeerRecord A {DID_A, PeerID_A, PubKey_A, Expiry, Sig_A})

        Indexer->>Indexer: Verify sig_A (rebuild minimal rec, pubKey_A.Verify)
        Indexer->>Indexer: Check StreamRecords[Heartbeat][PeerID_A] exists

        alt Heartbeat active for A
            Indexer->>Indexer: StreamRecord A → DID_A, Record=PeerRecord A, LastSeen=now
            Indexer->>DHT: PutValue("/node/"+DID_A, PeerRecord A JSON)
            DHT-->>Indexer: ok
        else No heartbeat
            Indexer->>NodeA: (error "no heartbeat", stream close)
        end
    and Peer B publishes its PeerRecord
        NodeB->>Indexer: TempStream /opencloud/record/publish/1.0
        NodeB->>Indexer: json.Encode(PeerRecord B {DID_B, PeerID_B, PubKey_B, Expiry, Sig_B})

        Indexer->>Indexer: Verify sig_B
        Indexer->>Indexer: Check StreamRecords[Heartbeat][PeerID_B] exists

        alt Heartbeat active for B
            Indexer->>Indexer: StreamRecord B → DID_B, Record=PeerRecord B, LastSeen=now
            Indexer->>DHT: PutValue("/node/"+DID_B, PeerRecord B JSON)
            DHT-->>Indexer: ok
        else No heartbeat
            Indexer->>NodeB: (error "no heartbeat", stream close)
        end
    end

    Note over DHT: The DHT now contains<br/>"/node/DID_A" and "/node/DID_B"
43
docs/diagrams/04_indexer_publish.puml
Normal file
@@ -0,0 +1,43 @@
@startuml
title Indexer: Peer A publishes, Peer B publishes (handleNodePublish → DHT)

participant "Node A" as NodeA
participant "Node B" as NodeB
participant "IndexerService (shared)" as Indexer
participant "DHT Kademlia" as DHT

note over NodeA: After claimInfo or TTL refresh

par Peer A publishes its PeerRecord
    NodeA -> Indexer: TempStream /opencloud/record/publish/1.0
    NodeA -> Indexer: json.Encode(PeerRecord A {DID_A, PeerID_A, PubKey_A, Expiry, Sig_A})

    Indexer -> Indexer: Verify sig_A (rebuild minimal rec, pubKey_A.Verify)
    Indexer -> Indexer: Check StreamRecords[Heartbeat][PeerID_A] exists

    alt Heartbeat active for A
        Indexer -> Indexer: StreamRecord A → DID_A, Record=PeerRecord A, LastSeen=now
        Indexer -> DHT: PutValue("/node/"+DID_A, PeerRecord A JSON)
        DHT --> Indexer: ok
    else No heartbeat
        Indexer -> NodeA: (error "no heartbeat", stream close)
    end
else Peer B publishes its PeerRecord
    NodeB -> Indexer: TempStream /opencloud/record/publish/1.0
    NodeB -> Indexer: json.Encode(PeerRecord B {DID_B, PeerID_B, PubKey_B, Expiry, Sig_B})

    Indexer -> Indexer: Verify sig_B
    Indexer -> Indexer: Check StreamRecords[Heartbeat][PeerID_B] exists

    alt Heartbeat active for B
        Indexer -> Indexer: StreamRecord B → DID_B, Record=PeerRecord B, LastSeen=now
        Indexer -> DHT: PutValue("/node/"+DID_B, PeerRecord B JSON)
        DHT --> Indexer: ok
    else No heartbeat
        Indexer -> NodeB: (error "no heartbeat", stream close)
    end
end

note over DHT: The DHT now contains\n"/node/DID_A" and "/node/DID_B"
@enduml
49
docs/diagrams/05_indexer_get.mmd
Normal file
@@ -0,0 +1,49 @@
sequenceDiagram
    title Indexer: Peer A resolves Peer B (GetPeerRecord + handleNodeGet)

    participant NATSA as NATS A
    participant DBA as DB Peer A (oc-lib)
    participant NodeA as Node A
    participant Indexer as IndexerService (shared)
    participant DHT as DHT Kademlia
    participant NATSA2 as NATS A (return)

    Note over NodeA: Triggered by: NATS PB_SEARCH PEER<br/>or SubscribeToSearch callback

    NodeA->>DBA: NewRequestAdmin(PEER).Search(DID_B or PeerID_B)
    DBA-->>NodeA: local Peer B (if known) → resolves DID_B + PeerID_B<br/>otherwise use the raw value

    loop For each StaticIndexer
        NodeA->>Indexer: TempStream /opencloud/record/get/1.0
        NodeA->>Indexer: json.Encode(GetValue{Key: DID_B, PeerID: PeerID_B})

        Indexer->>Indexer: key = "/node/" + DID_B
        Indexer->>DHT: SearchValue(ctx 10s, "/node/"+DID_B)
        DHT-->>Indexer: channel of bytes (PeerRecord B)

        loop For each DHT result
            Indexer->>Indexer: Unmarshal → PeerRecord B
            alt PeerRecord.PeerID == PeerID_B
                Indexer->>Indexer: resp.Found=true, resp.Records[PeerID_B]=PeerRecord B
                Indexer->>Indexer: StreamRecord B.LastSeen = now (if heartbeat active)
            end
        end

        Indexer->>NodeA: json.Encode(GetResponse{Found:true, Records:{PeerID_B: PeerRecord B}})
    end

    loop For each returned PeerRecord
        NodeA->>NodeA: rec.Verify() → validates B's signature
        NodeA->>NodeA: rec.ExtractPeer(ourDID_A, DID_B, pubKey_B)

        alt ourDID_A == DID_B (this is our own entry)
            Note over NodeA: Republish to refresh the TTL
            NodeA->>Indexer: publishPeerRecord(rec) [refresh 2 min]
        end

        NodeA->>NATSA2: SetNATSPub(CREATE_RESOURCE, {PEER, Peer B JSON,<br/>SearchAttr:"peer_id"})
        NATSA2->>DBA: Upsert Peer B in DB A
        DBA-->>NATSA2: ok
    end

    NodeA-->>NodeA: []*peer.Peer → [Peer B]
51
docs/diagrams/05_indexer_get.puml
Normal file
@@ -0,0 +1,51 @@
@startuml
title Indexer: Peer A resolves Peer B (GetPeerRecord + handleNodeGet)

participant "NATS A" as NATSA
participant "DB Peer A (oc-lib)" as DBA
participant "Node A" as NodeA
participant "IndexerService (shared)" as Indexer
participant "DHT Kademlia" as DHT
participant "NATS A (return)" as NATSA2

note over NodeA: Triggered by: NATS PB_SEARCH PEER\nor SubscribeToSearch callback

NodeA -> DBA: NewRequestAdmin(PEER).Search(DID_B or PeerID_B)
DBA --> NodeA: local Peer B (if known) → resolves DID_B + PeerID_B\notherwise use the raw value

loop For each StaticIndexer
    NodeA -> Indexer: TempStream /opencloud/record/get/1.0
    NodeA -> Indexer: json.Encode(GetValue{Key: DID_B, PeerID: PeerID_B})

    Indexer -> Indexer: key = "/node/" + DID_B
    Indexer -> DHT: SearchValue(ctx 10s, "/node/"+DID_B)
    DHT --> Indexer: channel of bytes (PeerRecord B)

    loop For each DHT result
        Indexer -> Indexer: Unmarshal → PeerRecord B
        alt PeerRecord.PeerID == PeerID_B
            Indexer -> Indexer: resp.Found=true, resp.Records[PeerID_B]=PeerRecord B
            Indexer -> Indexer: StreamRecord B.LastSeen = now (if heartbeat active)
        end
    end

    Indexer -> NodeA: json.Encode(GetResponse{Found:true, Records:{PeerID_B: PeerRecord B}})
end

loop For each returned PeerRecord
    NodeA -> NodeA: rec.Verify() → validates B's signature
    NodeA -> NodeA: rec.ExtractPeer(ourDID_A, DID_B, pubKey_B)

    alt ourDID_A == DID_B (this is our own entry)
        note over NodeA: Republish to refresh the TTL
        NodeA -> Indexer: publishPeerRecord(rec) [refresh 2 min]
    end

    NodeA -> NATSA2: SetNATSPub(CREATE_RESOURCE, {PEER, Peer B JSON,\nSearchAttr:"peer_id"})
    NATSA2 -> DBA: Upsert Peer B in DB A
    DBA --> NATSA2: ok
end

NodeA --> NodeA: []*peer.Peer → [Peer B]
@enduml
39
docs/diagrams/06_native_registration.mmd
Normal file
@@ -0,0 +1,39 @@
sequenceDiagram
    title Native Indexer: registering an Indexer with the Native

    participant IndexerA as Indexer A
    participant IndexerB as Indexer B
    participant Native as Native Indexer (shared)
    participant DHT as DHT Kademlia
    participant PubSub as GossipSub (oc-indexer-registry)

    Note over IndexerA,IndexerB: At startup + every 60s (StartNativeRegistration)

    par Indexer A registers
        IndexerA->>IndexerA: Build IndexerRegistration{PeerID_A, Addr_A}
        IndexerA->>Native: NewStream /opencloud/native/subscribe/1.0
        IndexerA->>Native: json.Encode(IndexerRegistration A)

        Native->>Native: Decode → liveIndexerEntry{PeerID_A, Addr_A, ExpiresAt=now+66s}
        Native->>DHT: PutValue("/indexer/"+PeerID_A, entry A)
        DHT-->>Native: ok
        Native->>Native: liveIndexers[PeerID_A] = entry A
        Native->>Native: knownPeerIDs[PeerID_A] = {}

        Native->>PubSub: topic.Publish([]byte(PeerID_A))
        Note over PubSub: Gossiped to the other Natives<br/>→ they add PeerID_A to knownPeerIDs<br/>→ DHT refresh on the next 30s tick
        IndexerA->>Native: stream.Close()
    and Indexer B registers
        IndexerB->>IndexerB: Build IndexerRegistration{PeerID_B, Addr_B}
        IndexerB->>Native: NewStream /opencloud/native/subscribe/1.0
        IndexerB->>Native: json.Encode(IndexerRegistration B)

        Native->>Native: Decode → liveIndexerEntry{PeerID_B, Addr_B, ExpiresAt=now+66s}
        Native->>DHT: PutValue("/indexer/"+PeerID_B, entry B)
        DHT-->>Native: ok
        Native->>Native: liveIndexers[PeerID_B] = entry B
        Native->>PubSub: topic.Publish([]byte(PeerID_B))
        IndexerB->>Native: stream.Close()
    end

    Note over Native: liveIndexers = {PeerID_A: entryA, PeerID_B: entryB}
41
docs/diagrams/06_native_registration.puml
Normal file
@@ -0,0 +1,41 @@
@startuml
title Native Indexer: registering an Indexer with the Native

participant "Indexer A" as IndexerA
participant "Indexer B" as IndexerB
participant "Native Indexer (shared)" as Native
participant "DHT Kademlia" as DHT
participant "GossipSub (oc-indexer-registry)" as PubSub

note over IndexerA,IndexerB: At startup + every 60s (StartNativeRegistration)

par Indexer A registers
    IndexerA -> IndexerA: Build IndexerRegistration{PeerID_A, Addr_A}
    IndexerA -> Native: NewStream /opencloud/native/subscribe/1.0
    IndexerA -> Native: json.Encode(IndexerRegistration A)

    Native -> Native: Decode → liveIndexerEntry{PeerID_A, Addr_A, ExpiresAt=now+66s}
    Native -> DHT: PutValue("/indexer/"+PeerID_A, entry A)
    DHT --> Native: ok
    Native -> Native: liveIndexers[PeerID_A] = entry A
    Native -> Native: knownPeerIDs[PeerID_A] = {}

    Native -> PubSub: topic.Publish([]byte(PeerID_A))
    note over PubSub: Gossiped to the other Natives\n→ they add PeerID_A to knownPeerIDs\n→ DHT refresh on the next 30s tick
    IndexerA -> Native: stream.Close()
else Indexer B registers
    IndexerB -> IndexerB: Build IndexerRegistration{PeerID_B, Addr_B}
    IndexerB -> Native: NewStream /opencloud/native/subscribe/1.0
    IndexerB -> Native: json.Encode(IndexerRegistration B)

    Native -> Native: Decode → liveIndexerEntry{PeerID_B, Addr_B, ExpiresAt=now+66s}
    Native -> DHT: PutValue("/indexer/"+PeerID_B, entry B)
    DHT --> Native: ok
    Native -> Native: liveIndexers[PeerID_B] = entry B
    Native -> PubSub: topic.Publish([]byte(PeerID_B))
    IndexerB -> Native: stream.Close()
end

note over Native: liveIndexers = {PeerID_A: entryA, PeerID_B: entryB}
@enduml
60
docs/diagrams/07_native_get_consensus.mmd
Normal file
@@ -0,0 +1,60 @@
sequenceDiagram
    title Native: ConnectToNatives + consensus (Peer A bootstrap)

    participant NodeA as Node A
    participant Native1 as Native #1 (primary)
    participant Native2 as Native #2
    participant NativeN as Native #N
    participant DHT as DHT Kademlia

    Note over NodeA: NativeIndexerAddresses configured<br/>Called during InitNode → ConnectToIndexers

    NodeA->>NodeA: Parse NativeIndexerAddresses → StaticNatives
    NodeA->>Native1: SendHeartbeat /opencloud/heartbeat/1.0 (20s tick)
    NodeA->>Native2: SendHeartbeat /opencloud/heartbeat/1.0 (20s tick)

    %% Step 1: fetch an initial pool
    NodeA->>Native1: Connect + NewStream /opencloud/native/indexers/1.0
    NodeA->>Native1: json.Encode(GetIndexersRequest{Count: maxIndexer})

    Native1->>Native1: reachableLiveIndexers()
    Note over Native1: Filter liveIndexers by TTL<br/>ping each candidate (PeerIsAlive)

    alt No indexer known to Native1
        Native1->>Native1: selfDelegate(NodeA.PeerID, resp)
        Note over Native1: IsSelfFallback=true<br/>Indexers=[native1 addr]
        Native1->>NodeA: GetIndexersResponse{IsSelfFallback:true, Indexers:[native1]}
        NodeA->>NodeA: StaticIndexers[native1] = native1
        Note over NodeA: No consensus; native1 used directly as indexer
    else Indexers available
        Native1->>NodeA: GetIndexersResponse{Indexers:[Addr_IndexerA, Addr_IndexerB, ...]}

        %% Step 2: consensus
        Note over NodeA: clientSideConsensus(candidates)

        par Parallel consensus requests
            NodeA->>Native1: NewStream /opencloud/native/consensus/1.0
            NodeA->>Native1: ConsensusRequest{Candidates:[Addr_A, Addr_B]}
            Native1->>Native1: Cross-check against its own liveIndexers
            Native1->>NodeA: ConsensusResponse{Trusted:[Addr_A, Addr_B], Suggestions:[]}
        and
            NodeA->>Native2: NewStream /opencloud/native/consensus/1.0
            NodeA->>Native2: ConsensusRequest{Candidates:[Addr_A, Addr_B]}
            Native2->>Native2: Cross-check against its own liveIndexers
            Native2->>NodeA: ConsensusResponse{Trusted:[Addr_A], Suggestions:[Addr_C]}
        and
            NodeA->>NativeN: NewStream /opencloud/native/consensus/1.0
            NodeA->>NativeN: ConsensusRequest{Candidates:[Addr_A, Addr_B]}
            NativeN->>NativeN: Cross-check against its own liveIndexers
            NativeN->>NodeA: ConsensusResponse{Trusted:[Addr_A, Addr_B], Suggestions:[]}
        end

        Note over NodeA: Aggregate the votes (4s timeout)<br/>Addr_A → 3/3 votes → confirmed ✓<br/>Addr_B → 2/3 votes → confirmed ✓

        alt confirmed < maxIndexer && suggestions available
            Note over NodeA: Round 2: re-challenge with suggestions
            NodeA->>NodeA: clientSideConsensus(confirmed + sample(suggestions))
        end

        NodeA->>NodeA: StaticIndexers = addresses confirmed by majority
    end
62
docs/diagrams/07_native_get_consensus.puml
Normal file
@@ -0,0 +1,62 @@
@startuml
title Native: ConnectToNatives + consensus (Peer A bootstrap)

participant "Node A" as NodeA
participant "Native #1 (primary)" as Native1
participant "Native #2" as Native2
participant "Native #N" as NativeN
participant "DHT Kademlia" as DHT

note over NodeA: NativeIndexerAddresses configured\nCalled during InitNode → ConnectToIndexers

NodeA -> NodeA: Parse NativeIndexerAddresses → StaticNatives
NodeA -> Native1: SendHeartbeat /opencloud/heartbeat/1.0 (20s tick)
NodeA -> Native2: SendHeartbeat /opencloud/heartbeat/1.0 (20s tick)

' Step 1: fetch an initial pool
NodeA -> Native1: Connect + NewStream /opencloud/native/indexers/1.0
NodeA -> Native1: json.Encode(GetIndexersRequest{Count: maxIndexer})

Native1 -> Native1: reachableLiveIndexers()
note over Native1: Filter liveIndexers by TTL\nping each candidate (PeerIsAlive)

alt No indexer known to Native1
    Native1 -> Native1: selfDelegate(NodeA.PeerID, resp)
    note over Native1: IsSelfFallback=true\nIndexers=[native1 addr]
    Native1 -> NodeA: GetIndexersResponse{IsSelfFallback:true, Indexers:[native1]}
    NodeA -> NodeA: StaticIndexers[native1] = native1
    note over NodeA: No consensus; native1 used directly as indexer
else Indexers available
    Native1 -> NodeA: GetIndexersResponse{Indexers:[Addr_IndexerA, Addr_IndexerB, ...]}

    ' Step 2: consensus
    note over NodeA: clientSideConsensus(candidates)

    par Parallel consensus requests
        NodeA -> Native1: NewStream /opencloud/native/consensus/1.0
        NodeA -> Native1: ConsensusRequest{Candidates:[Addr_A, Addr_B]}
        Native1 -> Native1: Cross-check against its own liveIndexers
        Native1 -> NodeA: ConsensusResponse{Trusted:[Addr_A, Addr_B], Suggestions:[]}
    else
        NodeA -> Native2: NewStream /opencloud/native/consensus/1.0
        NodeA -> Native2: ConsensusRequest{Candidates:[Addr_A, Addr_B]}
        Native2 -> Native2: Cross-check against its own liveIndexers
        Native2 -> NodeA: ConsensusResponse{Trusted:[Addr_A], Suggestions:[Addr_C]}
    else
        NodeA -> NativeN: NewStream /opencloud/native/consensus/1.0
        NodeA -> NativeN: ConsensusRequest{Candidates:[Addr_A, Addr_B]}
        NativeN -> NativeN: Cross-check against its own liveIndexers
        NativeN -> NodeA: ConsensusResponse{Trusted:[Addr_A, Addr_B], Suggestions:[]}
    end

    note over NodeA: Aggregate the votes (4s timeout)\nAddr_A → 3/3 votes → confirmed ✓\nAddr_B → 2/3 votes → confirmed ✓

    alt confirmed < maxIndexer && suggestions available
        note over NodeA: Round 2: re-challenge with suggestions
        NodeA -> NodeA: clientSideConsensus(confirmed + sample(suggestions))
    end

    NodeA -> NodeA: StaticIndexers = addresses confirmed by majority
end
@enduml
49
docs/diagrams/08_nats_create_resource.mmd
Normal file
@@ -0,0 +1,49 @@
sequenceDiagram
    title NATS CREATE_RESOURCE: Peer A discovers Peer B and establishes the stream

    participant AppA as App Peer A (oc-api)
    participant NATSA as NATS A
    participant NodeA as Node A
    participant StreamA as StreamService A
    participant NodeB as Node B
    participant StreamB as StreamService B
    participant DBA as DB Peer A (oc-lib)

    Note over AppA: Peer B has just been discovered<br/>(via an indexer or manually)

    AppA->>NATSA: Publish(CREATE_RESOURCE, {<br/> FromApp:"oc-api",<br/> Datatype:PEER,<br/> Payload: Peer B {StreamAddress_B, Relation:PARTNER}<br/>})

    NATSA->>NodeA: ListenNATS callback → CREATE_RESOURCE

    NodeA->>NodeA: resp.FromApp == "oc-discovery" ? → No, continue
    NodeA->>NodeA: json.Unmarshal(payload) → peer.Peer B
    NodeA->>NodeA: pp.AddrInfoFromString(B.StreamAddress)
    Note over NodeA: ad_B = {ID: PeerID_B, Addrs: [...]}

    NodeA->>StreamA: Mu.Lock()

    alt peer B.Relation == PARTNER
        NodeA->>StreamA: ConnectToPartner(B.StreamAddress)
        StreamA->>StreamA: AddrInfoFromString(B.StreamAddress) → ad_B
        StreamA->>NodeB: Connect (libp2p)
        StreamA->>NodeB: NewStream /opencloud/resource/heartbeat/partner/1.0
        StreamA->>NodeB: json.Encode(Heartbeat{Name_A, DID_A, PeerID_A})

        NodeB->>StreamB: HandlePartnerHeartbeat(stream)
        StreamB->>StreamB: CheckHeartbeat → bandwidth challenge
        StreamB->>StreamA: Echo(payload)
        StreamB->>StreamB: streams[ProtocolHeartbeatPartner][PeerID_A] = {DID_A, Expiry=now+10s}

        StreamA->>StreamA: streams[ProtocolHeartbeatPartner][PeerID_B] = {DID_B, Expiry=now+10s}
        Note over StreamA,StreamB: Long-lived partner stream established<br/>in both directions

    else peer B.Relation != PARTNER (revocation / blacklist)
        Note over NodeA: Delete all streams to Peer B
        loop For each protocol in Streams
            NodeA->>StreamA: streams[proto][PeerID_B].Stream.Close()
            NodeA->>StreamA: delete(streams[proto], PeerID_B)
        end
    end

    NodeA->>StreamA: Mu.Unlock()
    NodeA->>DBA: (no direct write here; handled by the source app)
50
docs/diagrams/08_nats_create_resource.puml
Normal file
@@ -0,0 +1,50 @@
@startuml
title NATS CREATE_RESOURCE: Peer A discovers Peer B and establishes the stream

participant "App Peer A (oc-api)" as AppA
participant "NATS A" as NATSA
participant "Node A" as NodeA
participant "StreamService A" as StreamA
participant "Node B" as NodeB
participant "StreamService B" as StreamB
participant "DB Peer A (oc-lib)" as DBA

note over AppA: Peer B has just been discovered\n(via an indexer or manually)

AppA -> NATSA: Publish(CREATE_RESOURCE, {\n FromApp:"oc-api",\n Datatype:PEER,\n Payload: Peer B {StreamAddress_B, Relation:PARTNER}\n})

NATSA -> NodeA: ListenNATS callback → CREATE_RESOURCE

NodeA -> NodeA: resp.FromApp == "oc-discovery" ? → No, continue
NodeA -> NodeA: json.Unmarshal(payload) → peer.Peer B
NodeA -> NodeA: pp.AddrInfoFromString(B.StreamAddress)
note over NodeA: ad_B = {ID: PeerID_B, Addrs: [...]}

NodeA -> StreamA: Mu.Lock()

alt peer B.Relation == PARTNER
    NodeA -> StreamA: ConnectToPartner(B.StreamAddress)
    StreamA -> StreamA: AddrInfoFromString(B.StreamAddress) → ad_B
    StreamA -> NodeB: Connect (libp2p)
    StreamA -> NodeB: NewStream /opencloud/resource/heartbeat/partner/1.0
    StreamA -> NodeB: json.Encode(Heartbeat{Name_A, DID_A, PeerID_A})

    NodeB -> StreamB: HandlePartnerHeartbeat(stream)
    StreamB -> StreamB: CheckHeartbeat → bandwidth challenge
    StreamB -> StreamA: Echo(payload)
    StreamB -> StreamB: streams[ProtocolHeartbeatPartner][PeerID_A] = {DID_A, Expiry=now+10s}

    StreamA -> StreamA: streams[ProtocolHeartbeatPartner][PeerID_B] = {DID_B, Expiry=now+10s}
    note over StreamA,StreamB: Long-lived partner stream established\nin both directions

else peer B.Relation != PARTNER (revocation / blacklist)
    note over NodeA: Delete all streams to Peer B
    loop For each protocol in Streams
        NodeA -> StreamA: streams[proto][PeerID_B].Stream.Close()
        NodeA -> StreamA: delete(streams[proto], PeerID_B)
    end
end

NodeA -> StreamA: Mu.Unlock()
NodeA -> DBA: (no direct write here; handled by the source app)
@enduml
66
docs/diagrams/09_nats_propagation.mmd
Normal file
@@ -0,0 +1,66 @@

sequenceDiagram
title NATS — PROPALGATION_EVENT: Peer A propagates to Peer B

participant AppA as App Peer A
participant NATSA as NATS A
participant NodeA as Node A
participant StreamA as StreamService A
participant NodeB as Node B
participant NATSB as NATS B
participant DBB as DB Peer B (oc-lib)

AppA->>NATSA: Publish(PROPALGATION_EVENT, {Action, DataType, Payload})
NATSA->>NodeA: ListenNATS callback → PROPALGATION_EVENT
NodeA->>NodeA: resp.FromApp != "oc-discovery" ? → continue
NodeA->>NodeA: json.Unmarshal → PropalgationMessage{Action, DataType, Payload}

alt Action == PB_DELETE
  NodeA->>StreamA: ToPartnerPublishEvent(PB_DELETE, dt, user, payload)
  StreamA->>StreamA: searchPeer(PARTNER) → [Peer B, ...]
  StreamA->>NodeB: write(PeerID_B, addr_B, dt, user, payload, ProtocolDeleteResource)
  Note over NodeB: /opencloud/resource/delete/1.0

  NodeB->>NodeB: handleEventFromPartner(evt, ProtocolDeleteResource)
  NodeB->>NATSB: SetNATSPub(REMOVE_RESOURCE, {DataType, resource JSON})
  NATSB->>DBB: Delete resource in DB B

else Action == PB_UPDATE (via ProtocolUpdateResource)
  NodeA->>StreamA: ToPartnerPublishEvent(PB_UPDATE, dt, user, payload)
  StreamA->>NodeB: write → /opencloud/resource/update/1.0
  NodeB->>NATSB: SetNATSPub(CREATE_RESOURCE, {DataType, resource JSON})
  NATSB->>DBB: Upsert resource in DB B

else Action == PB_CONSIDERS + WORKFLOW_EXECUTION
  NodeA->>NodeA: Unmarshal → executionConsidersPayload{PeerIDs:[PeerID_B, ...]}
  loop For each target peer_id
    NodeA->>StreamA: PublishCommon(dt, user, PeerID_B, ProtocolConsidersResource, payload)
    StreamA->>NodeB: write → /opencloud/resource/considers/1.0
    NodeB->>NodeB: passConsidering(evt)
    NodeB->>NATSB: SetNATSPub(PROPALGATION_EVENT, {PB_CONSIDERS, dt, payload})
    NATSB->>DBB: (handled by oc-workflow on NATS B)
  end

else Action == PB_PLANNER (broadcast)
  NodeA->>NodeA: Unmarshal → {peer_id: nil, ...payload}
  loop For each open ProtocolSendPlanner stream
    NodeA->>StreamA: PublishCommon(nil, user, pid, ProtocolSendPlanner, payload)
    StreamA->>NodeB: write → /opencloud/resource/planner/1.0
  end

else Action == PB_CLOSE_PLANNER
  NodeA->>NodeA: Unmarshal → {peer_id: PeerID_B}
  NodeA->>StreamA: Streams[ProtocolSendPlanner][PeerID_B].Stream.Close()
  NodeA->>StreamA: delete(Streams[ProtocolSendPlanner], PeerID_B)

else Action == PB_SEARCH + DataType == PEER
  NodeA->>NodeA: Unmarshal → {search: "..."}
  NodeA->>NodeA: GetPeerRecord(ctx, search)
  Note over NodeA: Resolution via DB A + Indexer + DHT
  NodeA->>NATSA: SetNATSPub(SEARCH_EVENT, {PEER, PeerRecord JSON})
  NATSA->>NATSA: (AppA receives the result)

else Action == PB_SEARCH + other DataType
  NodeA->>NodeA: Unmarshal → {type:"all"|"known"|"partner", search:"..."}
  NodeA->>NodeA: PubSubService.SearchPublishEvent(ctx, dt, type, user, search)
  Note over NodeA: See diagrams 10 and 11
end
68
docs/diagrams/09_nats_propagation.puml
Normal file
@@ -0,0 +1,68 @@

@startuml
title NATS — PROPALGATION_EVENT: Peer A propagates to Peer B

participant "App Peer A" as AppA
participant "NATS A" as NATSA
participant "Node A" as NodeA
participant "StreamService A" as StreamA
participant "Node B" as NodeB
participant "NATS B" as NATSB
participant "DB Peer B (oc-lib)" as DBB

AppA -> NATSA: Publish(PROPALGATION_EVENT, {Action, DataType, Payload})
NATSA -> NodeA: ListenNATS callback → PROPALGATION_EVENT
NodeA -> NodeA: resp.FromApp != "oc-discovery" ? → continue
NodeA -> NodeA: json.Unmarshal → PropalgationMessage{Action, DataType, Payload}

alt Action == PB_DELETE
  NodeA -> StreamA: ToPartnerPublishEvent(PB_DELETE, dt, user, payload)
  StreamA -> StreamA: searchPeer(PARTNER) → [Peer B, ...]
  StreamA -> NodeB: write(PeerID_B, addr_B, dt, user, payload, ProtocolDeleteResource)
  note over NodeB: /opencloud/resource/delete/1.0

  NodeB -> NodeB: handleEventFromPartner(evt, ProtocolDeleteResource)
  NodeB -> NATSB: SetNATSPub(REMOVE_RESOURCE, {DataType, resource JSON})
  NATSB -> DBB: Delete resource in DB B

else Action == PB_UPDATE (via ProtocolUpdateResource)
  NodeA -> StreamA: ToPartnerPublishEvent(PB_UPDATE, dt, user, payload)
  StreamA -> NodeB: write → /opencloud/resource/update/1.0
  NodeB -> NATSB: SetNATSPub(CREATE_RESOURCE, {DataType, resource JSON})
  NATSB -> DBB: Upsert resource in DB B

else Action == PB_CONSIDERS + WORKFLOW_EXECUTION
  NodeA -> NodeA: Unmarshal → executionConsidersPayload{PeerIDs:[PeerID_B, ...]}
  loop For each target peer_id
    NodeA -> StreamA: PublishCommon(dt, user, PeerID_B, ProtocolConsidersResource, payload)
    StreamA -> NodeB: write → /opencloud/resource/considers/1.0
    NodeB -> NodeB: passConsidering(evt)
    NodeB -> NATSB: SetNATSPub(PROPALGATION_EVENT, {PB_CONSIDERS, dt, payload})
    NATSB -> DBB: (handled by oc-workflow on NATS B)
  end

else Action == PB_PLANNER (broadcast)
  NodeA -> NodeA: Unmarshal → {peer_id: nil, ...payload}
  loop For each open ProtocolSendPlanner stream
    NodeA -> StreamA: PublishCommon(nil, user, pid, ProtocolSendPlanner, payload)
    StreamA -> NodeB: write → /opencloud/resource/planner/1.0
  end

else Action == PB_CLOSE_PLANNER
  NodeA -> NodeA: Unmarshal → {peer_id: PeerID_B}
  NodeA -> StreamA: Streams[ProtocolSendPlanner][PeerID_B].Stream.Close()
  NodeA -> StreamA: delete(Streams[ProtocolSendPlanner], PeerID_B)

else Action == PB_SEARCH + DataType == PEER
  NodeA -> NodeA: Unmarshal → {search: "..."}
  NodeA -> NodeA: GetPeerRecord(ctx, search)
  note over NodeA: Resolution via DB A + Indexer + DHT
  NodeA -> NATSA: SetNATSPub(SEARCH_EVENT, {PEER, PeerRecord JSON})
  NATSA -> NATSA: (AppA receives the result)

else Action == PB_SEARCH + other DataType
  NodeA -> NodeA: Unmarshal → {type:"all"|"known"|"partner", search:"..."}
  NodeA -> NodeA: PubSubService.SearchPublishEvent(ctx, dt, type, user, search)
  note over NodeA: See diagrams 10 and 11
end

@enduml
52
docs/diagrams/10_pubsub_search.mmd
Normal file
@@ -0,0 +1,52 @@

sequenceDiagram
title PubSub — Global gossip search (type "all"): Peer A searches, Peer B answers

participant AppA as App Peer A
participant NATSA as NATS A
participant NodeA as Node A
participant PubSubA as PubSubService A
participant GossipSub as GossipSub libp2p (mesh)
participant NodeB as Node B
participant PubSubB as PubSubService B
participant DBB as DB Peer B (oc-lib)
participant StreamB as StreamService B
participant StreamA as StreamService A

AppA->>NATSA: Publish(PROPALGATION_EVENT, {PB_SEARCH, type:"all", search:"gpu"})
NATSA->>NodeA: ListenNATS → PB_SEARCH (type "all")

NodeA->>PubSubA: SearchPublishEvent(ctx, dt, "all", user, "gpu")
PubSubA->>PubSubA: publishEvent(PB_SEARCH, user, {search:"gpu"})
PubSubA->>PubSubA: GenerateNodeID() → from = DID_A
PubSubA->>PubSubA: priv_A.Sign(event body) → sig
PubSubA->>PubSubA: Build Event{Type:"search", From:DID_A, Payload:{search:"gpu"}, Sig}

PubSubA->>GossipSub: topic.Join("search")
PubSubA->>GossipSub: topic.Publish(ctx, json(Event))

GossipSub-->>NodeB: Message propagated (gossip mesh)

NodeB->>PubSubB: subscribeEvents listens on topic "search#"
PubSubB->>PubSubB: json.Unmarshal → Event{From: DID_A}

PubSubB->>NodeB: GetPeerRecord(ctx, DID_A)
Note over NodeB: Peer A resolved via DB B or Indexer
NodeB-->>PubSubB: Peer A {PublicKey_A, Relation, ...}

PubSubB->>PubSubB: event.Verify(Peer A) → validates sig_A
PubSubB->>PubSubB: handleEventSearch(ctx, evt, PB_SEARCH)

PubSubB->>StreamB: SendResponse(Peer A, evt)
StreamB->>DBB: Search(COMPUTE + STORAGE + ..., filters{creator=self, access=PUBLIC OR partnerships[PeerID_A]}, search="gpu")
DBB-->>StreamB: [Resource1, Resource2, ...]

loop For each matched resource
  StreamB->>StreamB: write(PeerID_A, addr_A, dt, resource JSON, ProtocolSearchResource)
  StreamB->>StreamA: NewStream /opencloud/resource/search/1.0
  StreamB->>StreamA: json.Encode(Event{Type:search, From:DID_B, DataType, Payload:resource})
end

StreamA->>StreamA: readLoop → handleEvent(ProtocolSearchResource, evt)
StreamA->>StreamA: retrieveResponse(evt)
StreamA->>NATSA: SetNATSPub(SEARCH_EVENT, {DataType, resource JSON})
NATSA->>AppA: Search results from Peer B
54
docs/diagrams/10_pubsub_search.puml
Normal file
@@ -0,0 +1,54 @@

@startuml
title PubSub — Global gossip search (type "all"): Peer A searches, Peer B answers

participant "App Peer A" as AppA
participant "NATS A" as NATSA
participant "Node A" as NodeA
participant "PubSubService A" as PubSubA
participant "GossipSub libp2p (mesh)" as GossipSub
participant "Node B" as NodeB
participant "PubSubService B" as PubSubB
participant "DB Peer B (oc-lib)" as DBB
participant "StreamService B" as StreamB
participant "StreamService A" as StreamA

AppA -> NATSA: Publish(PROPALGATION_EVENT, {PB_SEARCH, type:"all", search:"gpu"})
NATSA -> NodeA: ListenNATS → PB_SEARCH (type "all")

NodeA -> PubSubA: SearchPublishEvent(ctx, dt, "all", user, "gpu")
PubSubA -> PubSubA: publishEvent(PB_SEARCH, user, {search:"gpu"})
PubSubA -> PubSubA: GenerateNodeID() → from = DID_A
PubSubA -> PubSubA: priv_A.Sign(event body) → sig
PubSubA -> PubSubA: Build Event{Type:"search", From:DID_A, Payload:{search:"gpu"}, Sig}

PubSubA -> GossipSub: topic.Join("search")
PubSubA -> GossipSub: topic.Publish(ctx, json(Event))

GossipSub --> NodeB: Message propagated (gossip mesh)

NodeB -> PubSubB: subscribeEvents listens on topic "search#"
PubSubB -> PubSubB: json.Unmarshal → Event{From: DID_A}

PubSubB -> NodeB: GetPeerRecord(ctx, DID_A)
note over NodeB: Peer A resolved via DB B or Indexer
NodeB --> PubSubB: Peer A {PublicKey_A, Relation, ...}

PubSubB -> PubSubB: event.Verify(Peer A) → validates sig_A
PubSubB -> PubSubB: handleEventSearch(ctx, evt, PB_SEARCH)

PubSubB -> StreamB: SendResponse(Peer A, evt)
StreamB -> DBB: Search(COMPUTE + STORAGE + ..., filters{creator=self, access=PUBLIC OR partnerships[PeerID_A]}, search="gpu")
DBB --> StreamB: [Resource1, Resource2, ...]

loop For each matched resource
  StreamB -> StreamB: write(PeerID_A, addr_A, dt, resource JSON, ProtocolSearchResource)
  StreamB -> StreamA: NewStream /opencloud/resource/search/1.0
  StreamB -> StreamA: json.Encode(Event{Type:search, From:DID_B, DataType, Payload:resource})
end

StreamA -> StreamA: readLoop → handleEvent(ProtocolSearchResource, evt)
StreamA -> StreamA: retrieveResponse(evt)
StreamA -> NATSA: SetNATSPub(SEARCH_EVENT, {DataType, resource JSON})
NATSA -> AppA: Search results from Peer B

@enduml
52
docs/diagrams/11_stream_search.mmd
Normal file
@@ -0,0 +1,52 @@

sequenceDiagram
title Stream — Direct search (type "known"/"partner"): Peer A → Peer B

participant AppA as App Peer A
participant NATSA as NATS A
participant NodeA as Node A
participant PubSubA as PubSubService A
participant StreamA as StreamService A
participant DBA as DB Peer A (oc-lib)
participant NodeB as Node B
participant StreamB as StreamService B
participant DBB as DB Peer B (oc-lib)

AppA->>NATSA: Publish(PROPALGATION_EVENT, {PB_SEARCH, type:"partner", search:"gpu"})
NATSA->>NodeA: ListenNATS → PB_SEARCH (type "partner")
NodeA->>PubSubA: SearchPublishEvent(ctx, dt, "partner", user, "gpu")

PubSubA->>StreamA: SearchPartnersPublishEvent(dt, user, "gpu")
StreamA->>DBA: Search(PEER, PARTNER) + PeerIDS config
DBA-->>StreamA: [Peer B, ...]

loop For each partner peer (Peer B)
  StreamA->>StreamA: json.Marshal({search:"gpu"}) → payload
  StreamA->>StreamA: write(PeerID_B, addr_B, dt, user, payload, ProtocolSearchResource)
  StreamA->>NodeB: TempStream /opencloud/resource/search/1.0
  StreamA->>NodeB: json.Encode(Event{Type:search, From:DID_A, DataType, Payload:{search:"gpu"}})

  NodeB->>StreamB: HandleResponse(stream) → readLoop
  StreamB->>StreamB: handleEvent(ProtocolSearchResource, evt)
  StreamB->>StreamB: handleEventFromPartner(evt, ProtocolSearchResource)

  alt evt.DataType == -1 (all resources)
    StreamB->>DBB: Search(PEER, evt.From=DID_A)
    Note over StreamB: Local resolution or via GetPeerRecord
    StreamB->>StreamB: SendResponse(Peer A, evt)
    StreamB->>DBB: Search(ALL_RESOURCES, filter{creator=B + public OR partner A + search:"gpu"})
    DBB-->>StreamB: [Resource1, Resource2, ...]
  else evt.DataType specified
    StreamB->>DBB: Search(DataType, filter{creator=B + access + search:"gpu"})
    DBB-->>StreamB: [Resource1, ...]
  end

  loop For each resource
    StreamB->>StreamA: write(PeerID_A, addr_A, dt, resource JSON, ProtocolSearchResource)
    StreamA->>StreamA: readLoop → handleEvent(ProtocolSearchResource, evt)
    StreamA->>StreamA: retrieveResponse(evt)
    StreamA->>NATSA: SetNATSPub(SEARCH_EVENT, {DataType, resource JSON})
    NATSA->>AppA: Result from Peer B
  end
end

Note over NATSA,DBA: Optional: App A persists<br/>discovered resources in DB A
54
docs/diagrams/11_stream_search.puml
Normal file
@@ -0,0 +1,54 @@

@startuml
title Stream — Direct search (type "known"/"partner"): Peer A → Peer B

participant "App Peer A" as AppA
participant "NATS A" as NATSA
participant "Node A" as NodeA
participant "PubSubService A" as PubSubA
participant "StreamService A" as StreamA
participant "DB Peer A (oc-lib)" as DBA
participant "Node B" as NodeB
participant "StreamService B" as StreamB
participant "DB Peer B (oc-lib)" as DBB

AppA -> NATSA: Publish(PROPALGATION_EVENT, {PB_SEARCH, type:"partner", search:"gpu"})
NATSA -> NodeA: ListenNATS → PB_SEARCH (type "partner")
NodeA -> PubSubA: SearchPublishEvent(ctx, dt, "partner", user, "gpu")

PubSubA -> StreamA: SearchPartnersPublishEvent(dt, user, "gpu")
StreamA -> DBA: Search(PEER, PARTNER) + PeerIDS config
DBA --> StreamA: [Peer B, ...]

loop For each partner peer (Peer B)
  StreamA -> StreamA: json.Marshal({search:"gpu"}) → payload
  StreamA -> StreamA: write(PeerID_B, addr_B, dt, user, payload, ProtocolSearchResource)
  StreamA -> NodeB: TempStream /opencloud/resource/search/1.0
  StreamA -> NodeB: json.Encode(Event{Type:search, From:DID_A, DataType, Payload:{search:"gpu"}})

  NodeB -> StreamB: HandleResponse(stream) → readLoop
  StreamB -> StreamB: handleEvent(ProtocolSearchResource, evt)
  StreamB -> StreamB: handleEventFromPartner(evt, ProtocolSearchResource)

  alt evt.DataType == -1 (all resources)
    StreamB -> DBB: Search(PEER, evt.From=DID_A)
    note over StreamB: Local resolution or via GetPeerRecord
    StreamB -> StreamB: SendResponse(Peer A, evt)
    StreamB -> DBB: Search(ALL_RESOURCES, filter{creator=B + public OR partner A + search:"gpu"})
    DBB --> StreamB: [Resource1, Resource2, ...]
  else evt.DataType specified
    StreamB -> DBB: Search(DataType, filter{creator=B + access + search:"gpu"})
    DBB --> StreamB: [Resource1, ...]
  end

  loop For each resource
    StreamB -> StreamA: write(PeerID_A, addr_A, dt, resource JSON, ProtocolSearchResource)
    StreamA -> StreamA: readLoop → handleEvent(ProtocolSearchResource, evt)
    StreamA -> StreamA: retrieveResponse(evt)
    StreamA -> NATSA: SetNATSPub(SEARCH_EVENT, {DataType, resource JSON})
    NATSA -> AppA: Result from Peer B
  end
end

note over NATSA,DBA: Optional: App A persists\ndiscovered resources in DB A

@enduml
58
docs/diagrams/12_partner_heartbeat.mmd
Normal file
@@ -0,0 +1,58 @@

sequenceDiagram
title Stream — Partner heartbeat and CRUD propagation Peer A ↔ Peer B

participant DBA as DB Peer A (oc-lib)
participant StreamA as StreamService A
participant NodeA as Node A
participant NodeB as Node B
participant StreamB as StreamService B
participant NATSB as NATS B
participant DBB as DB Peer B (oc-lib)
participant NATSA as NATS A

Note over StreamA: Startup → connectToPartners()

StreamA->>DBA: Search(PEER, PARTNER) + PeerIDS config
DBA-->>StreamA: [Peer B, ...]

StreamA->>NodeB: Connect (libp2p)
StreamA->>NodeB: NewStream /opencloud/resource/heartbeat/partner/1.0
StreamA->>NodeB: json.Encode(Heartbeat{Name_A, DID_A, PeerID_A, IndexersBinded_A})

NodeB->>StreamB: HandlePartnerHeartbeat(stream)
StreamB->>StreamB: CheckHeartbeat → bandwidth challenge
StreamB->>StreamA: Echo(payload)
StreamB->>StreamB: streams[ProtocolHeartbeatPartner][PeerID_A] = {DID_A, Expiry=now+10s}

StreamA->>StreamA: streams[ProtocolHeartbeatPartner][PeerID_B] = {DID_B, Expiry=now+10s}

Note over StreamA,StreamB: Long-lived partner stream established<br/>GC every 8s (StreamService A)<br/>GC every 30s (StreamService B)

Note over NATSA: Peer A receives PROPALGATION_EVENT{PB_DELETE, dt:"storage", payload:res}

NATSA->>NodeA: ListenNATS → ToPartnerPublishEvent(PB_DELETE, dt, user, payload)
NodeA->>StreamA: ToPartnerPublishEvent(ctx, PB_DELETE, dt_storage, user, payload)

alt dt == PEER (partner relation update)
  StreamA->>StreamA: json.Unmarshal → peer.Peer B updated
  alt B.Relation == PARTNER
    StreamA->>NodeB: ConnectToPartner(B.StreamAddress)
    Note over StreamA,NodeB: Heartbeat reconnection if the relation is upgraded
  else B.Relation != PARTNER
    loop All protocols
      StreamA->>StreamA: delete(streams[proto][PeerID_B])
      StreamA->>NodeB: (streams closed)
    end
  end
else dt != PEER (ordinary resource)
  StreamA->>DBA: Search(PEER, PARTNER) → [Peer B, ...]
  loop For each partner protocol (Create/Update/Delete)
    StreamA->>NodeB: write(PeerID_B, addr_B, dt, user, payload, ProtocolDeleteResource)
    Note over NodeB: /opencloud/resource/delete/1.0

    NodeB->>StreamB: HandleResponse → readLoop
    StreamB->>StreamB: handleEventFromPartner(evt, ProtocolDeleteResource)
    StreamB->>NATSB: SetNATSPub(REMOVE_RESOURCE, {DataType, resource JSON})
    NATSB->>DBB: Delete resource in DB B
  end
end
60
docs/diagrams/12_partner_heartbeat.puml
Normal file
@@ -0,0 +1,60 @@

@startuml
title Stream — Partner heartbeat and CRUD propagation Peer A ↔ Peer B

participant "DB Peer A (oc-lib)" as DBA
participant "StreamService A" as StreamA
participant "Node A" as NodeA
participant "Node B" as NodeB
participant "StreamService B" as StreamB
participant "NATS B" as NATSB
participant "DB Peer B (oc-lib)" as DBB
participant "NATS A" as NATSA

note over StreamA: Startup → connectToPartners()

StreamA -> DBA: Search(PEER, PARTNER) + PeerIDS config
DBA --> StreamA: [Peer B, ...]

StreamA -> NodeB: Connect (libp2p)
StreamA -> NodeB: NewStream /opencloud/resource/heartbeat/partner/1.0
StreamA -> NodeB: json.Encode(Heartbeat{Name_A, DID_A, PeerID_A, IndexersBinded_A})

NodeB -> StreamB: HandlePartnerHeartbeat(stream)
StreamB -> StreamB: CheckHeartbeat → bandwidth challenge
StreamB -> StreamA: Echo(payload)
StreamB -> StreamB: streams[ProtocolHeartbeatPartner][PeerID_A] = {DID_A, Expiry=now+10s}

StreamA -> StreamA: streams[ProtocolHeartbeatPartner][PeerID_B] = {DID_B, Expiry=now+10s}

note over StreamA,StreamB: Long-lived partner stream established\nGC every 8s (StreamService A)\nGC every 30s (StreamService B)

note over NATSA: Peer A receives PROPALGATION_EVENT{PB_DELETE, dt:"storage", payload:res}

NATSA -> NodeA: ListenNATS → ToPartnerPublishEvent(PB_DELETE, dt, user, payload)
NodeA -> StreamA: ToPartnerPublishEvent(ctx, PB_DELETE, dt_storage, user, payload)

alt dt == PEER (partner relation update)
  StreamA -> StreamA: json.Unmarshal → peer.Peer B updated
  alt B.Relation == PARTNER
    StreamA -> NodeB: ConnectToPartner(B.StreamAddress)
    note over StreamA,NodeB: Heartbeat reconnection if the relation is upgraded
  else B.Relation != PARTNER
    loop All protocols
      StreamA -> StreamA: delete(streams[proto][PeerID_B])
      StreamA -> NodeB: (streams closed)
    end
  end
else dt != PEER (ordinary resource)
  StreamA -> DBA: Search(PEER, PARTNER) → [Peer B, ...]
  loop For each partner protocol (Create/Update/Delete)
    StreamA -> NodeB: write(PeerID_B, addr_B, dt, user, payload, ProtocolDeleteResource)
    note over NodeB: /opencloud/resource/delete/1.0

    NodeB -> StreamB: HandleResponse → readLoop
    StreamB -> StreamB: handleEventFromPartner(evt, ProtocolDeleteResource)
    StreamB -> NATSB: SetNATSPub(REMOVE_RESOURCE, {DataType, resource JSON})
    NATSB -> DBB: Delete resource in DB B
  end
end

@enduml
49
docs/diagrams/13_planner_flow.mmd
Normal file
@@ -0,0 +1,49 @@

sequenceDiagram
title Stream — Planner session: Peer A requests Peer B's plan

participant AppA as App Peer A (oc-booking)
participant NATSA as NATS A
participant NodeA as Node A
participant StreamA as StreamService A
participant NodeB as Node B
participant StreamB as StreamService B
participant DBB as DB Peer B (oc-lib)
participant NATSB as NATS B

%% Open planner session
AppA->>NATSA: Publish(PROPALGATION_EVENT, {PB_PLANNER, peer_id:PeerID_B, payload:{}})
NATSA->>NodeA: ListenNATS → PB_PLANNER

NodeA->>NodeA: Unmarshal → {peer_id: PeerID_B, payload: {}}
NodeA->>StreamA: PublishCommon(nil, user, PeerID_B, ProtocolSendPlanner, {})
Note over StreamA: WaitResponse=true, TTL=24h<br/>Long-lived stream to Peer B
StreamA->>NodeB: TempStream /opencloud/resource/planner/1.0
StreamA->>NodeB: json.Encode(Event{Type:planner, From:DID_A, Payload:{}})

NodeB->>StreamB: HandleResponse → readLoop(ProtocolSendPlanner)
StreamB->>StreamB: handleEvent(ProtocolSendPlanner, evt)
StreamB->>StreamB: sendPlanner(evt)

alt evt.Payload empty (initial request)
  StreamB->>DBB: planner.GenerateShallow(AdminRequest)
  DBB-->>StreamB: plan (shallow booking plan of Peer B)
  StreamB->>StreamA: PublishCommon(nil, user, DID_A, ProtocolSendPlanner, planJSON)
  StreamA->>NodeA: json.Encode(Event{plan of B})
  NodeA->>NATSA: (forwarded to AppA via SEARCH_EVENT or PLANNER event)
  NATSA->>AppA: Plan of Peer B
else evt.Payload non-empty (planner update)
  StreamB->>StreamB: m["peer_id"] = evt.From (DID_A)
  StreamB->>NATSB: SetNATSPub(PROPALGATION_EVENT, {PB_PLANNER, peer_id:DID_A, payload:plan})
  NATSB->>DBB: (oc-booking processes the plan on NATS B)
end

%% Close planner session
AppA->>NATSA: Publish(PROPALGATION_EVENT, {PB_CLOSE_PLANNER, peer_id:PeerID_B})
NATSA->>NodeA: ListenNATS → PB_CLOSE_PLANNER

NodeA->>NodeA: Unmarshal → {peer_id: PeerID_B}
NodeA->>StreamA: Mu.Lock()
NodeA->>StreamA: Streams[ProtocolSendPlanner][PeerID_B].Stream.Close()
NodeA->>StreamA: delete(Streams[ProtocolSendPlanner], PeerID_B)
NodeA->>StreamA: Mu.Unlock()
Note over StreamA,NodeB: Planner stream closed; session terminated
51
docs/diagrams/13_planner_flow.puml
Normal file
@@ -0,0 +1,51 @@

@startuml
title Stream — Planner session: Peer A requests Peer B's plan

participant "App Peer A (oc-booking)" as AppA
participant "NATS A" as NATSA
participant "Node A" as NodeA
participant "StreamService A" as StreamA
participant "Node B" as NodeB
participant "StreamService B" as StreamB
participant "DB Peer B (oc-lib)" as DBB
participant "NATS B" as NATSB

' Open planner session
AppA -> NATSA: Publish(PROPALGATION_EVENT, {PB_PLANNER, peer_id:PeerID_B, payload:{}})
NATSA -> NodeA: ListenNATS → PB_PLANNER

NodeA -> NodeA: Unmarshal → {peer_id: PeerID_B, payload: {}}
NodeA -> StreamA: PublishCommon(nil, user, PeerID_B, ProtocolSendPlanner, {})
note over StreamA: WaitResponse=true, TTL=24h\nLong-lived stream to Peer B
StreamA -> NodeB: TempStream /opencloud/resource/planner/1.0
StreamA -> NodeB: json.Encode(Event{Type:planner, From:DID_A, Payload:{}})

NodeB -> StreamB: HandleResponse → readLoop(ProtocolSendPlanner)
StreamB -> StreamB: handleEvent(ProtocolSendPlanner, evt)
StreamB -> StreamB: sendPlanner(evt)

alt evt.Payload empty (initial request)
  StreamB -> DBB: planner.GenerateShallow(AdminRequest)
  DBB --> StreamB: plan (shallow booking plan of Peer B)
  StreamB -> StreamA: PublishCommon(nil, user, DID_A, ProtocolSendPlanner, planJSON)
  StreamA -> NodeA: json.Encode(Event{plan of B})
  NodeA -> NATSA: (forwarded to AppA via SEARCH_EVENT or PLANNER event)
  NATSA -> AppA: Plan of Peer B
else evt.Payload non-empty (planner update)
  StreamB -> StreamB: m["peer_id"] = evt.From (DID_A)
  StreamB -> NATSB: SetNATSPub(PROPALGATION_EVENT, {PB_PLANNER, peer_id:DID_A, payload:plan})
  NATSB -> DBB: (oc-booking processes the plan on NATS B)
end

' Close planner session
AppA -> NATSA: Publish(PROPALGATION_EVENT, {PB_CLOSE_PLANNER, peer_id:PeerID_B})
NATSA -> NodeA: ListenNATS → PB_CLOSE_PLANNER

NodeA -> NodeA: Unmarshal → {peer_id: PeerID_B}
NodeA -> StreamA: Mu.Lock()
NodeA -> StreamA: Streams[ProtocolSendPlanner][PeerID_B].Stream.Close()
NodeA -> StreamA: delete(Streams[ProtocolSendPlanner], PeerID_B)
NodeA -> StreamA: Mu.Unlock()
note over StreamA,NodeB: Planner stream closed; session terminated

@enduml
59
docs/diagrams/14_native_offload_gc.mmd
Normal file
@@ -0,0 +1,59 @@

sequenceDiagram
title Native Indexer — Background loops (offload, DHT refresh, stream GC)

participant IndexerA as Indexer A (registered)
participant IndexerB as Indexer B (registered)
participant Native as Native Indexer
participant DHT as DHT Kademlia
participant NodeA as Node A (responsible peer)

Note over Native: runOffloadLoop — every 30s

loop Every 30s
  Native->>Native: len(responsiblePeers) > 0 ?
  Note over Native: responsiblePeers = peers for which<br/>the native self-delegated (no indexer available)
  alt Responsible peers exist (e.g. Node A)
    Native->>Native: reachableLiveIndexers()
    Note over Native: Filter liveIndexers by TTL<br/>ping PeerIsAlive for each candidate
    alt Indexers A and B now reachable
      Native->>Native: responsiblePeers = {} (releases Node A and the others)
      Note over Native: Node A will reconnect<br/>on the next ConnectToNatives
    else Still no indexer
      Note over Native: Node A stays under the native's responsibility
    end
  end
end

Note over Native: refreshIndexersFromDHT — every 30s

loop Every 30s
  Native->>Native: Collect all knownPeerIDs<br/>= {PeerID_A, PeerID_B, ...}
  loop For each known PeerID
    Native->>Native: liveIndexers[PeerID] still fresh ?
    alt Entry missing or expired
      Native->>DHT: SearchValue(ctx 5s, "/indexer/"+PeerID)
      DHT-->>Native: channel of bytes
      loop For each DHT result
        Native->>Native: Unmarshal → liveIndexerEntry
        Native->>Native: Keep the best (most recent valid ExpiresAt)
      end
      Native->>Native: liveIndexers[PeerID] = best entry
      Note over Native: "native: refreshed indexer from DHT"
    end
  end
end

Note over Native: LongLivedStreamRecordedService GC — every 30s

loop Every 30s
  Native->>Native: gc() — lock StreamRecords[Heartbeat]
  loop For each StreamRecord (Indexer A, B, ...)
    Native->>Native: now > rec.Expiry ?<br/>OR timeSince(LastSeen) > 2×remaining TTL ?
    alt Peer expired (e.g. Indexer B gone)
      Native->>Native: Remove Indexer B from ALL protocol maps
      Note over Native: Heartbeat stream closed<br/>liveIndexers[PeerID_B] will expire naturally
    end
  end
end

Note over IndexerA: Indexer A keeps heartbeating normally<br/>and stays in StreamRecords + liveIndexers
61	docs/diagrams/14_native_offload_gc.puml	Normal file
@@ -0,0 +1,61 @@
@startuml
title Native Indexer: background loops (offload, DHT refresh, stream GC)

participant "Indexer A (registered)" as IndexerA
participant "Indexer B (registered)" as IndexerB
participant "Native Indexer" as Native
participant "DHT Kademlia" as DHT
participant "Node A (responsible peer)" as NodeA

note over Native: runOffloadLoop: every 30s

loop Every 30s
    Native -> Native: len(responsiblePeers) > 0 ?
    note over Native: responsiblePeers = peers for which\nthe native self-delegated (no indexer available)
    alt Responsible peers exist (e.g. Node A)
        Native -> Native: reachableLiveIndexers()
        note over Native: Filter liveIndexers by TTL\nping PeerIsAlive for each candidate
        alt Indexers A and B reachable again
            Native -> Native: responsiblePeers = {} (releases Node A and the others)
            note over Native: Node A will reconnect\non the next ConnectToNatives
        else Still no indexer
            note over Native: Node A stays under the native's responsibility
        end
    end
end

note over Native: refreshIndexersFromDHT: every 30s

loop Every 30s
    Native -> Native: Collect all knownPeerIDs\n= {PeerID_A, PeerID_B, ...}
    loop For each known PeerID
        Native -> Native: liveIndexers[PeerID] still fresh?
        alt Entry missing or expired
            Native -> DHT: SearchValue(ctx 5s, "/indexer/"+PeerID)
            DHT --> Native: channel of bytes
            loop For each DHT result
                Native -> Native: Unmarshal → liveIndexerEntry
                Native -> Native: Keep the best entry (most recent valid ExpiresAt)
            end
            Native -> Native: liveIndexers[PeerID] = best entry
            note over Native: "native: refreshed indexer from DHT"
        end
    end
end

note over Native: LongLivedStreamRecordedService GC: every 30s

loop Every 30s
    Native -> Native: gc(): lock StreamRecords[Heartbeat]
    loop For each StreamRecord (Indexer A, B, ...)
        Native -> Native: now > rec.Expiry ?\nOR timeSince(LastSeen) > 2× remaining TTL ?
        alt Peer stale (e.g. Indexer B gone)
            Native -> Native: Remove Indexer B from ALL protocol maps
            note over Native: Heartbeat stream closed\nliveIndexers[PeerID_B] will expire naturally
        end
    end
end

note over IndexerA: Indexer A keeps heartbeating normally\nand stays in StreamRecords + liveIndexers

@enduml
43	docs/diagrams/README.md	Normal file
@@ -0,0 +1,43 @@
# OC-Discovery: Sequence Diagrams

All `.mmd` files are in [Mermaid](https://mermaid.js.org/) format.
They can be rendered with VS Code (Mermaid Preview extension), IntelliJ, or [mermaid.live](https://mermaid.live).

## Diagram overview

| File | Description |
|---------|-------------|
| `01_node_init.mmd` | Full Node initialization (libp2p host, GossipSub, indexers, StreamService, PubSubService, NATS) |
| `02_node_claim.mmd` | Node registration with the indexers (`claimInfo` + `publishPeerRecord`) |
| `03_indexer_heartbeat.mmd` | Heartbeat protocol with quality-score computation (bandwidth, uptime, diversity) |
| `04_indexer_publish.mmd` | Publishing a `PeerRecord` to the indexer → DHT |
| `05_indexer_get.mmd` | Resolving a peer through the indexer (`GetPeerRecord` + `handleNodeGet` + DHT) |
| `06_native_registration.mmd` | Registering an indexer with a Native Indexer + PubSub gossip |
| `07_native_get_consensus.mmd` | `ConnectToNatives`: indexer pool + consensus protocol (majority vote) |
| `08_nats_create_resource.mmd` | NATS `CREATE_RESOURCE` handler: connecting/disconnecting a partner |
| `09_nats_propagation.mmd` | NATS `PROPALGATION_EVENT` handler: delete, considers, planner, search |
| `10_pubsub_search.mmd` | Global gossip search (type `"all"`) via GossipSub |
| `11_stream_search.mmd` | Direct stream search (type `"known"` or `"partner"`) |
| `12_partner_heartbeat.mmd` | Partner heartbeat + CRUD propagation to partners |
| `13_planner_flow.mmd` | Planner session (open, exchange, close) |
| `14_native_offload_gc.mmd` | Native Indexer background loops (offload, DHT refresh, GC) |

## libp2p protocols used

| Protocol | Description |
|-----------|-------------|
| `/opencloud/heartbeat/1.0` | Node → indexer heartbeat (long-lived) |
| `/opencloud/heartbeat/indexer/1.0` | Indexer → native heartbeat (long-lived) |
| `/opencloud/resource/heartbeat/partner/1.0` | Node ↔ partner heartbeat (long-lived) |
| `/opencloud/record/publish/1.0` | `PeerRecord` publication to an indexer |
| `/opencloud/record/get/1.0` | `GetPeerRecord` request to an indexer |
| `/opencloud/native/subscribe/1.0` | Indexer registration with the native |
| `/opencloud/native/indexers/1.0` | Indexer-pool request to the native |
| `/opencloud/native/consensus/1.0` | Indexer-pool validation (consensus) |
| `/opencloud/resource/search/1.0` | Resource search between peers |
| `/opencloud/resource/create/1.0` | Resource-creation propagation to a partner |
| `/opencloud/resource/update/1.0` | Resource-update propagation to a partner |
| `/opencloud/resource/delete/1.0` | Resource-deletion propagation to a partner |
| `/opencloud/resource/planner/1.0` | Planner (booking) session |
| `/opencloud/resource/verify/1.0` | Resource signature verification |
| `/opencloud/resource/considers/1.0` | Forwarding an execution "considers" |
191	go.mod
@@ -1,64 +1,177 @@
 module oc-discovery
 
-go 1.22.0
+go 1.25.0
 
 require (
-	cloud.o-forge.io/core/oc-lib v0.0.0-20240902132116-fba1608edb70
-	github.com/beego/beego v1.12.13
-	github.com/beego/beego/v2 v2.3.0
-	github.com/go-redis/redis v6.15.9+incompatible
-	github.com/goraz/onion v0.1.3
-	github.com/smartystreets/goconvey v1.7.2
-	github.com/tidwall/gjson v1.17.3
+	cloud.o-forge.io/core/oc-lib v0.0.0-20260302152414-542b0b73aba5
+	github.com/libp2p/go-libp2p v0.47.0
+	github.com/libp2p/go-libp2p-record v0.3.1
+	github.com/multiformats/go-multiaddr v0.16.1
+)
+
+require (
+	github.com/beego/beego/v2 v2.3.8 // indirect
+	github.com/benbjohnson/clock v1.3.5 // indirect
+	github.com/davecgh/go-spew v1.1.1 // indirect
+	github.com/davidlazar/go-crypto v0.0.0-20200604182044-b73af7476f6c // indirect
+	github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 // indirect
+	github.com/dunglas/httpsfv v1.1.0 // indirect
+	github.com/emicklei/go-restful/v3 v3.12.2 // indirect
+	github.com/filecoin-project/go-clock v0.1.0 // indirect
+	github.com/flynn/noise v1.1.0 // indirect
+	github.com/fxamacker/cbor/v2 v2.9.0 // indirect
+	github.com/go-logr/logr v1.4.3 // indirect
+	github.com/go-logr/stdr v1.2.2 // indirect
+	github.com/go-openapi/jsonpointer v0.21.0 // indirect
+	github.com/go-openapi/jsonreference v0.20.2 // indirect
+	github.com/go-openapi/swag v0.23.0 // indirect
+	github.com/gogo/protobuf v1.3.2 // indirect
+	github.com/google/gnostic-models v0.7.0 // indirect
+	github.com/google/gopacket v1.1.19 // indirect
+	github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 // indirect
+	github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect
+	github.com/huin/goupnp v1.3.0 // indirect
+	github.com/ipfs/boxo v0.35.2 // indirect
+	github.com/ipfs/go-cid v0.6.0 // indirect
+	github.com/ipfs/go-datastore v0.9.0 // indirect
+	github.com/ipfs/go-log/v2 v2.9.1 // indirect
+	github.com/ipld/go-ipld-prime v0.21.0 // indirect
+	github.com/jackpal/go-nat-pmp v1.0.2 // indirect
+	github.com/jbenet/go-temp-err-catcher v0.1.0 // indirect
+	github.com/josharian/intern v1.0.0 // indirect
+	github.com/json-iterator/go v1.1.12 // indirect
+	github.com/klauspost/cpuid/v2 v2.3.0 // indirect
+	github.com/koron/go-ssdp v0.0.6 // indirect
+	github.com/libp2p/go-buffer-pool v0.1.0 // indirect
+	github.com/libp2p/go-cidranger v1.1.0 // indirect
+	github.com/libp2p/go-flow-metrics v0.3.0 // indirect
+	github.com/libp2p/go-libp2p-asn-util v0.4.1 // indirect
+	github.com/libp2p/go-libp2p-kbucket v0.8.0 // indirect
+	github.com/libp2p/go-libp2p-routing-helpers v0.7.5 // indirect
+	github.com/libp2p/go-msgio v0.3.0 // indirect
+	github.com/libp2p/go-netroute v0.4.0 // indirect
+	github.com/libp2p/go-reuseport v0.4.0 // indirect
+	github.com/libp2p/go-yamux/v5 v5.0.1 // indirect
+	github.com/mailru/easyjson v0.7.7 // indirect
+	github.com/marten-seemann/tcp v0.0.0-20210406111302-dfbc87cc63fd // indirect
+	github.com/miekg/dns v1.1.68 // indirect
+	github.com/mikioh/tcpinfo v0.0.0-20190314235526-30a79bb1804b // indirect
+	github.com/mikioh/tcpopt v0.0.0-20190314235656-172688c1accc // indirect
+	github.com/minio/sha256-simd v1.0.1 // indirect
+	github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
+	github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee // indirect
+	github.com/mr-tron/base58 v1.2.0 // indirect
+	github.com/multiformats/go-base32 v0.1.0 // indirect
+	github.com/multiformats/go-base36 v0.2.0 // indirect
+	github.com/multiformats/go-multiaddr-dns v0.4.1 // indirect
+	github.com/multiformats/go-multiaddr-fmt v0.1.0 // indirect
+	github.com/multiformats/go-multibase v0.2.0 // indirect
+	github.com/multiformats/go-multicodec v0.10.0 // indirect
+	github.com/multiformats/go-multihash v0.2.3 // indirect
+	github.com/multiformats/go-multistream v0.6.1 // indirect
+	github.com/multiformats/go-varint v0.1.0 // indirect
+	github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58 // indirect
+	github.com/pion/datachannel v1.5.10 // indirect
+	github.com/pion/dtls/v2 v2.2.12 // indirect
+	github.com/pion/dtls/v3 v3.0.6 // indirect
+	github.com/pion/ice/v4 v4.0.10 // indirect
+	github.com/pion/interceptor v0.1.40 // indirect
+	github.com/pion/logging v0.2.3 // indirect
+	github.com/pion/mdns/v2 v2.0.7 // indirect
+	github.com/pion/randutil v0.1.0 // indirect
+	github.com/pion/rtcp v1.2.15 // indirect
+	github.com/pion/rtp v1.8.19 // indirect
+	github.com/pion/sctp v1.8.39 // indirect
+	github.com/pion/sdp/v3 v3.0.13 // indirect
+	github.com/pion/srtp/v3 v3.0.6 // indirect
+	github.com/pion/stun v0.6.1 // indirect
+	github.com/pion/stun/v3 v3.0.0 // indirect
+	github.com/pion/transport/v2 v2.2.10 // indirect
+	github.com/pion/transport/v3 v3.0.7 // indirect
+	github.com/pion/turn/v4 v4.0.2 // indirect
+	github.com/pion/webrtc/v4 v4.1.2 // indirect
+	github.com/polydawn/refmt v0.89.0 // indirect
+	github.com/quic-go/qpack v0.6.0 // indirect
+	github.com/quic-go/quic-go v0.59.0 // indirect
+	github.com/quic-go/webtransport-go v0.10.0 // indirect
+	github.com/spaolacci/murmur3 v1.1.0 // indirect
+	github.com/whyrusleeping/go-keyspace v0.0.0-20160322163242-5b898ac5add1 // indirect
+	github.com/wlynxg/anet v0.0.5 // indirect
+	github.com/x448/float16 v0.8.4 // indirect
+	go.opentelemetry.io/auto/sdk v1.2.1 // indirect
+	go.opentelemetry.io/otel v1.39.0 // indirect
+	go.opentelemetry.io/otel/metric v1.39.0 // indirect
+	go.opentelemetry.io/otel/trace v1.39.0 // indirect
+	go.uber.org/dig v1.19.0 // indirect
+	go.uber.org/fx v1.24.0 // indirect
+	go.uber.org/mock v0.5.2 // indirect
+	go.uber.org/multierr v1.11.0 // indirect
+	go.uber.org/zap v1.27.1 // indirect
+	go.yaml.in/yaml/v2 v2.4.3 // indirect
+	go.yaml.in/yaml/v3 v3.0.4 // indirect
+	golang.org/x/exp v0.0.0-20260112195511-716be5621a96 // indirect
+	golang.org/x/mod v0.32.0 // indirect
+	golang.org/x/oauth2 v0.32.0 // indirect
+	golang.org/x/telemetry v0.0.0-20260109210033-bd525da824e2 // indirect
+	golang.org/x/term v0.39.0 // indirect
+	golang.org/x/time v0.12.0 // indirect
+	golang.org/x/tools v0.41.0 // indirect
+	gonum.org/v1/gonum v0.17.0 // indirect
+	gopkg.in/evanphx/json-patch.v4 v4.13.0 // indirect
+	gopkg.in/inf.v0 v0.9.1 // indirect
+	k8s.io/api v0.35.1 // indirect
+	k8s.io/apimachinery v0.35.1 // indirect
+	k8s.io/client-go v0.35.1 // indirect
+	k8s.io/klog/v2 v2.130.1 // indirect
+	k8s.io/kube-openapi v0.0.0-20250910181357-589584f1c912 // indirect
+	k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 // indirect
+	lukechampine.com/blake3 v1.4.1 // indirect
+	sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 // indirect
+	sigs.k8s.io/randfill v1.0.0 // indirect
+	sigs.k8s.io/structured-merge-diff/v6 v6.3.0 // indirect
+	sigs.k8s.io/yaml v1.6.0 // indirect
 )
 
 require (
 	github.com/beorn7/perks v1.0.1 // indirect
+	github.com/biter777/countries v1.7.5 // indirect
 	github.com/cespare/xxhash/v2 v2.3.0 // indirect
-	github.com/gabriel-vasile/mimetype v1.4.5 // indirect
+	github.com/gabriel-vasile/mimetype v1.4.10 // indirect
 	github.com/go-playground/locales v0.14.1 // indirect
 	github.com/go-playground/universal-translator v0.18.1 // indirect
-	github.com/go-playground/validator/v10 v10.22.0 // indirect
+	github.com/go-playground/validator/v10 v10.27.0 // indirect
-	github.com/golang/protobuf v1.5.4 // indirect
-	github.com/golang/snappy v0.0.4 // indirect
-	github.com/google/uuid v1.6.0 // indirect
+	github.com/golang/snappy v1.0.0 // indirect
+	github.com/google/uuid v1.6.0
+	github.com/goraz/onion v0.1.3 // indirect
-	github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 // indirect
 	github.com/hashicorp/golang-lru v1.0.2 // indirect
-	github.com/jtolds/gls v4.20.0+incompatible // indirect
-	github.com/klauspost/compress v1.17.9 // indirect
-	github.com/kr/pretty v0.3.1 // indirect
+	github.com/klauspost/compress v1.18.0 // indirect
 	github.com/leodido/go-urn v1.4.0 // indirect
-	github.com/mattn/go-colorable v0.1.13 // indirect
+	github.com/libp2p/go-libp2p-kad-dht v0.37.1
+	github.com/libp2p/go-libp2p-pubsub v0.15.0
+	github.com/mattn/go-colorable v0.1.14 // indirect
 	github.com/mattn/go-isatty v0.0.20 // indirect
-	github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
 	github.com/mitchellh/mapstructure v1.5.0 // indirect
 	github.com/montanaflynn/stats v0.7.1 // indirect
 	github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
-	github.com/nats-io/nats.go v1.37.0 // indirect
+	github.com/nats-io/nats.go v1.43.0 // indirect
-	github.com/nats-io/nkeys v0.4.7 // indirect
+	github.com/nats-io/nkeys v0.4.11 // indirect
 	github.com/nats-io/nuid v1.0.1 // indirect
-	github.com/pkg/errors v0.9.1 // indirect
-	github.com/prometheus/client_golang v1.20.2 // indirect
-	github.com/prometheus/client_model v0.6.1 // indirect
-	github.com/prometheus/common v0.57.0 // indirect
-	github.com/prometheus/procfs v0.15.1 // indirect
-	github.com/robfig/cron/v3 v3.0.1 // indirect
-	github.com/rs/zerolog v1.33.0 // indirect
+	github.com/prometheus/client_golang v1.23.2 // indirect
+	github.com/prometheus/client_model v0.6.2 // indirect
+	github.com/prometheus/common v0.66.1 // indirect
+	github.com/prometheus/procfs v0.17.0 // indirect
+	github.com/rs/zerolog v1.34.0 // indirect
 	github.com/shiena/ansicolor v0.0.0-20230509054315-a9deabde6e02 // indirect
-	github.com/smartystreets/assertions v1.2.0 // indirect
-	github.com/tidwall/match v1.1.1 // indirect
-	github.com/tidwall/pretty v1.2.1 // indirect
 	github.com/xdg-go/pbkdf2 v1.0.0 // indirect
 	github.com/xdg-go/scram v1.1.2 // indirect
 	github.com/xdg-go/stringprep v1.0.4 // indirect
 	github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78 // indirect
-	go.mongodb.org/mongo-driver v1.16.1 // indirect
+	go.mongodb.org/mongo-driver v1.17.4 // indirect
-	golang.org/x/crypto v0.26.0 // indirect
+	golang.org/x/crypto v0.47.0 // indirect
-	golang.org/x/net v0.28.0 // indirect
+	golang.org/x/net v0.49.0 // indirect
-	golang.org/x/sync v0.8.0 // indirect
+	golang.org/x/sync v0.19.0 // indirect
-	golang.org/x/sys v0.24.0 // indirect
+	golang.org/x/sys v0.40.0 // indirect
-	golang.org/x/text v0.17.0 // indirect
+	golang.org/x/text v0.33.0 // indirect
-	google.golang.org/protobuf v1.34.2 // indirect
+	google.golang.org/protobuf v1.36.11 // indirect
-	gopkg.in/yaml.v2 v2.4.0 // indirect
 	gopkg.in/yaml.v3 v3.0.1 // indirect
 )
663	go.sum
@@ -1,230 +1,339 @@
-cloud.o-forge.io/core/oc-lib v0.0.0-20240830131445-af18dba5563c h1:4ZoM9ONJiaeLHSi0s8gsCe4lHuRHXkfK+eDSnTCspa0=
-cloud.o-forge.io/core/oc-lib v0.0.0-20240830131445-af18dba5563c/go.mod h1:FIJD0taWLJ5pjQLJ6sfE2KlTkvbmk5SMcyrxdjsaVz0=
-cloud.o-forge.io/core/oc-lib v0.0.0-20240902132116-fba1608edb70 h1:xHxxRDtMG2/AAc7immArZfsnVF+KfJqoyUeUENmF6DA=
-cloud.o-forge.io/core/oc-lib v0.0.0-20240902132116-fba1608edb70/go.mod h1:FIJD0taWLJ5pjQLJ6sfE2KlTkvbmk5SMcyrxdjsaVz0=
+cloud.o-forge.io/core/oc-lib v0.0.0-20260224130821-ce8ef70516f7 h1:p9uJjMY+QkE4neA+xRmIRtAm9us94EKZqgajDdLOd0Y=
+cloud.o-forge.io/core/oc-lib v0.0.0-20260224130821-ce8ef70516f7/go.mod h1:+ENuvBfZdESSvecoqGY/wSvRlT3vinEolxKgwbOhUpA=
+cloud.o-forge.io/core/oc-lib v0.0.0-20260226084851-959fce48ef6c h1:FTUu9tdEfib6J+fuc7e5wYTe++EIlB70bVNpOeFjnyU=
+cloud.o-forge.io/core/oc-lib v0.0.0-20260226084851-959fce48ef6c/go.mod h1:+ENuvBfZdESSvecoqGY/wSvRlT3vinEolxKgwbOhUpA=
+cloud.o-forge.io/core/oc-lib v0.0.0-20260226085754-f4e2d8057df0 h1:lvrRF4ToIMl/5k1q4AiPEy6ycjwRtOaDhWnQ/LrW1ZA=
+cloud.o-forge.io/core/oc-lib v0.0.0-20260226085754-f4e2d8057df0/go.mod h1:+ENuvBfZdESSvecoqGY/wSvRlT3vinEolxKgwbOhUpA=
+cloud.o-forge.io/core/oc-lib v0.0.0-20260226091217-cb3771c17a31 h1:hvkvJibS9NmImw73j79Ov5VpIYs4WbP4SYGlK/XO82Q=
+cloud.o-forge.io/core/oc-lib v0.0.0-20260226091217-cb3771c17a31/go.mod h1:+ENuvBfZdESSvecoqGY/wSvRlT3vinEolxKgwbOhUpA=
+cloud.o-forge.io/core/oc-lib v0.0.0-20260302152414-542b0b73aba5 h1:h+Fkyj6cfwAirc0QGCBEkZSSrgcyThXswg7ytOLm948=
+cloud.o-forge.io/core/oc-lib v0.0.0-20260302152414-542b0b73aba5/go.mod h1:+ENuvBfZdESSvecoqGY/wSvRlT3vinEolxKgwbOhUpA=
 github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
-github.com/Knetic/govaluate v3.0.0+incompatible/go.mod h1:r7JcOSlj0wfOMncg0iLm8Leh48TZaKVeNIfJntJ2wa0=
-github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
-github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
-github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
-github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
-github.com/alicebob/gopher-json v0.0.0-20180125190556-5a6b3ba71ee6/go.mod h1:SGnFV6hVsYE877CKEZ6tDNTjaSXYUk6QqoIK6PrAtcc=
-github.com/alicebob/miniredis v2.5.0+incompatible/go.mod h1:8HZjEj4yU0dwhYHky+DxYx+6BMjkBbe5ONFIF1MXffk=
-github.com/beego/beego v1.12.11 h1:MWKcnpavb7iAIS0m6uuEq6pHKkYvGNw/5umIUKqL7jM=
-github.com/beego/beego v1.12.11/go.mod h1:QURFL1HldOcCZAxnc1cZ7wrplsYR5dKPHFjmk6WkLAs=
-github.com/beego/beego v1.12.13 h1:g39O1LGLTiPejWVqQKK/TFGrroW9BCZQz6/pf4S8IRM=
-github.com/beego/beego v1.12.13/go.mod h1:QURFL1HldOcCZAxnc1cZ7wrplsYR5dKPHFjmk6WkLAs=
-github.com/beego/beego/v2 v2.0.7 h1:9KNnUM40tn3pbCOFfe6SJ1oOL0oTi/oBS/C/wCEdAXA=
-github.com/beego/beego/v2 v2.0.7/go.mod h1:f0uOEkmJWgAuDTlTxUdgJzwG3PDSIf3UWF3NpMohbFE=
-github.com/beego/beego/v2 v2.3.0 h1:iECVwzm6egw6iw6tkWrEDqXG4NQtKLQ6QBSYqlM6T/I=
-github.com/beego/beego/v2 v2.3.0/go.mod h1:Ob/5BJ9fIKZLd4s9ZV3o9J6odkkIyL83et+p98gyYXo=
-github.com/beego/goyaml2 v0.0.0-20130207012346-5545475820dd/go.mod h1:1b+Y/CofkYwXMUU0OhQqGvsY2Bvgr4j6jfT699wyZKQ=
-github.com/beego/x2j v0.0.0-20131220205130-a0352aadc542/go.mod h1:kSeGC/p1AbBiEp5kat81+DSQrZenVBZXklMLaELspWU=
-github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
-github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
+github.com/Masterminds/semver/v3 v3.4.0 h1:Zog+i5UMtVoCU8oKka5P7i9q9HgrJeGzI9SA1Xbatp0=
+github.com/Masterminds/semver/v3 v3.4.0/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM=
+github.com/beego/beego/v2 v2.3.8 h1:wplhB1pF4TxR+2SS4PUej8eDoH4xGfxuHfS7wAk9VBc=
+github.com/beego/beego/v2 v2.3.8/go.mod h1:8vl9+RrXqvodrl9C8yivX1e6le6deCK6RWeq8R7gTTg=
+github.com/benbjohnson/clock v1.3.5 h1:VvXlSJBzZpA/zum6Sj74hxwYI2DIxRWuNIoXAzHZz5o=
+github.com/benbjohnson/clock v1.3.5/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
 github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
 github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
-github.com/bradfitz/gomemcache v0.0.0-20180710155616-bc664df96737/go.mod h1:PmM6Mmwb0LSuEubjR8N7PtNe1KxZLtOUHtbeikc5h60=
-github.com/casbin/casbin v1.7.0/go.mod h1:c67qKN6Oum3UF5Q1+BByfFxkwKvhwW57ITjqwtzR1KE=
-github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
-github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
-github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
+github.com/biter777/countries v1.7.5 h1:MJ+n3+rSxWQdqVJU8eBy9RqcdH6ePPn4PJHocVWUa+Q=
+github.com/biter777/countries v1.7.5/go.mod h1:1HSpZ526mYqKJcpT5Ti1kcGQ0L0SrXWIaptUWjFfv2E=
 github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
 github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
-github.com/cloudflare/golz4 v0.0.0-20150217214814-ef862a3cdc58/go.mod h1:EOBUe0h4xcZ5GoxqC5SDxFQ8gwyZPKQoEzownBlhI80=
 github.com/coreos/etcd v3.3.17+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
 github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
 github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
-github.com/couchbase/go-couchbase v0.0.0-20201216133707-c04035124b17/go.mod h1:+/bddYDxXsf9qt0xpDUtRR47A2GjaXmGGAqQ/k3GJ8A=
-github.com/couchbase/gomemcached v0.1.2-0.20201224031647-c432ccf49f32/go.mod h1:mxliKQxOv84gQ0bJWbI+w9Wxdpt9HjDvgW9MjCym5Vo=
-github.com/couchbase/goutils v0.0.0-20210118111533-e33d3ffb5401/go.mod h1:BQwMFlJzDjFDG3DJUdU0KORxn88UlsOULuxLExMh3Hs=
+github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
 github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
-github.com/cupcake/rdb v0.0.0-20161107195141-43ba34106c76/go.mod h1:vYwsqCOLxGiisLwp9rITslkFNpZD5rz43tf41QFkTWY=
 github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
-github.com/edsrzf/mmap-go v0.0.0-20170320065105-0bce6a688712/go.mod h1:YO35OhQPt3KJa3ryjFM5Bs14WD66h8eGKpfaBNrHW5M=
-github.com/elastic/go-elasticsearch/v6 v6.8.5/go.mod h1:UwaDJsD3rWLM5rKNFzv9hgox93HoX8utj1kxD9aFUcI=
-github.com/elazarl/go-bindata-assetfs v1.0.0/go.mod h1:v+YaWX3bdea5J/mo8dSETolEo7R71Vk1u8bnjau5yw4=
+github.com/davidlazar/go-crypto v0.0.0-20200604182044-b73af7476f6c h1:pFUpOrbxDR6AkioZ1ySsx5yxlDQZ8stG2b88gTPxgJU=
+github.com/davidlazar/go-crypto v0.0.0-20200604182044-b73af7476f6c/go.mod h1:6UhI8N9EjYm1c2odKpFpAYeR8dsBeM7PtzQhRgxRr9U=
+github.com/decred/dcrd/crypto/blake256 v1.1.0 h1:zPMNGQCm0g4QTY27fOCorQW7EryeQ/U0x++OzVrdms8=
+github.com/decred/dcrd/crypto/blake256 v1.1.0/go.mod h1:2OfgNZ5wDpcsFmHmCK5gZTPcCXqlm2ArzUIkw9czNJo=
+github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 h1:NMZiJj8QnKe1LgsbDayM4UoHwbvwDRwnI3hwNaAHRnc=
+github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0/go.mod h1:ZXNYxsqcloTdSy/rNShjYzMhyjf0LaoftYK0p+A3h40=
+github.com/dunglas/httpsfv v1.1.0 h1:Jw76nAyKWKZKFrpMMcL76y35tOpYHqQPzHQiwDvpe54=
+github.com/dunglas/httpsfv v1.1.0/go.mod h1:zID2mqw9mFsnt7YC3vYQ9/cjq30q41W+1AnDwH8TiMg=
 github.com/elazarl/go-bindata-assetfs v1.0.1 h1:m0kkaHRKEu7tUIUFVwhGGGYClXvyl4RE03qmvRTNfbw=
 github.com/elazarl/go-bindata-assetfs v1.0.1/go.mod h1:v+YaWX3bdea5J/mo8dSETolEo7R71Vk1u8bnjau5yw4=
+github.com/emicklei/go-restful/v3 v3.12.2 h1:DhwDP0vY3k8ZzE0RunuJy8GhNpPL6zqLkDf9B/a0/xU=
+github.com/emicklei/go-restful/v3 v3.12.2/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
 github.com/etcd-io/etcd v3.3.17+incompatible/go.mod h1:cdZ77EstHBwVtD6iTgzgvogwcjo9m4iOqoijouPJ4bs=
+github.com/filecoin-project/go-clock v0.1.0 h1:SFbYIM75M8NnFm1yMHhN9Ahy3W5bEZV9gd6MPfXbKVU=
+github.com/filecoin-project/go-clock v0.1.0/go.mod h1:4uB/O4PvOjlx1VCMdZ9MyDZXRm//gkj1ELEbxfI1AZs=
+github.com/flynn/noise v1.1.0 h1:KjPQoQCEFdZDiP03phOvGi11+SVVhBG2wOWAorLsstg=
+github.com/flynn/noise v1.1.0/go.mod h1:xbMo+0i6+IGbYdJhF31t2eR1BIU0CYc12+BNAKwUTag=
+github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
+github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
 github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
-github.com/gabriel-vasile/mimetype v1.4.4 h1:QjV6pZ7/XZ7ryI2KuyeEDE8wnh7fHP9YnQy+R0LnH8I=
-github.com/gabriel-vasile/mimetype v1.4.4/go.mod h1:JwLei5XPtWdGiMFB5Pjle1oEeoSeEuJfJE+TtfvdB/s=
-github.com/gabriel-vasile/mimetype v1.4.5 h1:J7wGKdGu33ocBOhGy0z653k/lFKLFDPJMG8Gql0kxn4=
-github.com/gabriel-vasile/mimetype v1.4.5/go.mod h1:ibHel+/kbxn9x2407k1izTA1S81ku1z/DlgOW2QE0M4=
+github.com/fxamacker/cbor/v2 v2.9.0 h1:NpKPmjDBgUfBms6tr6JZkTHtfFGcMKsw3eGcmD/sapM=
+github.com/fxamacker/cbor/v2 v2.9.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ=
+github.com/gabriel-vasile/mimetype v1.4.10 h1:zyueNbySn/z8mJZHLt6IPw0KoZsiQNszIpU+bX4+ZK0=
+github.com/gabriel-vasile/mimetype v1.4.10/go.mod h1:d+9Oxyo1wTzWdyVUPMmXFvp4F9tea18J8ufA774AB3s=
-github.com/glendc/gopher-json v0.0.0-20170414221815-dc4743023d0c/go.mod h1:Gja1A+xZ9BoviGJNA2E9vFkPjjsl+CoJxSXiQM1UXtw=
-github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
-github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
-github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
-github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
+github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
+github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
+github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
+github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
+github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
+github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs=
+github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ=
+github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY=
|
||||||
|
github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE=
|
||||||
|
github.com/go-openapi/jsonreference v0.20.2/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En5Ap4rVB5KVcIDZG2k=
|
||||||
|
github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14=
|
||||||
|
github.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE=
|
||||||
|
github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ=
|
||||||
github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s=
|
github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s=
|
||||||
github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
|
github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
|
||||||
github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA=
|
github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA=
|
||||||
github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY=
|
github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY=
|
||||||
github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY=
|
github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY=
|
||||||
github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
|
github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
|
||||||
github.com/go-playground/validator/v10 v10.22.0 h1:k6HsTZ0sTnROkhS//R0O+55JgM8C4Bx7ia+JlgcnOao=
|
github.com/go-playground/validator/v10 v10.27.0 h1:w8+XrWVMhGkxOaaowyKH35gFydVHOvC0/uWoy2Fzwn4=
|
||||||
github.com/go-playground/validator/v10 v10.22.0/go.mod h1:dbuPbCMFw/DrkbEynArYaCwl3amGuJotoKCe95atGMM=
|
github.com/go-playground/validator/v10 v10.27.0/go.mod h1:I5QpIEbmr8On7W0TktmJAumgzX4CA1XNl4ZmDuVHKKo=
|
||||||
github.com/go-redis/redis v6.14.2+incompatible h1:UE9pLhzmWf+xHNmZsoccjXosPicuiNaInPgym8nzfg0=
|
github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
|
||||||
github.com/go-redis/redis v6.14.2+incompatible/go.mod h1:NAIEuMOZ/fxfXJIrKDQDz8wamY7mA7PouImQ2Jvg6kA=
|
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
|
||||||
github.com/go-redis/redis v6.15.9+incompatible h1:K0pv1D7EQUjfyoMql+r/jZqCLizCGKFlFgcHWWmHQjg=
|
github.com/go-yaml/yaml v2.1.0+incompatible/go.mod h1:w2MrLa16VYP0jy6N7M5kHaCkaLENm+P+Tv+MfurjSw0=
|
||||||
github.com/go-redis/redis v6.15.9+incompatible/go.mod h1:NAIEuMOZ/fxfXJIrKDQDz8wamY7mA7PouImQ2Jvg6kA=
|
|
||||||
github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=
|
|
||||||
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
|
|
||||||
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
|
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
|
||||||
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
|
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
|
||||||
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
|
||||||
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
github.com/golang/snappy v1.0.0 h1:Oy607GVXHs7RtbggtPBnr2RmDArIsAefDwvrdWvRhGs=
|
||||||
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
github.com/golang/snappy v1.0.0/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
|
||||||
github.com/golang/protobuf v1.3.5/go.mod h1:6O5/vntMXwX2lRkT1hjjk0nAC1IDOTvTlVgjlRvqsdk=
|
github.com/google/gnostic-models v0.7.0 h1:qwTtogB15McXDaNqTZdzPJRHvaVJlAl+HVQnLmJEJxo=
|
||||||
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
|
github.com/google/gnostic-models v0.7.0/go.mod h1:whL5G0m6dmc5cPxKc5bdKdEN3UjI7OUGxBlw57miDrQ=
|
||||||
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
|
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
|
||||||
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
|
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
|
||||||
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
|
|
||||||
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
|
|
||||||
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
|
|
||||||
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
|
|
||||||
github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
|
|
||||||
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
|
|
||||||
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
|
|
||||||
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
|
|
||||||
github.com/golang/snappy v0.0.0-20170215233205-553a64147049/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
|
|
||||||
github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
|
|
||||||
github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
|
|
||||||
github.com/gomodule/redigo v2.0.0+incompatible/go.mod h1:B4C85qUVwatsJoIUNIfCRsp7qO0iAmpGFZ4EELWSbC4=
|
|
||||||
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
|
|
||||||
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
|
|
||||||
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
|
||||||
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
|
||||||
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
|
|
||||||
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
|
|
||||||
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
|
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
|
||||||
|
github.com/google/gopacket v1.1.19 h1:ves8RnFZPGiFnTS0uPQStjwru6uO6h+nlr9j6fL7kF8=
|
||||||
|
github.com/google/gopacket v1.1.19/go.mod h1:iJ8V8n6KS+z2U1A8pUwu8bW5SyEMkXJB8Yo/Vo+TKTo=
|
||||||
|
github.com/google/pprof v0.0.0-20250403155104-27863c87afa6 h1:BHT72Gu3keYf3ZEu2J0b1vyeLSOYI8bm5wbJM/8yDe8=
|
||||||
|
github.com/google/pprof v0.0.0-20250403155104-27863c87afa6/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
|
||||||
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
|
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
|
||||||
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||||
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 h1:EGx4pi6eqNxGaHF6qqu48+N2wcFQ5qg5FXgOdqsJ5d8=
|
|
||||||
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
|
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
|
||||||
|
github.com/gopherjs/gopherjs v0.0.0-20190430165422-3e4dfb77656c h1:7lF+Vz0LqiRidnzC1Oq86fpX1q/iEv2KJdrCtttYjT4=
|
||||||
|
github.com/gopherjs/gopherjs v0.0.0-20190430165422-3e4dfb77656c/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
|
||||||
github.com/goraz/onion v0.1.3 h1:KhyvbDA2b70gcz/d5izfwTiOH8SmrvV43AsVzpng3n0=
|
github.com/goraz/onion v0.1.3 h1:KhyvbDA2b70gcz/d5izfwTiOH8SmrvV43AsVzpng3n0=
|
||||||
github.com/goraz/onion v0.1.3/go.mod h1:XEmz1XoBz+wxTgWB8NwuvRm4RAu3vKxvrmYtzK+XCuQ=
|
github.com/goraz/onion v0.1.3/go.mod h1:XEmz1XoBz+wxTgWB8NwuvRm4RAu3vKxvrmYtzK+XCuQ=
|
||||||
github.com/hashicorp/golang-lru v0.5.4 h1:YDjusn29QI/Das2iO9M0BHnIbxPeyuCHsjMW+lJfyTc=
|
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 h1:JeSE6pjso5THxAzdVpqr6/geYxZytqFMBCOtn/ujyeo=
|
||||||
github.com/hashicorp/golang-lru v0.5.4/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
|
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674/go.mod h1:r4w70xmWCQKmi1ONH4KIaBptdivuRPyosB9RmPlGEwA=
|
||||||
github.com/hashicorp/golang-lru v1.0.2 h1:dV3g9Z/unq5DpblPpw+Oqcv4dU/1omnb4Ok8iPY6p1c=
|
github.com/hashicorp/golang-lru v1.0.2 h1:dV3g9Z/unq5DpblPpw+Oqcv4dU/1omnb4Ok8iPY6p1c=
|
||||||
github.com/hashicorp/golang-lru v1.0.2/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
|
github.com/hashicorp/golang-lru v1.0.2/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
|
||||||
github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
|
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
|
||||||
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
|
github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
|
||||||
|
github.com/huin/goupnp v1.3.0 h1:UvLUlWDNpoUdYzb2TCn+MuTWtcjXKSza2n6CBdQ0xXc=
|
||||||
|
github.com/huin/goupnp v1.3.0/go.mod h1:gnGPsThkYa7bFi/KWmEysQRf48l2dvR5bxr2OFckNX8=
|
||||||
github.com/imdario/mergo v0.3.8/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
|
github.com/imdario/mergo v0.3.8/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
|
||||||
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
|
github.com/ipfs/boxo v0.35.2 h1:0QZJJh6qrak28abENOi5OA8NjBnZM4p52SxeuIDqNf8=
|
||||||
|
github.com/ipfs/boxo v0.35.2/go.mod h1:bZn02OFWwJtY8dDW9XLHaki59EC5o+TGDECXEbe1w8U=
|
||||||
|
github.com/ipfs/go-block-format v0.2.3 h1:mpCuDaNXJ4wrBJLrtEaGFGXkferrw5eqVvzaHhtFKQk=
|
||||||
|
github.com/ipfs/go-block-format v0.2.3/go.mod h1:WJaQmPAKhD3LspLixqlqNFxiZ3BZ3xgqxxoSR/76pnA=
|
||||||
|
github.com/ipfs/go-cid v0.6.0 h1:DlOReBV1xhHBhhfy/gBNNTSyfOM6rLiIx9J7A4DGf30=
|
||||||
|
github.com/ipfs/go-cid v0.6.0/go.mod h1:NC4kS1LZjzfhK40UGmpXv5/qD2kcMzACYJNntCUiDhQ=
|
||||||
|
github.com/ipfs/go-datastore v0.9.0 h1:WocriPOayqalEsueHv6SdD4nPVl4rYMfYGLD4bqCZ+w=
|
||||||
|
github.com/ipfs/go-datastore v0.9.0/go.mod h1:uT77w/XEGrvJWwHgdrMr8bqCN6ZTW9gzmi+3uK+ouHg=
|
||||||
|
github.com/ipfs/go-detect-race v0.0.1 h1:qX/xay2W3E4Q1U7d9lNs1sU9nvguX0a7319XbyQ6cOk=
|
||||||
|
github.com/ipfs/go-detect-race v0.0.1/go.mod h1:8BNT7shDZPo99Q74BpGMK+4D8Mn4j46UU0LZ723meps=
|
||||||
|
github.com/ipfs/go-log/v2 v2.9.1 h1:3JXwHWU31dsCpvQ+7asz6/QsFJHqFr4gLgQ0FWteujk=
|
||||||
|
github.com/ipfs/go-log/v2 v2.9.1/go.mod h1:evFx7sBiohUN3AG12mXlZBw5hacBQld3ZPHrowlJYoo=
|
||||||
|
github.com/ipfs/go-test v0.2.3 h1:Z/jXNAReQFtCYyn7bsv/ZqUwS6E7iIcSpJ2CuzCvnrc=
|
||||||
|
github.com/ipfs/go-test v0.2.3/go.mod h1:QW8vSKkwYvWFwIZQLGQXdkt9Ud76eQXRQ9Ao2H+cA1o=
|
||||||
|
github.com/ipld/go-ipld-prime v0.21.0 h1:n4JmcpOlPDIxBcY037SVfpd1G+Sj1nKZah0m6QH9C2E=
|
||||||
|
github.com/ipld/go-ipld-prime v0.21.0/go.mod h1:3RLqy//ERg/y5oShXXdx5YIp50cFGOanyMctpPjsvxQ=
|
||||||
|
github.com/jackpal/go-nat-pmp v1.0.2 h1:KzKSgb7qkJvOUTqYl9/Hg/me3pWgBmERKrTGD7BdWus=
|
||||||
|
github.com/jackpal/go-nat-pmp v1.0.2/go.mod h1:QPH045xvCAeXUZOxsnwmrtiCoxIr9eob+4orBN1SBKc=
|
||||||
|
github.com/jbenet/go-temp-err-catcher v0.1.0 h1:zpb3ZH6wIE8Shj2sKS+khgRvf7T7RABoLk/+KKHggpk=
|
||||||
|
github.com/jbenet/go-temp-err-catcher v0.1.0/go.mod h1:0kJRvmDZXNMIiJirNPEYfhpPwbGVtZVWC34vc5WLsDk=
|
||||||
|
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
|
||||||
|
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
|
||||||
github.com/json-iterator/go v1.1.8/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
|
github.com/json-iterator/go v1.1.8/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
|
||||||
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
|
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
|
||||||
|
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
|
||||||
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
|
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
|
||||||
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
|
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
|
||||||
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
|
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
|
||||||
github.com/klauspost/compress v1.17.9 h1:6KIumPrER1LHsvBVuDa0r5xaG0Es51mhhB9BQB2qeMA=
|
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
|
||||||
github.com/klauspost/compress v1.17.9/go.mod h1:Di0epgTjJY877eYKx5yC51cX2A2Vl2ibi7bDH9ttBbw=
|
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
|
||||||
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
|
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
|
||||||
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
|
github.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y=
|
||||||
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
|
github.com/klauspost/cpuid/v2 v2.3.0/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
|
||||||
|
github.com/koron/go-ssdp v0.0.6 h1:Jb0h04599eq/CY7rB5YEqPS83HmRfHP2azkxMN2rFtU=
|
||||||
|
github.com/koron/go-ssdp v0.0.6/go.mod h1:0R9LfRJGek1zWTjN3JUNlm5INCDYGpRDfAptnct63fI=
|
||||||
|
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
|
||||||
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
|
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
|
||||||
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
|
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
|
||||||
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
|
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
|
||||||
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
|
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
|
||||||
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
|
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
|
||||||
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
|
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
|
||||||
github.com/ledisdb/ledisdb v0.0.0-20200510135210-d35789ec47e6/go.mod h1:n931TsDuKuq+uX4v1fulaMbA/7ZLLhjc85h7chZGBCQ=
|
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
|
||||||
|
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
|
||||||
github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
|
github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
|
||||||
github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
|
github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
|
||||||
github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
|
github.com/libp2p/go-buffer-pool v0.1.0 h1:oK4mSFcQz7cTQIfqbe4MIj9gLW+mnanjyFtc6cdF0Y8=
|
||||||
|
github.com/libp2p/go-buffer-pool v0.1.0/go.mod h1:N+vh8gMqimBzdKkSMVuydVDq+UV5QTWy5HSiZacSbPg=
|
||||||
|
github.com/libp2p/go-cidranger v1.1.0 h1:ewPN8EZ0dd1LSnrtuwd4709PXVcITVeuwbag38yPW7c=
|
||||||
|
github.com/libp2p/go-cidranger v1.1.0/go.mod h1:KWZTfSr+r9qEo9OkI9/SIEeAtw+NNoU0dXIXt15Okic=
|
||||||
|
github.com/libp2p/go-flow-metrics v0.3.0 h1:q31zcHUvHnwDO0SHaukewPYgwOBSxtt830uJtUx6784=
|
||||||
|
github.com/libp2p/go-flow-metrics v0.3.0/go.mod h1:nuhlreIwEguM1IvHAew3ij7A8BMlyHQJ279ao24eZZo=
|
||||||
|
github.com/libp2p/go-libp2p v0.47.0 h1:qQpBjSCWNQFF0hjBbKirMXE9RHLtSuzTDkTfr1rw0yc=
|
||||||
|
github.com/libp2p/go-libp2p v0.47.0/go.mod h1:s8HPh7mMV933OtXzONaGFseCg/BE//m1V34p3x4EUOY=
|
||||||
|
github.com/libp2p/go-libp2p-asn-util v0.4.1 h1:xqL7++IKD9TBFMgnLPZR6/6iYhawHKHl950SO9L6n94=
|
||||||
|
github.com/libp2p/go-libp2p-asn-util v0.4.1/go.mod h1:d/NI6XZ9qxw67b4e+NgpQexCIiFYJjErASrYW4PFDN8=
|
||||||
|
github.com/libp2p/go-libp2p-kad-dht v0.37.1 h1:jtX8bQIXVCs6/allskNB4m5n95Xvwav7wHAhopGZfS0=
|
||||||
|
github.com/libp2p/go-libp2p-kad-dht v0.37.1/go.mod h1:Uwokdh232k9Y1uMy2yJOK5zb7hpMHn4P8uWS4s9i05Q=
|
||||||
|
github.com/libp2p/go-libp2p-kbucket v0.8.0 h1:QAK7RzKJpYe+EuSEATAaaHYMYLkPDGC18m9jxPLnU8s=
|
||||||
|
github.com/libp2p/go-libp2p-kbucket v0.8.0/go.mod h1:JMlxqcEyKwO6ox716eyC0hmiduSWZZl6JY93mGaaqc4=
|
||||||
|
github.com/libp2p/go-libp2p-pubsub v0.15.0 h1:cG7Cng2BT82WttmPFMi50gDNV+58K626m/wR00vGL1o=
|
||||||
|
github.com/libp2p/go-libp2p-pubsub v0.15.0/go.mod h1:lr4oE8bFgQaifRcoc2uWhWWiK6tPdOEKpUuR408GFN4=
|
||||||
|
github.com/libp2p/go-libp2p-record v0.3.1 h1:cly48Xi5GjNw5Wq+7gmjfBiG9HCzQVkiZOUZ8kUl+Fg=
|
||||||
|
github.com/libp2p/go-libp2p-record v0.3.1/go.mod h1:T8itUkLcWQLCYMqtX7Th6r7SexyUJpIyPgks757td/E=
|
||||||
|
github.com/libp2p/go-libp2p-routing-helpers v0.7.5 h1:HdwZj9NKovMx0vqq6YNPTh6aaNzey5zHD7HeLJtq6fI=
|
||||||
|
github.com/libp2p/go-libp2p-routing-helpers v0.7.5/go.mod h1:3YaxrwP0OBPDD7my3D0KxfR89FlcX/IEbxDEDfAmj98=
|
||||||
|
github.com/libp2p/go-libp2p-testing v0.12.0 h1:EPvBb4kKMWO29qP4mZGyhVzUyR25dvfUIK5WDu6iPUA=
|
||||||
|
github.com/libp2p/go-libp2p-testing v0.12.0/go.mod h1:KcGDRXyN7sQCllucn1cOOS+Dmm7ujhfEyXQL5lvkcPg=
|
||||||
|
github.com/libp2p/go-msgio v0.3.0 h1:mf3Z8B1xcFN314sWX+2vOTShIE0Mmn2TXn3YCUQGNj0=
|
||||||
|
github.com/libp2p/go-msgio v0.3.0/go.mod h1:nyRM819GmVaF9LX3l03RMh10QdOroF++NBbxAb0mmDM=
|
||||||
|
github.com/libp2p/go-netroute v0.4.0 h1:sZZx9hyANYUx9PZyqcgE/E1GUG3iEtTZHUEvdtXT7/Q=
|
||||||
|
github.com/libp2p/go-netroute v0.4.0/go.mod h1:Nkd5ShYgSMS5MUKy/MU2T57xFoOKvvLR92Lic48LEyA=
|
||||||
|
github.com/libp2p/go-reuseport v0.4.0 h1:nR5KU7hD0WxXCJbmw7r2rhRYruNRl2koHw8fQscQm2s=
|
||||||
|
github.com/libp2p/go-reuseport v0.4.0/go.mod h1:ZtI03j/wO5hZVDFo2jKywN6bYKWLOy8Se6DrI2E1cLU=
|
||||||
|
github.com/libp2p/go-yamux/v5 v5.0.1 h1:f0WoX/bEF2E8SbE4c/k1Mo+/9z0O4oC/hWEA+nfYRSg=
|
||||||
|
github.com/libp2p/go-yamux/v5 v5.0.1/go.mod h1:en+3cdX51U0ZslwRdRLrvQsdayFt3TSUKvBGErzpWbU=
|
||||||
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
|
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
|
||||||
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
|
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
|
||||||
|
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
|
||||||
|
github.com/marcopolo/simnet v0.0.4 h1:50Kx4hS9kFGSRIbrt9xUS3NJX33EyPqHVmpXvaKLqrY=
|
||||||
|
github.com/marcopolo/simnet v0.0.4/go.mod h1:tfQF1u2DmaB6WHODMtQaLtClEf3a296CKQLq5gAsIS0=
|
||||||
|
github.com/marten-seemann/tcp v0.0.0-20210406111302-dfbc87cc63fd h1:br0buuQ854V8u83wA0rVZ8ttrq5CpaPZdvrK0LP2lOk=
|
||||||
|
github.com/marten-seemann/tcp v0.0.0-20210406111302-dfbc87cc63fd/go.mod h1:QuCEs1Nt24+FYQEqAAncTDPJIuGs+LxK1MCiFL25pMU=
|
||||||
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
|
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
|
||||||
|
github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE=
|
||||||
|
github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8=
|
||||||
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
|
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
|
||||||
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
|
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
|
||||||
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
|
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
|
||||||
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
|
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
|
||||||
github.com/mattn/go-sqlite3 v2.0.3+incompatible/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc=
|
github.com/miekg/dns v1.1.68 h1:jsSRkNozw7G/mnmXULynzMNIsgY2dHC8LO6U6Ij2JEA=
|
||||||
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
|
github.com/miekg/dns v1.1.68/go.mod h1:fujopn7TB3Pu3JM69XaawiU0wqjpL9/8xGop5UrTPps=
|
||||||
github.com/matttproud/golang_protobuf_extensions v1.0.4 h1:mmDVorXM7PCGKw94cs5zkfA9PSy5pEvNWRP0ET0TIVo=
|
github.com/mikioh/tcp v0.0.0-20190314235350-803a9b46060c h1:bzE/A84HN25pxAuk9Eej1Kz9OUelF97nAc82bDquQI8=
|
||||||
github.com/matttproud/golang_protobuf_extensions v1.0.4/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
|
github.com/mikioh/tcp v0.0.0-20190314235350-803a9b46060c/go.mod h1:0SQS9kMwD2VsyFEB++InYyBJroV/FRmBgcydeSUcJms=
|
||||||
|
github.com/mikioh/tcpinfo v0.0.0-20190314235526-30a79bb1804b h1:z78hV3sbSMAUoyUMM0I83AUIT6Hu17AWfgjzIbtrYFc=
|
||||||
|
github.com/mikioh/tcpinfo v0.0.0-20190314235526-30a79bb1804b/go.mod h1:lxPUiZwKoFL8DUUmalo2yJJUCxbPKtm8OKfqr2/FTNU=
|
||||||
|
github.com/mikioh/tcpopt v0.0.0-20190314235656-172688c1accc h1:PTfri+PuQmWDqERdnNMiD9ZejrlswWrCpBEZgWOiTrc=
|
||||||
|
github.com/mikioh/tcpopt v0.0.0-20190314235656-172688c1accc/go.mod h1:cGKTAVKx4SxOuR/czcZ/E2RSJ3sfHs8FpHhQ5CWMf9s=
|
||||||
|
github.com/minio/blake2b-simd v0.0.0-20160723061019-3f5f724cb5b1/go.mod h1:pD8RvIylQ358TN4wwqatJ8rNavkEINozVn9DtGI3dfQ=
|
||||||
|
github.com/minio/sha256-simd v0.1.1-0.20190913151208-6de447530771/go.mod h1:B5e1o+1/KgNmWrSQK08Y6Z1Vb5pwIktudl0J58iy0KM=
|
||||||
|
github.com/minio/sha256-simd v1.0.1 h1:6kaan5IFmwTNynnKKpDHe6FWHohJOHhCPchzK49dzMM=
|
||||||
|
github.com/minio/sha256-simd v1.0.1/go.mod h1:Pz6AKMiUdngCLpeTL/RJY1M9rUuPMYujV5xJjtbRSN8=
|
||||||
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
|
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
|
||||||
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
|
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
|
||||||
github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
|
github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
|
||||||
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
|
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
|
||||||
|
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
|
||||||
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
|
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
|
||||||
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
|
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
|
||||||
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
|
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
|
||||||
|
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
|
||||||
|
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee h1:W5t00kpgFdJifH4BDsTlE89Zl93FEloxaWZfGcifgq8=
|
||||||
|
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
|
||||||
github.com/montanaflynn/stats v0.7.1 h1:etflOAAHORrCC44V+aR6Ftzort912ZU+YLiSTuV8eaE=
|
github.com/montanaflynn/stats v0.7.1 h1:etflOAAHORrCC44V+aR6Ftzort912ZU+YLiSTuV8eaE=
|
||||||
github.com/montanaflynn/stats v0.7.1/go.mod h1:etXPPgVO6n31NxCd9KQUMvCM+ve0ruNzt6R8Bnaayow=
|
github.com/montanaflynn/stats v0.7.1/go.mod h1:etXPPgVO6n31NxCd9KQUMvCM+ve0ruNzt6R8Bnaayow=
|
||||||
|
github.com/mr-tron/base58 v1.1.2/go.mod h1:BinMc/sQntlIE1frQmRFPUoPA1Zkr8VRgBdjWI2mNwc=
|
||||||
|
github.com/mr-tron/base58 v1.2.0 h1:T/HDJBh4ZCPbU39/+c3rRvE0uKBQlU27+QI8LJ4t64o=
|
||||||
|
github.com/mr-tron/base58 v1.2.0/go.mod h1:BinMc/sQntlIE1frQmRFPUoPA1Zkr8VRgBdjWI2mNwc=
|
||||||
|
github.com/multiformats/go-base32 v0.1.0 h1:pVx9xoSPqEIQG8o+UbAe7DNi51oej1NtK+aGkbLYxPE=
|
||||||
|
github.com/multiformats/go-base32 v0.1.0/go.mod h1:Kj3tFY6zNr+ABYMqeUNeGvkIC/UYgtWibDcT0rExnbI=
|
||||||
|
github.com/multiformats/go-base36 v0.2.0 h1:lFsAbNOGeKtuKozrtBsAkSVhv1p9D0/qedU9rQyccr0=
|
||||||
|
github.com/multiformats/go-base36 v0.2.0/go.mod h1:qvnKE++v+2MWCfePClUEjE78Z7P2a1UV0xHgWc0hkp4=
|
||||||
|
github.com/multiformats/go-multiaddr v0.1.1/go.mod h1:aMKBKNEYmzmDmxfX88/vz+J5IU55txyt0p4aiWVohjo=
|
||||||
|
github.com/multiformats/go-multiaddr v0.16.1 h1:fgJ0Pitow+wWXzN9do+1b8Pyjmo8m5WhGfzpL82MpCw=
|
||||||
|
github.com/multiformats/go-multiaddr v0.16.1/go.mod h1:JSVUmXDjsVFiW7RjIFMP7+Ev+h1DTbiJgVeTV/tcmP0=
|
||||||
|
github.com/multiformats/go-multiaddr-dns v0.4.1 h1:whi/uCLbDS3mSEUMb1MsoT4uzUeZB0N32yzufqS0i5M=
|
||||||
|
github.com/multiformats/go-multiaddr-dns v0.4.1/go.mod h1:7hfthtB4E4pQwirrz+J0CcDUfbWzTqEzVyYKKIKpgkc=
|
||||||
|
github.com/multiformats/go-multiaddr-fmt v0.1.0 h1:WLEFClPycPkp4fnIzoFoV9FVd49/eQsuaL3/CWe167E=
|
||||||
|
github.com/multiformats/go-multiaddr-fmt v0.1.0/go.mod h1:hGtDIW4PU4BqJ50gW2quDuPVjyWNZxToGUh/HwTZYJo=
|
||||||
|
github.com/multiformats/go-multibase v0.2.0 h1:isdYCVLvksgWlMW9OZRYJEa9pZETFivncJHmHnnd87g=
|
||||||
|
github.com/multiformats/go-multibase v0.2.0/go.mod h1:bFBZX4lKCA/2lyOFSAoKH5SS6oPyjtnzK/XTFDPkNuk=
|
||||||
|
github.com/multiformats/go-multicodec v0.10.0 h1:UpP223cig/Cx8J76jWt91njpK3GTAO1w02sdcjZDSuc=
|
||||||
|
github.com/multiformats/go-multicodec v0.10.0/go.mod h1:wg88pM+s2kZJEQfRCKBNU+g32F5aWBEjyFHXvZLTcLI=
|
||||||
|
github.com/multiformats/go-multihash v0.0.8/go.mod h1:YSLudS+Pi8NHE7o6tb3D8vrpKa63epEDmG8nTduyAew=
|
||||||
|
github.com/multiformats/go-multihash v0.2.3 h1:7Lyc8XfX/IY2jWb/gI7JP+o7JEq9hOa7BFvVU9RSh+U=
github.com/multiformats/go-multihash v0.2.3/go.mod h1:dXgKXCXjBzdscBLk9JkjINiEsCKRVch90MdaGiKsvSM=
github.com/multiformats/go-multistream v0.6.1 h1:4aoX5v6T+yWmc2raBHsTvzmFhOI8WVOer28DeBBEYdQ=
github.com/multiformats/go-multistream v0.6.1/go.mod h1:ksQf6kqHAb6zIsyw7Zm+gAuVo57Qbq84E27YlYqavqw=
github.com/multiformats/go-varint v0.1.0 h1:i2wqFp4sdl3IcIxfAonHQV9qU5OsZ4Ts9IOoETFs5dI=
github.com/multiformats/go-varint v0.1.0/go.mod h1:5KVAVXegtfmNQQm/lCY+ATvDzvJJhSkUlGQV9wgObdI=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/nats-io/nats.go v1.43.0 h1:uRFZ2FEoRvP64+UUhaTokyS18XBCR/xM2vQZKO4i8ug=
github.com/nats-io/nats.go v1.37.0 h1:07rauXbVnnJvv1gfIyghFEo6lUcYRY0WXc3x7x0vUxE=
github.com/nats-io/nats.go v1.43.0/go.mod h1:iRWIPokVIFbVijxuMQq4y9ttaBTMe0SFdlZfMDd+33g=
github.com/nats-io/nats.go v1.37.0/go.mod h1:Ubdu4Nh9exXdSz0RVWRFBbRfrbSxOYd26oF0wkWclB8=
github.com/nats-io/nkeys v0.4.11 h1:q44qGV008kYd9W1b1nEBkNzvnWxtRSQ7A8BoqRrcfa0=
github.com/nats-io/nkeys v0.4.7 h1:RwNJbbIdYCoClSDNY7QVKZlyb/wfT6ugvFCiKy6vDvI=
github.com/nats-io/nkeys v0.4.11/go.mod h1:szDimtgmfOi9n25JpfIdGw12tZFYXqhGxjhVxsatHVE=
github.com/nats-io/nkeys v0.4.7/go.mod h1:kqXRgRDPlGy7nGaEDMuYzmiJCIAAWDK0IMBtDmGD0nc=
github.com/nats-io/nuid v1.0.1 h1:5iA8DT8V7q8WK2EScv2padNa/rTESc1KdnPw4TC2paw=
github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/ogier/pflag v0.0.1/go.mod h1:zkFki7tvTa0tafRvTBIZTvzYyAu6kQhPZFnshFFPE+g=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
github.com/onsi/ginkgo v1.12.0 h1:Iw5WCbBcaAAd0fpRb1c9r5YCylv4XDoCSigm1zLevwU=
github.com/onsi/ginkgo/v2 v2.27.2 h1:LzwLj0b89qtIy6SSASkzlNvX6WktqurSHwkk2ipF/Ns=
github.com/onsi/ginkgo v1.12.0/go.mod h1:oUhWkIvk5aDxtKvDDuw8gItl8pKl42LzjC9KZE0HfGg=
github.com/onsi/ginkgo/v2 v2.27.2/go.mod h1:ArE1D/XhNXBXCBkKOLkbsb2c81dQHCRcF5zwn/ykDRo=
github.com/onsi/gomega v1.7.1 h1:K0jcRCwNQM3vFGh1ppMtDh/+7ApJrjldlX8fA0jDTLQ=
github.com/onsi/gomega v1.38.2 h1:eZCjf2xjZAqe+LeWvKb5weQ+NcPwX84kqJ0cZNxok2A=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.38.2/go.mod h1:W2MJcYxRGV63b418Ai34Ud0hEdTVXq9NW9+Sx6uXf3k=
github.com/pelletier/go-toml v1.0.1/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58 h1:onHthvaw9LFnH4t2DcNVpwGmV9E1BkGknEliJkfwQj0=
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58/go.mod h1:DXv8WO4yhMYhSNPKjeNKa5WY9YCIEBRbNzFFPJbWO6Y=
github.com/pelletier/go-toml v1.6.0/go.mod h1:5N711Q9dKgbdkxHL+MEfF31hpT7l0S0s/t2kKREewys=
github.com/peterh/liner v1.0.1-0.20171122030339-3681c2a91233/go.mod h1:xIteQHvHuaLYG9IFj6mSxM0fCKrs34IrEQUhOYuGPHc=
github.com/pion/datachannel v1.5.10 h1:ly0Q26K1i6ZkGf42W7D4hQYR90pZwzFOjTq5AuCKk4o=
github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
github.com/pion/datachannel v1.5.10/go.mod h1:p/jJfC9arb29W7WrxyKbepTU20CFgyx5oLo8Rs4Py/M=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pion/dtls/v2 v2.2.7/go.mod h1:8WiMkebSHFD0T+dIU+UeBaoV7kDhOW5oDCzZ7WZ/F9s=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pion/dtls/v2 v2.2.12 h1:KP7H5/c1EiVAAKUmXyCzPiQe5+bCJrpOeKg/L05dunk=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pion/dtls/v2 v2.2.12/go.mod h1:d9SYc9fch0CqK90mRk1dC7AkzzpwJj6u2GU3u+9pqFE=
github.com/pion/dtls/v3 v3.0.6 h1:7Hkd8WhAJNbRgq9RgdNh1aaWlZlGpYTzdqjy9x9sK2E=
github.com/pion/dtls/v3 v3.0.6/go.mod h1:iJxNQ3Uhn1NZWOMWlLxEEHAN5yX7GyPvvKw04v9bzYU=
github.com/pion/ice/v4 v4.0.10 h1:P59w1iauC/wPk9PdY8Vjl4fOFL5B+USq1+xbDcN6gT4=
github.com/pion/ice/v4 v4.0.10/go.mod h1:y3M18aPhIxLlcO/4dn9X8LzLLSma84cx6emMSu14FGw=
github.com/pion/interceptor v0.1.40 h1:e0BjnPcGpr2CFQgKhrQisBU7V3GXK6wrfYrGYaU6Jq4=
github.com/pion/interceptor v0.1.40/go.mod h1:Z6kqH7M/FYirg3frjGJ21VLSRJGBXB/KqaTIrdqnOic=
github.com/pion/logging v0.2.2/go.mod h1:k0/tDVsRCX2Mb2ZEmTqNa7CWsQPc+YYCB7Q+5pahoms=
github.com/pion/logging v0.2.3 h1:gHuf0zpoh1GW67Nr6Gj4cv5Z9ZscU7g/EaoC/Ke/igI=
github.com/pion/logging v0.2.3/go.mod h1:z8YfknkquMe1csOrxK5kc+5/ZPAzMxbKLX5aXpbpC90=
github.com/pion/mdns/v2 v2.0.7 h1:c9kM8ewCgjslaAmicYMFQIde2H9/lrZpjBkN8VwoVtM=
github.com/pion/mdns/v2 v2.0.7/go.mod h1:vAdSYNAT0Jy3Ru0zl2YiW3Rm/fJCwIeM0nToenfOJKA=
github.com/pion/randutil v0.1.0 h1:CFG1UdESneORglEsnimhUjf33Rwjubwj6xfiOXBa3mA=
github.com/pion/randutil v0.1.0/go.mod h1:XcJrSMMbbMRhASFVOlj/5hQial/Y8oH/HVo7TBZq+j8=
github.com/pion/rtcp v1.2.15 h1:LZQi2JbdipLOj4eBjK4wlVoQWfrZbh3Q6eHtWtJBZBo=
github.com/pion/rtcp v1.2.15/go.mod h1:jlGuAjHMEXwMUHK78RgX0UmEJFV4zUKOFHR7OP+D3D0=
github.com/pion/rtp v1.8.19 h1:jhdO/3XhL/aKm/wARFVmvTfq0lC/CvN1xwYKmduly3c=
github.com/pion/rtp v1.8.19/go.mod h1:bAu2UFKScgzyFqvUKmbvzSdPr+NGbZtv6UB2hesqXBk=
github.com/pion/sctp v1.8.39 h1:PJma40vRHa3UTO3C4MyeJDQ+KIobVYRZQZ0Nt7SjQnE=
github.com/pion/sctp v1.8.39/go.mod h1:cNiLdchXra8fHQwmIoqw0MbLLMs+f7uQ+dGMG2gWebE=
github.com/pion/sdp/v3 v3.0.13 h1:uN3SS2b+QDZnWXgdr69SM8KB4EbcnPnPf2Laxhty/l4=
github.com/pion/sdp/v3 v3.0.13/go.mod h1:88GMahN5xnScv1hIMTqLdu/cOcUkj6a9ytbncwMCq2E=
github.com/pion/srtp/v3 v3.0.6 h1:E2gyj1f5X10sB/qILUGIkL4C2CqK269Xq167PbGCc/4=
github.com/pion/srtp/v3 v3.0.6/go.mod h1:BxvziG3v/armJHAaJ87euvkhHqWe9I7iiOy50K2QkhY=
github.com/pion/stun v0.6.1 h1:8lp6YejULeHBF8NmV8e2787BogQhduZugh5PdhDyyN4=
github.com/pion/stun v0.6.1/go.mod h1:/hO7APkX4hZKu/D0f2lHzNyvdkTGtIy3NDmLR7kSz/8=
github.com/pion/stun/v3 v3.0.0 h1:4h1gwhWLWuZWOJIJR9s2ferRO+W3zA/b6ijOI6mKzUw=
github.com/pion/stun/v3 v3.0.0/go.mod h1:HvCN8txt8mwi4FBvS3EmDghW6aQJ24T+y+1TKjB5jyU=
github.com/pion/transport/v2 v2.2.1/go.mod h1:cXXWavvCnFF6McHTft3DWS9iic2Mftcz1Aq29pGcU5g=
github.com/pion/transport/v2 v2.2.4/go.mod h1:q2U/tf9FEfnSBGSW6w5Qp5PFWRLRj3NjLhCCgpRK4p0=
github.com/pion/transport/v2 v2.2.10 h1:ucLBLE8nuxiHfvkFKnkDQRYWYfp8ejf4YBOPfaQpw6Q=
github.com/pion/transport/v2 v2.2.10/go.mod h1:sq1kSLWs+cHW9E+2fJP95QudkzbK7wscs8yYgQToO5E=
github.com/pion/transport/v3 v3.0.7 h1:iRbMH05BzSNwhILHoBoAPxoB9xQgOaJk+591KC9P1o0=
github.com/pion/transport/v3 v3.0.7/go.mod h1:YleKiTZ4vqNxVwh77Z0zytYi7rXHl7j6uPLGhhz9rwo=
github.com/pion/turn/v4 v4.0.2 h1:ZqgQ3+MjP32ug30xAbD6Mn+/K4Sxi3SdNOTFf+7mpps=
github.com/pion/turn/v4 v4.0.2/go.mod h1:pMMKP/ieNAG/fN5cZiN4SDuyKsXtNTr0ccN7IToA1zs=
github.com/pion/webrtc/v4 v4.1.2 h1:mpuUo/EJ1zMNKGE79fAdYNFZBX790KE7kQQpLMjjR54=
github.com/pion/webrtc/v4 v4.1.2/go.mod h1:xsCXiNAmMEjIdFxAYU0MbB3RwRieJsegSB2JZsGN+8U=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/polydawn/refmt v0.89.0 h1:ADJTApkvkeBZsN0tBTx8QjpD9JkmxbKp0cxfr9qszm4=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/polydawn/refmt v0.89.0/go.mod h1:/zvteZs/GwLtCgZ4BL6CBsk9IKIlexP43ObX9AxTqTw=
github.com/prometheus/client_golang v1.7.0/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o=
github.com/prometheus/client_golang v1.14.0 h1:nJdhIvne2eSX/XRAFV9PcvFFRbrjbcTUj0VP62TMhnw=
github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=
github.com/prometheus/client_golang v1.14.0/go.mod h1:8vpkKitgIVNcqrRBWh1C4TIUQgYNtG/XQE4E/Zae36Y=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
github.com/prometheus/client_golang v1.20.2 h1:5ctymQzZlyOON1666svgwn3s6IKWgfbjsejTMiXIyjg=
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
github.com/prometheus/client_golang v1.20.2/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE=
github.com/prometheus/common v0.66.1 h1:h5E0h5/Y8niHc5DlaLlWLArTQI7tMrsfQjHV+d9ZoGs=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/common v0.66.1/go.mod h1:gcaUsgf3KfRSwHY4dIMXLPV0K/Wg1oZ8+SbZk/HH/dA=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/procfs v0.17.0 h1:FuLQ+05u4ZI+SS/w9+BWEM2TXiHKsUQ9TADiRH7DuK0=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/procfs v0.17.0/go.mod h1:oPQLaDAMRbA+u8H5Pbfq+dl3VDAvHxMUOVhe0wYB2zw=
github.com/prometheus/client_model v0.3.0 h1:UBgGFHqYdG/TPFD1B1ogZywDqEkwp3fBMvqdiQ7Xew4=
github.com/quic-go/qpack v0.6.0 h1:g7W+BMYynC1LbYLSqRt8PBg5Tgwxn214ZZR34VIOjz8=
github.com/prometheus/client_model v0.3.0/go.mod h1:LDGWKZIo7rky3hgvBe+caln+Dr3dPggB5dvjtD7w9+w=
github.com/quic-go/qpack v0.6.0/go.mod h1:lUpLKChi8njB4ty2bFLX2x4gzDqXwUpaO1DP9qMDZII=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/quic-go/quic-go v0.59.0 h1:OLJkp1Mlm/aS7dpKgTc6cnpynnD2Xg7C1pwL6vy/SAw=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/quic-go/quic-go v0.59.0/go.mod h1:upnsH4Ju1YkqpLXC305eW3yDZ4NfnNbmQRCMWS58IKU=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/quic-go/webtransport-go v0.10.0 h1:LqXXPOXuETY5Xe8ITdGisBzTYmUOy5eSj+9n4hLTjHI=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/quic-go/webtransport-go v0.10.0/go.mod h1:LeGIXr5BQKE3UsynwVBeQrU1TPrbh73MGoC6jd+V7ow=
github.com/prometheus/common v0.41.0 h1:npo01n6vUlRViIj5fgwiK8vlNIh8bnoxqh3gypKsyAw=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/prometheus/common v0.41.0/go.mod h1:xBwqVerjNdUDjgODMpudtOMwlOwf2SaTr1yjz4b7Zbc=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/prometheus/common v0.57.0 h1:Ro/rKjwdq9mZn1K5QPctzh+MA4Lp0BuYk5ZZEVhoNcY=
github.com/rs/xid v1.6.0/go.mod h1:7XoLgs4eV+QndskICGsho+ADou8ySMSjJKDIan90Nz0=
github.com/prometheus/common v0.57.0/go.mod h1:7uRPFSUTbfZWsJ7MHY56sqt7hLQu3bxXHDnNhl8E9qI=
github.com/rs/zerolog v1.34.0 h1:k43nTLIwcTVQAncfCw4KZ2VY6ukYoZaBPNOE8txlOeY=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/rs/zerolog v1.34.0/go.mod h1:bJsvje4Z08ROH4Nhs5iH600c3IkWhwp44iRc54W6wYQ=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.9.0 h1:wzCHvIvM5SxWqYvwgVL7yJY8Lz3PKn49KQtpgMYJfhI=
github.com/prometheus/procfs v0.9.0/go.mod h1:+pB4zwohETzFnmlpe6yd2lSc+0/46IYZRB/chUwxUZY=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8=
github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
github.com/rs/xid v1.5.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
github.com/rs/zerolog v1.33.0 h1:1cU2KZkvPxNyfgEmhHAz/1A9Bz+llsdYzklWFzgp0r8=
github.com/rs/zerolog v1.33.0/go.mod h1:/7mN4D5sKwJLZQ2b/znpjC3/GQWY/xaDXUM0kKWRHss=
github.com/shiena/ansicolor v0.0.0-20151119151921-a422bbe96644/go.mod h1:nkxAfR/5quYxwPZhyDxgasBMnRtBZd0FCEpawpjMUFg=
github.com/shiena/ansicolor v0.0.0-20200904210342-c7312218db18 h1:DAYUYH5869yV94zvCES9F51oYtN5oGlwjxJJz7ZCnik=
github.com/shiena/ansicolor v0.0.0-20200904210342-c7312218db18/go.mod h1:nkxAfR/5quYxwPZhyDxgasBMnRtBZd0FCEpawpjMUFg=
github.com/shiena/ansicolor v0.0.0-20230509054315-a9deabde6e02 h1:v9ezJDHA1XGxViAUSIoO/Id7Fl63u6d0YmsAm+/p2hs=
github.com/shiena/ansicolor v0.0.0-20230509054315-a9deabde6e02/go.mod h1:RF16/A3L0xSa0oSERcnhd8Pu3IXSDZSK2gmGIMsttFE=
github.com/siddontang/go v0.0.0-20170517070808-cb568a3e5cc0/go.mod h1:3yhqj7WBBfRhbBlzyOC3gUxftwsU0u8gqevxwIHQpMw=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/siddontang/goredis v0.0.0-20150324035039-760763f78400/go.mod h1:DDcKzU3qCuvj/tPnimWSsZZzvk9qvkvrIL5naVBPh5s=
github.com/siddontang/rdb v0.0.0-20150307021120-fc89ed2e418d/go.mod h1:AMEsy7v5z92TR1JKMkLLoaOQk++LVnOKL3ScbJ8GNGA=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/skarademir/naturalsort v0.0.0-20150715044055-69a5d87bef62/go.mod h1:oIdVclZaltY1Nf7OQUkg1/2jImBJ+ZfKZuDIRSwk3p0=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/assertions v1.2.0 h1:42S6lae5dvLc7BrLu/0ugRtcFVjoJNMC/N3yZFZkDFs=
@@ -232,143 +341,213 @@ github.com/smartystreets/assertions v1.2.0/go.mod h1:tcbTF8ujkAEcZ8TElKY+i30BzYl
github.com/smartystreets/goconvey v0.0.0-20190731233626-505e41936337/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/smartystreets/goconvey v1.7.2 h1:9RBaZCeXEQ3UselpuwUQHltGVXvdwm6cv1hgR6gDIPg=
github.com/smartystreets/goconvey v1.7.2/go.mod h1:Vw0tHAZW6lzCRk3xgdin6fKYcG+G3Pg9vgXWeJpQFMM=
github.com/ssdb/gossdb v0.0.0-20180723034631-88f6b59b84ec/go.mod h1:QBvMkMya+gXctz3kmljlUCu/yB3GZ6oee+dUozsezQE=
github.com/spaolacci/murmur3 v1.1.0 h1:7c1g84S4BPRrfL5Xrdp6fOJ206sU9y293DDHaoy0bLI=
github.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/pflag v1.0.9 h1:9exaQaMOCwffKiiiYk6/BndUBv+iRViNW+4lEMi0PvY=
github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/syndtr/goleveldb v0.0.0-20160425020131-cfa635847112/go.mod h1:Z4AUp2Km+PwemOoO/VB5AOx9XSsIItzFjoJlOSiYmn0=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/tidwall/gjson v1.14.4 h1:uo0p8EbA09J7RQaflQ1aBRffTR7xedD2bcIVSYxLnkM=
github.com/stretchr/testify v1.8.3/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/tidwall/gjson v1.14.4/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/tidwall/gjson v1.17.3 h1:bwWLZU7icoKRG+C+0PNwIKC6FCJO/Q3p2pZvuP0jN94=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/tidwall/gjson v1.17.3/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/tidwall/match v1.1.1 h1:+Ho715JplO36QYgwN9PGYNhgZvoUSc9X2c80KVTi+GA=
github.com/urfave/cli v1.22.10/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/tidwall/match v1.1.1/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM=
github.com/warpfork/go-wish v0.0.0-20220906213052-39a1cc7a02d0 h1:GDDkbFiaK8jsSDJfjId/PEGEShv6ugrt4kYsC5UIDaQ=
github.com/tidwall/pretty v1.2.0 h1:RWIZEg2iJ8/g6fDDYzMpobmaoGh5OLl4AXtGUGPcqCs=
github.com/warpfork/go-wish v0.0.0-20220906213052-39a1cc7a02d0/go.mod h1:x6AKhvSSexNrVSrViXSHUEbICjmGXhtgABaHIySUSGw=
github.com/tidwall/pretty v1.2.0/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
github.com/whyrusleeping/go-keyspace v0.0.0-20160322163242-5b898ac5add1 h1:EKhdznlJHPMoKr0XTrX+IlJs1LH3lyx2nfr1dOlZ79k=
github.com/tidwall/pretty v1.2.1 h1:qjsOFOWWQl+N3RsoF5/ssm1pHmJJwhjlSbZ51I6wMl4=
github.com/whyrusleeping/go-keyspace v0.0.0-20160322163242-5b898ac5add1/go.mod h1:8UvriyWtv5Q5EOgjHaSseUEdkQfvwFv1I/In/O2M9gc=
github.com/tidwall/pretty v1.2.1/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
github.com/wlynxg/anet v0.0.3/go.mod h1:eay5PRQr7fIVAMbTbchTnO9gG65Hg/uYGdc7mguHxoA=
github.com/ugorji/go v0.0.0-20171122102828-84cb69a8af83/go.mod h1:hnLbHMwcvSihnDhEfx2/BzKp2xb0Y+ErdfYcrs9tkJQ=
github.com/wlynxg/anet v0.0.5 h1:J3VJGi1gvo0JwZ/P1/Yc/8p63SoW98B5dHkYDmpgvvU=
github.com/wendal/errors v0.0.0-20181209125328-7f31f4b264ec/go.mod h1:Q12BUT7DqIlHRmgv3RskH+UCM/4eqVMgI0EMmlSpAXc=
github.com/wlynxg/anet v0.0.5/go.mod h1:eay5PRQr7fIVAMbTbchTnO9gG65Hg/uYGdc7mguHxoA=
github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
github.com/xdg-go/pbkdf2 v1.0.0 h1:Su7DPu48wXMwC3bs7MCNG+z4FhcyEuz5dlvchbq0B0c=
github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI=
github.com/xdg-go/scram v1.1.2 h1:FHX5I5B4i4hKRVRBCFRxq1iQRej7WO3hhBuJf+UUySY=
github.com/xdg-go/scram v1.1.2/go.mod h1:RT/sEzTbU5y00aCK8UOx6R7YryM0iF1N2MOmC3kKLN4=
github.com/xdg-go/stringprep v1.0.4 h1:XLI/Ng3O1Atzq0oBs3TWm+5ZVgkq2aqdlvP9JtoZ6c8=
github.com/xdg-go/stringprep v1.0.4/go.mod h1:mPGuuIYwz7CmR2bT9j4GbQqutWS1zV24gijq1dTyGkM=
github.com/youmark/pkcs8 v0.0.0-20240424034433-3c2c7870ae76 h1:tBiBTKHnIjovYoLX/TPkcf+OjqqKGQrPtGT3Foz+Pgo=
github.com/youmark/pkcs8 v0.0.0-20240424034433-3c2c7870ae76/go.mod h1:SQliXeA7Dhkt//vS29v3zpbEwoa+zb2Cn5xj5uO4K5U=
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78 h1:ilQV1hzziu+LLM3zUTJ0trRztfwgjqKnBWNtSRkbmwM=
github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78/go.mod h1:aL8wCCfTfSfmXjznFBSZNN13rSJjlIOI1fUNAtF7rmI=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
github.com/yuin/gopher-lua v0.0.0-20171031051903-609c9cd26973/go.mod h1:aEV29XrmTYFr3CiRxZeGHpkvbwq+prZduBqMaascyCU=
go.mongodb.org/mongo-driver v1.17.4 h1:jUorfmVzljjr0FLzYQsGP8cgN/qzzxlY9Vh0C9KFXVw=
go.mongodb.org/mongo-driver v1.16.0 h1:tpRsfBJMROVHKpdGyc1BBEzzjDUWjItxbVSZ8Ls4BQ4=
go.mongodb.org/mongo-driver v1.17.4/go.mod h1:Hy04i7O2kC4RS06ZrhPRqj/u4DTYkFDAAccj+rVKqgQ=
go.mongodb.org/mongo-driver v1.16.0/go.mod h1:oB6AhJQvFQL4LEHyXi6aJzQJtBiTQHiAd83l0GdFaiw=
go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
go.mongodb.org/mongo-driver v1.16.1 h1:rIVLL3q0IHM39dvE+z2ulZLp9ENZKThVfuvN/IiN4l8=
go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
go.mongodb.org/mongo-driver v1.16.1/go.mod h1:oB6AhJQvFQL4LEHyXi6aJzQJtBiTQHiAd83l0GdFaiw=
go.opentelemetry.io/otel v1.39.0 h1:8yPrr/S0ND9QEfTfdP9V+SiwT4E0G7Y5MO7p85nis48=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
go.opentelemetry.io/otel v1.39.0/go.mod h1:kLlFTywNWrFyEdH0oj2xK0bFYZtHRYUdv1NklR/tgc8=
go.opentelemetry.io/otel/metric v1.39.0 h1:d1UzonvEZriVfpNKEVmHXbdf909uGTOQjA0HF0Ls5Q0=
go.opentelemetry.io/otel/metric v1.39.0/go.mod h1:jrZSWL33sD7bBxg1xjrqyDjnuzTUB0x1nBERXd7Ftcs=
go.opentelemetry.io/otel/trace v1.39.0 h1:2d2vfpEDmCJ5zVYz7ijaJdOF59xLomrvj7bjt6/qCJI=
go.opentelemetry.io/otel/trace v1.39.0/go.mod h1:88w4/PnZSazkGzz/w84VHpQafiU4EtqqlVdxWy+rNOA=
go.uber.org/dig v1.19.0 h1:BACLhebsYdpQ7IROQ1AGPjrXcP5dF80U3gKoFzbaq/4=
go.uber.org/dig v1.19.0/go.mod h1:Us0rSJiThwCv2GteUN0Q7OKvU7n5J4dxZ9JKUXozFdE=
go.uber.org/fx v1.24.0 h1:wE8mruvpg2kiiL1Vqd0CC+tr0/24XIB10Iwp2lLWzkg=
go.uber.org/fx v1.24.0/go.mod h1:AmDeGyS+ZARGKM4tlH4FY2Jr63VjbEDJHtqXTGP5hbo=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/mock v0.5.2 h1:LbtPTcP8A5k9WPXj54PPPbjcI4Y6lhyOZXn+VS7wNko=
go.uber.org/mock v0.5.2/go.mod h1:wLlUxC2vVTPTaE3UD51E0BGOAElKrILxhVSDYQLld5o=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/zap v1.27.1 h1:08RqriUEv8+ArZRYSTXy1LeBScaMpVSTBhCeaZYfMYc=
go.uber.org/zap v1.27.1/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
go.yaml.in/yaml/v2 v2.4.3 h1:6gvOSjQoTB3vt1l+CU+tSyi/HOjfOjRLJ4YwYZGwRO0=
go.yaml.in/yaml/v2 v2.4.3/go.mod h1:zSxWcmIDjOzPXpjlTTbAsKokqkDNAVtZO0WOMiT90s8=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191112222119-e1110fd1c708/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201221181555-eec23a3978ad/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.0.0-20200602180216-279210d13fed/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.25.0 h1:ypSNr+bnYL2YhwoMt2zPxHFmbAN1KZs/njMG3hxUp30=
golang.org/x/crypto v0.8.0/go.mod h1:mRqEX+O9/h5TFCrQhkgjo2yKi0yYA+9ecGkdQoHrywE=
golang.org/x/crypto v0.25.0/go.mod h1:T+wALwcMOSE0kXgUAnPAHqTLW+XHgcELELW8VaDgm/M=
golang.org/x/crypto v0.12.0/go.mod h1:NF0Gs7EO5K4qLn+Ylc+fih8BSTeIjAP05siRnAh98yw=
golang.org/x/crypto v0.26.0 h1:RrRspgV4mU+YwB4FYnuBoKsUapNIL5cohGAmSH3azsw=
golang.org/x/crypto v0.18.0/go.mod h1:R0j02AL6hcrfOiy9T4ZYp/rcWeMxM3L6QYxlOuEG1mg=
golang.org/x/crypto v0.26.0/go.mod h1:GY7jblb9wI+FOo5y8/S2oY4zWP07AkOJ4+jxCqdqn54=
golang.org/x/crypto v0.47.0 h1:V6e3FRj+n4dbpw86FJ8Fv7XVOql7TEwpHapKoMJ/GO8=
golang.org/x/crypto v0.47.0/go.mod h1:ff3Y9VzzKbwSSEzWqJsJVBnWmRwRSHt/6Op5n9bQc4A=
golang.org/x/exp v0.0.0-20260112195511-716be5621a96 h1:Z/6YuSHTLOHfNFdb8zVZomZr7cqNgTJvA8+Qz75D8gU=
golang.org/x/exp v0.0.0-20260112195511-716be5621a96/go.mod h1:nzimsREAkjBCIEFtHiYkrJyT+2uy9YZJB7H1k68CXZU=
golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/mod v0.32.0 h1:9F4d3PHLljb6x//jOyokMv3eX+YDeepZSEo3mFJy93c=
golang.org/x/mod v0.32.0/go.mod h1:SgipZ/3h2Ci89DlEtEXWUk/HteuRin+HHhN+WbNhguU=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.27.0 h1:5K3Njcw06/l2y9vpGCSdcxWOYHOUk3dVNGDXN+FvAys=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.27.0/go.mod h1:dDi0PyhWNoiUOrAS8uXv/vnScO4wnHQO4mj9fn/RytE=
golang.org/x/net v0.9.0/go.mod h1:d48xBJpPfHeWQsugry2m+kC02ZBRGRgulfHnEXEuWns=
golang.org/x/net v0.28.0 h1:a9JDOJc5GMUJ0+UDqmLT86WiEy7iWyIhz8gz8E4e5hE=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.28.0/go.mod h1:yqtgsTWOOnlGLG9GFRrK3++bGOUEkNBoHZc8MEDWPNg=
golang.org/x/net v0.14.0/go.mod h1:PpSgVXXLK0OxS0F31C1/tv6XNguvCrnXIDrFMspZIUI=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/net v0.20.0/go.mod h1:z8BVo6PvndSri0LbOE3hAn0apkU+1YvI6E70E9jsnvY=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/net v0.49.0 h1:eeHFmOGUTtaaPSGNmjBKpbng9MulQsJURQUAfUwY++o=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/net v0.49.0/go.mod h1:/ysNB2EvaqvesRkuLAyjI1ycPZlQHM3q01F02UY/MV8=
golang.org/x/oauth2 v0.32.0 h1:jsCblLleRMDrxMN29H3z/k1KliIvpLgCkE6R8FXXNgY=
golang.org/x/oauth2 v0.32.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.7.0 h1:YsImfSBoP9QPYL0xyKJPq0gcaJdG3rInoqxTWbfQu9M=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.8.0 h1:3NFvSEYkUoMifnESzZl15y791HH1qU2xm6eCJU5ZPXQ=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sync v0.8.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||||
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||||
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
|
||||||
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
|
||||||
golang.org/x/sys v0.0.0-20191115151921-52ab43148777/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
golang.org/x/sys v0.0.0-20191115151921-52ab43148777/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||||
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
golang.org/x/sys v0.0.0-20200602225109-6fdc65e7d980/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||||
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||||
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
|
||||||
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||||
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||||
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||||
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||||
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||||
|
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||||
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||||
|
golang.org/x/sys v0.7.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||||
|
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||||
|
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||||
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||||
golang.org/x/sys v0.22.0 h1:RI27ohtqKCnwULzJLqkv897zojh5/DwS/ENaMzUOaWI=
|
golang.org/x/sys v0.16.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
||||||
golang.org/x/sys v0.22.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
|
||||||
golang.org/x/sys v0.24.0 h1:Twjiwq9dn6R1fQcyiK+wQyHWfaz/BJB+YIpzU/Cv3Xg=
|
golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
|
||||||
golang.org/x/sys v0.24.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
|
golang.org/x/telemetry v0.0.0-20260109210033-bd525da824e2 h1:O1cMQHRfwNpDfDJerqRoE2oD+AFlyid87D40L/OkkJo=
|
||||||
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
|
golang.org/x/telemetry v0.0.0-20260109210033-bd525da824e2/go.mod h1:b7fPSJ0pKZ3ccUh8gnTONJxhn3c/PS6tyzQvyqw4iA8=
|
||||||
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
|
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
|
||||||
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
|
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
|
||||||
|
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
|
||||||
|
golang.org/x/term v0.7.0/go.mod h1:P32HKFT3hSsZrRxla30E9HqToFYAQPCMs/zFMBUFqPY=
|
||||||
|
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
|
||||||
|
golang.org/x/term v0.11.0/go.mod h1:zC9APTIj3jG3FdV/Ons+XE1riIZXG4aZ4GTHiPZJPIU=
|
||||||
|
golang.org/x/term v0.16.0/go.mod h1:yn7UURbUtPyrVJPGPq404EukNFxcm/foM+bV/bfcDsY=
|
||||||
|
golang.org/x/term v0.39.0 h1:RclSuaJf32jOqZz74CkPA9qFuVTX7vhLlpfj/IGWlqY=
|
||||||
|
golang.org/x/term v0.39.0/go.mod h1:yxzUCTP/U+FzoxfdKmLaA0RV1WgE0VY7hXBwKtY/4ww=
|
||||||
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||||
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||||
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
|
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
|
||||||
golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
|
golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
|
||||||
golang.org/x/text v0.16.0 h1:a94ExnEXNtEwYLGJSIUxnWoxoRz/ZcCsV63ROupILh4=
|
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
|
||||||
golang.org/x/text v0.16.0/go.mod h1:GhwF1Be+LQoKShO3cGOHzqOgRrGaYc9AvblQOmPVHnI=
|
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
|
||||||
golang.org/x/text v0.17.0 h1:XtiM5bkSOt+ewxlOE/aE/AKEHibwj/6gvWMl9Rsh0Qc=
|
golang.org/x/text v0.12.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
|
||||||
golang.org/x/text v0.17.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY=
|
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
|
||||||
|
golang.org/x/text v0.33.0 h1:B3njUFyqtHDUI5jMn1YIr5B0IE2U0qck04r6d4KPAxE=
|
||||||
|
golang.org/x/text v0.33.0/go.mod h1:LuMebE6+rBincTi9+xWTY8TztLzKHc/9C1uBCG27+q8=
|
||||||
|
golang.org/x/time v0.12.0 h1:ScB/8o8olJvc+CQPWrK3fPZNfh7qgwCrY0zJmoEQLSE=
|
||||||
|
golang.org/x/time v0.12.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
|
||||||
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||||
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
|
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
|
||||||
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||||
|
golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
|
||||||
|
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
|
||||||
|
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
|
||||||
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
|
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
|
||||||
|
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
|
||||||
|
golang.org/x/tools v0.41.0 h1:a9b8iMweWG+S0OBnlU36rzLp20z1Rp10w+IY2czHTQc=
|
||||||
|
golang.org/x/tools v0.41.0/go.mod h1:XSY6eDqxVNiYgezAVqqCeihT4j1U2CCsqvH3WhQpnlg=
|
||||||
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||||
|
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||||
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||||
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
|
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||||
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
|
gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4=
|
||||||
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
|
gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E=
|
||||||
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
|
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
|
||||||
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
|
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
|
||||||
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
|
|
||||||
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
|
|
||||||
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
|
|
||||||
google.golang.org/protobuf v1.28.1 h1:d0NfwRgPtno5B1Wa6L2DAG+KivqkdutMf1UhdNx175w=
|
|
||||||
google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
|
|
||||||
google.golang.org/protobuf v1.34.2 h1:6xV6lTsCfpGD21XK49h7MhtcApnLqkfYgPcdHftf6hg=
|
|
||||||
google.golang.org/protobuf v1.34.2/go.mod h1:qYOHts0dSfpeUzUFpOMr/WGzszTmLH+DiWniOlNbLDw=
|
|
||||||
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
|
|
||||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||||
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
|
||||||
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
|
||||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
|
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
|
||||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
|
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
|
||||||
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
|
gopkg.in/evanphx/json-patch.v4 v4.13.0 h1:czT3CmqEaQ1aanPc5SdlgQrrEIb8w/wwCvWWnfEbYzo=
|
||||||
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
|
gopkg.in/evanphx/json-patch.v4 v4.13.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M=
|
||||||
gopkg.in/mgo.v2 v2.0.0-20190816093944-a6b53ec6cb22/go.mod h1:yeKp02qBN3iKW1OzL3MGk2IdtZzaj7SFntXj72NppTA=
|
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
|
||||||
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
|
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
|
||||||
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
|
|
||||||
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
|
||||||
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||||
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||||
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||||
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
|
||||||
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
|
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
|
||||||
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
|
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
|
||||||
|
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||||
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
|
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
|
||||||
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||||
|
k8s.io/api v0.35.1 h1:0PO/1FhlK/EQNVK5+txc4FuhQibV25VLSdLMmGpDE/Q=
|
||||||
|
k8s.io/api v0.35.1/go.mod h1:28uR9xlXWml9eT0uaGo6y71xK86JBELShLy4wR1XtxM=
|
||||||
|
k8s.io/apimachinery v0.35.1 h1:yxO6gV555P1YV0SANtnTjXYfiivaTPvCTKX6w6qdDsU=
|
||||||
|
k8s.io/apimachinery v0.35.1/go.mod h1:jQCgFZFR1F4Ik7hvr2g84RTJSZegBc8yHgFWKn//hns=
|
||||||
|
k8s.io/client-go v0.35.1 h1:+eSfZHwuo/I19PaSxqumjqZ9l5XiTEKbIaJ+j1wLcLM=
|
||||||
|
k8s.io/client-go v0.35.1/go.mod h1:1p1KxDt3a0ruRfc/pG4qT/3oHmUj1AhSHEcxNSGg+OA=
|
||||||
|
k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk=
|
||||||
|
k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE=
|
||||||
|
k8s.io/kube-openapi v0.0.0-20250910181357-589584f1c912 h1:Y3gxNAuB0OBLImH611+UDZcmKS3g6CthxToOb37KgwE=
|
||||||
|
k8s.io/kube-openapi v0.0.0-20250910181357-589584f1c912/go.mod h1:kdmbQkyfwUagLfXIad1y2TdrjPFWp2Q89B3qkRwf/pQ=
|
||||||
|
k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 h1:SjGebBtkBqHFOli+05xYbK8YF1Dzkbzn+gDM4X9T4Ck=
|
||||||
|
k8s.io/utils v0.0.0-20251002143259-bc988d571ff4/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
|
||||||
|
lukechampine.com/blake3 v1.4.1 h1:I3Smz7gso8w4/TunLKec6K2fn+kyKtDxr/xcQEN84Wg=
|
||||||
|
lukechampine.com/blake3 v1.4.1/go.mod h1:QFosUxmjB8mnrWFSNwKmvxHpfY72bmD2tQ0kBMM3kwo=
|
||||||
|
sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 h1:IpInykpT6ceI+QxKBbEflcR5EXP7sU1kvOlxwZh5txg=
|
||||||
|
sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730/go.mod h1:mdzfpAEoE6DHQEN0uh9ZbOCuHbLK5wOm7dK4ctXE9Tg=
|
||||||
|
sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU=
|
||||||
|
sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
|
||||||
|
sigs.k8s.io/structured-merge-diff/v6 v6.3.0 h1:jTijUJbW353oVOd9oTlifJqOGEkUw2jB/fXCbTiQEco=
|
||||||
|
sigs.k8s.io/structured-merge-diff/v6 v6.3.0/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE=
|
||||||
|
sigs.k8s.io/yaml v1.6.0 h1:G8fkbMSAFqgEFgh4b1wmtzDnioxFCUgTZhlbj5P9QYs=
|
||||||
|
sigs.k8s.io/yaml v1.6.0/go.mod h1:796bPqUfzR/0jLAl6XjHl3Ck7MiyVv8dbTdyT3/pMf4=
|
||||||
|
main.go (68 changed lines)
@@ -1,44 +1,58 @@
 package main
 
 import (
-	"oc-discovery/models"
-	_ "oc-discovery/routers"
+	"context"
+	"fmt"
+	"log"
+	"oc-discovery/conf"
+	"oc-discovery/daemons/node"
+	"os"
+	"os/signal"
+	"strings"
+	"syscall"
 
 	oclib "cloud.o-forge.io/core/oc-lib"
-	"cloud.o-forge.io/core/oc-lib/logs"
-	"cloud.o-forge.io/core/oc-lib/tools"
-	beego "github.com/beego/beego/v2/server/web"
 )
 
 const appname = "oc-discovery"
 
 func main() {
 	// Init the oc-lib
-	oclib.Init(appname, "", "")
+	oclib.InitDaemon(appname)
 	// get the right config file
-	o := tools.GetConfLoader()
+	o := oclib.GetConfLoader(appname)
 
-	models.GetConfig().Port = o.GetIntDefault("port", 8080)
-	models.GetConfig().LokiUrl = o.GetStringDefault("lokiurl", "")
-	models.GetConfig().RedisUrl = o.GetStringDefault("redisurl", "localhost:6379")
-	models.GetConfig().RedisPassword = o.GetStringDefault("redispassword", "")
-	models.GetConfig().ZincUrl = o.GetStringDefault("zincurl", "http://localhost:4080")
-	models.GetConfig().ZincLogin = o.GetStringDefault("zinclogin", "admin")
-	models.GetConfig().ZincPassword = o.GetStringDefault("zincpassword", "admin")
-	models.GetConfig().IdentityFile = o.GetStringDefault("identityfile", "./identity.json")
-	models.GetConfig().Defaultpeers = o.GetStringDefault("defaultpeers", "./peers.json")
+	conf.GetConfig().Name = o.GetStringDefault("NAME", "opencloud-demo")
+	conf.GetConfig().Hostname = o.GetStringDefault("HOSTNAME", "127.0.0.1")
+	conf.GetConfig().PSKPath = o.GetStringDefault("PSK_PATH", "./psk/psk.key")
+	conf.GetConfig().NodeEndpointPort = o.GetInt64Default("NODE_ENDPOINT_PORT", 4001)
+	conf.GetConfig().IndexerAddresses = o.GetStringDefault("INDEXER_ADDRESSES", "")
+	conf.GetConfig().NativeIndexerAddresses = o.GetStringDefault("NATIVE_INDEXER_ADDRESSES", "")
+	conf.GetConfig().PeerIDS = o.GetStringDefault("PEER_IDS", "")
+	conf.GetConfig().NodeMode = o.GetStringDefault("NODE_MODE", "node")
+	conf.GetConfig().MinIndexer = o.GetIntDefault("MIN_INDEXER", 1)
+	conf.GetConfig().MaxIndexer = o.GetIntDefault("MAX_INDEXER", 5)
 
-	// set oc-lib logger
-	if models.GetConfig().LokiUrl != "" {
-		logs.CreateLogger(appname, models.GetConfig().LokiUrl)
-	}
-
-	// Normal beego init
-	beego.BConfig.AppName = appname
-	beego.BConfig.Listen.HTTPPort = models.GetConfig().Port
-	beego.BConfig.WebConfig.DirectoryIndex = true
-	beego.BConfig.WebConfig.StaticDir["/swagger"] = "swagger"
-
-	beego.Run()
+	ctx, stop := signal.NotifyContext(
+		context.Background(),
+		os.Interrupt,
+		syscall.SIGTERM,
+	)
+	defer stop()
+	fmt.Println(conf.GetConfig().NodeMode)
+	isNode := strings.Contains(conf.GetConfig().NodeMode, "node")
+	isIndexer := strings.Contains(conf.GetConfig().NodeMode, "indexer")
+	isNativeIndexer := strings.Contains(conf.GetConfig().NodeMode, "native-indexer")
+
+	if n, err := node.InitNode(isNode, isIndexer, isNativeIndexer); err != nil {
+		panic(err)
+	} else {
+		<-ctx.Done() // the only blocking point
+		log.Println("shutting down")
+		n.Close()
+	}
 }
@@ -1,25 +0,0 @@
-package models
-
-import "sync"
-
-type Config struct {
-	Port          int
-	LokiUrl       string
-	ZincUrl       string
-	ZincLogin     string
-	ZincPassword  string
-	RedisUrl      string
-	RedisPassword string
-	IdentityFile  string
-	Defaultpeers  string
-}
-
-var instance *Config
-var once sync.Once
-
-func GetConfig() *Config {
-	once.Do(func() {
-		instance = &Config{}
-	})
-	return instance
-}
models/event.go (new file, 56 lines)
@@ -0,0 +1,56 @@
+package models
+
+import (
+	"encoding/json"
+	"time"
+
+	"cloud.o-forge.io/core/oc-lib/tools"
+	"github.com/libp2p/go-libp2p/core/crypto"
+)
+
+type Event struct {
+	Type string `json:"type"`
+	From string `json:"from"` // peerID
+
+	User string
+
+	DataType  int64  `json:"datatype"`
+	Timestamp int64  `json:"ts"`
+	Payload   []byte `json:"payload"`
+	Signature []byte `json:"sig"`
+}
+
+func NewEvent(name string, from string, dt *tools.DataType, user string, payload []byte, priv crypto.PrivKey) *Event {
+	evt := &Event{
+		Type:      name,
+		From:      from,
+		User:      user,
+		Timestamp: time.Now().UTC().Unix(),
+		Payload:   payload,
+	}
+	if dt != nil {
+		evt.DataType = int64(dt.EnumIndex())
+	} else {
+		evt.DataType = -1
+	}
+
+	body, _ := json.Marshal(evt)
+	sig, _ := priv.Sign(body)
+	evt.Signature = sig
+	return evt
+}
+
+func (e *Event) RawEvent() *Event {
+	return &Event{
+		Type:      e.Type,
+		From:      e.From,
+		User:      e.User,
+		DataType:  e.DataType,
+		Timestamp: e.Timestamp,
+		Payload:   e.Payload,
+	}
+}
+
+func (e *Event) ToRawByte() ([]byte, error) {
+	return json.Marshal(e.RawEvent())
+}
@@ -1,44 +0,0 @@
-package models
-
-import (
-	"encoding/json"
-	"os"
-
-	"github.com/beego/beego/logs"
-)
-
-var (
-	Me Identity
-)
-
-func init() {
-	content, err := os.ReadFile("./identity.json")
-	if err != nil {
-		logs.Error("Error when opening file: ", err)
-	}
-	err = json.Unmarshal(content, &Me)
-	if err != nil {
-		logs.Error("Error during Unmarshal(): ", err)
-	}
-}
-
-type Identity struct {
-	Id               string `json:"id,omitempty"`
-	Name             string `json:"name,omitempty"`
-	PrivateKey       string `json:"private_key,omitempty"`
-	PublicAttributes Peer   `json:"public_attributes,omitempty"`
-}
-
-func GetIdentity() (u *Identity) {
-	return &Me
-}
-
-func UpdateIdentity(uu *Identity) error {
-	Me = *uu
-	jsonBytes, err := json.Marshal(uu)
-	if err != nil {
-		return err
-	}
-	os.WriteFile("./identity.json", jsonBytes, 0600)
-	return nil
-}
@@ -1,88 +0,0 @@
-package models
-
-import (
-	"encoding/json"
-	"io/ioutil"
-	"time"
-
-	"github.com/beego/beego/logs"
-)
-
-var (
-	Peers []Peer
-	Store Storage
-)
-
-type Peer struct {
-	PeerId      string   `json:"peer_id,omitempty"`
-	Name        string   `json:"name,omitempty"`
-	EntityName  string   `json:"entity_name,omitempty"`
-	EntityType  string   `json:"entity_type,omitempty"`
-	Description string   `json:"description,omitempty"`
-	Website     string   `json:"website,omitempty"`
-	Address     string   `json:"address,omitempty"`
-	Postcode    string   `json:"postcode,omitempty"`
-	City        string   `json:"city,omitempty"`
-	Country     string   `json:"country,omitempty"`
-	Phone       string   `json:"phone,omitempty"`
-	Email       string   `json:"email,omitempty"`
-	Activity    string   `json:"activity,omitempty"`
-	Keywords    []string `json:"keywords,omitempty"`
-	ApiUrl      string   `json:"api_url,omitempty"`
-	PublicKey   string   `json:"public_key,omitempty"`
-	// internal use
-	Score          int64     `json:"score,omitempty"`
-	LastSeenOnline time.Time `json:"last_seen_online,omitempty"`
-	ApiVersion     string    `json:"api_version,omitempty"`
-}
-
-func init() {
-	c := GetConfig()
-	Store = Storage{c.ZincUrl, c.ZincLogin, c.ZincPassword, c.RedisUrl, c.RedisPassword}
-	Store = Storage{"http://localhost:4080", "admin", "admin", "localhost:6379", ""}
-	//p := Peer{uuid.New().String(), 0, []string{"car", "highway", "images", "video"}, time.Now(), "1", "asf", ""}
-	// pa := []Peer{p}
-	// byteArray, err := json.Marshal(pa)
-	// if err != nil {
-	// 	log.Fatal(err)
-	// }
-	// ioutil.WriteFile("./peers.json", byteArray, 0644)
-	content, err := ioutil.ReadFile("./peers.json")
-	if err != nil {
-		logs.Error("Error when opening file: ", err)
-	}
-	err = json.Unmarshal(content, &Peers)
-	if err != nil {
-		logs.Error("Error during Unmarshal(): ", err)
-	}
-	Store.ImportData(LoadPeersJson("./peers.json"))
-}
-
-func AddPeers(peers []Peer) (status string) {
-	err := Store.ImportData(peers)
-	if err != nil {
-		logs.Error("Error during Unmarshal(): ", err)
-		return "error"
-	}
-	return "ok"
-}
-
-func FindPeers(query string) (peers []Peer, err error) {
-	result, err := Store.FindPeers(query)
-	if err != nil {
-		return nil, err
-	}
-	return result, nil
-}
-
-func GetPeer(uid string) (*Peer, error) {
-	return Store.GetPeer(uid)
-}
-
-func Delete(PeerId string) error {
-	err := Store.DeletePeer(PeerId)
-	if err != nil {
-		return err
-	}
-	return nil
-}
@@ -1,206 +0,0 @@
|
|||||||
package models
|
|
||||||
|
|
||||||
import (
|
|
||||||
"encoding/json"
|
|
||||||
"fmt"
|
|
||||||
"io"
|
|
||||||
"log"
|
|
||||||
"net/http"
|
|
||||||
"os"
|
|
||||||
"strings"
|
|
||||||
|
|
||||||
"github.com/beego/beego/logs"
|
|
||||||
"github.com/go-redis/redis"
|
|
||||||
"github.com/tidwall/gjson"
|
|
||||||
)
|
|
||||||
|
|
||||||
type Storage struct {
|
|
||||||
ZincUrl string
|
|
||||||
ZincLogin string
|
|
||||||
ZincPassword string
|
|
||||||
RedisUrl string
|
|
||||||
RedisPassword string
|
|
||||||
}
|
|
||||||
|
|
||||||
func LoadPeersJson(filename string) []Peer {
|
|
||||||
var peers []Peer
|
|
||||||
content, err := os.ReadFile("./peers.json")
|
|
||||||
if err != nil {
|
|
||||||
logs.Error("Error when opening file: ", err)
|
|
||||||
}
|
|
||||||
err = json.Unmarshal(content, &peers)
|
|
||||||
if err != nil {
|
|
||||||
logs.Error("Error during Unmarshal(): ", err)
|
|
||||||
}
|
|
||||||
return peers
|
|
||||||
}
|
|
||||||
|
|
||||||
func (s *Storage) ImportData(peers []Peer) error {
|
|
||||||
rdb := redis.NewClient(&redis.Options{
|
|
||||||
Addr: s.RedisUrl,
|
|
||||||
		Password: s.RedisPassword, // no password set
		DB:       0,               // use default DB
	})
	var indexedPeers []map[string]interface{}
	for _, p := range peers {
		// Creating the data block for indexing
		indexedPeer := make(map[string]interface{})
		indexedPeer["_id"] = p.PeerId
		indexedPeer["name"] = p.Name
		indexedPeer["keywords"] = p.Keywords
		indexedPeer["entityname"] = p.EntityName
		indexedPeer["entitytype"] = p.EntityType
		indexedPeer["activity"] = p.Activity
		indexedPeer["address"] = p.Address
		indexedPeer["postcode"] = p.Postcode
		indexedPeer["city"] = p.City
		indexedPeer["country"] = p.Country
		indexedPeer["description"] = p.Description
		indexedPeer["apiurl"] = p.ApiUrl
		indexedPeer["website"] = p.Website
		indexedPeers = append(indexedPeers, indexedPeer)
		// Adding the peer to Redis (fast retrieval and status updates)
		jsonp, err := json.Marshal(p)
		if err != nil {
			return err
		}
		err = rdb.Set("peer:"+p.PeerId, jsonp, 0).Err()
		if err != nil {
			return err
		}
	}
	bulk := map[string]interface{}{"index": "peers", "records": indexedPeers}
	raw, err := json.Marshal(bulk)
	if err != nil {
		return err
	}
	req, err := http.NewRequest("POST", s.ZincUrl+"/api/_bulkv2", strings.NewReader(string(raw)))
	if err != nil {
		return err
	}
	req.SetBasicAuth(s.ZincLogin, s.ZincPassword)
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("User-Agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	log.Println(resp.StatusCode)
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	fmt.Println(string(body))
	return nil
}
func (s *Storage) FindPeers(queryString string) ([]Peer, error) {
	var peers []Peer
	query := `{
		"search_type": "match",
		"query": {
			"term": "` + queryString + `",
			"start_time": "2020-06-02T14:28:31.894Z",
			"end_time": "2029-12-02T15:28:31.894Z"
		},
		"from": 0,
		"max_results": 100,
		"_source": []
	}`
	req, err := http.NewRequest("POST", s.ZincUrl+"/api/peers/_search", strings.NewReader(query))
	if err != nil {
		return nil, err
	}
	req.SetBasicAuth(s.ZincLogin, s.ZincPassword)
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("User-Agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	log.Println(resp.StatusCode)
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	value := gjson.Get(string(body), "hits.hits")
	rdb := redis.NewClient(&redis.Options{
		Addr:     s.RedisUrl,
		Password: s.RedisPassword, // no password set
		DB:       0,               // use default DB
	})
	for _, v := range value.Array() {
		// Hits carry only the document id; the full peer lives in Redis.
		peerBytes, err := rdb.Get("peer:" + v.Get("_id").Str).Bytes()
		if err != nil {
			logs.Error(err)
			continue
		}
		var p Peer
		if err = json.Unmarshal(peerBytes, &p); err != nil {
			return nil, err
		}
		peers = append(peers, p)
	}
	return peers, nil
}
func (s *Storage) GetPeer(uid string) (*Peer, error) {
	var peer Peer
	rdb := redis.NewClient(&redis.Options{
		Addr:     s.RedisUrl,
		Password: s.RedisPassword, // no password set
		DB:       0,               // use default DB
	})
	peerBytes, err := rdb.Get("peer:" + uid).Bytes()
	if err != nil {
		return nil, err
	}
	if err = json.Unmarshal(peerBytes, &peer); err != nil {
		return nil, err
	}
	return &peer, nil
}
func (s *Storage) DeletePeer(uid string) error {
	// Removing from Redis
	rdb := redis.NewClient(&redis.Options{
		Addr:     s.RedisUrl,
		Password: s.RedisPassword, // no password set
		DB:       0,               // use default DB
	})
	err := rdb.Unlink("peer:" + uid).Err()
	if err != nil {
		return err
	}
	// Removing from the index (note the slash before the document id)
	req, err := http.NewRequest("DELETE", s.ZincUrl+"/api/peers/_doc/"+uid, nil)
	if err != nil {
		return err
	}
	req.SetBasicAuth(s.ZincLogin, s.ZincPassword)
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("User-Agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	log.Println(resp.StatusCode)
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	fmt.Println(string(body))
	return nil
}
54  peers.json
@@ -1,54 +0,0 @@
[
    {
        "peer_id": "a50d3697-7ede-4fe5-a385-e9d01ebc1002",
        "name": "ASF",
        "keywords": [
            "car",
            "highway",
            "images",
            "video"
        ],
        "last_seen_online": "2023-03-07T11:57:13.378707853+01:00",
        "api_version": "1",
        "api_url": "http://127.0.0.1:49618/v1"
    },
    {
        "peer_id": "a50d3697-7ede-4fe5-a385-e9d01ebc1003",
        "name": "IT",
        "keywords": [
            "car",
            "highway",
            "images",
            "video"
        ],
        "last_seen_online": "2023-03-07T11:57:13.378707853+01:00",
        "api_version": "1",
        "api_url": "https://it.irtse.com/oc"
    },
    {
        "peer_id": "a50d3697-7ede-4fe5-a385-e9d01ebc1004",
        "name": "Centre de traitement des amendes",
        "keywords": [
            "car",
            "highway",
            "images",
            "video"
        ],
        "last_seen_online": "2023-03-07T11:57:13.378707853+01:00",
        "api_version": "1",
        "api_url": "https://impots.irtse.com/oc"
    },
    {
        "peer_id": "a50d3697-7ede-4fe5-a385-e9d01ebc1005",
        "name": "Douanes",
        "keywords": [
            "car",
            "highway",
            "images",
            "video"
        ],
        "last_seen_online": "2023-03-07T11:57:13.378707853+01:00",
        "api_version": "1",
        "api_url": "https://douanes.irtse.com/oc"
    }
]
3  pem/private1.pem  Normal file
@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIK2oBaOtGNchE09MBRtPd5oEOUcVUQG2ndym5wKExj7R
-----END PRIVATE KEY-----
3  pem/private10.pem  Normal file
@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIPc7D3Mgb1U2Ipyb/85hA4Ew7dC8zHDEuQYSjqzzRgLK
-----END PRIVATE KEY-----
3  pem/private2.pem  Normal file
@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIE58GDazCyF1jp796ivSmHiCepbkC8TpzliIaQ7eGEpu
-----END PRIVATE KEY-----
3  pem/private3.pem  Normal file
@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIAeX4O7ldwehRSnPkbzuE6csyo63vjvqAcNNujENOKUC
-----END PRIVATE KEY-----
3  pem/private4.pem  Normal file
@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIEkgqINXDLnxIJZs2LEK9O4vdsqk43dwbULGUE25AWuR
-----END PRIVATE KEY-----
3  pem/private5.pem  Normal file
@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIK2oBaOtGNchE09MBRtPd5oEOUcVUQG2ndym5wKExj7R
-----END PRIVATE KEY-----
3  pem/private6.pem  Normal file
@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIE58GDazCyF1jp796ivSmHiCepbkC8TpzliIaQ7eGEpu
-----END PRIVATE KEY-----
3  pem/private7.pem  Normal file
@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIAeX4O7ldwehRSnPkbzuE6csyo63vjvqAcNNujENOKUC
-----END PRIVATE KEY-----
3  pem/private8.pem  Normal file
@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIEkgqINXDLnxIJZs2LEK9O4vdsqk43dwbULGUE25AWuR
-----END PRIVATE KEY-----
3  pem/private9.pem  Normal file
@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIBcflxGlZYyUVJoExC94rHZbIyKMwZ+Oh7EDkb0qUlxd
-----END PRIVATE KEY-----
3  pem/public1.pem  Normal file
@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEAZ2nLJBL8a5opfa8nFeVj0SZToW8pl4+zgcSUkeZFRO4=
-----END PUBLIC KEY-----
3  pem/public10.pem  Normal file
@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEAEomuEQGmGsYVw35C6DB5tfY8LI8jm359ceAxRX8eQ0o=
-----END PUBLIC KEY-----
3  pem/public2.pem  Normal file
@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEAIQVeSGwsjPjyepPTnzzYqVxIxviSEjZXU7C7zuNTui4=
-----END PUBLIC KEY-----
3  pem/public3.pem  Normal file
@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEAG95Ettl3jTi41HM8le1A9WDmOEq0ANEqpLF7zTZrfXA=
-----END PUBLIC KEY-----
3  pem/public4.pem  Normal file
@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEA/ymOIb0sJ0qCWrf3mKz7ACCvsMXLog/EK533JfNXZTM=
-----END PUBLIC KEY-----
3  pem/public5.pem  Normal file
@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEAZ2nLJBL8a5opfa8nFeVj0SZToW8pl4+zgcSUkeZFRO4=
-----END PUBLIC KEY-----
Some files were not shown because too many files have changed in this diff.