
Universal cloud-native microservice governance control plane

Results 37 opensergo-control-plane issues

### Does this pull request fix one issue?

https://github.com/opensergo/opensergo-control-plane/issues/51

### Describe how you did it

Add a gRPC send interval.

## Issue Description

Type: *bug report*

### Describe what happened

An error occurred when CRD changes were received:

```
{"timestamp":"2023-01-17 19:44:31.31170","caller":"crd_watcher.go:171","logLevel":"INFO","msg":"controller.fault-tolerance.opensergo.io/v1alpha1/ConcurrencyLimitStrategy OpenSergo CRD received","crd":{"kind":"ConcurrencyLimitStrategy","apiVersion":"fault-tolerance.opensergo.io/v1alpha1","metadata":{"name":"concurrency-limit-foo","namespace":"default","uid":"0b79c7c4-7f6a-4677-a636-2210a9608a13","resourceVersion":"7910317","generation":1,"creationTimestamp":"2023-01-17T11:29:55Z","labels":{"app":"foo-app"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"fault-tolerance.opensergo.io/v1alpha1\",\"kind\":\"ConcurrencyLimitStrategy\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"foo-app\"},\"name\":\"concurrency-limit-foo\",\"namespace\":\"default\"},\"spec\":{\"limitMode\":\"Local\",\"maxConcurrency\":8}}\n"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"fault-tolerance.opensergo.io/v1alpha1","time":"2023-01-17T11:29:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:app":{}}},"f:spec":{".":{},"f:limitMode":{},"f:maxConcurrency":{}}}}]},"spec":{"maxConcurrency":8,"limitMode":"Local"},"status":{}},"crdNamespace":"default","crdName":"concurrency-limit-foo","kind":"fault-tolerance.opensergo.io/v1alpha1/ConcurrencyLimitStrategy"}
{"timestamp":"2023-01-17 19:44:31.31170","caller":"crd_watcher.go:197","logLevel":"ERROR","msg":"controller.fault-tolerance.opensergo.io/v1alpha1/RateLimitStrategy Failed to send rules","kind":"fault-tolerance.opensergo.io/v1alpha1/RateLimitStrategy","crdNamespace":"default","crdName":"rate-limit-foo","kind":"fault-tolerance.opensergo.io/v1alpha1/RateLimitStrategy"}
rpc error:...
```

kind/bug
good first issue

Add an extension mechanism for the CRDMetadata registry, so that custom CRDMetadata can be registered from outside the control plane pkg and other control planes can more easily reuse and extend the OpenSergo control-plane module. Related code: https://github.com/opensergo/opensergo-control-plane/blob/91ef7e92e2745c646f294f248c5c4249ef488af2/pkg/controller/crd_meta.go

good first issue
kind/feature

### Background

As a control plane, opensergo-control-plane has many scenarios that call for extensibility, such as parsing, converting, or applying certain custom CRDs. Go itself has limited built-in support for extension mechanisms, so I have put together a brief survey of current designs and practices.

### Overview of Go plugin mechanisms

#### The Go `plugin` package

In this approach, plugins are built as compiled `.so` files which the host program loads via the `plugin` package; under the hood it relies on cgo calling the standard Unix dynamic-loading interfaces. There is no extra runtime overhead, but plugins are subject to strict compatibility constraints.

Example in practice: [mosn/mosn](https://github.com/mosn/mosn)

#### Communication-based multi-process approach

Plugins run as separate processes, and the host program invokes them over an IPC channel. Most current practice wraps the [hashicorp/go-plugin](https://github.com/hashicorp/go-plugin) framework and communicates via RPC. This may incur some communication overhead, and managing the plugins and their processes adds complexity.

Examples in practice: [hashicorp/terraform](https://github.com/hashicorp/terraform), the backendplugin in [grafana/grafana](https://github.com/grafana/grafana), [mosn/mosn](https://github.com/mosn/mosn), etc.

#### Dynamic interpretation approach

Plugins are written in Go, Lua, JS, etc. and interpreted by the host program. Several frameworks support different languages, for example [traefik/yaegi](https://github.com/traefik/yaegi), [d5/tengo](https://github.com/d5/tengo), [yuin/gopher-lua](https://github.com/yuin/gopher-lua), and [robertkrimen/otto](https://github.com/robertkrimen/otto). Among these, [traefik/yaegi](https://github.com/traefik/yaegi) is a Go interpreter with full support for the Go specification.

Example in practice: [traefik/traefik](https://github.com/traefik/traefik)

#### WebAssembly approach

Plugins are compiled to WebAssembly and executed inside the Go host program via the [WasmEdge/WasmEdge](https://github.com/WasmEdge/WasmEdge) WebAssembly runtime.

### On a plugin mechanism for the control plane...

kind/feature
kind/discussion
area/extension

Here is the concrete situation. Our backend currently consists of C++, Go, and Java. It started with C++ only; the C++ team built an in-house RPC framework, which Java and Go later adopted. An in-house gateway service was then built on top, with nginx as the entry gateway, so the overall structure is:

nginx -> in-house gateway -> microservices

Static resources are also served by nginx.

We now want to use OpenSergo for gray (canary) release, covering both frontend static resources and backend services. Given the current setup, we have considered the following options, each with its own problems:

1. Split the existing nginx into gray and production instances, and add another gateway layer above nginx that natively supports OpenSergo. The new gateway would route traffic to the gray or production nginx; nginx would stamp the gray tag, and the in-house gateway would forward requests to the gray or production microservices.
We considered this because we already built one gray-release solution: the registry distinguishes gray nodes from production nodes, but the way gray rules are configured is unreasonable (rules must be written in config files, operation is complex, only one rule type is supported), and for various reasons it is somewhat intrusive to business code.
We have not found documentation on wiring nginx directly into OpenSergo, and adopting another gateway such as Spring Cloud would require extra research and integration cost, although overall this option looks like the least development work.

2. Building on option 1, fully rework the in-house gateway and the RPC layer to support the OpenSergo protocol, and provide nginx with scripts that identify traffic according to OpenSergo rule configuration.
The benefit is that rules become more flexible to configure and take effect in real time; the drawback is that the rework is too large.

3. Drop OpenSergo and rebuild a feature for configuring and validating gray rules, then have nginx and the gateway integrate with it.
The benefit is controllable cost, and it is easy to adopt for the many projects deployed with docker compose, unlike OpenSergo, which currently depends heavily on Kubernetes; the drawback is yet another reinvented wheel that is harder to maintain.

These are the three options we have come up with. We would like to discuss which is more reasonable, or whether there are other approaches.

Suppose the production assets live in /data/webapps and the gray assets in /data/webapps_gray; in that case, how should this be done with nginx?
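One possible approach is an nginx `map` that switches the document root based on a gray tag carried by the request. This is a hypothetical sketch: the `X-Gray` header name is an assumption, and in practice you would key the `map` on whatever header or cookie your gateway stamps on gray traffic:

```nginx
# Route gray-tagged requests to /data/webapps_gray, all others to
# /data/webapps. $http_x_gray is nginx's variable for the X-Gray header;
# the header name itself is an assumption, not part of OpenSergo.
map $http_x_gray $static_root {
    default /data/webapps;
    "true"  /data/webapps_gray;
}

server {
    listen 80;

    location / {
        root $static_root;
        try_files $uri $uri/ /index.html;
    }
}
```

Because `map` is evaluated per request, no reload is needed when the gateway starts or stops tagging a given user as gray; only changes to the mapping rules themselves require a reload.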

## Issue Description

Type: *feature request*

### Describe what feature you want

There are currently several ways to support the Istio ecosystem:

* Co-work with Istio: the OpenSergo control plane supports a *limited* conversion of OpenSergo CRDs into Istio CRDs, covering a restricted set of governance capabilities (such as basic label routing). https://github.com/opensergo/opensergo-control-plane/issues/37
* Replacement for...

kind/feature
kind/discussion

### Describe what this PR does / why we need it

Update the args of golangci-lint in ci.yml according to the official guides.

### Does this pull request fix one issue?

###...
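For reference, a typical golangci-lint step in a GitHub Actions workflow looks like the following, based on the official golangci-lint-action README. This is a generic sketch, not the actual contents of this PR; the version pins and args are placeholders:

```yaml
# Hypothetical excerpt of .github/workflows/ci.yml
- name: golangci-lint
  uses: golangci/golangci-lint-action@v3
  with:
    # Pin the linter version so CI results are reproducible.
    version: v1.50
    # Extra CLI args; large repos often need a longer timeout.
    args: --timeout=5m
```

Pinning `version` is the important part: an unpinned linter can start failing CI when a new release adds checks, independently of any code change.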

### Describe what this PR does / why we need it

Feature: add event governance.

### Does this pull request fix one issue?

1. https://github.com/opensergo/opensergo-specification/issues/62

### Describe how you did...

kind/feature

Does OpenSergo plan to support non-Kubernetes environments? Many deployments are still not on Kubernetes.

kind/feature