Kube test network : Deploy the OSS Fabric Console
The feature/fabric-operations-console branch has some progress on an integration with the OSS Fabric Console. This work is most of the way there, but it needs a final push to see it out the door. Three factors complicating a landing into the Kube test network are:
- OSS Console still has a dependency on the system channel. The test networks, on the other hand, rely on the Channel Administration SDKs and do not bootstrap the network with a system channel / genesis block. The branch above runs through the integration by downgrading the network to bootstrap from a system channel, which is ... not ideal.
- The Console requires a sequence of association actions to be performed immediately upon importing the test network into the GUI. While this is manageable for the "test-network" (1x orderer + 2x peers), the typing burden necessary to associate identities for 3x orderers and 4x peers on the test-network-k8s is ... not ideal.
- Fabric Console makes extensive use of a Kubernetes Ingress, made available at the `CONSOLE_URL`, tunneling through an Nginx controller to route traffic into the cluster from the web GUI. For network routes coming "into" the console from the GUI this works OK, as traffic can be directed via an Ingress host alias to the appropriate service. However, in some cases the network traffic between pods is redirected out to the host network and back into the Fabric network over the ingress controller / Nginx. For systems that are dual-homed or have multiple NICs this is not a problem, as the ingress controller can be bound to the external NIC and resolved via DNS. For single-NIC systems, or in virtual Kubernetes environments such as KIND, it's virtually impossible to find a stable technique to identify a single DNS host entry that successfully resolves to the port binding the ingress. The test branch relies on either hacking system DNS with `dnsmasq`, or on clever use of wildcard DNS domains (e.g. my-server.nip.io, *.my-net.vcap.me, etc.; a sketch of this approach follows below). While this works, it doesn't work "well enough" to provide a simple route forward on all environments. This is a "non-issue" in the target k8s environments, largely based on OCP `Route` resources exposed to a NIC / DNS on the Internet.
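For the wildcard DNS option, a minimal sketch might look like the following, assuming a single-NIC Linux host. The `TEST_NETWORK_INGRESS_DOMAIN` variable name is purely illustrative; only `CONSOLE_URL` comes from the branch above.

```shell
# Illustrative only: derive a wildcard DNS domain from the host's primary IP
# using nip.io, so that *.<ip>.nip.io resolves back to this machine's NIC.
# TEST_NETWORK_INGRESS_DOMAIN is a hypothetical variable name; CONSOLE_URL is
# the value referenced in the text above.
HOST_IP=$(hostname -I | awk '{print $1}')   # Linux; macOS needs a different lookup
export TEST_NETWORK_INGRESS_DOMAIN="${HOST_IP}.nip.io"
export CONSOLE_URL="https://console.${TEST_NETWORK_INGRESS_DOMAIN}"

echo "Console ingress host: console.${TEST_NETWORK_INGRESS_DOMAIN}"
echo "CONSOLE_URL:          ${CONSOLE_URL}"
```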
Some recent developments make an integration with the OSS console viable. The target user interaction is:
- bring up a local test network and console running on a dev k8s:
  - `./network up`
  - `./network channel create`
  - `./network console up`
- user logs in to https://user:[email protected]/console (a quick reachability check is sketched after this list)
- user imports a zip archive generated by "console up"
- user can immediately work with the test network - create channels, chaincode, query blocks, etc.
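Before logging in through the browser, a simple reachability check of the console ingress can confirm the route is resolving. This is illustrative only; `CONSOLE_URL` here is whatever value "console up" reports (or the export sketched earlier).

```shell
# Illustrative sanity check: confirm the console ingress answers over TLS.
# -k skips certificate verification, since the test network typically uses
# self-signed ingress certificates.
curl -k -s -o /dev/null -w "console responded with HTTP %{http_code}\n" "${CONSOLE_URL}"
```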
Regarding the "rough edges" above:
- The network bootstrap script can support an optional code path to either bootstrap the orderers using a system channel, or use the channel admin APIs. When OSS Console supports the migration from a system channel, this route can be removed. It's fine if the user has to set something in the env, e.g. `TEST_NETWORK_USE_SYSTEM_CHANNEL=true` (see the sketch after this list).
- The console bulk import zip files have the ability to declare associations at import time. Track down the required yaml / etc. and include these in the import archive.
- With the option to run the kube test network on Rancher Desktop (k3s / containerd), the networking stack does NOT rely on the odd hacks embedded into KIND that are required for access to a TCP port bound to the host OS/NIC at `host.docker.internal`. In other words, k3s and Rancher make it really easy to set a stable value for the `CONSOLE_URL`, routing traffic from within the cluster out to the ingress controller.
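A sketch of the opt-in bootstrap toggle from the first item above, assuming the env var name suggested there and the existing `./network` entry points (the "console" subcommand is the proposed addition):

```shell
# Illustrative flow: opt in to the system channel bootstrap so the console
# integration can be exercised, then bring up the network and console as usual.
# TEST_NETWORK_USE_SYSTEM_CHANNEL is the toggle name suggested above.
export TEST_NETWORK_USE_SYSTEM_CHANNEL=true

./network up
./network channel create
./network console up
```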
It's not a huge push, but it would be a nice way to highlight the ease of use and benefits of working with the OSS Console, all within the cozy confines of a local, single-node development workstation. Couple this with Gateway and Chaincode-as-a-Service running locally within an IDE/debugger, and it's a short hop to a production blockchain.