
FR - Allow for a lab path to be a URL

networkop opened this issue 4 years ago • 6 comments

Some CLI tools, like kubectl, allow a YAML file path to be a URL. For containerlab, this would make it possible for users to deploy a full lab without having to download any of the artifacts first. For example:

containerlab -t https://raw.githubusercontent.com/hellt/clabs/main/labs/cvx03/lab-start.clab.yml deploy

would download the lab and all of the associated artifacts into a temp folder and clean it up before the deploy function exits.
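One way to sketch the first step of that flow: map the topology URL to a file name inside a temp directory, keeping the original base name so relative references in the topo still line up. This is illustrative only; the function name `topoDest` is hypothetical, not containerlab's actual implementation.

```go
package main

import (
	"fmt"
	"net/url"
	"path"
)

// topoDest maps a URL-based topology path to a local file name inside
// tmpDir. The original base name is preserved so that paths relative to
// the topo file keep working after download. (Hypothetical helper.)
func topoDest(tmpDir, rawURL string) (string, error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", err
	}
	return path.Join(tmpDir, path.Base(u.Path)), nil
}

func main() {
	dst, err := topoDest("/tmp/clab-XXXX",
		"https://raw.githubusercontent.com/hellt/clabs/main/labs/cvx03/lab-start.clab.yml")
	if err != nil {
		panic(err)
	}
	fmt.Println(dst) // /tmp/clab-XXXX/lab-start.clab.yml
}
```

The actual download (e.g. via `http.Get`) and cleanup would wrap around this; only the name derivation is shown here.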

networkop · Jul 24 '21 13:07

Was proposed in #344

But we need to keep the downloaded topo file, as it is needed to destroy the lab and to run other commands like `save`.

Unfortunately, today even the `destroy` command requires the user to specify the topo file name. Maybe that is overkill, since we might be able to delete a lab simply by removing the containers that carry certain labels.

hellt · Jul 24 '21 14:07

ok, so based on what I've read, the issues are:

  1. sharing the configs together with the lab -- if the paths are relative, these can also be looked up via URLs, right?
  2. doing destroy -- if we stick to the kubectl UX, this can also be a URL, or, optionally, we can add a flag to delete a lab based on its name.
  3. doing save -- I understand you still create a lab directory unconditionally, so maybe instead of deleting what we've downloaded we just put it in this dir?

Ultimately, I think it's very easy to introduce a `--name`/`-n` argument that can be used for any action after deploy. It could even be a positional argument. So here's the UX I have in mind, inspired by kubectl:

$ containerlab -t http://mylab.yaml deploy
$ containerlab list
   [DEBUG] Listing all local labs # Here we can iterate over all runtimes and look for clab's labels
   local-lab-1
   url-based-lab-2
$ containerlab save -n url-based-lab-2
$ containerlab delete -n url-based-lab-2
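The name-based lookup in the session above could work by filtering the runtime's container list on the `containerlab` label that clab attaches to each node. A minimal sketch, with the container struct and `byLabName` helper invented for illustration (real code would query Docker or another runtime):

```go
package main

import "fmt"

// container mimics the minimal metadata a runtime returns for each
// container; the real types live in the runtime client libraries.
type container struct {
	ID     string
	Labels map[string]string
}

// byLabName returns the containers whose "containerlab" label matches the
// given lab name. (Hypothetical helper for a name-based save/delete.)
func byLabName(all []container, name string) []container {
	var out []container
	for _, c := range all {
		if c.Labels["containerlab"] == name {
			out = append(out, c)
		}
	}
	return out
}

func main() {
	all := []container{
		{ID: "aaa", Labels: map[string]string{"containerlab": "local-lab-1"}},
		{ID: "bbb", Labels: map[string]string{"containerlab": "url-based-lab-2"}},
	}
	for _, c := range byLabName(all, "url-based-lab-2") {
		fmt.Println(c.ID) // bbb
	}
}
```

With this lookup in place, `delete -n` would not need the topo file at all: resolving the name to containers is enough to tear them down.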

networkop · Jul 24 '21 15:07

the name argument was (and likely still is) there, but we (un)naturally deviated from it, since we used a topology-based approach for most of our commands.

If I think about URL-based commands, it seems to me this only makes sense when the lab is self-contained (i.e. no external bindings), or if we clone/download the entire lab repo via https.

Self-contained labs are quite common, I'd say: any basic lab that doesn't use mounted configs qualifies. And I think it adds value to be able to deploy them without asking users to clone a repo just to get the topo file.

More complicated labs, with binds or references to license files, can (as you mentioned) be looked up via http as well, I agree. But I am not sure the complexity this adds (creating a temp dir and putting there all the files that are referenced in the topo and not available locally) beats the flow of cloning a repo.
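Looking up the referenced files via http mostly amounts to resolving each relative path in the topo against the topo file's own URL. A sketch of that resolution using the standard library's `(*url.URL).ResolveReference`; the helper name `resolveArtifact` is hypothetical:

```go
package main

import (
	"fmt"
	"net/url"
)

// resolveArtifact turns a relative path found in a topology file (e.g. a
// bind mount source like "configs/leaf01.cfg") into an absolute URL next
// to the topo file itself. (Sketch; containerlab's real logic may differ.)
func resolveArtifact(topoURL, rel string) (string, error) {
	base, err := url.Parse(topoURL)
	if err != nil {
		return "", err
	}
	ref, err := url.Parse(rel)
	if err != nil {
		return "", err
	}
	return base.ResolveReference(ref).String(), nil
}

func main() {
	u, err := resolveArtifact(
		"https://raw.githubusercontent.com/hellt/clabs/main/labs/cvx03/lab-start.clab.yml",
		"configs/leaf01.cfg",
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(u)
	// https://raw.githubusercontent.com/hellt/clabs/main/labs/cvx03/configs/leaf01.cfg
}
```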

hellt · Jul 24 '21 17:07

I don't think the choice should be between clone and download. Some repos may have more than one lab, like this one, so there's really no point in cloning the entire repo for that. Think of it like helm: there may be a repo with multiple apps and you only install the ones you need (although helm does package each of them up in a single tarball).

networkop · Jul 25 '21 15:07

@networkop are you working on this?

karimra · Aug 19 '21 13:08

@karimra nope, no plans as of yet

networkop · Aug 19 '21 21:08

With https://github.com/srl-labs/containerlab/pull/1704 and https://github.com/srl-labs/containerlab/issues/1654 clab will clone the repo referenced by the URL.

hellt · Nov 12 '23 21:11