Installation issue feedback - archlinux & debian packages
Describe the bug
Two main points:
https://www.cogentcore.org/core/setup/install
The <code> </code> blocks are not directly highlightable (with the Brave browser on amd64 Linux), which makes it basically impossible to copy only the installation command itself for cogentcore.
The only easy way to do it is to highlight the whole line, press Ctrl+C, paste into an editor, and then copy the command to the terminal. Or manually type the whole thing.
It would be preferable if the installation command (or code blocks generally) supported single-click copy.
Additionally, the setup documentation assumes that you have GOPATH set correctly and that your GOBIN was already added to your PATH; this should be made explicit rather than assumed, as it might confuse people.
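For instance, a profile snippet along these lines would make explicit what the docs currently assume (the `$HOME/go/bin` default is an assumption based on Go's standard install location; adjust if yours differs):

```shell
# Hypothetical snippet for ~/.bashrc or ~/.profile. $HOME/go/bin is Go's
# default binary install location; adjust GOBIN_DIR if yours differs.
GOBIN_DIR="${GOBIN:-$HOME/go/bin}"
case ":$PATH:" in
  *":$GOBIN_DIR:"*) ;;                 # already on PATH, nothing to do
  *) export PATH="$PATH:$GOBIN_DIR" ;; # append once
esac
```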
core setup may be easy enough to remember or run, but it actually did not work for me on archlinux with my specific hardware, to install all the correct dependencies (#1223).
I recommend that the cogentcore/core runtime dependencies (installed by core setup) be listed somewhere in the installation documentation - even if this information is not useful to most people.
If cogentcore were packaged and installed as a package by the package manager on Linux, installing its runtime dependencies should be handled by the package manager. And its deps should be installed as dependencies of cogentcore and not explicitly.
However, I assume the exact dependencies vary based on the hardware and OS, and core setup is supposed to detect this. That might be challenging for the package manager to handle automatically. It might work best with different flavors (so to speak) of the package, intended for specific hardware and including the specific runtime deps used by cogentcore for that hardware.
Still, I suggest informing the user of the dependencies for different hardware and operating systems, even if this is on a different page of the documentation from the setup/installation page.
How to reproduce
- Attempt to copy the installation command from https://www.cogentcore.org/core/setup/install
- N/A
Example code
No response
Relevant output
No response
Platform
Web
Thank you for the feedback. We are aware of both of those issues and are working on fixing them. The first issue will be resolved by #1051, and we may restructure the install documentation in the meantime such that it is easier to copy. For the second point, I am currently working on #1292 to fix that after receiving similar feedback in #1275 and #1260. I will also add documentation regarding Vulkan on other distros such as Arch Linux as part of that.
Is it possible to statically compile core itself? If I'm not mistaken it uses CGO, so static compilation should actually make it more distributable.
Is it possible to create statically compiled (with musl) apps built with core? Non-web apps, of course.
Are you asking about using Cogent Core as a static library / building static libraries with core? That might be theoretically possible, but I'm not sure that it would make sense relative to direct linking using Go. We are planning to improve cross-compiling support soon, but if there is any other reason using Cogent Core as a static library would be helpful, I can consider it.
I'm talking about compiling core on one machine, and using it on a different machine.
If, for instance, core is compiled on Arch Linux, the gcc runtime dependency is likely to be an incompatible version when run on Debian-based distros.
However, after becoming more familiar with the runtime dependency situation, it's clear that gcc runtime dep is not the only issue with doing this, because there are additional runtime dependencies for core which differ based on hardware.
I'm not sure how this is handled internally. However, as a thought experiment, suppose that core was statically compiled with musl on one Linux machine, and then the compiled binary was moved to a different machine. And suppose the first machine had an AMD GPU and the second machine had NVIDIA or otherwise different hardware, or even just a built-in graphics processor.
Would the (static) compilation of the core binary on the one machine work when the binary is copied to the other machine - assuming the appropriate additional deps were installed, or is the compilation (of core itself) basically specific to the (graphics) hardware of the machine on which it was compiled?
If it would work, then I think it should be possible to have a binary release of this software ; and even if you didn't want to have a binary release of core (because of the rapid pace of development or to avoid supplying an outdated binary which might cause users to report issues that have already been solved) it would still be possible to have it packaged for different distros.
I say possible, but it would likely still present a challenge in terms of the runtime deps differing based on the hardware. I'm very familiar with Linux packaging, and there are two possible methods for handling this, using either:
- optional dependencies
- hardware-specific packages
...
For instance, on Arch Linux it's possible to statically compile a Go program with musl if you install the musl and kernel-headers-musl packages and then use CC=musl-gcc (or export CC=musl-gcc) with the go build (or go install) command.
This is typically done either as a consideration for making a binary executable more distributable or because of security concerns about the potential for arbitrary code execution ; i.e. if the host system where the binary runs might have compromised libraries.
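Concretely, the steps described above might look something like this (the extra linker flags for fully static CGO linking are my assumption, not something from the thread):

```shell
# Sketch of a static musl build on Arch Linux. Package names are as
# described above; the -extldflags usage is an assumption for fully
# static linking of CGO binaries.
sudo pacman -S --needed musl kernel-headers-musl

CC=musl-gcc CGO_ENABLED=1 go build \
  -ldflags '-linkmode external -extldflags "-static"' -o app .

file app   # a fully static binary is typically reported as "statically linked"
```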
...
In terms of statically compiling applications with core, core build would basically need to observe the CC environment variable.
And sort of on that note, how might it be possible (or is it possible) to set ldflags when doing core build? For setting variables at compile time, or to set -s -w ldflags; things like that.
Please correct me if I am misunderstanding which core you are referring to in different contexts. The core command line tool ("core tool") itself does not use CGO and thus can be cross-compiled for any platform in the standard Go way using GOOS and GOARCH. The dependencies that core setup installs are necessary for building Cogent Core apps for native platforms, but they are not in any way attached to the core tool binary itself. Apps built with Cogent Core rely on CGO (except on web), but the core tool itself is mainly just a wrapper around other command-line tools (which is why go build also works fine for building apps on desktop; the core tool does a lot more for mobile and web platforms, so you need it there, but there are still no CGO dependencies).
I believe that the dependencies that core setup installs are largely compile-time dependencies, although some of the Vulkan drivers may be needed at runtime (for actual Cogent Core apps, not for the core tool itself).
I do not have much knowledge of the specifics of dependencies in Linux packaging, but it seems like it might be possible to have a package that contains binary versions of the core tool for each architecture (cross-compiled using the Go tool) and a list of platform-specific dependencies to be installed by the package manager for each distro and hardware combination.
I am currently in the middle of a major restructuring of the documentation (#1321), in which I am updating the installation documentation based on your feedback. As such, there will be an easily accessible list of the requisite dependencies for building Cogent Core apps, which could be used for making a Linux package. I am not necessarily interested in doing so right now given ongoing development as you mentioned, but the core tool should be relatively stable within a few months, so it is definitely a future possibility.
For cross-compiling actual Cogent Core apps, we are planning to implement that at some point relatively soon (#1107), likely once we have finished some other major features.
The core tool currently sets the linker flags automatically (it does -s -w by default), but we can certainly add an option to add additional custom linker flags (update: I filed and fixed #1345, so you can do this now with -ldflags).
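For reference, a hedged sketch of what setting linker flags looks like in practice (the -X target assumes the app declares a package-level `version` string variable in main; the core invocation mirrors the go one based on the fix described above, and its exact syntax may differ):

```shell
# Sketch: strip symbols and set a variable at compile time.
# Assumes the app declares, at package level in main:
#   var version = "dev"
go build -ldflags "-s -w -X main.version=v0.1.0" .

# With the core tool (flag per #1345; exact syntax may differ;
# -s -w are already applied by default):
core build -ldflags "-X main.version=v0.1.0"
```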
I will add all of that information to the documentation as well. Please let me know if you have any further questions.
OK - I misunderstood where CGO was used. So it's unnecessary to compile the core tool itself statically if it does not use CGO.
With your consent/approval, and with the list of hardware-specific dependencies, I would be happy to maintain a PKGBUILD in the Arch User Repository (AUR) for the core tool provided by cogentcore/core, which would be more convenient for Arch Linux users when installing and updating, because it would be installed via the system package manager. So installing and updating core would be basically the same process as for any other software on the system.
For example, with yay:
yay -S core
Would install the core tool from cogentcore/core (after I create the repo in AUR and add the PKGBUILD there)
If the AUR repo existed, the manual process (i.e. not using yay) for creating and installing the package would be something like this
git clone https://aur.archlinux.org/core
cd core
makepkg -sif
As well (in the spirit of coding once), I have a method for creating .deb packages using Arch Linux tools (a PKGBUILD and makepkg), and I make it a habit to include the PKGBUILD which can create the .deb package - for whatever it's worth.
[...] which could be used for making a Linux package. I am not necessarily interested in doing so right now [...]
I don't blame you, because there are some other considerations besides just the package itself, specifically for Debian packages. Basically, for the user to get an updated version of the package with apt update, you would need an apt repo. It's possible to have an apt repository in a GitHub repo if the packages are small, i.e. <25M (See: here). The core binary should be sufficiently small to fit in such an apt repo. But I just mention that as a future consideration.
On the Arch Linux PKGBUILD: the software can be built from source on any platform. However, for Debian packages, it would need to be cross-compiled in order to produce architecture-specific packages. So binary releases of the software would be very useful for .deb packages, as they would eliminate the cross-compilation dependencies needed to build it for other architectures when packaging.
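Since the core tool itself has no CGO dependencies, producing those architecture-specific binaries could be sketched like this (the armhf-to-GOARM mapping is an assumption about the target hardware, and the output naming is illustrative):

```shell
# Hypothetical cross-compilation sketch for a pure-Go (no CGO) binary,
# covering common Debian architectures. GOARM=7 for armhf is an assumption.
for deb_arch in amd64 arm64 armhf; do
  case "$deb_arch" in
    armhf) go_arch=arm; export GOARM=7 ;;
    *)     go_arch="$deb_arch" ;;
  esac
  GOOS=linux GOARCH="$go_arch" CGO_ENABLED=0 go build -o "core-$deb_arch" .
done
```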
For context: there is no single established way of creating a .deb package on a Debian-based distro. There are multiple ways, and the tools for doing this are not included by default. Conversely, there is only one way to make an Arch Linux package, and the tool needed for this (makepkg) is included with the package manager (pacman). Making the .deb package on Arch Linux only requires dpkg on top of the existing tools, which can be installed from the AUR.
Okay, once I finish updating the documentation in #1321, you can make an Arch Linux and/or Debian package for the core tool. After #1321, there shouldn't be any major changes to the core tool soon, so it should be good from a versioning perspective. Please let me know if there are any changes I need to make to the core tool for compatibility. Thank you for doing this and providing that helpful context!
I've just added the initial PKGBUILDs ; without the full list of optional dependencies for the moment. Will update when you have that after #1321.
So it should be possible to install like this:
yay -S cogentcore
or, longhand
git clone https://aur.archlinux.org/cogentcore
cd cogentcore
makepkg -sif
the PKGBUILD can be viewed here https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=cogentcore
Note that because there is an expected naming convention for certain types of builds, the above command will install the latest version for which a release source archive exists. To install @main as specified in the docs, there is an alternate build provided by the same repo, which can be used as follows:
rm ~/.cache/yay/cogentcore/*.zst ; yay --mflags " -p git.PKGBUILD " -S cogentcore
or, longhand
git clone https://aur.archlinux.org/cogentcore
cd cogentcore
makepkg -sifp git.PKGBUILD
the git.PKGBUILD can be viewed here https://aur.archlinux.org/cgit/aur.git/tree/git.PKGBUILD?h=cogentcore
It works for me. I will add builds for the Debian packages in the future.
If you want me to add you as co-maintainer or anything in the future, I'm happy to do so.
Thank you for doing that! After I finish #1321 and you add the dependencies, I will add documentation for installing using yay to the website.
As the core tool gets increasingly stable in the future, the need for @main will become much smaller, so that is okay (certainly by the time we get to v1 we will move to @latest). As long as the manually specified version in the PKGBUILD stays relatively up-to-date, we should be good.
[...] in the future, the need for @main will become much smaller, so that is okay (certainly by the time we get to v1 we will move to @latest). As long as the manually specified version in the PKGBUILD stays relatively up-to-date, we should be good.
Yes that sounds good, and I'm glad to contribute to this project, at least in a small way, by maintaining this.
There was already another package in the AUR called core, so that's why I added this package as cogentcore.
I think, for the sake of simplicity for users, it might be optimal to change this to a split package and define the specific dependencies for each one.
(read more about split packages on the Arch Wiki's PKGBUILD page)
Then all the user would need to do is install the correct package for their hardware, and the dependencies would be installed in the same step, instead of the user needing to manually install those runtime deps after the fact (or use core setup).
I'd ask for your input on the naming convention for the variants of the package which differ only by dependencies. It should perhaps be something like cogentcore-nvidia, cogentcore-amd, cogentcore-intel; but again, I'd like your input on this, as I don't have a clear view of just how many dependency variations there are or how exactly to refer to the different hardware being targeted.
Though I've never tried a split package build for .deb packages, it should work the same.
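A split-package PKGBUILD could be sketched roughly like this (the version, installed files, and driver-to-variant mapping are all illustrative assumptions, not the real cogentcore PKGBUILD):

```shell
# Hypothetical split-package PKGBUILD sketch; names, version, and
# dependencies are illustrative only.
pkgbase=cogentcore
pkgname=(cogentcore cogentcore-intel cogentcore-amd cogentcore-nvidia)
pkgver=0.3.1
pkgrel=1
arch=(x86_64)

# The base package carries the binary and the universal software renderer.
package_cogentcore() {
  depends=(vulkan-swrast)
  install -Dm755 core "$pkgdir/usr/bin/core"
}

# Hardware variants just pull in the base package plus a driver package.
package_cogentcore-intel() {
  depends=(cogentcore vulkan-intel)
}
package_cogentcore-amd() {
  depends=(cogentcore vulkan-radeon)
}
package_cogentcore-nvidia() {
  depends=(cogentcore nvidia-utils)
}
```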
Thank you for the suggestion. Based on my preliminary research, it seems like the GPU-specific drivers are typically automatically installed by other driver management software (especially for NVIDIA) and thus do not need to be installed manually, but that may not always be the case. In particular, arch may be less automatic for that, although I would think that anyone with a discrete GPU has probably already configured it.
We will add vulkan-swrast to the dependencies regardless since it is useful on any device, which should guarantee at least basic functionality independent of other factors. I am not sure whether the split package would be worth it, but I don't necessarily have enough knowledge of arch GPU usage distributions to make a conclusive decision. The question is whether optional dependencies and clear documentation would be sufficient for the likely small proportion of users that have a discrete GPU but don't have the drivers already installed. If most people don't need it, split packages might add unnecessary confusion, but if most people do need it, it would probably be worth it. (In your case, did you only need vulkan-swrast and not any other GPU-specific packages?)
Any insights you have on how to proceed would be appreciated. Regardless, you are correct that nvidia, amd, and intel are the relevant categories, and so any split package setup would have those three options, or if we go with optional dependencies, there would be one for each of those three. If you think split packages make sense, there should definitely still be a base cogentcore without any GPU additions other than vulkan-swrast, as many people will still only need that pathway. Do you think it could also be possible to use something like glxinfo to detect the GPU dynamically and install the appropriate drivers?
Also, I would appreciate it if you could add me as a co-maintainer. I recently got Linux VMs working well on my computer through UTM as part of #1341, so I can now easily test on arch without having to switch between computers. Thank you again for creating and maintaining this!
If there are any hardware-specific runtime deps, regardless of whether they are already installed, they should be listed as optional deps. It makes somewhat more sense to do split packages for Arch Linux, since the AUR makes it so that built packages only exist on the system where they are built. If you have a repo with packages (Debian or Arch), it would be better to have just one package, to reduce the maintenance overhead of the package repo itself; but I'd further argue that having just one basic formula for the build would be the easiest to maintain.
Having split packages should be easier for users, because the user would just need to run one command to install cogentcore, and then everything would be working right - assuming that the user selected the right package. But regardless of that, even if they installed the wrong one, it should be possible to get a working installation, provided the right runtime deps existed. So I'm in favor of the latter even if it entails some redundancy in built packages for a potential future package repo.
You'll need to make an account for the AUR. And their captcha system may force you to actually be using Arch Linux to make an account there. Plus, you need makepkg to maintain packages anyway. I'll go ahead and add you as a co-maintainer there as soon as you have an account and post your username here.
FYI I use / recommend endeavourOS - which is an arch-based distro, very close to vanilla archlinux.
On using glxinfo: basically, the limitation is that the runtime deps of the package need to be listed, and they'll be installed automatically only if they are listed in the PKGBUILD. If they are optional deps, they don't get installed automatically.
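That said, a detection helper is easy to sketch. The package names below are real Arch packages, but the GPU-string-to-package mapping is my assumption, and (per the testing discussed later in this thread) vulkan-swrast may be needed regardless:

```shell
# Hypothetical helper: map an lspci GPU description to a Vulkan driver
# package. Package names are real Arch packages; the mapping is an assumption.
detect_gpu_pkg() {
  case "$1" in
    *NVIDIA*|*nvidia*)    echo nvidia-utils ;;
    *AMD*|*ATI*|*Radeon*) echo vulkan-radeon ;;
    *Intel*|*intel*)      echo vulkan-intel ;;
    *)                    echo vulkan-swrast ;; # software fallback
  esac
}

# usage: detect_gpu_pkg "$(lspci | grep -i -E 'vga|3d')"
```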
It may be possible to simply install all the available runtime deps, for every type of hardware. But I'm not sure whether that would cause errors for cogentcore/core if there are drivers for hardware that doesn't exist, or how it sorts out that situation. What happens, in terms of how cogentcore behaves, if you have drivers installed for hardware that doesn't exist? How does core determine which driver to use?
The overall goal here is just to simplify installation and setup to a single command, if possible. And that would basically replace the use of core setup.
But from what you said, I tend to agree. If any drivers are needed, they are likely already installed. So perhaps what we should do is list only vulkan-swrast as the runtime dependency, assume that any other drivers are already there, and list those as optional dependencies. However, I think you'd need another vulkan package like vulkan-intel, or some other flavor, for this to work on the appropriate hardware - but this should be tested.
A note on the maintenance of AUR packages. I've grown fond of aurpublish which basically allows for multiple packages to be managed as submodules of a git repo. That makes it easy to track changes, etc. It's only slightly overkill if you maintain just one package, but hopefully this project aspires to produce more than just one software package.
So you could, if desired, make a git repo under cogentcore and maintain the package(s) there. Depends on your reasons, but I think it's convenient for tracking potential packaging issues, etc.
Here is an example of such a repo where I maintain AUR packages for skycoin
On a tangentially related note, in the new year I'll be recommending cogentcore/core for certain UI re-designs of skycoin software. Mainly for the reward system UI but there are plenty of other outdated UIs that could make use of cogentcore/core.
Thank you for your response. I plan to focus on #1321 for about a week, and then I will turn to getting this all figured out.
I tested with a machine that I think uses builtin intel graphics. vulkan-intel did not make a difference. When I uninstalled vulkan-swrast, core applications wouldn't run, even with vulkan-intel installed.
Thank you for that information. I think that affirms that we can just install vulkan-swrast and provide platform-specific packages as optional additions. I have been somewhat busy recently, but I am still planning to finish #1321 soon, at which point I will write a more detailed response here. Thank you for your patience.
@0pcom I finished #1321, so the install documentation on the website now has selectable install commands, in addition to the platform-specific manual commands listed for Linux distros. Please let me know if you have any feedback on how it could be improved. I will look into the Arch Linux things further soon.