
backend:bug: Fix the bug that broke the app-linux build

BABAR-TAHSEEN55 opened this pull request 5 months ago • 21 comments

Summary

This PR fixes a bug in which the app-linux target did not automatically build the backend for the current (host) OS, which caused a binary exec error due to an architecture mismatch. The Makefile now ensures the backend is built for the host architecture before the backend server runs.

Related Issue

Fixes #3162

Changes

  • Added the backend target to the Makefile so the backend is built for the host architecture before the backend server runs
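A minimal sketch of what this change amounts to, based on the PR description (the recipe body below is hypothetical; the real Makefile's run-backend recipe may differ):

```makefile
# Sketch: making `backend` a prerequisite of run-backend forces a
# host-architecture build before the server starts. Recipe body is
# illustrative only.
run-backend: backend
	./backend/headlamp-server
```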

Steps to Test

  1. make app-linux
  2. make run-backend
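To double-check that step 2 runs a host-compatible binary, one can inspect it by hand (a sketch; backend/headlamp-server is the binary path discussed in this PR, and a Unix host with the file utility is assumed):

```shell
# Sketch: print the built binary's architecture and the host's, for
# comparison by eye. Assumes a Unix host with `file` available.
BIN=backend/headlamp-server
if [ -f "$BIN" ]; then
  file "$BIN"   # e.g. "ELF 64-bit LSB executable, x86-64, ..."
  uname -m      # e.g. "x86_64"
else
  echo "binary not built yet: $BIN"
fi
```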

Screenshots (if applicable)

Screencast from 2025-07-02 19-38-57.webm


BABAR-TAHSEEN55 avatar Jul 02 '25 17:07 BABAR-TAHSEEN55

CLA Signed

The committers listed above are authorized under a signed CLA.

  • :white_check_mark: login: BABAR-TAHSEEN55 / name: Mohammed Babar Tahseen (6078a252772883ed81e6ffdf9d97dbb1348b5a09, 646abb351992f3769178c67d5599d1474dfce378, 9990d4a4dc13b9345a48335613779fd1cfb9bdc5, 8292fa1f7635730dd17ed5896fcc46af87a6c150)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull request has been approved by: BABAR-TAHSEEN55. Once this PR has been reviewed and has the lgtm label, please assign sniok for approval. For more information, see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment.

k8s-ci-robot avatar Jul 02 '25 17:07 k8s-ci-robot

Welcome @BABAR-TAHSEEN55!

It looks like this is your first PR to kubernetes-sigs/headlamp 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/headlamp has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. :smiley:

k8s-ci-robot avatar Jul 02 '25 17:07 k8s-ci-robot

Hey @illume @sniok, this PR is ready for review. Can you review it?

BABAR-TAHSEEN55 avatar Jul 02 '25 17:07 BABAR-TAHSEEN55

Running make backend takes 0.5 seconds for me after it's compiled, and 8+ seconds normally. So it doesn't do a full rebuild, and is pretty fast. (This is on my very slow laptop)

I'm wondering if it's worth it?

pros

  • it solves the make app-linux breaking run-backend problem
  • it makes sure that when there are changes to the backend, run-backend runs with the latest build

Cons:

  • it adds 0.5 seconds extra to run-backend (probably less on good laptops)

illume avatar Jul 04 '25 10:07 illume

I don't think this is a bug, it's intended so we don't have to compile the backend every time, even if subsequent attempts are faster. The problem with this fix is that it will compile the backend every time. An improvement would be to add the headlamp-server binary as a dependency of that target, and then a new target to compile the backend if the headlamp-server doesn't exist.

Aah! I think creating a new target might be a good fix. I'll look into it and will raise the PR soon. I might have some doubts; if I stumble upon any, I'll contact you through Slack.

BABAR-TAHSEEN55 avatar Jul 04 '25 11:07 BABAR-TAHSEEN55

Running make backend takes 0.5 seconds for me after it's compiled, and 8+ seconds normally. So it doesn't do a full rebuild, and is pretty fast. (This is on my very slow laptop)

I'm wondering if it's worth it?

pros

  • it solves the make app-linux breaking run-backend problem
  • it makes sure that when there are changes to the backend, run-backend runs with the latest build

Cons:

  • it adds 0.5 seconds extra to run-backend (probably less on good laptops)

I have an i7 13th gen and the build process is pretty quick. Is building the backend every time bad? I'm confused

BABAR-TAHSEEN55 avatar Jul 04 '25 11:07 BABAR-TAHSEEN55

An improvement would be to add the headlamp-server binary as a dependency of that target, and then a new target to compile the backend if the headlamp-server doesn't exist.

The problem with having the binary file backend/headlamp-server as a dependency of that target is that if it exists, then the bug is not fixed: make will not rebuild, and will try to run the ARM binary on x86 machines.

I have an i7 13th gen and the build process is pretty quick. Is building the backend every time bad? I'm confused

I understand @joaquimrocha not wanting run-backend to take extra time compiling. It is a significant extra time, and personally I'd be happy with that extra 0.5 seconds on run-backend... because it solves these issues:

  • it solves the make app-linux breaking run-backend problem
  • it makes sure that when there are changes to the backend, run-backend runs with the latest build

However, on my slow Windows work laptop it takes 12 seconds to compile when nothing has changed, and 68 seconds to run make backend from scratch. So that's definitely really bad on Windows.


Another potential way to fix it that won't add much extra time to run-backend: in the run-backend make target you could detect the binary's type, and if it does not match the host, build headlamp-server with make backend. (This could just happen in the Unix branch inside run-backend, because this issue does not exist on Windows.)

To detect if the binary matches the host, it's probably a dozen lines of shell scripting using file headlamp-server and uname -a.

@BABAR-TAHSEEN55 what do you think?

illume avatar Jul 04 '25 13:07 illume

An improvement would be to add the headlamp-server binary as a dependency of that target, and then a new target to compile the backend if the headlamp-server doesn't exist.

The problem with having the binary file backend/headlamp-server as a dependency of that target is that if it exists, then the bug is not fixed: make will not rebuild, and will try to run the ARM binary on x86 machines.

I have an i7 13th gen and the build process is pretty quick. Is building the backend every time bad? I'm confused

I understand @joaquimrocha not wanting run-backend to take extra time compiling. It is a significant extra time, and personally I'd be happy with that extra 0.5 seconds on run-backend... because it solves these issues:

  • it solves the make app-linux breaking run-backend problem
  • it makes sure that when there are changes to the backend, run-backend runs with the latest build

However, on my slow Windows work laptop it takes 12 seconds to compile when nothing has changed, and 68 seconds to run make backend from scratch. So that's definitely really bad on Windows.

Another potential way to fix it that won't add much extra time to run-backend: in the run-backend make target you could detect the binary's type, and if it does not match the host, build headlamp-server with make backend. (This could just happen in the Unix branch inside run-backend, because this issue does not exist on Windows.)

To detect if the binary matches the host, it's probably a dozen lines of shell scripting using file headlamp-server and uname -a.

@BABAR-TAHSEEN55 what do you think?

I think the initial solution is better, if that significant extra time won't be a problem in the future and if compiling the backend every time isn't a problem. As for detecting in the Unix branch: that would also add some extra time, right? Not as much as before, but it will add some. I like both your suggestions @illume. What are your thoughts on this? @joaquimrocha

BABAR-TAHSEEN55 avatar Jul 05 '25 04:07 BABAR-TAHSEEN55

Hi. I don't think we need to check for the arch directly after all. We already have the file extension pre-checked. So we can do something like:

.PHONY: backend-check-build
backend-check-build:
	@if [ ! -f backend/headlamp-server${SERVER_EXE_EXT} ] || ! git diff --quiet HEAD -- backend/; then \
		echo "Backend changes detected, rebuilding..."; \
		make backend; \
	else \
		echo "Backend is up to date"; \
	fi

And then we add this target as the dependency to the run-backend:

run-backend: backend-check-build

Should give us the best of both worlds 😉
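The rebuild condition above hinges on git diff --quiet exiting non-zero when there are changes. A quick demonstration of that exit-code behavior in a throwaway repository (a sketch, separate from the project's Makefile):

```shell
# Demonstrate `git diff --quiet` exit codes in a temporary repository.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m init
echo hello > f.txt
git add f.txt
git -c user.email=a@b -c user.name=t commit -q -m add
# Working tree matches HEAD: --quiet exits 0.
git diff --quiet HEAD -- f.txt && echo "clean: exit 0"
echo changed >> f.txt
# Working tree differs from HEAD: --quiet exits 1.
git diff --quiet HEAD -- f.txt || echo "dirty: exit 1"
cd / && rm -rf "$tmp"
```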

joaquimrocha avatar Jul 07 '25 12:07 joaquimrocha

Hi. I don't think we need to check for the arch directly after all. We already have the file extension pre-checked. So we can do something like:

.PHONY: backend-check-build
backend-check-build:
	@if [ ! -f backend/headlamp-server${SERVER_EXE_EXT} ] || ! git diff --quiet HEAD -- backend/; then \
		echo "Backend changes detected, rebuilding..."; \
		make backend; \
	else \
		echo "Backend is up to date"; \
	fi

And then we add this target as the dependency to the run-backend:

run-backend: backend-check-build

Should give us the best of both worlds 😉

Ahhh! This is clearly a better approach! I didn't think of this before. As you solved it and provided the core logic, I feel it wouldn't be right for me to just copy and paste it as my own. Would you prefer that I continue this PR, or should I close it? 😄

BABAR-TAHSEEN55 avatar Jul 07 '25 20:07 BABAR-TAHSEEN55

This is clearly a better approach! I didn't think of this before. As you solved it and provided the core logic, I feel it wouldn't be right for me to just copy and paste it as my own. Would you prefer that I continue this PR, or should I close it? 😄

Just go for it. No worries.

joaquimrocha avatar Jul 08 '25 09:07 joaquimrocha

@joaquimrocha I am still encountering the same issue (see attached image). I think the problem is that the build is for a different architecture. Should I add a check for what type of binary the host expects, or something similar?

BABAR-TAHSEEN55 avatar Jul 09 '25 15:07 BABAR-TAHSEEN55

I added a check to your previous code and did something like this:

.PHONY: backend-check-build
backend-check-build:
	@if [ ! -f backend/headlamp-server${SERVER_EXE_EXT} ] || ! git diff --quiet HEAD -- backend/; then \
		echo "Backend changes detected, rebuilding..."; \
		make backend; \
	elif ! file backend/headlamp-server${SERVER_EXE_EXT} | grep -q "$(shell uname -m | sed 's/_/-/')"; then \
		echo "Backend architecture mismatch (expected $(shell uname -m)). Rebuilding"; \
		make backend; \
	else \
		echo "Backend is up to date"; \
	fi

It results in this: Screenshot from 2025-07-09 21-10-52, Screenshot from 2025-07-09 21-16-39

Should I push it?

BABAR-TAHSEEN55 avatar Jul 09 '25 15:07 BABAR-TAHSEEN55

Note: it has to work on Windows.

illume avatar Jul 10 '25 06:07 illume

@vyncent-t Can you please test on Windows?

joaquimrocha avatar Jul 10 '25 11:07 joaquimrocha

@vyncent-t Can you please test on Windows?

Sure, I will take a look.

vyncent-t avatar Jul 10 '25 15:07 vyncent-t

This does not seem to run on Windows.

On main I can run make run-backend as normal, but when I try the same on this branch, this is the error:

'uname' is not recognized as an internal or external command,
operable program or batch file.
process_begin: CreateProcess(NULL, uname -m, ...) failed.
Makefile:100: pipe: Bad file descriptor
! was unexpected at this time.
make: *** [Makefile:100: backend-check-build] Error 255
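One low-risk way to sidestep this (a sketch, not the branch's actual fix): only attempt the uname-based check when uname actually exists, and skip it otherwise. A Makefile-level guard such as ifeq ($(OS),Windows_NT) would be the more idiomatic make-side alternative.

```shell
# Sketch: degrade gracefully where `uname` is unavailable (e.g. Windows
# cmd.exe environments), instead of failing the make target outright.
if command -v uname >/dev/null 2>&1; then
  host_arch=$(uname -m | sed 's/_/-/')
  echo "host arch: $host_arch"
else
  echo "uname not found; skipping architecture check"
fi
```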

vyncent-t avatar Jul 11 '25 18:07 vyncent-t

@BABAR-TAHSEEN55 , can you fix this branch after the comments from @vyncent-t ?

joaquimrocha avatar Jul 18 '25 10:07 joaquimrocha

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Oct 16 '25 10:10 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Nov 15 '25 10:11 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-triage-robot avatar Dec 15 '25 10:12 k8s-triage-robot

@k8s-triage-robot: Closed this PR.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Dec 15 '25 10:12 k8s-ci-robot