
Used CPU of a plugin is higher than limit on Developer Sandbox

l0rd opened this issue 3 years ago · 11 comments

Describe the bug

On the Developer Sandbox the CPU used by a plugin goes above its limit:

[screenshot]

That should not happen: a container cannot use more CPU than its limit. Is the limit not applied, or is the used CPU miscalculated?
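
For reference, a minimal sketch of what an applied CPU limit looks like in the workspace pod's container spec (the sidecar name and the values are illustrative, not taken from this workspace). If resources.limits.cpu is missing from the plugin container, the limit was never applied:

containers:
  - name: vscode-openshift-connector   # illustrative plugin sidecar name
    resources:
      requests:
        cpu: 100m
      limits:
        cpu: 500m   # CFS throttling should keep usage at or below this value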

Che version

7.36

Steps to reproduce

Use a devfile that includes the openshift connector and specify a low CPU limit for it. Start the workspace on developer sandbox.
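
A minimal devfile 1.0 sketch of such a configuration (the values are illustrative, and the cpuLimit attribute is assumed to be honored for chePlugin components):

apiVersion: 1.0.0
metadata:
  name: openshift-connector-low-cpu
components:
  - id: redhat/vscode-openshift-connector/latest
    type: chePlugin
    memoryLimit: 500Mi
    cpuLimit: 100m   # deliberately low so any overshoot of the limit is easy to spot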

Expected behavior

The used CPU should never go above the limit.

Runtime

OpenShift

Screenshots

No response

Installation method

other (please specify in additional context)

Environment

Dev Sandbox (workspaces.openshift.com)

Eclipse Che Logs

No response

Additional context

No response

l0rd · Nov 15 '21 09:11

I have reproduced this for the Java extension too: https://github.com/eclipse/che/issues/20769

l0rd · Nov 15 '21 09:11

Issues go stale after 180 days of inactivity. lifecycle/stale issues rot after an additional 7 days of inactivity and eventually close.

Mark the issue as fresh with /remove-lifecycle stale in a new comment.

If this issue is safe to close now please do so.

Moderators: Add lifecycle/frozen label to avoid stale mode.

che-bot · May 14 '22 00:05

/remove-lifecycle stale

l0rd · May 14 '22 18:05

Could not exceed the limit on dogfooding with the che-code editor: [screenshot]

vinokurig · Aug 18 '22 13:08

@vinokurig please try to reproduce it on the Developer Sandbox as described in the issue description.

svor · Aug 18 '22 14:08

Note that this issue was reproduced in Theia, not VS Code.

l0rd · Aug 18 '22 14:08

Could not reproduce it on Dev Spaces 3.0 with che-theia either: [screenshot]

vinokurig · Aug 19 '22 08:08

Could not exceed the limit on dogfooding with the che-code editor:

On dogfooding we apply neither limits nor quotas; this should be verified on the Developer Sandbox staging environment.
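
For context, the limits and quotas on the Sandbox are typically enforced through per-namespace LimitRange and ResourceQuota objects. A purely illustrative LimitRange (the name and values are assumptions, not the actual Sandbox configuration):

apiVersion: v1
kind: LimitRange
metadata:
  name: sandbox-limits            # illustrative name
spec:
  limits:
    - type: Container
      default:                    # limits applied to containers that declare none
        cpu: 1000m
        memory: 1Gi
      defaultRequest:
        cpu: 100m
        memory: 64Mi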

ibuziuk · Aug 19 '22 09:08

I've tried it on the Developer Sandbox with this devfile:

apiVersion: 1.0.0
metadata:
  name: bash-psfc
projects:
  - name: bash
    source:
      location: 'https://github.com/che-samples/bash'
      type: git
      branch: main
components:
  - mountSources: true
    command:
      - tail
    args:
      - '-f'
      - /dev/null
    memoryLimit: 64Mi
    type: dockerimage
    alias: dev
    image: 'registry.access.redhat.com/ubi8-minimal:8.3'
  - id: mads-hartmann/bash-ide-vscode/latest
    type: chePlugin
    registryUrl: 'https://eclipse-che.github.io/che-plugin-registry/7.42.0/v3/'
  - id: rogalmic/bash-debug/latest
    type: chePlugin
    registryUrl: 'https://eclipse-che.github.io/che-plugin-registry/7.42.0/v3/'
  - id: timonwong/shellcheck/latest
    preferences:
      shellcheck.executablePath: /bin/shellcheck
    type: chePlugin
    registryUrl: 'https://eclipse-che.github.io/che-plugin-registry/7.42.0/v3/'
  - id: redhat/vscode-openshift-connector/latest
    memoryLimit: 500Mi
    type: chePlugin

The workspace failed to start because of a Pod CrashLoopBackOff error; see the video: [animated screenshot]

vinokurig · Aug 22 '22 11:08

It seems the problem is with the memory limit. Did you detect which container failed with an OOM? Try increasing memoryLimit for that component, as sketched below.
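
A sketch of that suggestion against the devfile above, raising the limit on the dev component (256Mi is an illustrative value; apply the increase to whichever container actually hits OOM):

components:
  - mountSources: true
    command:
      - tail
    args:
      - '-f'
      - /dev/null
    memoryLimit: 256Mi   # raised from 64Mi; illustrative value
    type: dockerimage
    alias: dev
    image: 'registry.access.redhat.com/ubi8-minimal:8.3'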

svor · Aug 22 '22 15:08

After the Developer Sandbox was updated to 3.1 I still can't reproduce the issue. When I set the limit to less than 750Mi, the workspace crashes when it reaches the RAM maximum, but if I set it to more than 750Mi, I can't push the container above its RAM limit.

vinokurig · Aug 24 '22 09:08

I've reproduced it on the dogfooding instance:

  • Start a Quarkus workspace with VSCode editor
  • Install all recommended plugins
  • Enable the normal Java LS
  • See that Java LS is building the project
  • Open resource monitor info

[screenshot]

svor · Oct 11 '22 09:10

Issues go stale after 180 days of inactivity. lifecycle/stale issues rot after an additional 7 days of inactivity and eventually close.

Mark the issue as fresh with /remove-lifecycle stale in a new comment.

If this issue is safe to close now please do so.

Moderators: Add lifecycle/frozen label to avoid stale mode.

che-bot · Apr 09 '23 00:04