
Glances Web Intermittently Unresponsive

Open bignay2000 opened this issue 2 years ago • 6 comments

The Glances web UI periodically takes more than 48 seconds to load.

Steps to reproduce the behavior: pull and run the nicolargo/glances:3.4.0.3 image in Docker.

The Glances web UI takes over 48 seconds to load. Trying to load it from different computers, it just hangs, then all of a sudden starts working after multiple attempts. If I restart the container, it works quickly again.

I monitor websites with Uptime Kuma (https://github.com/louislam/uptime-kuma). [screenshot: Glances_Uptime]

Environment (please complete the following information):

  • OS: Alpine Linux 3.18.0 64bit (from container nicolargo/glances:3.4.0.3)
  • Docker version 20.10.23, build 8659133e59
  • Kernel: Linux 5.15.111-flatcar (https://www.flatcar.org/)
  • Hypervisor: ESXi-7.0U3m-21686933-standard
  • Hardware: HP Z440 Workstation (8 CPUs x Intel(R) Xeon(R) CPU E5-2667 v3 @ 3.20GHz, 128 GB RAM)

docker-compose.yml

version: '3'
services:
 hiveglance:
   deploy:
     resources:
       limits:
         cpus: '2'
         memory: 1G
   image: nexus.hivetechnologies.net:5000/hiveglance:3.4.0.3
   hostname: glance.hivetechnologies.net
   environment:
     GLANCES_OPT: "-w"
   volumes:
     - /var/run/docker.sock:/var/run/docker.sock:ro
   restart: always
   ports:
     - "10.10.11.140:80:61208"
     - "10.10.11.140:61209:61209"
   pid: "host"
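Since a container restart clears the hang, one possible stopgap (an assumption on my part, not a fix for the root cause) is adding a Docker healthcheck on the web port so the hung state at least shows up as "unhealthy" in docker ps; combined with a watchdog such as willfarrell/autoheal, the container can then be restarted automatically:

```yaml
    # Hypothetical addition under the hiveglance service:
    healthcheck:
      # BusyBox wget ships in the Alpine-based image; adjust if not present
      test: ["CMD", "wget", "-q", "-O", "/dev/null", "http://localhost:61208/"]
      interval: 30s
      timeout: 10s
      retries: 3
```

Note that restart: always alone does not act on an unhealthy status; it only restarts a container whose main process exits.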

Docker Logs

Glances Web User Interface started on http://0.0.0.0:61208/

glances --issue

===============================================================================
Glances 3.4.0.3 (/app/glances/__init__.py)
Python 3.11.3 (/venv/bin/python3)
PsUtil 5.9.5 (/venv/lib/python3.11/site-packages/psutil/__init__.py)
===============================================================================
alert         [OK]    0.00001s
[]
amps          [OK]    0.00005s
[]
cloud         [OK]    0.00003s
{}
connections   [OK]    0.03981s
{'ESTABLISHED': 0,
 'LISTEN': 2,
 'SYN_RECV': 0,
 'SYN_SENT': 2,
 'initiated': 2,
 'net_connections_enabled': True,
 'nf_conntrack_count': 231.0,
 'nf_conntrack_enabled': True,
 'nf_conntrack_max': 262144.0,
 'nf_conntrack_percent': 0.0881195068359375,
 'terminated': 2}
containers    [OK]    0.05525s key=name
[{'Command': ['/bin/sh', '-c', '/venv/bin/python3 -m glances -C /etc/glances.conf $GLANCES_OPT'],
  'Created': '2023-06-10T21:10:14.665466525Z',
  'Id': '2701fc59f0a92f5f7b08e064c27c1c24e29a1a35628f2794b7993d299c9f0267',
  'Image': ['nexus.hivetechnologies.net:5000/hiveglance:20230610_2028-3.4.0.3'],
  'Status': 'running',
  'Uptime': '13 hours',
  'cpu_percent': 18.343446658851114,
  'engine': 'docker',
  'io_r': None,
  'io_w': None,
  'key': 'name',
  'memory_usage': 113602560,
  'name': 'glancehivevmhivetechnologiesnet-hiveglance-1',
  'network_rx': None,
  'network_tx': None}, ...]
core          [OK]    0.00075s
{'log': 8, 'phys': 8}
cpu           [OK]    0.00044s
{'cpucore': 8,
 'ctx_switches': 23648,
 'guest': 0.0,
 'guest_nice': 0.0,
 'idle': 84.4,
 'interrupts': 18685,
 'iowait': 0.1,
 'irq': 0.5,
 'nice': 0.3,
 'soft_interrupts': 6822,
 'softirq': 0.2,
 'steal': 0.0,
 'syscalls': 0,
 'system': 6.1,
 'time_since_update': 2.4507219791412354,
 'total': 31.1,
 'user': 8.5}
diskio        [OK]    0.00068s key=disk_name
[{'disk_name': 'sda',
  'key': 'disk_name',
  'read_bytes': 0,
  'read_count': 0,
  'time_since_update': 2.109610080718994,
  'write_bytes': 8192,
  'write_count': 2}, ...]
folders       [OK]    0.00003s
[]
fs            [OK]    0.00063s key=mnt_point
[{'device_name': '/dev/sdb',
  'free': 22918823936,
  'fs_type': 'ext4',
  'key': 'mnt_point',
  'mnt_point': '/etc/resolv.conf',
  'percent': 27.9,
  'size': 33501757440,
  'used': 8848171008}, ...]
gpu           [OK]    0.00002s
[]
help          [OK]    0.00000s
None
ip            [OK]    0.00002s
{}
irq           [OK]    0.00091s key=irq_line
[{'irq_line': 'LOC', 'irq_rate': 10074, 'key': 'irq_line', 'time_since_update': 2.1126980781555176}, ...]
load          [OK]    0.00003s
{'cpucore': 8, 'min1': 0.611328125, 'min15': 0.7275390625, 'min5': 0.7021484375}
mem           [OK]    0.00016s
{'active': 3648421888,
 'available': 42575618048,
 'buffers': 1047326720,
 'cached': 20972949504,
 'free': 42575618048,
 'inactive': 39864688640,
 'percent': 36.8,
 'shared': 337444864,
 'total': 67405365248,
 'used': 24829747200}
memswap       [OK]    0.00028s
{'free': 0, 'percent': 0.0, 'sin': 0, 'sout': 0, 'time_since_update': 2.455564498901367, 'total': 0, 'used': 0}
network       [OK]    0.00053s key=interface_name
[{'alias': None,
  'cumulative_cx': 270940,
  'cumulative_rx': 135470,
  'cumulative_tx': 135470,
  'cx': 1740,
  'interface_name': 'lo',
  'is_up': True,
  'key': 'interface_name',
  'rx': 870,
  'speed': 0,
  'time_since_update': 2.102654457092285,
  'tx': 870}, ...]
now           [OK]    0.00001s
'2023-06-11 17:31:16 UTC'
percpu        [OK]    0.00006s key=cpu_number
[{'cpu_number': 0,
  'guest': 0.0,
  'guest_nice': 0.0,
  'idle': 4.0,
  'iowait': 0.0,
  'irq': 0.0,
  'key': 'cpu_number',
  'nice': 0.0,
  'softirq': 0.0,
  'steal': 0.0,
  'system': 8.0,
  'total': 96.0,
  'user': 25.0}, ...]
ports         [OK]    0.00027s
[]
processcount  [OK]    0.21971s
{'pid_max': 0, 'running': 1, 'sleeping': 415, 'thread': 2078, 'total': 516}
processlist   [OK]    0.00059s key=pid
[{'cmdline': ['/venv/bin/python3', '-m', 'glances', '--issue'],
  'cpu_percent': 17.1,
  'cpu_times': pcputimes(user=0.65, system=0.44, children_user=0.0, children_system=0.0, iowait=0.0),
  'gids': pgids(real=0, effective=0, saved=0),
  'io_counters': [430080, 143360, 0, 135168, 1],
  'key': 'pid',
  'memory_info': pmem(rss=44494848, vms=86753280, shared=10383360, text=4096, lib=0, data=71823360, dirty=0),
  'memory_percent': 0.06601084028888964,
  'name': 'python3',
  'nice': 0,
  'num_threads': 18,
  'pid': 4152900,
  'status': 'R',
  'time_since_update': 2.3938422203063965,
  'username': 'root'}, ...]
psutilversion [OK]    0.00003s
(5, 9, 5)
quicklook     [OK]    0.00075s
{'cpu': 31.1,
 'cpu_hz': 0.0,
 'cpu_hz_current': 3192607000.0,
 'cpu_name': 'Intel(R) Xeon(R) CPU E5-2667 v3 @ 3.20GHz',
 'mem': 36.8,
 'percpu': [{...}, {...}, {...}, {...}, {...}, {...}, {...}, {...}],
 'swap': 0.0}
raid          [OK]    0.00003s
{}
sensors       [OK]    0.00001s
[]
smart         [OK]    0.00006s
{}
system        [OK]    0.00001s
{'hostname': 'glance.hivetechnologies.net',
 'hr_name': 'Alpine Linux 3.18.0 64bit',
 'linux_distro': 'Alpine Linux 3.18.0',
 'os_name': 'Linux',
 'os_version': '5.15.111-flatcar',
 'platform': '64bit'}
uptime        [OK]    0.00028s
{'seconds': 307858}
wifi          [OK]    0.00003s
[]
===============================================================================
Total time to update all stats: 0.32664s
===============================================================================

Note: my Docker setup has been very stable for several years, with several containers hosting websites; Glances is the only container that triggers an uptime alert.

bignay2000 avatar Jun 11 '23 17:06 bignay2000

I will see if I can capture a "glances --issue" while it is in a hung state.

bignay2000 avatar Jun 11 '23 18:06 bignay2000

Hey, I have the same issue. Have you made any progress on this?

auanasgheps avatar Jul 28 '23 10:07 auanasgheps

  • Glances works at the terminal level
    • If I open the container shell and execute glances, it runs fine
  • At the same time, the web GUI does not load
  • Randomly, and eventually (?), the web GUI starts working again

I don't see any error in logs.
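Since --issue shows the full stats update completing in about 0.3 s, one way to confirm the hang is in the web server itself is to time plain HTTP requests against the UI port from outside the container. A small probe along these lines (the host and port are assumptions; point it at your own Glances instance):

```python
import time
import urllib.request
import urllib.error


def probe(url, timeout=10.0):
    """Return the response time in seconds, or None if the request
    failed or timed out before completing."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()
    except (urllib.error.URLError, OSError):
        return None
    return time.monotonic() - start


# Example usage (hypothetical host/port, adapt to your deployment):
#   probe("http://10.10.11.140:80/", timeout=60.0)
```

Running this in a loop alongside Uptime Kuma would show whether the server stalls for the full timeout or merely responds slowly.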

auanasgheps avatar Aug 01 '23 09:08 auanasgheps

I know this is almost a year old, but it still shows open and is the issue I'm having. I'm running Glances directly on a Raspberry Pi. The web UI and API are unresponsive, but running glances directly in the terminal shows activity. Running glances again with -w shows "address already in use". Running glances with --issue produces the following:

===============================================================================
Glances 3.4.0.3 (/root/.local/lib/python2.7/site-packages/glances/__init__.pyc)
Python 2.7.16 (/usr/bin/python)
PsUtil 5.9.8 (/root/.local/lib/python2.7/site-packages/psutil/__init__.pyc)
===============================================================================
Traceback (most recent call last):
  File "/root/.local/bin/glances", line 10, in <module>
    sys.exit(main())
  File "/root/.local/lib/python2.7/site-packages/glances/__init__.py", line 185, in main
    start(config=core.get_config(), args=core.get_args())
  File "/root/.local/lib/python2.7/site-packages/glances/__init__.py", line 128, in start
    mode.serve_issue()
  File "/root/.local/lib/python2.7/site-packages/glances/standalone.py", line 125, in serve_issue
    ret = not self.screen.update(self.stats)
  File "/root/.local/lib/python2.7/site-packages/glances/outputs/glances_stdout_issue.py", line 120, in update
    message = '\n' + colors.NO + pprint.pformat(stat, compact=True, width=120, depth=2)
TypeError: pformat() got an unexpected keyword argument 'compact'
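The traceback itself is a separate problem from the web UI hang: pprint.pformat() only gained the compact keyword in Python 3.4, so running glances --issue under Python 2.7 (as shown in the header above) fails before it can report anything. Glances 3.x targets Python 3. A quick version-guarded sketch of the incompatibility:

```python
import pprint
import sys

data = list(range(30))

# pprint.pformat() gained the 'compact' keyword in Python 3.4;
# under Python 2.7 this call raises the TypeError shown above.
if sys.version_info >= (3, 4):
    print(pprint.pformat(data, compact=True, width=40))
else:
    print(pprint.pformat(data, width=40))  # no 'compact' on 2.x
```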

fadwen avatar Mar 07 '24 17:03 fadwen

Hi, I think I have a solution for this: run Glances via systemd. Here is my writeup about it: https://github.com/home-assistant/core/issues/110551#issuecomment-2124236506
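For reference, a minimal unit along the lines of that writeup might look like this (the unit name, binary path, and options here are assumptions; adapt them to your install):

```ini
# /etc/systemd/system/glances.service  (hypothetical path and unit name)
[Unit]
Description=Glances web server
After=network.target

[Service]
ExecStart=/usr/bin/glances -w
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now glances.service; Restart=on-failure gives you the same "restart fixes it" behavior automatically if the process dies.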

zollak avatar May 27 '24 08:05 zollak