Comte
I don't know.
But infinity is very bad for security, so I really don't like that.
The last default was no cache (0). I think it's a good default because it's the standard configuration of Git.
Thx @rossf7. I have this format:
```
root@worker3:~# cat /proc/17081/cgroup
11:devices:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9add50d0_5889_4175_8acf_767d7e690c1c.slice/docker-d3ddba91db4c6518ec04834fc0597f6945033de3ea4eefe638b276ee7734318a.scope
10:cpuset:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9add50d0_5889_4175_8acf_767d7e690c1c.slice/docker-d3ddba91db4c6518ec04834fc0597f6945033de3ea4eefe638b276ee7734318a.scope
9:blkio:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9add50d0_5889_4175_8acf_767d7e690c1c.slice/docker-d3ddba91db4c6518ec04834fc0597f6945033de3ea4eefe638b276ee7734318a.scope
8:perf_event:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9add50d0_5889_4175_8acf_767d7e690c1c.slice/docker-d3ddba91db4c6518ec04834fc0597f6945033de3ea4eefe638b276ee7734318a.scope
7:memory:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9add50d0_5889_4175_8acf_767d7e690c1c.slice/docker-d3ddba91db4c6518ec04834fc0597f6945033de3ea4eefe638b276ee7734318a.scope
6:net_cls,net_prio:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9add50d0_5889_4175_8acf_767d7e690c1c.slice/docker-d3ddba91db4c6518ec04834fc0597f6945033de3ea4eefe638b276ee7734318a.scope
5:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9add50d0_5889_4175_8acf_767d7e690c1c.slice/docker-d3ddba91db4c6518ec04834fc0597f6945033de3ea4eefe638b276ee7734318a.scope
4:rdma:/
3:pids:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9add50d0_5889_4175_8acf_767d7e690c1c.slice/docker-d3ddba91db4c6518ec04834fc0597f6945033de3ea4eefe638b276ee7734318a.scope
2:cpu,cpuacct:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9add50d0_5889_4175_8acf_767d7e690c1c.slice/docker-d3ddba91db4c6518ec04834fc0597f6945033de3ea4eefe638b276ee7734318a.scope
1:name=systemd:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9add50d0_5889_4175_8acf_767d7e690c1c.slice/docker-d3ddba91db4c6518ec04834fc0597f6945033de3ea4eefe638b276ee7734318a.scope
0::/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9add50d0_5889_4175_8acf_767d7e690c1c.slice/docker-d3ddba91db4c6518ec04834fc0597f6945033de3ea4eefe638b276ee7734318a.scope
```
Running with full backtrace:
```
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: ParseIntError { kind: Empty }', src/exporters/mod.rs:653:77
stack backtrace:
   0: 0x55b27a6f0c70 - std::backtrace_rs::backtrace::libunwind::trace::h72c2fb8038f1bbee
                         at /rustc/7eac88abb2e57e752f3302f02be5f3ce3d7adfb4/library/std/src/../../backtrace/src/backtrace/libunwind.rs:96
...
```
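For context, here is a minimal sketch, not scaphandre's actual implementation, of reading the `/proc/<pid>/cgroup` format shown above while skipping empty or malformed fields instead of calling `unwrap()`; unwrapping a parse of an empty field is one way to hit a `ParseIntError { kind: Empty }` panic like the one in the backtrace. The function name and return shape are assumptions for illustration only:
```
// Illustrative sketch only, not scaphandre's code.
// Parse /proc/<pid>/cgroup lines of the form "<hierarchy-id>:<controllers>:<path>",
// skipping malformed entries instead of panicking. Note that the cgroup v2 line
// "0::/..." has an empty controllers field, and other fields can be empty too.
use std::fs;

fn parse_cgroup_file(pid: u32) -> Vec<(u32, String, String)> {
    let content = fs::read_to_string(format!("/proc/{}/cgroup", pid)).unwrap_or_default();
    content
        .lines()
        .filter_map(|line| {
            let mut fields = line.splitn(3, ':');
            // Skip the line rather than unwrap() if the hierarchy id doesn't parse.
            let id = fields.next()?.parse::<u32>().ok()?;
            let controllers = fields.next()?.to_string();
            let path = fields.next()?.to_string();
            Some((id, controllers, path))
        })
        .collect()
}

fn main() {
    for (id, controllers, path) in parse_cgroup_file(17081) {
        println!("{:>2} [{}] {}", id, controllers, path);
    }
}
```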
@harshavardhana: Is `${jwt:groups[0]}` supposed to work?
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::${jwt:groups[0]}",
        "arn:aws:s3:::${jwt:groups[0]}/*",
        "arn:aws:s3:::${jwt:preferred_username}",
        "arn:aws:s3:::${jwt:preferred_username}/*"
      ]
    }
  ]
...
```
My workaround at the moment is to duplicate the user mapping in groups. I create the following statements in my STS policy:
```
{
  "Effect": "Allow",
  "Action": [
...
```
Yes, we should allow that.
I think we should assert:
- if there is no personal project (i.e. they all have a group name), we should store personal data as if there is no...
In #660, I understand the need for separating Jobs and Services. This kind of separation can be worth it. In this issue it's less clear, and it seems...