Amos Kong

26 comments by Amos Kong

I also hit this problem on Fedora 33, where gtksourceview4-4.8.0-1.fc33.x86_64 had been installed in the past. I installed gtksourceview3-3.24.11-4.fc33.x86_64 with `sudo dnf install gtksourceview3`, and the problem was solved :-)

If we execute the client test on localhost, there is no job ID. The job_id can also be taken from the URL http://autotest-virt.virt....../results/881-debug_user/virtlab6.virt.bos.redhat.com/job_report.html, where 881 is the job ID.
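As an illustration, here is a minimal sketch of pulling the job ID out of a results URL like the one above; the regex, helper name, and example host are my own, not part of autotest:

```python
import re

def job_id_from_results_url(url):
    """Extract the numeric job ID from an autotest results URL.

    Assumes the URL contains a segment like '/results/881-debug_user/',
    where the number before the '-' is the job ID.
    """
    match = re.search(r"/results/(\d+)-", url)
    return int(match.group(1)) if match else None

# e.g. (illustrative host):
# job_id_from_results_url(
#     "http://autotest-virt.virt.example/results/881-debug_user/job_report.html")
# -> 881
```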

Original:
```
06:41:47,092 | restart_node_localhost_test - Restarting second node...
06:41:47,107 | restart_node_localhost_test - Source localhost sent {'change_type': 'DOWN', 'address': ('127.0.58.2', 9161)}
06:41:50,103 | node2: Starting scylla: ....
06:42:15,980 | restart_node_localhost_test...
```

> Hello Amos, If the node is going down, it makes little sense to send the NEW_NODE event because we want the clients to know the new node after cql...

Reproduced with recent master.
```
scylla-jmx-666.development-20171121.f4ef4a5.el7.centos.noarch
scylla-conf-666.development-0.20171121.c1b97d1.el7.centos.x86_64
scylla-tools-core-666.development-20171121.c4ba9fc.el7.centos.noarch
scylla-server-666.development-0.20171121.c1b97d1.el7.centos.x86_64
scylla-tools-666.development-20171121.c4ba9fc.el7.centos.noarch
scylla-666.development-0.20171121.c1b97d1.el7.centos.x86_64
scylla-kernel-conf-666.development-0.20171121.c1b97d1.el7.centos.x86_64
```
```
$ cassandra-stress counter_write no-warmup cl=QUORUM duration=10s -schema 'replication(factor=1) compaction(strategy=DateTieredCompactionStrategy)' keyspace=keyspace2 -port jmx=6868 -mode cql3 native -rate...
```

The hang also exists with 3.0.7 (ami-012abc8d72fd276b0)
```
$ rpm -qa | grep scylla
scylla-libgcc73-7.3.1-1.2.el7.centos.x86_64
scylla-conf-3.0.7-0.20190624.b6fa715f7.el7.x86_64
scylla-libatomic73-7.3.1-1.2.el7.centos.x86_64
scylla-tools-core-3.0.7-20190624.24bd7f3aad.el7.noarch
scylla-jmx-3.0.7-20190624.c9dd098.el7.noarch
scylla-env-1.1-1.el7.noarch
scylla-kernel-conf-3.0.7-0.20190624.b6fa715f7.el7.x86_64
scylla-ixgbevf-4.3.6-1dkms.noarch
scylla-server-3.0.7-0.20190624.b6fa715f7.el7.x86_64
scylla-debuginfo-3.0.7-0.20190624.b6fa715f7.el7.x86_64
scylla-ena-2.0.2-2dkms.noarch
scylla-ami-3.0.7-20190624.adbc493.el7.noarch
scylla-libstdc++73-7.3.1-1.2.el7.centos.x86_64
scylla-tools-3.0.7-20190624.24bd7f3aad.el7.noarch
scylla-3.0.7-0.20190624.b6fa715f7.el7.x86_64
```

Yes, in theory. But we run very long durations for the 4-day and 7-day longevity tests, so we would need a very, very big population.

On Wed, Jul 3, 2019 at 9:31 PM...
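A rough back-of-the-envelope, with an assumed op rate used purely for illustration: with a sequential population, the key range has to cover every operation issued over the whole run, so a multi-day longevity needs billions of keys.

```python
# Rough sizing of a cassandra-stress population for a long run.
# The 10k ops/s rate is an assumed figure, not a measurement.
ops_per_second = 10_000
days = 7
total_ops = ops_per_second * 60 * 60 * 24 * days

# With a sequential population (e.g. -pop seq=1..N), N should be at
# least total_ops so the sequence is not exhausted before the run ends.
print(f"population >= {total_ops:,}")  # population >= 6,048,000,000
```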

@slivne Can you involve or assign someone? The issue causes part of the longevity tests to always fail; it's a TEST blocker.

> @amoskong to make it clear this is an issue with c-s, it will happen even in cassandra,
>
> can you check that.

The stuck problem still exists with...

> > @amoskong to make it clear this is an issue with c-s, it will happen even in cassandra,
> >
> > can you check that.
>
> The stuck problem...