godown
Does it support permission verification?
Hello, does it support permission verification? I have started using godown in production projects.
@fangpianqi
Hello! Glad to hear that you have started using it in production!
Could you please clarify what you mean by permission verification?
I am very happy to see your reply. By permission verification I mean something like access with a secret key. My English is not good, please forgive me.
@fangpianqi Don't worry about English :) I am also not a native speaker.
Sorry, but I still don't understand what you mean :( Maybe you can provide some examples?
Hello, I mean access restrictions or authentication: you have to meet certain conditions to be able to access the server.
What I want to express is essentially a "password", like requirepass in Redis.
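To illustrate, here is a rough Go sketch of the kind of gate I mean. None of these names exist in godown; they are made up only to show the idea.

package example

import "errors"

// gate models a requirepass-style check: a connection must authenticate
// with a shared secret before any command is accepted.
type gate struct {
	requirepass string // configured secret (empty means auth disabled)
	authed      bool
}

// Auth unlocks the connection when the supplied password matches.
func (g *gate) Auth(pass string) error {
	if pass != g.requirepass {
		return errors.New("invalid password")
	}
	g.authed = true
	return nil
}

// Check would run before every command and reject unauthenticated connections.
func (g *gate) Check() error {
	if g.requirepass != "" && !g.authed {
		return errors.New("authentication required")
	}
	return nil
}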
And I found another problem. I set up the cluster, lpush'd 30 entries into a list from application1, and then executed a single rpop from application2. After that, llen on application2 returned 29, but application1's data did not change. Is this a bug?
Below is my startup script. Is it set up correctly? Thank you for your help!
/application/godown/godown-server -dir=/application/godown/data1 -id=01 -listen=10.2.1.36:14001 -raft=10.2.1.36:24001
/application/godown/godown-server -dir=/application/godown/data2 -id=02 -listen=10.2.1.36:14002 -raft=10.2.1.36:24002 -join=10.2.1.36:14001
/application/godown/godown-server -dir=/application/godown/data3 -id=03 -listen=10.2.1.36:14003 -raft=10.2.1.36:24003 -join=10.2.1.36:14001
@fangpianqi
I mean access restrictions or authentication: you have to meet certain conditions to be able to access the server.
Authentication is not supported right now, but I plan to add it in the future.
And I found another problem. I set up the cluster, lpush'd 30 entries into a list from application1, and then executed a single rpop from application2. After that, llen on application2 returned 29, but application1's data did not change. Is this a bug?
I have just checked and it works properly. Could you provide the list of commands you executed, specifying which node each was run against, along with their output?
Hello. First, I connected to the godown instance on port 14001.
(127.0.0.1:14001) godown > lpush test a b c
OK
(127.0.0.1:14001) godown > llen test
(integer) 3
Then I connected to the instance on port 14002.
(127.0.0.1:14002) godown > llen test
(integer) 3
(127.0.0.1:14002) godown > rpop test
(string) a
(127.0.0.1:14002) godown > llen test
(integer) 2
Now, on 14001 or 14003:
(127.0.0.1:14001) godown > llen test
(integer) 3
I used the following commands to start the three godown instances.
./godown-server -dir=/application/godown/data1 -id=01 -listen=0.0.0.0:14001 -raft=10.2.1.36:24001
./godown-server -dir=/application/godown/data2 -id=02 -listen=0.0.0.0:14002 -raft=10.2.1.36:24002 -join=10.2.1.36:14001
./godown-server -dir=/application/godown/data3 -id=03 -listen=0.0.0.0:14003 -raft=10.2.1.36:24003 -join=10.2.1.36:14001
Maybe I am doing something wrong; please advise. Thank you very much.
Maybe I understand it now: in godown the list is treated as LIFO. If I use lpush, I need to use lpop to keep the cluster synchronized; if I use rpop, only the list on the machine I am currently operating on changes, and the other machines do not. I think this should be a bug.
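To show what I expect, here is a small self-contained Go model of the usual list semantics (this is not godown code, just an illustration): lpush prepends, rpop removes from the tail, and a replicated rpop should shrink the list on every node.

package main

import "fmt"

func main() {
	var list []string

	// lpush prepends each value, so "a b c" ends up as [c b a].
	lpush := func(vals ...string) {
		for _, v := range vals {
			list = append([]string{v}, list...)
		}
	}

	// rpop removes and returns the value at the tail of the list.
	rpop := func() string {
		last := list[len(list)-1]
		list = list[:len(list)-1]
		return last
	}

	lpush("a", "b", "c")
	fmt.Println(rpop())    // a -- the first value that was pushed
	fmt.Println(len(list)) // 2 -- every replica should report the same length
}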
I may be asking too many questions; please forgive me. In the production environment I also found another problem: when restarting a godown node, it sometimes fails to start with an error similar to the following.
[root@localhost godown]# ./godown-server -id=03 -dir=/application/godown/data -listen=10.2.1.246:14001 -raft=10.2.1.246:24001 -join=10.2.1.35:14001
2018/11/29 08:13:13 [INFO] raft: Initial configuration (index=4): [{Suffrage:Voter ID:02 Address:10.2.1.35:24001} {Suffrage:Voter ID:03 Address:10.2.1.246:24001}]
2018/11/29 08:13:13 [INFO] raft: Node at 10.2.1.246:24001 [Follower] entering Follower state (Leader: "")
could not add a new node to the cluster: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp: missing address"
Sometimes the process only starts successfully after I delete the data folder.
My deployment in production is that each physical machine runs at least one godown instance; the application connects to its local godown and uses godown's features to reduce application development costs.
@fangpianqi
And I found another problem. I set up the cluster, lpush'd 30 entries into a list from application1, and then executed a single rpop from application2. After that, llen on application2 returned 29, but application1's data did not change. Is this a bug?
Actually, there was a bug :( I've fixed it in the 1.1.1 release. Please check it out.
@fangpianqi I will investigate an issue with node reloading and let you know asap. Thank you!
Thanks to the great author! I will use my free time to promote godown in China and let more people know about it.
@fangpianqi I believe I've fixed the node reloading issue in release 1.1.3. Please check it out and let me know the result. Thank you!
I am now using version 1.2.0. All the previous problems have been effectively solved. I will keep tracking it and report any new problems promptly :)
I found that the lpush and rpush client APIs are inconsistent.
// RPush appends a new value(s) to the list stored at the given key.
func (c *Client) RPush(key, value string, values ...string) StatusResult
// LPush prepends a new value to the list stored at the given key.
func (c *Client) LPush(key, value string) StatusResult
I think defining the API this way would make it more comfortable to use:
func (c *Client) RPush(key string, values ...string) StatusResult
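For comparison, here is a minimal call-site sketch. The client constructor is not shown in this thread, so the real client type is stood in by a small hypothetical interface; only the two current signatures above are taken from the actual package.

package example

// StatusResult and pusher are stand-ins for the real client types,
// used only to compare the call sites.
type StatusResult struct{}

type pusher interface {
	RPush(key, value string, values ...string) StatusResult // current signature
	LPush(key, value string) StatusResult                   // current signature
}

func fill(c pusher) {
	// RPush already accepts several values in one call.
	c.RPush("mylist", "a", "b", "c")

	// LPush currently needs one call per value.
	for _, v := range []string{"x", "y", "z"} {
		c.LPush("mylist", v)
	}
	// With a variadic signature like LPush(key string, values ...string),
	// the loop would collapse into: c.LPush("mylist", "x", "y", "z")
}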
I found a new situation. When the master has been stopped for a long time, the slave cannot connect after it starts up; restarting the slave at that point produces the following error.
2018/12/03 11:34:46 [DEBUG] raft-net: 10.2.1.246:24001 accepted connection from: 10.2.1.35:38434
2018/12/03 11:34:48 [DEBUG] raft-net: 10.2.1.246:24001 accepted connection from: 10.2.1.35:38472
2018/12/03 11:34:49 [DEBUG] raft-net: 10.2.1.246:24001 accepted connection from: 10.2.1.35:38498
2018/12/03 11:34:50 [DEBUG] raft-net: 10.2.1.246:24001 accepted connection from: 10.2.1.35:38530
2018/12/03 11:34:51 [DEBUG] raft-net: 10.2.1.246:24001 accepted connection from: 10.2.1.35:38556
2018/12/03 11:34:52 [INFO] raft: Initial configuration (index=5): [{Suffrage:Voter ID:01 Address:10.2.1.36:24001} {Suffrage:Voter ID:02 Address:10.2.1.35:24001} {Suffrage:Voter ID:03 Address:10.2.1.246:24001}]
2018/12/03 11:34:52 [INFO] raft: Node at 10.2.1.246:24001 [Follower] entering Follower state (Leader: "")
could not add a new node to the cluster: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 10.2.1.36:14001: connect: connection refused"
I kept restarting the slave. After a dozen or so attempts it no longer dropped the command-line connection, but it hung for a long time and the client interface could not perform write operations.