Marzban
Separate panel Xray config from nodes
Please separate the panel's Xray config from the nodes' configs. In some situations, e.g. if you want to use something like an Xray tunnel on the nodes, or .dat files that exist only on the nodes, you have to make many changes to the Xray config to make it work on the nodes. Separating the configs would avoid these issues; we could add tabs in the Core Settings section to edit each node's config.
You can change node ports in the Docker file by adding a `ports` section.
I'm not talking about ports, I'm talking about the JSON config.
It's planned, but I don't think it will happen soon.
This could be done by adding an `XRAY_CONFIGS_DIR` variable and searching it for a config named after the node, or an option to fall back to strictly using the node's own config.
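A minimal sketch of that lookup, assuming a hypothetical `load_node_config` helper, a `<node_name>.json` naming convention, and a `xray_config.json` fallback file (none of these are part of Marzban today):

```python
import json
from pathlib import Path

def load_node_config(configs_dir, node_name, strict=False):
    """Prefer <node_name>.json inside XRAY_CONFIGS_DIR; otherwise
    fall back to the shared config, unless strict mode is enabled."""
    node_file = Path(configs_dir) / f"{node_name}.json"
    if node_file.exists():
        return json.loads(node_file.read_text())
    if strict:
        raise FileNotFoundError(f"no config found for node {node_name!r}")
    # fallback: the main config shared by all nodes
    return json.loads((Path(configs_dir) / "xray_config.json").read_text())
```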
> This could be done by adding an `XRAY_CONFIGS_DIR` variable and searching it for a config named after the node, or an option to fall back to strictly using the node's own config.
In this case we'd probably have conflicts when showing inbounds, which would need to be resolved first: we could have multiple inbounds with the same tag, or with the same port but different protocols, and so on.
We could use something like `[node Name]tag` so tags would be unique in the database.
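As a sketch, the prefixing scheme above could look like this (the bracket delimiter assumes tags themselves never contain `]`; both helper names are made up):

```python
def qualified_tag(node_name, tag):
    # Prefix the tag with the node name so it is unique across nodes
    return f"[{node_name}]{tag}"

def split_tag(qualified):
    # Recover (node_name, tag) from a qualified tag;
    # assumes tags never contain ']'
    node, _, tag = qualified[1:].partition("]")
    return node, tag
```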
> We could use something like `[node Name]tag` so tags would be unique in the database.
An Xray config can't be specific to a single node; it can be used for multiple nodes.
> In this case we'd probably have conflicts when showing inbounds, which would need to be resolved first.
This could be resolved by adding a column called `node` to the `inbounds` table to store the id of the node; basically an improved version of immohammad2000's solution.
First we parse all node configs from the directory, plus the main one. All inbounds from the main config are stored in the db with a wildcard in the `node` column; if there are configs specific to certain nodes, their inbounds are stored with the id of that node (a foreign key or something similar). The same applies when starting up the nodes, so startup is OK.
After that there are three or four problems, I guess:
- ~~user stats would need to change to get specific tags from some nodes, if there are node-specific tags~~ not necessary; getting user stats doesn't need a tag
- removing and adding a user in `xray/operations.py` would need changes to support this
- host settings should list each inbound separately (which already happens automatically, but it should also show the node's name)
- I'm not sure, but basically any mention of the word "inbound" or "tag" should be checked.
I think this works, and I'd be more than happy if you prove me wrong, as that would improve this solution. Otherwise, try to improve the db structure I mentioned so that listing all inbounds for a node is faster.
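The wildcard idea above could be sketched with SQLite, using a NULL `node_id` as the wildcard for main-config inbounds (the table layout is hypothetical, not Marzban's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE inbounds (
        id INTEGER PRIMARY KEY,
        tag TEXT NOT NULL,
        node_id INTEGER  -- NULL acts as the wildcard: inbound from the main config
    )
""")
conn.executemany(
    "INSERT INTO inbounds (tag, node_id) VALUES (?, ?)",
    [("shared_in", None), ("tunnel_in", 1), ("other_in", 2)],
)

def inbounds_for_node(node_id):
    # A node sees the main config's inbounds (wildcard) plus its own
    rows = conn.execute(
        "SELECT tag FROM inbounds WHERE node_id IS NULL OR node_id = ? ORDER BY id",
        (node_id,),
    )
    return [tag for (tag,) in rows]
```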
> We could use something like `[node Name]tag` so tags would be unique in the database.
We could do it that way (and it works if there are separate files for some nodes), but it would mix Xray config logic with Marzban logic too much.
> This could be resolved by adding a column called `node` to the `inbounds` table to store the id of the node [...]
If we do this, then every time we change a node's Xray config the database will generate new rows for its inbounds, and we'll end up with lots of useless inbounds in the database.
Also, we have a problem in the API: how should we send each user's inbound names together with the node name?
> If we do this, then every time we change a node's Xray config the database will generate new rows for its inbounds, and we'll end up with lots of useless inbounds in the database.
As far as I know, it already does this.
> Also, we have a problem in the API: how should we send each user's inbound names together with the node name?
We could use a dictionary keyed by node:

```json
{
    "master": [
        "inbound_name1",
        "inbound_name2"
    ],
    "node1": [
        "inbound_name1",
        "inbound_name2"
    ]
}
```
If we want to do this, some logic must change.
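A small sketch of building that per-node dictionary from `(node_name, tag)` pairs, e.g. as they might come out of a db query (the function name is made up):

```python
from collections import defaultdict

def inbounds_by_node(rows):
    """rows: iterable of (node_name, tag) pairs;
    returns the per-node response shape proposed above."""
    out = defaultdict(list)
    for node_name, tag in rows:
        out[node_name].append(tag)
    return dict(out)
```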
> If we do this, then every time we change a node's Xray config the database will generate new rows for its inbounds, and we'll end up with lots of useless inbounds in the database.
This already happens: I checked a user's db yesterday and noticed 170+ rows in the inbounds table. I'm not sure we even need that table. I'm still investigating the code and the possibilities for a better db structure; I haven't read much of the code yet.
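One way to avoid piling up rows would be an upsert that reuses an existing `(tag, node_id)` pair instead of always inserting; a sketch against a hypothetical minimal table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE inbounds (id INTEGER PRIMARY KEY, tag TEXT, node_id INTEGER)"
)

def upsert_inbound(conn, tag, node_id):
    # Reuse the existing row for (tag, node_id) instead of inserting a new one.
    # "IS ?" rather than "= ?" so a NULL node_id (the wildcard) also matches.
    row = conn.execute(
        "SELECT id FROM inbounds WHERE tag = ? AND node_id IS ?",
        (tag, node_id),
    ).fetchone()
    if row:
        return row[0]
    cur = conn.execute(
        "INSERT INTO inbounds (tag, node_id) VALUES (?, ?)", (tag, node_id)
    )
    return cur.lastrowid
```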
> Also, we have a problem in the API: how should we send each user's inbound names together with the node name?
If there are already enough other changes breaking the API, we could keep the current API intact by adding an env variable.
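A sketch of that toggle; `LEGACY_INBOUNDS_API` is a made-up flag name, not an actual Marzban setting:

```python
import os

def serialize_inbounds(by_node):
    # Hypothetical env flag to preserve the old response shape
    if os.environ.get("LEGACY_INBOUNDS_API") == "1":
        # old shape: one flat, de-duplicated list of tags
        return sorted({tag for tags in by_node.values() for tag in tags})
    # new shape: the per-node dictionary discussed earlier
    return by_node
```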
I don't think I'd do it with this db structure. The ability to add a separate config for each node means adding more inbounds. The most important problem this tries to solve is scalability; however, under the current circumstances, adding too many inbounds removes the ability to easily select inbounds for each user. To fix that, we would need a wrapper for selecting inbounds per user. I'd like to call that "services": each user has a set of services, each service containing a set of inbounds, and each inbound specific to a certain node (or, even better, each inbound specific to a config that applies to one or more nodes). Adding this wrapper causes problems with the API, and it would also cause problems with the proxies table, since we wouldn't know which protocols a user needs, or whether that changes in the future.
So backward compatibility isn't going to happen easily; on the other hand, if we do this with the current structure, it'd have to be redone for scalability. I'm not going to do this twice.
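For illustration, the services wrapper described above could be sketched with dataclasses (all names are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class Inbound:
    tag: str
    node_id: int  # which node's config defines this inbound

@dataclass
class Service:
    name: str
    inbounds: list  # list[Inbound]

@dataclass
class User:
    username: str
    services: list = field(default_factory=list)  # list[Service]

    def tags_on_node(self, node_id):
        # All inbound tags this user can use on a given node
        return [
            inbound.tag
            for service in self.services
            for inbound in service.inbounds
            if inbound.node_id == node_id
        ]
```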