Question: How to host multiple REALITY inbounds on the same port?
The use case: I want to proxy two domain names on port 443 of my server:
- One for cloaking my censorship bypass using REALITY. This is a public, well-known domain name, for example `google.com`.
- The other one just to proxy as is, for example `chatgpt.com`. I use it together with an `/etc/hosts` record to bypass the geoblock of `chatgpt.com`. In this use case, please assume that I can't use an XTLS client to access `chatgpt.com`.

`google.com` must be proxied to the real Google when the request doesn't meet the requirements of the REALITY parameters. `chatgpt.com` must be proxied to the real ChatGPT regardless. How can I implement this with Xray? I can configure the domains separately, but not together in a single Xray instance.
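For context, the `/etc/hosts` trick mentioned above amounts to pinning the domain to the VPS address on the client machine, so that traffic for `chatgpt.com` goes to the proxy server instead of the real site. With an assumed VPS address (the IP below is a documentation placeholder, not a real server), the record would look like:

```
# /etc/hosts on the client machine (assumed VPS address)
203.0.113.10  chatgpt.com
```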
This advice https://github.com/chika0801/Xray-examples/issues/14#issuecomment-1842145924 looks related. It suggests configuring Xray for `google.com` with a local Nginx on another port as the destination, and configuring Nginx to proxy `chatgpt.com` to wherever I need. The problem is that I don't have an SSL certificate for `chatgpt.com`, so I can't use a `server` block with `proxy_pass` in the Nginx configuration.
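For what it's worth, a TLS-terminating `server` block is not the only option here: Nginx's `stream` module with `ssl_preread` can dispatch raw TLS connections by SNI without owning a certificate, because it never decrypts the traffic. A minimal sketch, assuming Nginx owns port 443 and the REALITY inbound is moved to a local port (`127.0.0.1:8443` is an assumption, not part of the linked example):

```nginx
# nginx.conf (stream context, not http) -- requires ngx_stream_ssl_preread_module
stream {
    resolver 1.1.1.1;  # needed to resolve hostname upstreams chosen at runtime

    # Pick an upstream from the SNI in the ClientHello; TLS is NOT terminated here.
    map $ssl_preread_server_name $backend {
        chatgpt.com  chatgpt.com:443;  # pass through to the real ChatGPT
        default      127.0.0.1:8443;   # assumed local port of the REALITY inbound
    }

    server {
        listen 443;
        ssl_preread on;       # peek at the SNI without decrypting
        proxy_pass $backend;  # no certificate needed
    }
}
```

The trade-off is that the client's SNI must be visible in the ClientHello (no ECH), and the geoblocked site must be reachable from the VPS for the passthrough branch to work.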
Do you have the conditions I mentioned:
1. You purchased a VPS (VPS A for short).
2. You registered your own domain name and applied for a free (or paid) SSL certificate for that domain name.

Then you set up Xray on that VPS, using `steal_oneself` as an example.
At this point, your clients can use the Xray configuration to connect to your VPS normally, browse the web, and so on. However, because of this VPS's IP, the VPS cannot access the ChatGPT website, and you want to solve this problem.
My suggestion is that you buy another VPS that can access the ChatGPT website (VPS B for short), and use some configuration in VPS A's Xray configuration file to forward ChatGPT requests to VPS B.
I encountered this kind of problem in my own use, and this is one of my solutions.
This is the VPS A configuration that I use to forward domains in the `geosite:openai` category to another VPS via the `tokyo` outbound tag.
```json
{
    "log": {
        "loglevel": "warning"
    },
    "dns": {
        "servers": [
            "https+local://1.1.1.1/dns-query"
        ]
    },
    "routing": {
        "domainStrategy": "IPIfNonMatch",
        "rules": [
            {
                "domain": [
                    "geosite:disney",
                    "geosite:netflix"
                ],
                "outboundTag": "singapore"
            },
            {
                "domain": [
                    "full:gemini.google.com",
                    "geosite:openai", // look here
                    "geosite:tiktok"
                ],
                "outboundTag": "tokyo"
            },
            {
                "ip": [
                    "geoip:cn"
                ],
                "outboundTag": "tokyo"
            },
            {
                "ip": [
                    "geoip:private"
                ],
                "outboundTag": "block"
            }
        ]
    },
    "inbounds": [
        {
            "listen": "0.0.0.0",
            "port": 443,
            "protocol": "vless",
            "settings": {
                "clients": [
                    {
                        "id": "chika",
                        "flow": "xtls-rprx-vision"
                    }
                ],
                "decryption": "none"
            },
            "streamSettings": {
                "network": "tcp",
                "security": "reality",
                "realitySettings": {
                    "dest": "/dev/shm/nginx.sock",
                    "xver": 1,
                    "serverNames": [
                        ""
                    ],
                    "privateKey": "",
                    "shortIds": [
                        ""
                    ]
                }
            },
            "sniffing": {
                "enabled": true,
                "destOverride": [
                    "http",
                    "tls",
                    "quic"
                ]
            }
        }
    ],
    "outbounds": [
        {
            "protocol": "freedom",
            "settings": {
                "domainStrategy": "ForceIPv6v4"
            },
            "streamSettings": {
                "sockopt": {
                    "tcpFastOpen": true
                }
            },
            "tag": "direct"
        },
        {
            "protocol": "blackhole",
            "tag": "block"
        },
        {
            "protocol": "shadowsocks",
            "settings": {
                "servers": [
                    {
                        "address": "",
                        "port": 80,
                        "method": "2022-blake3-aes-128-gcm",
                        "password": ""
                    }
                ]
            },
            "streamSettings": {
                "sockopt": {
                    "tcpMptcp": true,
                    "tcpNoDelay": true
                }
            },
            "tag": "singapore"
        },
        {
            "protocol": "shadowsocks",
            "settings": {
                "servers": [
                    {
                        "address": "",
                        "port": 80,
                        "method": "2022-blake3-aes-128-gcm",
                        "password": ""
                    }
                ]
            },
            "tag": "tokyo"
        }
    ],
    "policy": {
        "levels": {
            "0": {
                "handshake": 2,
                "connIdle": 120
            }
        }
    }
}
```
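The empty `privateKey` and `shortIds` fields in the config above are deliberate placeholders that each deployment fills in itself. Assuming the `xray` binary and `openssl` are available, the values are typically generated like this:

```shell
#!/bin/sh
# Generate a REALITY x25519 keypair: the private key fills "privateKey" on the
# server, and the matching public key goes into the client configuration.
# (Guarded so the script still runs where the xray binary is not installed.)
if command -v xray >/dev/null 2>&1; then
    xray x25519
fi

# Generate one entry for "shortIds": a hex string of up to 8 bytes.
openssl rand -hex 8
```

The empty `serverNames`, `address`, and `password` fields likewise stay blank here; they are per-deployment secrets and identifiers, not values this example can supply.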
Thank you for the response. In my original message I mentioned that I can implement both gateways using separate Xray configurations, either on two different servers or on two different IPs of a single server. I'm looking for a solution with just one IP and one server, hence I opened this issue.
This question is no longer relevant for me, so I won't dig into this problem in the near future. Feel free to close this issue.