
Parallel requests performance

Open · blue-genie opened this issue 1 year ago · 2 comments

Bug Description

This is not really a bug, but I would like the community's recommendation on how best to set this up for my use case.

I have an in-house web server, running frpc, with two ISPs: a primary and a second one as backup for when the primary goes down. I also have an AWS server running frps, which handles XXX.ZZZ.com and YYY.ZZZ.com and is supposed to forward requests to the in-house server. The client and server configurations are below. I have a page that loads about 30 MB across 20-25 images on initial load; the images are supposed to load in parallel.

frpc Version

fatedier/frps:v0.59.0

frps Version

fatedier/frps:v0.59.0

System Architecture

linux/amd64 on all servers

Configurations

# frps.toml
bindPort = 7000
vhostHTTPPort = 80
vhostHTTPSPort = 9443

# frpc.toml
serverAddr = "XXX.XXX.XXX.XXX"
serverPort = 7000

[[proxies]]
name = "XXXX-https"
#type = "tcp"
type = "https"
localIP = "nginx"
localPort = 443
customDomains = ["XXX.ZZZ.com", "YYY.ZZZ.com"]

# now v1 and v2 are supported
transport.proxyProtocolVersion = "v2"

[[proxies]]
name = "XXX-http"
type = "http"
localIP = "nginx"
localPort = 80 
customDomains = ["XXX.ZZZ.com", "YYY.ZZZ.com"]
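
For the parallel-load use case, one thing worth experimenting with (a sketch, not a verified recommendation; the values below are assumptions) is frpc's connection pool and TCP multiplexing settings, plus a matching pool limit on frps, so that bursts of new requests do not each pay a dial round trip to frps:

# frps.toml (addition to try)
transport.maxPoolCount = 20              # upper bound for the per-client connection pool

# frpc.toml (additions to try)
transport.poolCount = 10                 # pre-established connections to frps, reused for new requests
transport.tcpMux = true                  # default; multiplexes proxy traffic over one TCP connection
transport.tcpMuxKeepaliveInterval = 30   # keepalive interval for the multiplexed connection, in seconds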

Logs

No errors; this is an optimization question.

Steps to reproduce

Using the same client computer, I edit the hosts file to decide whether requests are sent through AWS & frps or directly to the server:

If my clients go directly to the server, bypassing AWS & frps, the full page load is about 3x faster (cache disabled, 33 HTTP requests).

If the requests go through frps, I see performance issues. The page that used to load in 3 s now loads in 9-10 s. My nginx server reports an HTTP request time ($request_time) of 0.5 s, yet some images take 6 s to load: 3-4 s waiting for the server response and 2-3 s downloading the content.
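
A quick way to compare the two paths from the same client is curl's timing variables (a sketch; the image URL is a placeholder, <in-house-IP> must be filled in, and the ports are assumptions based on the configuration above):

# Through AWS & frps (HTTPS vhost port 9443 from frps.toml)
curl -o /dev/null -s -w 'connect=%{time_connect} ttfb=%{time_starttransfer} total=%{time_total}\n' https://XXX.ZZZ.com:9443/some-image.jpg

# Directly to the in-house server, bypassing AWS & frps
curl --resolve XXX.ZZZ.com:443:<in-house-IP> -o /dev/null -s -w 'connect=%{time_connect} ttfb=%{time_starttransfer} total=%{time_total}\n' https://XXX.ZZZ.com/some-image.jpg

Comparing time_starttransfer with nginx's $request_time helps separate tunnel and queueing delay from backend time.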

Question: Is my configuration optimal for my needs? Is there anything I should change?

Affected area

  • [ ] Docs
  • [ ] Installation
  • [X] Performance and Scalability
  • [ ] Security
  • [ ] User Experience
  • [ ] Test and Release
  • [ ] Developer Infrastructure
  • [ ] Client Plugin
  • [ ] Server Plugin
  • [ ] Extensions
  • [ ] Others

blue-genie · Aug 15 '24 00:08

This usually depends on your network conditions and bandwidth.

fatedier · Aug 15 '24 02:08
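
One way to rule out the raw network path (a sketch, assuming iperf3 can be installed on both the AWS host and the in-house server; port and stream count are arbitrary):

# On the AWS host (frps side)
iperf3 -s -p 5201

# On the in-house server (frpc side), with several parallel streams like a browser would open
iperf3 -c <AWS-IP> -p 5201 -P 8

This only tests the frpc-to-frps leg; the client-to-AWS leg can be measured the same way from a client machine.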

In general I would agree, but I would assume AWS has a symmetric 1 Gbps connection, and I have symmetric 1 Gbps on the server.

I understand there will be some extra latency, because the request goes from one local network to AWS and then back to a different local network instead of directly between the two, but still...

Is my configuration optimal for high concurrent traffic? I tried adjusting the connection pooling but didn't see a difference.

What would your configuration be for this setup? And how do I get more stats to debug and see whether anything can be optimized?

blue-genie · Aug 15 '24 03:08
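
For more visibility into frps itself, the dashboard and Prometheus metrics can be enabled (a sketch; the address, port, and credentials below are placeholders):

# frps.toml (additions to try)
webServer.addr = "0.0.0.0"
webServer.port = 7500
webServer.user = "admin"
webServer.password = "change-me"
enablePrometheus = true   # exposes /metrics under the dashboard address

The dashboard shows per-proxy traffic and current connection counts, which helps reveal whether requests are queueing on the tunnel.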

Issues go stale after 21d of inactivity. Stale issues rot after an additional 7d of inactivity and eventually close.

github-actions[bot] · Sep 06 '24 00:09