
send_async never sends in tests

Open · StarpTech opened this issue 4 years ago • 16 comments

Hi, as mentioned in the docs, req.send_async(API) can be used to make requests without waiting for the response, so it doesn't block the initial request.

This method is also useful for sending requests where the response is unimportant, but the request may take longer than the Compute@Edge program is able to run, as the request will continue sending even after the program that initiated it exits.
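
For reference, the fire-and-forget pattern from the docs looks roughly like this (the backend name "api" and the URL are placeholders, not from my actual service):

use fastly::{Error, Request, Response};

#[fastly::main]
fn main(_req: Request) -> Result<Response, Error> {
    // Kick off the upstream request without blocking on the response;
    // per the docs, the send continues even after this program returns.
    let _pending = Request::get("http://localhost/ping").send_async("api")?;
    Ok(Response::new())
}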

I use viceroy as a lib to run integration tests with ctx.handle_request. In my tests, the endpoint is never called. I also tried waiting several seconds before testing the assertion, but without success. The same code with req.send, or with req.send_async + fastly::http::request::select, works fine!

StarpTech avatar Nov 20 '21 23:11 StarpTech

Thank you for the report. I'll try to reproduce in the next day or so, and may have some further questions if I have trouble.

aturon avatar Nov 30 '21 22:11 aturon

I've attempted to reproduce without success.

In particular, I set up a service that:

  • Calls send_async on a request
  • Loops for 100s
  • Calls wait on the resulting PendingRequest

I set up a backend to point to localhost on a specific port, and set up a netcat listener on that port.

When I curled the Viceroy service, I immediately saw a request against the netcat listener.

Are you able to provide a small test service and instructions that will reproduce the issue?

aturon avatar Dec 01 '21 20:12 aturon

Could you also share your code?

StarpTech avatar Dec 01 '21 22:12 StarpTech

Sure. The service just creates a request, sends it async, sleeps, and then returns the response.

use fastly::{Error, Request, Response};

#[fastly::main]
fn main(_: Request) -> Result<Response, Error> {
    let req = Request::get("http://localhost/");
    // send_async returns immediately with a PendingRequest handle.
    let resp = req.send_async("backend")?;
    std::thread::sleep(std::time::Duration::from_secs(100));
    // Block until the backend's response arrives.
    Ok(resp.wait()?)
}

The backend is defined with url http://localhost:9000, where I had a netcat session listening, and I confirmed the request was received immediately.

aturon avatar Dec 01 '21 22:12 aturon

Could you also share your test? Is it necessary to wait? The docs say otherwise.

StarpTech avatar Dec 01 '21 22:12 StarpTech

That's right, wait is not required, and the request goes through even if you replace resp.wait()? with Response::new().
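
Concretely, that no-wait variant looks like this (same service as above, with the now-unused handle renamed to _pending):

use fastly::{Error, Request, Response};

#[fastly::main]
fn main(_: Request) -> Result<Response, Error> {
    let req = Request::get("http://localhost/");
    let _pending = req.send_async("backend")?;
    std::thread::sleep(std::time::Duration::from_secs(100));
    // Never wait on the pending request; hand back a synthetic response.
    Ok(Response::new())
}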

The test setup is:

  • Start up local testing: fastly compute serve. It will attempt to health check a backend at port 9000, and the check will fail with a warning (because no host is there yet).
  • Afterwards, start a netcat session acting as the backend: nc -l localhost 9000 on Linux
  • Then send a request to the service: curl http://localhost:7676

The netcat session immediately receives a request, showing that the send_async immediately makes a request to the backend:

GET / HTTP/1.1
host: localhost
content-length: 0

Then, after the 100s pass, the service yields a response back to curl. The backend connection is still open, waiting for a response from netcat. If you kill the netcat session, you'll see a log line about that in fastly compute serve's output.

aturon avatar Dec 01 '21 22:12 aturon

Thanks for the report. As mentioned in the issue, I use the library to run an integration test! Running it with fastly compute serve works, though.

StarpTech avatar Dec 01 '21 23:12 StarpTech

Can you say more about what you mean by using the library for an integration test?

FWIW, the implementation of async sending always immediately spawns a task to perform the send, but perhaps there's an issue with the Tokio runtime for your integration testing setup?
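
As a rough illustration of that pattern (this is not Viceroy's actual source; the hyper client and types are stand-ins), the send boils down to spawning a task on the ambient runtime:

use hyper::{Body, Client, Request, Response};
use tokio::task::JoinHandle;

// Illustrative sketch only — the point is that the outbound send is spawned
// onto the ambient Tokio runtime right away, and the caller keeps a handle
// it can await later (or never).
fn send_async_sketch(
    client: Client<hyper::client::HttpConnector>,
    req: Request<Body>,
) -> JoinHandle<Result<Response<Body>, hyper::Error>> {
    tokio::spawn(async move { client.request(req).await })
}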

aturon avatar Dec 01 '21 23:12 aturon

Basically this (pseudocode):

#[tokio::test]
async fn it_should_query_with_post() {
    // Start a mock server for the origin.
    // ....
    let response = viceroy::handle_request(req, url).await.unwrap();

    origin_mock.assert();

    assert_eq!(response.status(), StatusCode::OK);
}

pub async fn handle_request(
    req: Request<Body>,
    url: String,
) -> Result<Response<viceroy_lib::body::Body>, Error> {
    // Load the wasm module into an execution context
    let mut ctx = ExecuteCtx::new("./../compute_service/bin/main.wasm")?
        .with_log_stderr(true)
        .with_log_stdout(true);

    let config_path = "./../compute_service/fastly.toml";

    let cfg = format!(
        r#"
# This file describes a Fastly Compute@Edge package. To learn more visit:
# https://developer.fastly.com/reference/fastly-toml/

language = "rust"
manifest_version = 2
name = "test"

[local_server]
    [local_server.dictionaries]
        [local_server.dictionaries.config]
            file = "fixtures/config.dict.json"
            format = "json"
    [local_server.backends]
        [local_server.backends.fastly_api]
            url = "{}"
        [local_server.backends.chained_vcl_service]
            url = "{}"
"#,
        url, url
    );

    let config = FastlyConfig::from_str(&cfg).unwrap();
    let backends = config.backends();
    let dictionaries = config.dictionaries();

    ctx = ctx
        .with_backends(backends.clone())
        .with_dictionaries(dictionaries.clone())
        .with_config_path(PathBuf::from(config_path));

    Ok(ctx
        .handle_request(req, "127.0.0.1".parse().unwrap())
        .await?)
}

And again, this works flawlessly with req.send.

StarpTech avatar Dec 02 '21 09:12 StarpTech

Hi @StarpTech, you're likely hitting this issue because #[tokio::test] uses a single-threaded runtime by default. Does it work if you customize the attribute to use a multi-threaded runtime as described here?
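
i.e. the stock Tokio attribute customization (test name taken from your snippet):

// Multi-threaded test runtime instead of the default current-thread one.
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn it_should_query_with_post() {
    // ... test body unchanged ...
}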

acfoltzer avatar Dec 02 '21 18:12 acfoltzer

Hi @acfoltzer, this doesn't work.

StarpTech avatar Dec 07 '21 17:12 StarpTech

@StarpTech thanks for giving the Tokio change a try!

So to recap, the send_async API seems to be working properly when Viceroy is used with the CLI as a harness, but something in your custom test harness setup -- using Viceroy as a library -- is preventing async requests from making independent progress.

Pending requests are spawned into their own Tokio task, so as @acfoltzer alluded to, this seems most likely to be an issue with the ambient Tokio runtime configured in your harness. It's difficult to say more, however, without being able to see more details about your harness setup. If you're able to produce some cut-down version of your harness that you could provide openly, I'd be happy to take a look!
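
If it's useful as a starting point, here's a rough sketch (purely illustrative, not taken from your harness) of building a multi-threaded runtime explicitly rather than relying on the test macro:

fn main() {
    // Build a multi-threaded runtime explicitly instead of relying on
    // #[tokio::test]'s current-thread default.
    let rt = tokio::runtime::Builder::new_multi_thread()
        .worker_threads(2)
        .enable_all()
        .build()
        .expect("failed to build Tokio runtime");

    rt.block_on(async {
        // Drive ExecuteCtx::handle_request (or a ViceroyService) from here,
        // so spawned pending-request tasks have worker threads to run on.
    });
}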

On a separate note, I'm also curious to hear more about the use-case for a custom test harness for your work. We're interested in deeper integration with Viceroy and e.g. the Rust unit test infrastructure, and if you're exploring that space it'd be great to hear what you're finding so far.

aturon avatar Dec 13 '21 22:12 aturon

Hi @aturon, it's been a while. Today, I had the same use case. My workaround is to use .send in local development and .send_async in production.
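
A sketch of that switch (the "local_dev" cargo feature here is a hypothetical flag of my own, not part of the fastly crate):

use fastly::{Error, Request};

// Hypothetical helper illustrating the workaround: block locally so the
// request definitely goes out; fire and forget in production.
fn fire_and_forget(req: Request, backend: &str) -> Result<(), Error> {
    if cfg!(feature = "local_dev") {
        // Local development / tests: wait for the response.
        let _ = req.send(backend)?;
    } else {
        // Production: don't block on the response.
        let _ = req.send_async(backend)?;
    }
    Ok(())
}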

After reading through the issue I saw that you write tests differently. Could you try to embed Viceroy as a library and call .handle_request in the test, as outlined in https://github.com/fastly/Viceroy/issues/97#issuecomment-984467425? We can also set up a call and I will show you my setup.

StarpTech avatar Apr 06 '22 20:04 StarpTech

> Could you also share your test? Is it necessary to wait? The docs say otherwise.

hello @StarpTech :wave: could you possibly try running your tests again with a call to wait in your Wasm program?

i think i may have been able to reproduce this problem this morning, but would like to confirm if you also see this request go through properly if you wait for the pending request to resolve before finishing.

cratelyn avatar Apr 11 '22 15:04 cratelyn

...and after a bit more digging, i believe i've reproduced your problem and produced a fix. the short summary is that after ExecuteCtx::handle_request finishes, these "dangling" async requests stop being polled to completion.

that would explain why this problem wasn't reproducible when running fastly compute serve like @aturon did above, but it does manifest in the test case you provided!

:tada: working test example

if we change this around to use ViceroyService instead, being sure to use a tokio runtime with two worker threads, i was able to see an origin request come in successfully!

use std::collections::HashMap;
use std::path::PathBuf;
use std::sync::Arc;
use viceroy_lib::config::Backend;

#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn it_should_query_with_post() {
    // Load the wasm module into an execution context
    let mut ctx = viceroy_lib::ExecuteCtx::new("./async-no-wait.wasm")
        .unwrap()
        .with_log_stderr(true)
        .with_log_stdout(true);

    let backend = Backend {
        uri: "http://127.0.0.1:8080".parse().unwrap(),
        override_host: None,
    };
    let mut backends = HashMap::new();
    backends.insert("backend1".to_owned(), Arc::new(backend));

    ctx = ctx
        .with_backends(backends.clone())
        .with_config_path(PathBuf::from("./fastly.toml"));

    // Run Viceroy as a real HTTP service on a background task.
    tokio::spawn(async move {
        viceroy_lib::ViceroyService::new(ctx)
            .serve("127.0.0.1:8081".parse().unwrap())
            .await
            .unwrap()
    });

    // Drive a request through the service like a real client would.
    let response = reqwest::get("http://127.0.0.1:8081").await.unwrap();

    assert_eq!(response.status(), http::StatusCode::OK);
}

this was done using the following Wasm program:

use fastly::{Request, Response};

fn main() {
    // Fire off the request asynchronously and never wait on the handle.
    let _pending = Request::get("http://www.example.com/")
        .with_header("it", "works")
        .send_async("backend1")
        .unwrap();
    Response::new().send_to_client();
}

:thinking: now what?

i was a little bit surprised when i was able to figure out the solution here, and i think this is something that our documentation could probably be updated to mention. a small warning could be helpful, and i am sorry about what a sharp edge this was!

if it is helpful background, we wrap that context object in the aforementioned ViceroyService to provide a tower::Service implementation for hyper. that's probably the best tool for what you're doing :slightly_smiling_face:

cratelyn avatar Apr 11 '22 16:04 cratelyn

Hi @cratelyn, I can't test it right now. I'm going to validate this tomorrow, but I want to thank you already. This looks very reasonable.

StarpTech avatar Apr 11 '22 17:04 StarpTech