[Question] (Quick's) Test cases run twice whenever stubbing happens
New Issue Checklist
- [x] I have tried with the latest version of OHHTTPStubs
- [x] I have read the README
- [x] I have read the "Using the right Swift Version of OHHTTPStubs for your project" section
- [x] I have searched in the existing issues
- [x] I have read the OHHTTPStubs wiki to see if there wasn't a detailed page talking about my issue
Environment
- version of OHHTTPStubs: 6.0.0
- integration method you are using:
- [x] Cocoapods
- [ ] Carthage
- [ ] submodule
- [ ] other
- version of the tool you use: 1.1.0
Issue Description
Test cases get executed twice whenever I use OHHTTPStubs to stub a response. I'm using Quick/Nimble together with OHHTTPStubs, which works fine in general, but whenever I stub a response the test case runs twice. I'm fairly sure the test cases run twice, because I see duplicate prints...
Complete output when you encounter the issue (if any)
For example, a test code snippet:
```swift
override func spec() {
    describe("something") {
        beforeEach {
            print("describe before")
            stub(condition: isHost("example.com") && isPath("/")) { _ in
                // Just a response for the sake of showing this example snippet
                let notConnected = NSError(domain: NSURLErrorDomain,
                                           code: Int(CFNetworkErrors.cfurlErrorNotConnectedToInternet.rawValue),
                                           userInfo: nil)
                return OHHTTPStubsResponse(error: notConnected)
            }
        }
        afterEach {
            print("describe after")
            OHHTTPStubs.removeAllStubs()
        }
        context("of something") {
            beforeEach {
                print("context before")
            }
            afterEach {
                print("context after")
            }
            it("should behave like something") {
                print("in the `IT`")
                var didFinish = false
                let provider = MoyaProvider<MyService>()
                provider.request(.someEndpoint) { _ in
                    // (Target elided in the original snippet.) This eventually
                    // reaches the `failure` branch of the network operation
                    // (because of the no-network error response)
                    didFinish = true
                }
                expect(didFinish).toEventually(beTrue())
            }
        }
    }
}
```
And the corresponding console output:

```
Test Case '-[CheckerTests something__of_something__should_behave_like_something]' started.
13:35:29.046 CheckerTests.spec():21 - describe before
13:35:29.046 CheckerTests.spec():21 - describe before
13:35:29.048 CheckerTests.spec():29 - context before
13:35:29.048 CheckerTests.spec():29 - context before
13:35:29.048 CheckerTests.spec():36 - in the `IT`
13:35:29.048 CheckerTests.spec():36 - in the `IT`
13:35:29.049 CheckerTests.spec():32 - context after
Test Case '-[CheckerTests something__of_something__should_behave_like_something]' passed (0.024 seconds).
```
Wow, very strange indeed. Thanks for taking the time to file an issue.
I don't see how calling stub could influence Quick/Nimble that way, especially since Quick drives the tests and then executes stub, not the other way around, so a potential bug in Quick caused by stub seems backwards… I have no idea what's happening there.
Just to be sure:
- did you try without the stub, to confirm that in that situation it only prints once?
- did you put breakpoints in the `beforeEach`, `afterEach`, and `it` closures?
Given that the prints for "describe before" (which I don't see in your code, btw) and "context before" appear twice BUT "context after" is only printed once, I'm wondering if it isn't a glitch between print and the console (e.g. I've seen such behavior in some projects when print isn't executed on the main thread)?
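One way to test that hypothesis, as a minimal sketch: tag every print with the current thread and a short unique token, so duplicated console output can be told apart from closures that genuinely run twice. (The `threadTag` helper below is hypothetical, not part of Quick, Nimble, or OHHTTPStubs.)

```swift
import Foundation

// Hypothetical diagnostic helper: prefixes a message with the current
// thread and a short random token. If a duplicated console line shows
// two different tokens, the closure really executed twice; if the
// tokens match, only the console output was duplicated.
func threadTag(_ message: String) -> String {
    let thread = Thread.isMainThread ? "main" : Thread.current.description
    return "[\(thread)] [\(UUID().uuidString.prefix(8))] \(message)"
}

// Usage inside the spec, e.g.:
// beforeEach { print(threadTag("describe before")) }
```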
Oh, I'm sorry. I pasted in a snippet and added the prints afterwards without updating the snippet; let me correct that.
Yeah, I'm not sure what to believe anymore. I made two new test classes; in one of them it works like it's supposed to, but in the other class I get the same problem again.
I did notice, though, that it might have something to do with how I stub. There is a lot of repeated code in the tests, so I made a helper class for stubbing. See the snippet below:
```swift
import Foundation
import OHHTTPStubs

/// A convenience helper to stub certain targets
struct StubsHelper {
    /// Convenience function to stub the network requests
    /// - Remark: For an explanation of some params check: https://github.com/AliSoftware/OHHTTPStubs
    static func stubHost(_ host: String,
                         statusCode: Int32 = 200,
                         headers: [AnyHashable: Any]? = nil,
                         timeBeforeSendingRequest: Double = 0,
                         stretchResponseOverTime: Double = 0,
                         simulateNetwork: SimulatedNetwork? = nil,
                         verbose: StubVerboseLevel = .none,
                         willStub: (() -> Void)? = nil) {
        print("stubbin once!")
        let condition = isHost(host)
        stub(condition: condition) { _ in
            willStub?()
            if verbose > .none {
                var msg = "stubbing \(host)"
                msg += " | statusCode: \(statusCode)"
                if verbose > .short {
                    if let aSimulatedNetwork = simulateNetwork {
                        msg += " | simulatedNetwork: \(aSimulatedNetwork.networkType)"
                    }
                    if timeBeforeSendingRequest > 0 {
                        msg += " | delayed by: \(timeBeforeSendingRequest) second(s)"
                    }
                    if stretchResponseOverTime > 0 {
                        msg += " | stretched over: \(stretchResponseOverTime) second(s)"
                    }
                }
                if verbose > .long {
                    if let someHeaders = headers {
                        msg += " | with headers: \(someHeaders)"
                    }
                }
                tLog.verbose(msg)
            }
            var response: OHHTTPStubsResponse
            do {
                // NB: Data() is empty here, so this always throws and falls
                // through to the catch branch below.
                let json = try JSONSerialization.jsonObject(with: Data(),
                                                            options: []) as? [AnyHashable: Any]
                var responseHeaders: [AnyHashable: Any] = ["Content-Type": "application/json"]
                if let someHeaders = headers {
                    for header in someHeaders {
                        responseHeaders[header.key] = header.value
                    }
                }
                response = OHHTTPStubsResponse(jsonObject: json ?? ["StubError": "No sampledata"],
                                               statusCode: statusCode,
                                               headers: responseHeaders)
            } catch {
                response = OHHTTPStubsResponse(data: Data(),
                                               statusCode: statusCode,
                                               headers: headers)
            }
            if let simulatedNetwork = simulateNetwork {
                guard simulatedNetwork != .noNetwork else {
                    let notConnected = NSError(domain: NSURLErrorDomain,
                                               code: Int(CFNetworkErrors.cfurlErrorNotConnectedToInternet.rawValue),
                                               userInfo: nil)
                    return OHHTTPStubsResponse(error: notConnected)
                }
                response.responseTime(simulatedNetwork.rawValue)
            } else {
                response.requestTime(timeBeforeSendingRequest,
                                     responseTime: stretchResponseOverTime)
            }
            return response
        }
    }
}
```
Now whenever I use `StubsHelper.stubHost(...)` in the `beforeEach` and remove the stubs again in the `afterEach` (with `OHHTTPStubs.removeAllStubs()`), I get the same problem (see the sketch below).
Sadly, I don't see what's wrong with this...
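For reference, a minimal sketch of how the helper is wired into the spec as described above (the host, `.noNetwork`, and the surrounding spec names are placeholders taken from the earlier snippets):

```swift
import Quick
import OHHTTPStubs

class CheckerSpec: QuickSpec {
    override func spec() {
        describe("something") {
            beforeEach {
                // Install the stub via the helper before each example
                StubsHelper.stubHost("example.com", simulateNetwork: .noNetwork)
            }
            afterEach {
                // Tear the stubs down again after each example
                OHHTTPStubs.removeAllStubs()
            }
            // ... examples as in the first snippet ...
        }
    }
}
```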
Do you have a (simplified) sample project reproducing the issue that you could share, so we can investigate further?
I'll try, but the project is littered with corporate-sensitive data. I'm going to check whether I can manage to create one.
Hey @geraldeersteling! Any news on that issue? Did you manage to understand what was going on, or maybe even solve it? Or otherwise come up with a sample project reproducing it? Thanks!
Hi @AliSoftware,
Sadly, no news yet 😞. To be honest, I haven't been able to look into it further, as the project is currently on hold (and I'm on another project right now). It will probably take a few months before the project continues; I'll set a reminder for myself to come back to this issue at that point.
Thanks for getting back to me though, much appreciated!