onc-certification-g10-test-kit
Inferno testing session conflicts
We have an Inferno cluster deployed on our own infrastructure and have been hitting it pretty hard over the last few weeks. We've noticed that if two different users are testing against the same FHIR base URL, nothing stops Inferno from mismatching the users. Say users A and B start the Inferno EHR launch tests at roughly the same time; Inferno is then waiting for both EHR launch redirects. Whichever user happens to finish their launch workflow first gets matched to whichever session started the EHR tests first, and the second to the second. There is no guarantee that users get matched to the session they kicked off.
Can Inferno set some session state in the browser to uniquely identify the Inferno launch users and prevent this?
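To make the race concrete, here's a rough sketch of the behavior we're seeing (this is not Inferno's actual code; the registry, method names, and session IDs are all hypothetical, just to illustrate what matching on iss alone looks like):

```ruby
# Hypothetical registry of test runs waiting on an EHR launch, keyed by iss.
pending_launches = Hash.new { |h, k| h[k] = [] }

def start_ehr_launch_wait(pending_launches, iss:, session_id:)
  # Each waiting test run is queued under the iss it expects.
  pending_launches[iss] << session_id
end

def handle_launch_redirect(pending_launches, iss:)
  # The redirect only carries the iss, so there is no way to tell which
  # user it belongs to: the oldest waiting session just gets resumed.
  pending_launches[iss].shift
end

iss = "https://ehr.example.com/fhir"
start_ehr_launch_wait(pending_launches, iss: iss, session_id: "user-A")
start_ehr_launch_wait(pending_launches, iss: iss, session_id: "user-B")

# User B finishes the launch workflow in the EHR first, but the redirect is
# attributed to user A's session because A started waiting first.
handle_launch_redirect(pending_launches, iss: iss)  # => "user-A"
```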
The complicating factor here is that there is no expectation that the browser which initiated the test run is the same browser that performs the launch from the EHR. The only piece of identifying information that we can count on from the EHR is the iss.
Oh, that's interesting. Whenever I run the tests I usually go into a new tab to do the EHR launch, but it's always in the same browser session.
> The only piece of identifying information that we can count on from the EHR is the iss.
We couldn't count on it, but we could use session data or optional URL params as a hint if we had it, right?
I don't think URL params are an option. Sticking something in a cookie could be, though.
The cookie approach is what I was imagining. Maybe it could check for the cookie to be set and, if not, fall back on the iss matching. That would be better for our use cases and shouldn't break any existing usages.
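For illustration, here's a rough sketch of what I mean, reusing the hypothetical pending-launch registry from the issue description (the cookie name and matching helper are made up, not an existing Inferno API):

```ruby
# Hypothetical cookie set in the browser when a test run starts its wait.
INFERNO_SESSION_COOKIE = "inferno_test_session"

def match_launch_to_session(pending_launches, iss:, cookies:)
  queue = pending_launches[iss]
  hinted = cookies[INFERNO_SESSION_COOKIE]

  if hinted && queue.include?(hinted)
    # Cookie is present and names a waiting session: match it exactly.
    queue.delete(hinted)
  else
    # No cookie (e.g. the launch came from a different browser), or it
    # doesn't match anything waiting: fall back on today's iss matching.
    queue.shift
  end
end
```

Launches performed from a browser that never visited Inferno wouldn't send the cookie, so they'd get exactly the current behavior, which is why this shouldn't break any existing usages.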
Hello @nathanloyer,
Do you still have the same issue? If not, I will close this issue.
Thank you.
I haven't specifically tried this recently, but I expect it is still an issue if y'all haven't done anything specifically to resolve it. We haven't been hitting our server as hard since we got certified, but we still execute the tests daily and we still see flakiness around it. I'm not sure whether the issues we still have would be resolved by a fix for this, or whether they come from our UI automation code not being written as well as it could be.
We have not done anything to address this issue, so it should remain open.