can-merge
Generate mock data for tests from specific repositories
Closes #7.
Fetches the status of the latest PR for each of the repos specified in test/repos.json, determines the expected test results, and saves the responses to test/mocks.json. Adds a fetch-mocks script to fetch those responses for the first time; on subsequent runs, the evaluation subroutine can be tested against those responses.
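For reference, here's a minimal sketch of what such a fetch-mocks script could look like (run as an ES module on Node 18+). The shape of test/repos.json, the GITHUB_TOKEN variable, and the use of the GraphQL API are assumptions for illustration, not necessarily what this PR does:

```ts
// Hypothetical sketch only; file shapes and env vars are assumed, not taken from this PR.
import { readFile, writeFile } from "node:fs/promises";

// Assumed format of test/repos.json: ["owner/name", ...]
const repos: string[] = JSON.parse(await readFile("test/repos.json", "utf8"));

// Latest PR per repository, plus GitHub's combined check state for its head commit.
const query = `
  query ($owner: String!, $name: String!) {
    repository(owner: $owner, name: $name) {
      pullRequests(last: 1) {
        nodes {
          number
          commits(last: 1) {
            nodes { commit { statusCheckRollup { state } } }
          }
        }
      }
    }
  }`;

const mocks: Record<string, unknown> = {};
for (const repo of repos) {
  const [owner, name] = repo.split("/");
  const res = await fetch("https://api.github.com/graphql", {
    method: "POST",
    headers: {
      authorization: `bearer ${process.env.GITHUB_TOKEN}`,
      "content-type": "application/json",
    },
    body: JSON.stringify({ query, variables: { owner, name } }),
  });
  // Store the raw response body so tests can replay it without hitting the network.
  mocks[repo] = await res.json();
}

await writeFile("test/mocks.json", JSON.stringify(mocks, null, 2));
```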
I will be jumping into this as well, but there is one mock that's going to be hard to generate: the watch mocks. Should I do everything else and skip that one?
Sure, let’s start with that :-)
So currently, we fetch from the GitHub API and run the relevant tests on those responses:
- Get the latest pull request from 5 separate repositories.
- Compare the result of `evaluatePullRequest` to `statusCheckRollup.state` on those responses; if they're the same, `evaluatePullRequest` passes (a rough sketch of this check follows the list).
- Run the other tests, such as `filterPullRequest`, `parsePullRequest`, and `evaluateCommitStatus`, on those responses.
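To make that second step concrete, the check against the saved responses might look roughly like the following. The import path, the exact signature of `evaluatePullRequest`, and the mock shape (which matches the GraphQL query sketched earlier) are all assumptions:

```ts
import assert from "node:assert/strict";
import { readFile } from "node:fs/promises";
// Path and call signature assumed for illustration only.
import { evaluatePullRequest } from "../src/evaluate.js";

const mocks = JSON.parse(await readFile("test/mocks.json", "utf8"));

for (const [repo, response] of Object.entries<any>(mocks)) {
  const pr = response.data.repository.pullRequests.nodes[0];
  const expected = pr.commits.nodes[0].commit.statusCheckRollup.state;
  // Passes when our evaluation agrees with GitHub's own rollup state.
  assert.equal(evaluatePullRequest(pr), expected, `mismatch for ${repo}`);
}
```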
Some issues I have with this implementation:
- We don't have full control over the statuses. For example, we won't be able to test all possible pull request statuses, since only the latest PR is returned from each of the 5 repositories we have chosen.
- We are getting the response and then checking it with our own implementation, so essentially we don't know the actual status and only check it once it's returned. It would be better to know the actual status in advance (as you see it on github.com) and then run our tests against that, so we know whether our implementation is actually valid.
 
This is why I would like to propose that we set up a dummy repo with known statuses (covering as many as possible) that we can test against. This will also help validate our implementation, because we already know beforehand what the status is, instead of checking it only when the response is returned.
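For example (hypothetical PR numbers; the state names are GitHub's StatusState values), the dummy repo could carry a fixed expectations map committed next to the tests:

```ts
// Each PR in the dummy repo is created with a known outcome, so the expected
// value is fixed up front instead of being inferred from the API response.
type RollupState = "SUCCESS" | "FAILURE" | "PENDING" | "ERROR" | "EXPECTED";

const expectedByPullNumber: Record<number, RollupState> = {
  1: "SUCCESS", // all checks passing
  2: "FAILURE", // at least one failing check
  3: "PENDING", // checks still running
};
```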
I think a dummy repo makes perfect sense. Want to make it and set it up, and then later you can transfer it to me so it can live alongside this one?
Awesome, that sounds like a plan!
One thing I came across, which may be a non-issue, is that if the user doesn't have an internet connection they may not be able to fetch the mocks and run the tests.
Maybe we should leave the current mocks in as the default, and then if the user wants to fetch the latest they can do so? We could also have a script or workflow that periodically updates the mocks on GitHub Actions or something.
Or maybe it can just be left as is because the tool is reliant on having an internet connection?
My expectation is that the mocks should be committed to git, and updateable only manually by an explicit command; that way, no internet connection is required to run tests.
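In day-to-day use that could look something like the following (assuming fetch-mocks is exposed as an npm script and that the token variable from the earlier sketch is used; both are assumptions):

```sh
# Runs offline against the committed test/mocks.json.
npm test

# Explicit, manual refresh of the mocks (needs a network connection and, in the
# sketch above, a GITHUB_TOKEN).
npm run fetch-mocks
```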