Run integration tests on Windows
Now that unit tests are passing on Windows, make sure integration tests pass on Windows. There's really no way to get to 1.0 without taking this step.
- [x] Verify current integration tests work on macOS:
- [x] flyio
- [x] platformsh (Note: this is the only platform that needed no updating.)
- [x] heroku (See #242)
- [x] Update integration tests to use Python, and remove all dependence on shell scripts.
- [x] fly_io
- [x] platform_sh
- [x] heroku
- [x] Remove all shell scripts.
- [x] Require pytest to be run with either a `unit_tests` or `integration_tests` argument.
- [ ] Convert build_dev_env.sh to Python, and move out of tests dir.
- [ ] Update integration tests to work on Windows.
- [ ] Update test organization.
- [ ] Move unit tests under a new tests/ dir.
- [ ] Move integration tests under tests/.
- [ ] Rename unit_tests -> e2e_tests.
- [ ] Update docs to reflect all of these changes.
- [ ] Make a new release, as this fixes some bugs, e.g. `ALLOWED_HOSTS` on Heroku.
- [ ] Update integration tests to pass on VMs with a minimal development environment.
- [ ] Move all non-critical open tasks to other issues, to prioritize pre-1.0 work.
- [x] Consider requiring that only one platform's integration tests be specified on each test run?
Working notes
- [ ] Update docs with the variety of ways you can call tests: `cd unit_tests && pytest`, `pytest unit_tests`, etc. Not a bare `pytest` from the root, nor `pytest unit_tests integration_tests`.
- [ ] Check that the target platform's CLI is installed before building sample project.
- [ ] Check that the user is logged in to that CLI.
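A minimal sketch of these two pre-flight checks. The auth-status command varies by platform, so the specific command passed in (e.g. `fly auth whoami`) is an assumption to verify per CLI:

```python
import shutil
import subprocess

def cli_installed(cli_name: str) -> bool:
    """Check that the platform's CLI (e.g. `fly` or `platform`) is on PATH."""
    return shutil.which(cli_name) is not None

def cli_logged_in(status_cmd: list[str]) -> bool:
    """Run an auth-status command, e.g. ['fly', 'auth', 'whoami'].
    A nonzero exit status is taken to mean the user is not logged in."""
    result = subprocess.run(status_cmd, capture_output=True)
    return result.returncode == 0
```

Both checks can then run at the top of the setup fixture, skipping the test run with a clear message if either fails.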
- [x] It's hard to use `input()` in pytest. Add flags for anything I was prompting for.
- [x] Probably want to suggest running with the `-s` flag, so you see the output of deployment runs.
- [ ] Maybe prompt for initial confirmation in root conftest.py?
  - This only works with the `-s` flag.
  - This may be okay, because it only really makes sense to run integration tests when you can see the output.
  - This may change with fully integrated CI, but I'm not sure that will ever be the case.
- [x] Write a `build_dev_env.sh` script, which will become a `--build-dev-env` flag. It creates a sample project in an active venv, ready for dev work. (This is really helpful!)
Update fly_io deployments
- [x] Integration test without automate-all fails. Complains that a fly app has not yet been created, even though one has.
- [x] Integration test with `--automate-all` fails. Complains about not recognizing a region.
#243
- [ ] Add a divider between user docs and dev docs in menu?
- [ ] Make correct assertion about DEBUG for local test.
- [ ] Refactor utils that are common between unit tests and integration tests?
- [ ] Support `--open-test-project-dir` in this set of tests as well.
- [ ] Add a note in docs that the pop-up browser window may show an error, even though the functionality tests then pass. It's hard to get the timing perfect without significantly slowing down tests. Consider refreshing the browser before destroying the test deployment.
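Rather than tuning fixed pauses, one alternative is to poll the deployed URL until it responds. This is only a sketch; the timeout and interval values are guesses:

```python
import time
import urllib.request
import urllib.error

def wait_for_deployment(url: str, timeout: int = 120, interval: int = 5) -> bool:
    """Poll the deployed site until it responds with 200, instead of
    guessing how long a fixed pause needs to be."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url) as response:
                if response.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # Not up yet; try again after a short wait.
        time.sleep(interval)
    return False
```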
Convert platform_sh test from shell to python
- [x] Probably need to activate the test environment? (Use python from venv directly.)
- [x] `platform push` runs manually in terminal, but is not running from the script: `subprocess.CalledProcessError: Command '['platform', 'url', '--yes']' returned non-zero exit status 1.`
- [x] Try `platform push --yes` from terminal.
- [ ] Maybe `subprocess.run(cmd_parts, capture_output=True)`?
- [ ] Does this work from a non-active venv?
- [x] Pause before pushing? (Yes, 30s seems to work.)
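The `subprocess.run` idea might look like this (a sketch; `run_cli` is a hypothetical helper name, and the pause length is a placeholder):

```python
import shlex
import subprocess
import time

def run_cli(cmd: str, pause: int = 0) -> subprocess.CompletedProcess:
    """Run a CLI command like `platform push --yes`, optionally pausing first.
    capture_output=True keeps stdout/stderr available for later assertions;
    check=True raises CalledProcessError on a nonzero exit status."""
    if pause:
        time.sleep(pause)
    return subprocess.run(shlex.split(cmd), capture_output=True, text=True, check=True)
```

Because `shlex.split()` builds the argument list, this avoids `shell=True` and runs the same way from an active or non-active venv, as long as the CLI itself is on PATH.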
- [x] Fix issue with testing remote project.
- [x] Fix issue with final test of local project (DEBUG).
- [x] If pauses are working, how short can they be? Does 10s work for each?
Refining platform_sh test
- [x] Test pypi deployment.
- [x] `msp.setup_project()` needs to check for `--pypi`.
- [x] Test poetry project.
- [x] Failed because simple_deploy not installed. Added this in platform_sh/deploy.py.
- [x] This should have been done in simple_deploy.py originally. No idea why it wasn't. Add simple_deploy unless using Pipenv.
- [x] Update unit tests affected by change. (Only for platform.sh, as integration tests are more authoritative.)
- [x] Make sure Pipenv does end up with simple_deploy in its requirements.
- [x] Re-run pipenv integration test.
- [x] Re-run poetry integration test.
- [x] Re-run req_txt integration test.
- [x] Test pipenv deployment.
- [x] Test `--automate-all`.
- [ ] Consider adding a check that `my_blog_project` does not already exist in the user's Platform.sh projects. (Not now.)
- [x] Refactor all pytest code for this test.
- [x] Re-run integration tests after refactoring.
- [x] Which `__init__.py` files do I really need?
pkg_manager
- [x] Handle different package managers better in testing setup scripts. I think they were used in the shell setup scripts, but not in the Python scripts?
- [x] Remove reset_test_project(); that needs to be integrated into setup_project().
- [ ] On Windows, will need some different commands in setup script.
- [x] poetry using pypi passes
- [x] poetry using local install passes
- [x] pipenv using local dsd install passes (Deployment works, but see note below about lock failing when running simple_deploy.)
- [x] pipenv using pypi dsd install passes. Working, but keep running into the certificate issue below. First attempt fails, and the retry happens after the functionality tests have started. (30s pause before pushing fixed this.)
```
Provisioning certificates
Validating 1 new domain
E: Error validating domain main-bvxea6i-riad2wf5ov7ek.us-3.platformsh.site: Couldn't complete challenge [HTTP01: The client lacks sufficient authorization]
Unable to validate domains main-bvxea6i-riad2wf5ov7ek.us-3.platformsh.site, will retry in the background.
(Next refresh will be at 2023-06-25 05:54:59.392156+00:00.)
```
- [ ] Review `simple_deploy.py/_add_simple_deploy_req()`.
- [x] Right now a failed deployment results in a passed test, because I have no assert statements. Is there an assert I can make using requests? Maybe have the final functionality tests return True or False? But that doesn't really work with this overall flow, i.e. not wanting to destroy resources, and not wanting to exit on a failed assert and leave resources up... Maybe on failure, show a confirmation about destroying? I think this could be done through a `yield` in the tmp project fixture, with the destroy confirmation after the yield. Or, consider generating my own summary of what worked and what didn't, rather than relying on the pass/fail nature of most tests?
- [ ] Refactor.
- [x] Refactor to support testing next platform.
- [ ] Consider making this a class, so all the helper functions don't need a bunch of parameters.
- [x] Consider failing the test if either remote or local functionality tests fail, and moving summary and destroy to fixture after yield?
- [x] Yield won't work easily, because I need platform-specific calls to destroy the project. Instead, tear down the project and then assert that remote and local functionality tests passed.
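That teardown-then-assert flow can be sketched like this (assumed names throughout; `destroy_deployment()` stands in for the platform-specific destroy call):

```python
import pytest

def destroy_deployment():
    """Stand-in for a platform-specific call such as `fly apps destroy`."""
    print("Destroying test deployment...")

def teardown_and_check(results: dict):
    """Destroy resources first, then assert, so a failed functionality
    test never leaves a live deployment running."""
    destroy_deployment()
    assert results["remote_pass"], "Remote functionality tests failed."
    assert results["local_pass"], "Local functionality tests failed."

@pytest.fixture
def tmp_project():
    results = {"remote_pass": False, "local_pass": False}
    yield results  # the test records pass/fail here
    teardown_and_check(results)
```

Since the asserts run after the destroy call, a failure is still reported, but the remote resources are already gone.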
Pipenv fails to generate lock file when calling simple_deploy?
From an integration test of local dsd install:
```
--- Your project is now configured for deployment on Platform.sh. ---
To deploy your project, you will need to:
- Commit the changes made in the configuration process.
    $ git status
    $ git add .
    $ git commit -am "Configured project for deployment."
- Push your project to Platform.sh' servers:
    $ platform push
- Open your project:
    $ platform url
- As you develop your project further:
    - Make local changes
    - Commit your local changes
    - Run `platform push`
- You can find a full record of this configuration in the simple_deploy_logs directory.

Courtesy Notice: Pipenv found itself running within a virtual environment, so it will automatically use that environment, instead of creating its own for any project. You can set PIPENV_IGNORE_VIRTUALENVS=1 to force pipenv to ignore that environment and create its own instead. You can set PIPENV_VERBOSITY=-1 to suppress this warning.
Locking [packages] dependencies...
Building requirements...
Resolving dependencies...
✘ Locking Failed!
⠼ Locking...
ERROR:pip.subprocessor:[present-rich] Getting requirements to build wheel exited with 1
[ResolutionFailure]:   File "/private/var/folders/gk/y2n2jsfj23g864pdr38rv4ch0000gn/T/pytest-of-eric/pytest-316/blog_project0/b_env/lib/python3.11/site-packages/pipenv/resolver.py", line 811, in _main
[ResolutionFailure]:       resolve_packages(
[ResolutionFailure]:   File "/private/var/folders/gk/y2n2jsfj23g864pdr38rv4ch0000gn/T/pytest-of-eric/pytest-316/blog_project0/b_env/lib/python3.11/site-packages/pipenv/resolver.py", line 759, in resolve_packages
[ResolutionFailure]:       results, resolver = resolve(
[ResolutionFailure]:                           ^^^^^^^^
[ResolutionFailure]:   File "/private/var/folders/gk/y2n2jsfj23g864pdr38rv4ch0000gn/T/pytest-of-eric/pytest-316/blog_project0/b_env/lib/python3.11/site-packages/pipenv/resolver.py", line 738, in resolve
[ResolutionFailure]:       return resolve_deps(
[ResolutionFailure]:              ^^^^^^^^^^^^^
[ResolutionFailure]:   File "/private/var/folders/gk/y2n2jsfj23g864pdr38rv4ch0000gn/T/pytest-of-eric/pytest-316/blog_project0/b_env/lib/python3.11/site-packages/pipenv/utils/resolver.py", line 1165, in resolve_deps
[ResolutionFailure]:       results, hashes, markers_lookup, resolver, skipped = actually_resolve_deps(
[ResolutionFailure]:                                                            ^^^^^^^^^^^^^^^^^^^^^^
[ResolutionFailure]:   File "/private/var/folders/gk/y2n2jsfj23g864pdr38rv4ch0000gn/T/pytest-of-eric/pytest-316/blog_project0/b_env/lib/python3.11/site-packages/pipenv/utils/resolver.py", line 964, in actually_resolve_deps
[ResolutionFailure]:       resolver.resolve()
[ResolutionFailure]:   File "/private/var/folders/gk/y2n2jsfj23g864pdr38rv4ch0000gn/T/pytest-of-eric/pytest-316/blog_project0/b_env/lib/python3.11/site-packages/pipenv/utils/resolver.py", line 701, in resolve
[ResolutionFailure]:       raise ResolutionFailure(message=str(e))
[pipenv.exceptions.ResolutionFailure]: Warning: Your dependencies could not be resolved. You likely have a mismatch in your sub-dependencies.
You can use $ pipenv run pip install <requirement_name> to bypass this mechanism, then run $ pipenv graph to inspect the versions actually installed in the virtualenv.
Hint: try $ pipenv lock --pre if it is a pre-release dependency.
ERROR: Getting requirements to build wheel exited with 1
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/private/var/folders/gk/y2n2jsfj23g864pdr38rv4ch0000gn/T/pytest-of-eric/pytest-316/blog_project0/b_env/lib/python3.11/site-packages/pipenv/__main__.py", line 4, in <module>
    cli()
  File "/private/var/folders/gk/y2n2jsfj23g864pdr38rv4ch0000gn/T/pytest-of-eric/pytest-316/blog_project0/b_env/lib/python3.11/site-packages/pipenv/vendor/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/private/var/folders/gk/y2n2jsfj23g864pdr38rv4ch0000gn/T/pytest-of-eric/pytest-316/blog_project0/b_env/lib/python3.11/site-packages/pipenv/cli/options.py", line 58, in main
    return super().main(*args, **kwargs, windows_expand_args=False)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/private/var/folders/gk/y2n2jsfj23g864pdr38rv4ch0000gn/T/pytest-of-eric/pytest-316/blog_project0/b_env/lib/python3.11/site-packages/pipenv/vendor/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/private/var/folders/gk/y2n2jsfj23g864pdr38rv4ch0000gn/T/pytest-of-eric/pytest-316/blog_project0/b_env/lib/python3.11/site-packages/pipenv/vendor/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/private/var/folders/gk/y2n2jsfj23g864pdr38rv4ch0000gn/T/pytest-of-eric/pytest-316/blog_project0/b_env/lib/python3.11/site-packages/pipenv/vendor/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/private/var/folders/gk/y2n2jsfj23g864pdr38rv4ch0000gn/T/pytest-of-eric/pytest-316/blog_project0/b_env/lib/python3.11/site-packages/pipenv/vendor/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/private/var/folders/gk/y2n2jsfj23g864pdr38rv4ch0000gn/T/pytest-of-eric/pytest-316/blog_project0/b_env/lib/python3.11/site-packages/pipenv/vendor/click/decorators.py", line 84, in new_func
    return ctx.invoke(f, obj, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/private/var/folders/gk/y2n2jsfj23g864pdr38rv4ch0000gn/T/pytest-of-eric/pytest-316/blog_project0/b_env/lib/python3.11/site-packages/pipenv/vendor/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/private/var/folders/gk/y2n2jsfj23g864pdr38rv4ch0000gn/T/pytest-of-eric/pytest-316/blog_project0/b_env/lib/python3.11/site-packages/pipenv/vendor/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/private/var/folders/gk/y2n2jsfj23g864pdr38rv4ch0000gn/T/pytest-of-eric/pytest-316/blog_project0/b_env/lib/python3.11/site-packages/pipenv/cli/command.py", line 369, in lock
    do_lock(
  File "/private/var/folders/gk/y2n2jsfj23g864pdr38rv4ch0000gn/T/pytest-of-eric/pytest-316/blog_project0/b_env/lib/python3.11/site-packages/pipenv/routines/lock.py", line 79, in do_lock
    venv_resolve_deps(
  File "/private/var/folders/gk/y2n2jsfj23g864pdr38rv4ch0000gn/T/pytest-of-eric/pytest-316/blog_project0/b_env/lib/python3.11/site-packages/pipenv/utils/resolver.py", line 1107, in venv_resolve_deps
    c = resolve(cmd, st, project=project)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/private/var/folders/gk/y2n2jsfj23g864pdr38rv4ch0000gn/T/pytest-of-eric/pytest-316/blog_project0/b_env/lib/python3.11/site-packages/pipenv/utils/resolver.py", line 1001, in resolve
    raise RuntimeError("Failed to lock Pipfile.lock!")
RuntimeError: Failed to lock Pipfile.lock!

Committing changes...
[main 95dd4d9] Configured for deployment.
 5 files changed, 115 insertions(+), 1 deletion(-)
```
I believe this was related to psycopg2. That package requires `pg_config` to be on PATH; if it's not, Pipenv can't resolve the psycopg2 dependencies.
Fix: Install Postgres, and make sure `pg_config` is on PATH.
- [ ] Consider setting a marker that will only install psycopg2 to the remote environment.
- [ ] How can we fend off issues with locking not working on end user systems without Postgres?
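One possible shape for that marker, as a hypothetical Pipfile fragment. Pipenv may still evaluate markers at lock time, so whether this actually avoids the local `pg_config` requirement needs verification:

```toml
[packages]
Django = "*"
# Only resolve psycopg2 on the Linux deployment target, so local lock runs
# on systems without Postgres don't require pg_config. (Hypothetical; verify
# Pipenv's lock-time behavior before relying on this.)
psycopg2 = {version = "*", markers = "sys_platform == 'linux'"}
```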
- [ ] Consider failing the test if "CommandError" appears in `simple_deploy` output.
- [ ] Do this more carefully later, in a way that still allows the remote resources that were created to be easily destroyed. Look at exactly where to make this assertion, and whether there's a way to get the project id/name before leaving the test function.
Update Fly.io tests
- [x] Update main test file to run Fly.io deployment.
- [x] Need to get `app_name` and `project_url`.
- [x] grep `integration_tests/` and make sure `flyctl` is not used anywhere; Fly.io is standardizing on `fly` over `flyctl`.
- [ ] Make sure `fly info` works if the user has more than one project, or specify the name to be sure we're using the right project. (This is a Fly-specific task, not critical to this issue.)
- [x] req_txt
- [x] poetry
- [x] pipenv
- [x] `--automate-all`
- [ ] `--pypi` (Getting this to pass may require a new release.)
- [x] Carry over any improvements from this file to Platform.sh tests.
- [x] Better names in helper functions.
- [x] Change e.g. `flyio_utils` -> `platform_utils`, so that name is used across all platform-specific integration tests.
- [x] Move `commit_configuration_changes()` to platform-agnostic utils.
- [x] It does seem worthwhile to support destroying deployed resources on test failures. It's annoying to have to manually destroy resources during a series of failures. To test this, break the regex that recognizes the app name; it fails quickly and in the right way.
Update Fly.io deployments
- [ ] Testing against PyPI fails to recognize the app name. Fix the regex in deploy.py?
- [x] In deploy.py, `flyctl` -> `fly`.
- [ ] Write a better regex for getting the app name.
Destroy project in yield section of tmp project fixture
This will allow destruction of the project even if there is a breaking failure.
- [x] Cache the app name, project id, or any other information needed for destruction in the pytest cache.
- [x] Cache the platform name as well.
- [x] In yield, pull from cache and call appropriate destroy utility.
- [x] Works for flyio; update platform_sh to match this workflow.
- [x] Remove diagnostics.
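The caching steps above can be sketched with pytest's built-in `config.cache` (the `simple_deploy/` key prefix and helper names are assumptions):

```python
# Store enough info at deploy time to destroy the project in the fixture's
# yield section, even after a breaking failure mid-test.
def cache_deployment(request, platform: str, app_name: str):
    request.config.cache.set("simple_deploy/platform", platform)
    request.config.cache.set("simple_deploy/app_name", app_name)

def cached_deployment(request):
    """Pull the cached values back out; defaults to None if never set."""
    platform = request.config.cache.get("simple_deploy/platform", None)
    app_name = request.config.cache.get("simple_deploy/app_name", None)
    return platform, app_name
```

After the yield, `cached_deployment()` supplies whatever the platform-specific destroy utility needs, regardless of how the test itself ended.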
Update Heroku testing
- [ ] Cache app_name.
- [ ] req_txt
- [ ] poetry
- [ ] pipenv
- [ ] `--automate-all`
Note: This took about 20 minutes after all the work on the other two platforms.