
Uitest

backtrack-5 opened this issue 6 years ago • 21 comments

Added UI testing using Selenium and Python.

backtrack-5 avatar Mar 29 '18 10:03 backtrack-5

Do you know why all the tests failed on your PR? The last time the ui branch was tested it was still passing; it seems odd that everything would fail all of a sudden.

  • Can you outline more clearly what kinds of programs or libraries are expected to be installed on the machine? It says to install the requirements, but I see no reference to Selenium in your changes.
  • What kind of state should the cluster be in to run these tests (do all apps need to be running or just the UI + Rest service)?
  • Can you make sure that when run_docker_tests.sh is called, your Selenium tests only run during the online phase?

madisonb avatar Mar 30 '18 15:03 madisonb

Hi @madisonb,

My bad, I forgot to include the requirements file. I have added it now.

Can you outline more clearly what kinds of programs or libraries are expected to be installed on the machine? It says to install the requirements, but I see no reference to Selenium in your changes.

We need to install

  1. selenium-3.11.0
  2. nose-1.3.7
  3. ChromeDriver (the Chromium driver; it needs to be on the PATH)

What kind of state should the cluster be in to run these tests (do all apps need to be running or just the UI + Rest service)?

If we run only the UI + Rest services, all tests pass except two tests in indexpage_test.py: test_scrapy_cluster_panel and test_submit_with_info. If we run the entire cluster, all the tests pass.
Since these are UI tests, I have concentrated only on the UI parts.

Can you make sure that when run_docker_tests.sh is called, your Selenium tests only run during the online phase?

I have added ui_test.sh to execute all the test scripts.

NOTE: in the settings file we need to set the UI URL.
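As an illustration, here is a minimal sketch of how such a setting might be defined and used; the variable name UI_URL, the example URL, and the title check are assumptions for illustration, not necessarily what the PR uses:

# settings.py (hypothetical example)
UI_URL = 'http://localhost:5000'  # base URL of the running UI service

# in a Selenium-based test module
from selenium import webdriver

import settings

driver = webdriver.Chrome()        # requires chromedriver on the PATH
driver.get(settings.UI_URL)        # open the UI under test
print(driver.title)                # inspect the page title, for example
driver.quit()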

Please let me know if you need any other details.

Thanks,

backtrack-5 avatar Apr 02 '18 11:04 backtrack-5

@sornalingam could you change the location/structure of your tests to match the other scrapy-cluster components? So crawlerpage_test.py, indexpage_test.py, kafkamonitor_test.py and redismonitor_test.py would all need to be added to online.py in the ui/tests directory. That way, when run_online_tests.sh is executed, your new tests will also be run. You can move utils.py and settings.py to that same directory, and there won't be a need for ui_test.sh.

damienkilgannon avatar Apr 02 '18 14:04 damienkilgannon

@damienkilgannon, thanks for the feedback. I have made the changes as you mentioned and committed the code.

  1. I have added a TestSuite in online.py to execute all the tests (a rough sketch of this is below).
  2. Merged the requirements.txt file.
  3. Removed unwanted files and directories.
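A rough sketch of how such a suite could be assembled in online.py, assuming the test classes are named as they are referenced later in this thread (IndexTest, CrawlerPageTest, KafkaPageTest, RedisPageTest) and live in the modules listed above; the exact module-to-class mapping is an assumption:

import unittest

from indexpage_test import IndexTest
from crawlerpage_test import CrawlerPageTest
from kafkamonitor_test import KafkaPageTest
from redismonitor_test import RedisPageTest

def build_ui_suite():
    # gather all Selenium UI test cases into a single suite
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    for case in (IndexTest, CrawlerPageTest, KafkaPageTest, RedisPageTest):
        suite.addTests(loader.loadTestsFromTestCase(case))
    return suite

if __name__ == '__main__':
    unittest.TextTestRunner(verbosity=2).run(build_ui_suite())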

Please have a look at it and let me know if anything needs to be updated.

Thanks,

backtrack-5 avatar Apr 03 '18 04:04 backtrack-5

@sornalingam so when I run the following steps, the new tests you added don't get called.

docker-compose up --build -d
docker exec -it scrapycluster_ui_1 bash
./run_docker_tests.sh

This is the pattern all the other components use, so it would be nice to follow it. To make it work, you need to merge your new test classes IndexTest, CrawlerPageTest, KafkaPageTest and RedisPageTest into the main TestAdminUIService test case.
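A minimal sketch of what that merge could look like, with a shared browser created in setUpClass; the method bodies and assertions are placeholders rather than the PR's actual checks, and UI_URL is the hypothetical setting name used earlier:

import unittest

import settings
from utils import get_webdriver  # helper that returns a configured Selenium driver

class TestAdminUIService(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # one shared browser session for all UI tests
        cls.driver = get_webdriver()

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()

    def test_scrapy_cluster_panel(self):
        self.driver.get(settings.UI_URL)
        self.assertIn('Scrapy Cluster', self.driver.page_source)  # placeholder assertion

    def test_scrapy_cluster_crawlerpage_panel(self):
        self.driver.get(settings.UI_URL + '/crawler')  # endpoint seen in the test logs below
        self.assertIn('crawler', self.driver.page_source.lower())  # placeholder assertion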

damienkilgannon avatar Apr 03 '18 13:04 damienkilgannon

Hi @damienkilgannon, I have made the changes as you mentioned and committed the code for your review.

My understanding, from looking into the docker-compose.yml file, is that docker-compose is pulling images from Docker Hub.

docker-compose.yml is pointing to the istresearch/scrapy-cluster:ui-dev image. I think that is the reason the new tests are not executed.

I believe that even with the latest changes, executing the following will not run the new tests.

docker-compose up --build -d
docker exec -it scrapycluster_ui_1 bash
./run_docker_tests.sh

Hi @madisonb, @damienkilgannon - please correct me if I am wrong.

Thanks,

backtrack-5 avatar Apr 04 '18 07:04 backtrack-5

@sornalingam sorry, you're dead right ... so I built the image manually from the project root using docker build -t istresearch/scrapy-cluster:ui-dev -f docker/ui/Dockerfile . and then ran the commands again.

docker-compose up -d
docker exec -it scrapycluster_ui_1 bash
./run_docker_tests.sh

The tests fail with the following output ...

root@8f2ed0ac2b32:/usr/src/app# ./run_docker_tests.sh
test_close (tests.test_ui_service.TestAdminUIService) ... ok
test_index (tests.test_ui_service.TestAdminUIService) ... ok

----------------------------------------------------------------------
Ran 2 tests in 0.274s

OK
called
ERROR

======================================================================
ERROR: setUpClass (__main__.TestAdminUIService)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "tests/online.py", line 33, in setUpClass
    cls.driver = get_webdriver()
  File "/usr/src/app/tests/utils.py", line 16, in get_webdriver
    return webdriver.Chrome()
  File "/usr/local/lib/python2.7/site-packages/selenium/webdriver/chrome/webdriver.py", line 68, in __init__
    self.service.start()
  File "/usr/local/lib/python2.7/site-packages/selenium/webdriver/common/service.py", line 83, in start
    os.path.basename(self.path), self.start_error_message)
WebDriverException: Message: 'chromedriver' executable needs to be in PATH. Please see https://sites.google.com/a/chromium.org/chromedriver/home


----------------------------------------------------------------------
Ran 0 tests in 0.126s

FAILED (errors=1)
integration tests failed

damienkilgannon avatar Apr 04 '18 10:04 damienkilgannon

Hi @damienkilgannon, yes, that is expected, because there is no step that installs Chrome and the Chromium driver in the Docker container.

I have committed the latest changes, and they should solve the issue. For me it runs fine and passes all tests.
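For reference (and not necessarily how this PR solves it), one common way to drive Chrome with Selenium inside a container is to run it headless and point the driver at the installed binaries. A sketch of what a get_webdriver() along those lines could look like; the headless flags and the binary/driver paths are assumptions that depend on how Chrome/Chromium is installed in the image:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def get_webdriver():
    # the container has no display, so run Chrome headless
    options = Options()
    options.add_argument('--headless')
    options.add_argument('--no-sandbox')             # often needed when running as root in Docker
    options.add_argument('--disable-dev-shm-usage')  # avoid the small /dev/shm inside containers
    options.binary_location = '/usr/bin/chromium'    # assumed browser location
    return webdriver.Chrome(executable_path='/usr/bin/chromedriver',  # assumed driver location
                            chrome_options=options)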

[appadmin@eon-app-dev04 scrapy-cluster]$ sudo docker exec -it scrapycluster_ui_1 bash
root@479f657928f2:/usr/src/app# ./run_docker_tests.sh
test_close (tests.test_ui_service.TestAdminUIService) ... ok
test_index (tests.test_ui_service.TestAdminUIService) ... ok

----------------------------------------------------------------------
Ran 2 tests in 0.359s

OK
called
2018-04-04 11:03:10,355 [ui_service] INFO: Running main flask method on port 52976
 * Running on http://0.0.0.0:52976/ (Press CTRL+C to quit)
test_scrapy_cluster_crawlerpage_panel (__main__.TestAdminUIService) ... 2018-04-04 11:03:20,508 [ui_service] INFO: 'index' endpoint called
127.0.0.1 - - [04/Apr/2018 11:03:27] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [04/Apr/2018 11:03:27] "GET /static/css/custom.css HTTP/1.1" 200 -
127.0.0.1 - - [04/Apr/2018 11:03:27] "GET /static/img/rsz_11logo.png HTTP/1.1" 200 -
2018-04-04 11:03:28,194 [ui_service] INFO: 'crawler' endpoint called
127.0.0.1 - - [04/Apr/2018 11:03:28] "GET /crawler HTTP/1.1" 200 -
ok
test_scrapy_cluster_kafkapage_panel (__main__.TestAdminUIService) ... 2018-04-04 11:03:28,526 [ui_service] INFO: 'index' endpoint called
127.0.0.1 - - [04/Apr/2018 11:03:33] "GET / HTTP/1.1" 200 -
2018-04-04 11:03:33,305 [ui_service] INFO: 'kafka' endpoint called
127.0.0.1 - - [04/Apr/2018 11:03:33] "GET /kafka HTTP/1.1" 200 -
ok
test_scrapy_cluster_panel (__main__.TestAdminUIService) ... 2018-04-04 11:03:33,440 [ui_service] INFO: 'index' endpoint called
127.0.0.1 - - [04/Apr/2018 11:03:34] "GET / HTTP/1.1" 200 -
ok
test_scrapy_cluster_redispage_panel (__main__.TestAdminUIService) ... 2018-04-04 11:03:34,317 [ui_service] INFO: 'index' endpoint called
127.0.0.1 - - [04/Apr/2018 11:03:34] "GET / HTTP/1.1" 200 -
2018-04-04 11:03:34,417 [ui_service] INFO: 'redis' endpoint called
127.0.0.1 - - [04/Apr/2018 11:03:34] "GET /redis HTTP/1.1" 200 -
ok
test_status (__main__.TestAdminUIService) ... 2018-04-04 11:03:34,609 [ui_service] INFO: 'index' endpoint called
127.0.0.1 - - [04/Apr/2018 11:03:34] "GET / HTTP/1.1" 200 -
ok
test_submit_with_info (__main__.TestAdminUIService) ... 2018-04-04 11:03:34,617 [ui_service] INFO: 'index' endpoint called
127.0.0.1 - - [04/Apr/2018 11:03:34] "GET / HTTP/1.1" 200 -
2018-04-04 11:03:35,073 [ui_service] INFO: 'submit' endpoint called
127.0.0.1 - - [04/Apr/2018 11:03:35] "POST /submit HTTP/1.1" 302 -
2018-04-04 11:03:35,089 [ui_service] INFO: 'index' endpoint called
127.0.0.1 - - [04/Apr/2018 11:03:35] "GET / HTTP/1.1" 200 -
ok
test_submit_without_info (__main__.TestAdminUIService) ... 2018-04-04 11:03:35,164 [ui_service] INFO: 'index' endpoint called
127.0.0.1 - - [04/Apr/2018 11:03:35] "GET / HTTP/1.1" 200 -
2018-04-04 11:03:35,264 [ui_service] INFO: 'submit' endpoint called
127.0.0.1 - - [04/Apr/2018 11:03:35] "POST /submit HTTP/1.1" 302 -
2018-04-04 11:03:35,268 [ui_service] INFO: 'index' endpoint called
127.0.0.1 - - [04/Apr/2018 11:03:35] "GET / HTTP/1.1" 200 -
ok
2018-04-04 11:03:35,331 [ui_service] INFO: Trying to close UI Service
2018-04-04 11:03:36,299 [ui_service] INFO: Closed UI Service

----------------------------------------------------------------------
Ran 7 tests in 29.096s

OK

Can you please check the latest commit and let me know?

  1. Pull the latest code
  2. docker build -t istresearch/scrapy-cluster:ui-dev -f docker/ui/Dockerfile .

and

docker-compose up -d
docker exec -it scrapycluster_ui_1 bash
./run_docker_tests.sh

Thanks,

backtrack-5 avatar Apr 04 '18 11:04 backtrack-5

@sornalingam I can confirm the tests are being executed and passing with the latest changes. 👍 Now the Travis CI build is failing, as Madison pointed out; I haven't had a chance to look into the reason just yet, but I will get back to you on that if it's related to the new changes you added.

damienkilgannon avatar Apr 04 '18 12:04 damienkilgannon

I reran the tests a couple of days ago; most likely something within this branch isn't up to date with what is on dev, as the rerun failed for the same reason.

madisonb avatar Apr 04 '18 13:04 madisonb

Hi @madisonb @damienkilgannon, I have noticed the same behavior: the Travis CI build is failing. When I looked into the error in the Travis CI build, it shows an error in TestKafkaMonitor, which I did not change at all. I suspect that my branch is not up to date; I will check out dev and verify it.

Thanks for the confirmation, @damienkilgannon.

backtrack-5 avatar Apr 04 '18 16:04 backtrack-5

Hi @madisonb @damienkilgannon, I tried rebasing onto dev and committed my branch, but Travis CI is still failing. Not sure what the issue is :(.

backtrack-5 avatar Apr 05 '18 09:04 backtrack-5

Hi @damienkilgannon, @madisonb - I have pulled the ui branch and added my test case. I am wondering if I should do a simple commit to check whether the ui branch passes in Travis CI.

Please let me know if that works.

backtrack-5 avatar Apr 09 '18 05:04 backtrack-5

Hi @damienkilgannon, did you get a chance to look into it?

backtrack-5 avatar Apr 18 '18 09:04 backtrack-5

@sornalingam ... I believe @madisonb found an issue, #175, which is likely related.

damienkilgannon avatar Apr 18 '18 09:04 damienkilgannon

Hi @damienkilgannon, thanks for the update. I will wait for @madisonb's response.

backtrack-5 avatar Apr 18 '18 09:04 backtrack-5

@sornalingam so #175 is closed and merged into dev ... could you try merging that back into your branch first and see whether that resolves the failing build?

damienkilgannon avatar Apr 18 '18 10:04 damienkilgannon

Coverage Status

Coverage increased (+4.2%) to 65.96% when pulling 730737807e47326fece6d13a574a28be4d6e8175 on sornalingam:uitest into 5a2bf12229b014ce57b1dcbf40cfdd2df690079b on istresearch:ui.

coveralls avatar Apr 18 '18 12:04 coveralls

Hi @damienkilgannon @madisonb ,

I found the problem: installing the Chrome dependency is causing the issue in the Travis CI build. Refer to the latest build, build #652.

I have added the Chrome dependency installation steps in the Docker setup and Dockerfile.py3, and it is working fine.

I am not sure how to add it to Dockerfile.py2alpine, or to the centos and ubuntu environments from the .travis.yml file.

I will look into it and fix those issues. In the meantime, if you know the solution, please leave a comment.

Thanks,

backtrack-5 avatar Apr 18 '18 14:04 backtrack-5

I don't have a further comment on how to fix the issue, other than that we should have all tests passing in all environments before we merge stuff in.

Luckily, everything is container based, and you should be able to work things out given that you already have it working in Py3 land.

For Ansible, you would add your installation commands here. You can view the testing containers here.

For Alpine, you probably just need to work within the Alpine environment and install some extra dependencies in order to get Chrome to work. Think of it as a slim flavor of Linux that needs extra packages installed to get things going, but makes for super small container images.

madisonb avatar Apr 18 '18 18:04 madisonb