
Exploratory tester

Open dialex opened this issue 7 years ago • 5 comments

  • https://dojo.ministryoftesting.com/dojo/lessons/three-digestible-diagrams-to-describe-exploratory-testing
  • structure/template https://docs.google.com/document/d/1rKYmujVhUlNgfeYIBot12Z8E7S0Y_Z4pk5pefK7xO3g/edit
  • Session based https://instil.co/2016/01/07/exploring-session-based-testing/
  • http://www.satisfice.com/articles/what_is_et.shtml
  • http://www.satisfice.com/articles/et-article.pdf
  • https://dojo.ministryoftesting.com/lessons/three-digestible-diagrams-to-describe-exploratory-testing
  • https://www.utest.com/articles/importance-of-exploratory-testing-in-agile-methodology
  • http://www.testingexcellence.com/testers-quick-guide-exploratory-testing/
  • http://visible-quality.blogspot.pt/2016/07/exploratory-testing-its-not-what-you.html
  • http://dancedwiththesoftware.blogspot.co.uk/2011/04/10-ways-to-do-exploratory-testing-badly.html
  • https://www.ministryoftesting.com/dojo/lessons/common-misconceptions-about-exploratory-testing
  • http://www.satisfice.com/blog/archives/1509
  • https://gorillalogic.com/blog/agile-testing-striking-a-balance-between-manual-and-automated-testing-in-an-agile-environment/
  • https://callumakehurstryansblog.wordpress.com/2021/11/02/why-adhoc-testing-is-not-exploratory-testing/
  • https://club.ministryoftesting.com/t/what-does-the-future-hold-for-a-manual-tester-moving-towards-automation/10467
  • https://www.kenst.com/exploratory-testing-faqs/
  • Bug Bash, exploratory sessions https://devblog.songkick.com/songkick-bug-bash-2018-30846ecf9c33 (PRACTICE EXAMPLE)
  • example https://www.youtube.com/watch?v=VSk7bLqwLDg
  • example https://www.kenst.com/2018/03/a-typical-day-of-testing-2018/
  • https://www.ministryoftesting.com/dojo/lessons/a-really-useful-list-for-exploratory-testers
  • https://www.mariedrake.com/post/a-brief-introduction-to-exploratory-testing

Personalities

dialex commented on Jan 11 '18 15:01


dialex commented on Jan 11 '18 15:01

A list of requirements is never really complete; there will always be requirements that are not stated, that are assumed, or that have been omitted. However comprehensive your requirements are, they will never be an exhaustive list, and you won’t know everything the software will do up front. That’s where exploratory testing comes in.

Exploratory testing is defined as simultaneous learning, test design, and test execution [2]. The tester explores the application, discovering new information, learning, and finding new things to test as they go. They could do this alone, pair with another tester, or perhaps pair with a developer.

Software testing shouldn’t be perceived only as a task where the tester works through a list of pre-prepared tests or test cases, giving a firm pass or fail result. If you have a user story or a set of requirements, it is of course important to make sure what you are testing adheres to them; however, it can be helpful to reframe acceptance criteria as ‘rejection criteria’: when the acceptance criteria are not met, the product is not acceptable, but when they are met, that does not mean the product has no issues.

Checking and verifying should be combined with exploration and investigation: asking questions of the product like ‘What happens if…?’ that you may not know the answers to before you start, and that test cases written in advance may not cover.

https://dojo.ministryoftesting.com/dojo/lessons/so-what-is-software-testing

A test script checks whether what was expected and known to be true still is.
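To make that distinction concrete, here is a minimal sketch in Python (pytest style, with a made-up `sign_up` function standing in for the system under test): the scripted check only re-confirms what is already expected, while the exploratory questions underneath are the kind that emerge during a session and are rarely written down in advance.

```python
import re
from dataclasses import dataclass


# Hypothetical, in-file stand-in for the system under test.
@dataclass
class User:
    email: str
    is_active: bool = True


def sign_up(email: str, password: str) -> User:
    if not re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+$", email):
        raise ValueError("invalid email")
    return User(email=email)


def test_sign_up_with_valid_email():
    # Scripted check: confirms that what was expected and known to be true still is.
    user = sign_up("alice@example.com", "correct horse battery staple")
    assert user.is_active


# Exploratory questions are not encoded up front; they surface while exploring:
#   What happens if the email has a "+" tag or trailing whitespace?
#   What happens if the same email signs up twice, from two browser tabs?
#   What happens if the password is 10,000 characters long?
```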

dialex commented on Mar 11 '18 16:03

One of the things I’ve noticed over the years is that anyone is capable of doing Exploratory Testing. Anyone at all. It just happens some do it better than others. Some want to do it. Some don’t want to do it. Some don’t realise they are doing it. Some don’t know what to label it.

Have you ever opened a new electronic device and explored around what it can do? Or tried to get something working without reading the instructions?

In our testing world, though, I’ve observed a great many people attaching a stigma to Exploratory Testing; it’s often deemed inferior, something to do rarely, or less important than specification-based scripted testing. I think much of this stigma or resistance to exploration comes from many testers feeling (or believing) that Exploratory Testing (ET) is unstructured or random.

The whole section on "Experienced Exploratory Testers" is worth reading:

I believe that the more advanced a practitioner becomes in Exploratory Testing the more they are able to structure that exploration, but more importantly to me, the more they are able to explain to themselves and others what they plan to do, are doing and have done. (...) It’s this notetaking (or other capture mechanism) that not only allows them to do good exploratory testing but also to do good explanations of that testing to others.

Good exploratory testing is searchable, auditable, insightful and can adhere to many compliance regulations. Good exploratory testing should be trusted.

Being able to do good exploratory testing is one thing; being able to explain this testing (and the insights it brings) to the people that matter is a different set of skills. I believe many testers are ignoring and neglecting the communication side of their skills, and it’s a shame because it may be directly affecting the opportunities they have to do exploratory testing in the first place.

http://thesocialtester.co.uk/explaining-exploratory-testing-relies-on-good-notes/
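As an illustration of that "capture mechanism", here is a minimal sketch (my own assumption, not taken from the article above) of session notes kept as structured data, so a session's charter, observations, and questions stay searchable and auditable afterwards:

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class SessionNote:
    timestamp: datetime
    kind: str   # e.g. "observation", "question", "bug", "idea"
    text: str


@dataclass
class ExploratorySession:
    charter: str    # what this session sets out to explore
    tester: str
    notes: list[SessionNote] = field(default_factory=list)

    def record(self, kind: str, text: str) -> None:
        self.notes.append(SessionNote(datetime.now(), kind, text))


# Usage: the plan, the doing, and the material for the debrief live in one record.
session = ExploratorySession(
    charter="Explore checkout with expired discount codes, looking for pricing errors",
    tester="dialex",
)
session.record("observation", "Expired code is rejected, but the error names the wrong code")
session.record("question", "What if the code expires between adding it and paying?")
session.record("bug", "Total is not recalculated after the expired code is removed")
```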

dialex commented on Aug 17 '19 17:08

Heuristics SFDIPOT (image): https://mavericktester.com/2019/12/31/heuristics-sfdipot/

dialex commented on Jan 27 '20 16:01

At the end, all the acceptance tests (and unit tests) are passing. There is no hand-off to Testers to make sure the system does what it is supposed to. The acceptance tests already prove that the system is working (according to spec).

This does not mean that Testers do not put their hands on the keyboard and their eyes on the screen. They do! (...) They perform exploratory testing. They get creative. They do what Testers are really good at—they find new and interesting ways to break the system. They uncover under-specified areas of the system.

So, in short, the business specifies the system with automated acceptance tests. Programmers run those tests to see what unit tests need to be written. The unit tests force them to write production code that passes both sets of tests. In the end, all the tests pass. In the middle of the iteration, QA changes from writing automated tests to exploratory testing.

-- https://sites.google.com/site/unclebobconsultingllc/tdd-with-acceptance-tests-and-unit-tests
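Here is a minimal sketch (illustrative only, not from the linked post) of the flow described above: an acceptance test states what the business asked for, a unit test drives a detail of the implementation, and exploratory testing then probes what neither one specifies.

```python
def apply_discount(total_cents: int, percent: int) -> int:
    """Production code written to make the tests below pass."""
    return total_cents - (total_cents * percent) // 100


def test_acceptance_ten_percent_off_order():
    # Acceptance test: the business-facing specification (10% off a 20.00 order).
    assert apply_discount(2000, 10) == 1800


def test_unit_fractional_cents_are_truncated():
    # Unit test: a programmer-facing detail the acceptance test does not pin down.
    assert apply_discount(999, 10) == 900


# Exploratory testing starts where the specification stops:
#   What happens with a negative percent, or a percent above 100?
#   What happens when two discounts are stacked on one order?
```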

dialex commented on Apr 24 '20 15:04