
Digging in the mud

As a software tester, I often feel like an archaeologist. I analyze artefacts found in requirements, use cases, user stories, emails, phone calls, balcony talks, meetings, defects, etc. I collect all these widely scattered pieces of information and try to put them back together as accurately as possible. Unlike the two men in the excavation, I like this kind of job. When the connected pieces turn into a clear picture in my head, I glue them together in the form of well-documented test cases or - if it is a really large "dinosaur" - in a final report that contains all the information other stakeholders need, and that I need myself, so we don't have to start digging in the same place when someone asks a similar question later.


(Source: Simply the Test)

It was a bear, for sure (overvalued bugs)

(Source: Simply the Test)

The Little Tester #110

These are the made-up stories of a team working in an Agile environment. Their daily struggles and successes are presented in a comic/parody/satirical way. Click on the image to see it in full size.

The team members are:

  • Little, the main character. The team’s tester.
  • Coffee, the team’s Java developer.
  • Mr. Fancy, the team’s UI developer.
  • Senor, the Senior Developer of the team.
  • Kitty, the Scrum Master.
  • Glasses, the Business Analyst.
  • And the manager.

Disclaimer

  • This is a work of fiction. Names, characters, businesses, places, events, situations presented are either the products of the author’s imagination or used in a fictitious manner. Any resemblance to actual persons or events is purely coincidental.
  • The sole purpose of this comic strip is to be humorous.
  • The drawings are made by hand on paper, by means of pencils and fine liners, except for the outline, by the author. Hence their imperfection.
(Source: imalittletester)

Cyndi wants you to know: "The formatting of this article went shite due to needless use of HTML <blockquote> tags in the original post." 🤷‍♀️

The Little Tester #109

(The series description, team roster, and disclaimer are the same as for #110 above.)

(Source: imalittletester)

333 - How much time do you need for testing?

About one third of the total test effort should be spent on determining how to test [1]. I have usually counted another third for preparing reusable test data for regression tests, and yet another third for finally executing the tests. That is what the 333 stands for, in my view. This applies to the current sprint, where new features are being developed.
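
As a minimal illustrative sketch (not from the original article), the split can be written out like this; the 12-person-day total is an invented example figure:

```python
# Sketch of the "333" rule: one third test design, one third
# reusable test data preparation, one third test execution.
# The 12-person-day total below is a made-up example figure.

def split_333(total_effort_days: float) -> dict[str, float]:
    """Divide the total test effort into the three equal parts."""
    third = total_effort_days / 3
    return {
        "determine how to test": third,
        "prepare reusable test data": third,
        "execute the tests": third,
    }

if __name__ == "__main__":
    for activity, days in split_333(12).items():
        print(f"{activity}: {days:.1f} person-days")
```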

But I was often asked how much time my team needs to perform a full manual regression test. While I could - of course - give a rough estimate, I liked to challenge the interviewer in return: can she tell me the number of bugs I am going to find with our tests? I asked that because I often experienced that tests originally scheduled for 30-45 minutes could easily take up to 2 hours or more, depending on the number of defects we found during the tests.

How can this happen?

  • You observe an anomaly and need to analyse it. Is it really a bug, or a feature that you simply don't remember working exactly this way?

  • You need to find a reliable scenario to reproduce the anomaly on the target test environment.

  • Then you need to know how long this anomaly has been in the system, because that may impact the priority of the issue. Has it been there for months without anyone noticing? Or is it a recently broken feature that worked fine in the previous version? To answer these questions you need to test the same scenario on a different test environment that runs an older version. Maybe you even have to test it on yet a third environment to get more accurate information about the history of the anomaly.

  • Then follows the documentation for developers. You need to create screenshots that highlight the situation before and after, and add labels to the pictures so the bug becomes clear to everyone without losing time reading a lot of text.

  • Sometimes an issue cannot easily be reproduced and you may need to retrace the exact steps you took beforehand. That is easy if you followed a script, but more challenging if you took an exploratory approach while testing. The issue may occur in one particular variant of the workflow while everything still works fine if you execute the workflow from a different starting point.

All of these tasks affect how far the estimate deviates from the actual time needed to perform a regression test.
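
As a purely illustrative sketch (not from the article), the effect can be modelled as a scheduled run time plus a per-defect overhead covering analysis, reproduction, re-testing on older environments and documentation. All figures below are invented example values, chosen only to show how a 45-minute run can stretch past two hours once a couple of defects turn up.

```python
# Hypothetical model: actual regression time = scheduled time
# plus a per-defect overhead for analysis, reproduction,
# checking older environment versions and writing the bug report.
# All figures are illustrative assumptions, not measurements.

SCHEDULED_MINUTES = 45          # what the test plan says
OVERHEAD_PER_DEFECT_MINUTES = {
    "analyse anomaly (bug or forgotten feature?)": 10,
    "find a reliable reproduction scenario": 10,
    "re-test on older environment versions": 15,
    "document with annotated screenshots": 10,
}

def estimated_duration(defects_found: int) -> int:
    """Return the estimated total minutes for one regression run."""
    per_defect = sum(OVERHEAD_PER_DEFECT_MINUTES.values())
    return SCHEDULED_MINUTES + defects_found * per_defect

if __name__ == "__main__":
    for defects in (0, 1, 2, 3):
        print(f"{defects} defect(s) found -> ~{estimated_duration(defects)} min")
```

With these example numbers, finding just two defects already pushes a 45-minute run to roughly 135 minutes, which is why an estimate that ignores the defect count can be off by a factor of two or three.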

T. Zelger, August 2020