
Fatal exception – or how to get rid of errors

Dave, a good friend, sent me a WhatsApp message making fun of a software product that, when put under load, did not properly log invalid login attempts.

All he could see was “error:0” or “error:null” in the log file.

This reminded me of a similar situation where we were hunting down the root cause of unhandled exceptions that appeared several times per day at customer sites, out of the blue.

The log files didn’t really help. We suspected a third-party software component, and of course the vendor answered that the problem could not come from their side.

We collected more evidence, and after several days of investigation we could prove with hard facts that the cause was indeed the third-party component. We turned off the suspect configuration, and everything worked smoothly after that.

Before we identified the root cause, we were overwhelmed by the many exceptions found in the log files. It was a real challenge to separate errors that made it to a user’s UI from those that remained silent in the background, although something had obviously gone wrong there, too. The architects decided the logs had to be free of noise: exceptions were to be caught and dealt with properly. Too many errors can hamper the analysis of what is really going wrong.

Two weeks later, the log files were free of errors. Soon after, one of the developers came to me and complained about a discovery he had just made: someone had simply assigned higher log levels to most of the detected errors so they could no longer make it into the log files. In his words, the errors had just been swept under the carpet.

I cannot judge whether that was really true, or whether it was done only for a few errors where it made sense, but it was funny enough to put a new sketch on paper.

(Source: Simply the Test)

Leaving Springfield

 

(Source: Simply the Test)

The Birthday Paradox – and how AI helped resolve a bug


While working on and testing a new web-based solution to replace the current fat client, I was assigned an interesting customer problem report.

The customer claimed that since the deployment of the new application, they had noticed at least two incidents per day in which the sending of electronic documents contained a corrupt set of attachments. For example, instead of 5 medical reports, only 4 were included in the generated mailing. The local IT administrator observed that the incidents were caused by duplicate keys being assigned to the attachments from time to time.

However, they could not tell whether the duplicates were created by the old fat client or the new web-based application. Both products were used in parallel: some departments used the new web-based product, while others still stuck to the old one. For compatibility reasons, the new application also inherited antiquated algorithms, so as not to mess up the database and to guarantee parallel operation of both products.

One of these relics was a four-character alphanumeric code for each document created in the database. The code had to be unique only within one person’s file of documents. If another person had a document using the same code, that was still OK.

At first, it seemed very unlikely that a person’s document could be assigned a duplicate code, and there was a debate between the customer and different stakeholders on our side.
The new web application was claimed to be free of creating duplicates, but I was not so sure about that. The customer ticket was left untouched and the customer lost out, until we found a moment, took the initiative, and developed a script to observe all new documents created during our automated tests and during manual regression testing of other features.

The script was executed once an hour. We never saw any duplicates until, after a week, the Jenkins job suddenly raised an alarm claiming the detection of a duplicate. That moment was like Christmas, and we were excited to analyze the two documents.
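The core of such a duplicate check is small. Here is a minimal sketch in Python; the function name and the in-memory representation are mine, the real script ran on Jenkins against the test database:

```python
from collections import defaultdict

# Given (person_id, code) pairs collected from newly created documents,
# report any code that occurs more than once within the same person's file.
# A code shared across different persons is fine, per the rules above.
def find_duplicate_codes(documents):
    seen = defaultdict(set)
    duplicates = []
    for person_id, code in documents:
        if code in seen[person_id]:
            duplicates.append((person_id, code))
        seen[person_id].add(code)
    return duplicates

docs = [("p1", "AB3Q"), ("p1", "XY7Q"), ("p1", "AB3Q"), ("p2", "AB3Q")]
print(find_duplicate_codes(docs))  # [('p1', 'AB3Q')]
```

Note that “p2” reusing “AB3Q” is not flagged, since uniqueness is only required per person.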

In fact, both documents belonged to the same person. Now we wanted to know who had created them and what scenario had been applied in this case.

Unfortunately, it was impossible to determine the author of the documents. My test team claimed not to have done anything with the target person. The name of the person for whom the duplicates were created occurred only once in our test case management tool, but not in a scenario that could have explained the phenomenon. The user ID (the author of the documents) belonged to the product owner. He assured us he had not done anything with that person that day and explained that many other stakeholders could have used the same user ID within the test environment where the anomaly was detected.

An appeal in the developer group chat did not help resolve the mystery either. The only theory in place was “it must have happened during creation or copying of a document”. The easiest explanation would have been the latter: the copy procedure.

Our theory was that copying a document could result in the same code being assigned to the new instance. But we tested that; copying documents worked as expected. The copied instance received a new unique code that was different from its origin. Too easy anyway.

Encouraged to resolve the mystery, we asked ChatGPT about the likelihood of duplicates occurring in our scenario. The response was juicy.

It claimed an almost 2% chance of duplicates if the person already had 200 assigned codes (within his or her documents). That was really surprising, and when we asked ChatGPT further, it turned out the probability climbed to 25% if the person had 2000 distinct codes assigned in her documents.

This result is based on the so-called Birthday Paradox, which states that it takes only 23 random individuals to get a 50% chance of a shared birthday. Wow!
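For the skeptical, the classic figure is easy to verify with the exact product formula (a short Python check; the function name is mine, and the model assumes 365 equally likely birthdays):

```python
# Exact probability that at least two of n people share a birthday.
# The probability that all n birthdays are distinct is the product
# (365/365) * (364/365) * ... * ((365-n+1)/365); a shared birthday
# is the complement of that.
def shared_birthday_probability(n: int, days: int = 365) -> float:
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (days - i) / days
    return 1.0 - p_all_distinct

print(round(shared_birthday_probability(23), 4))  # 0.5073
```

With 23 people the chance is already just over 50%, exactly as the paradox promises.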

Since I am not a mathematician, I wanted to test the theory with my own experiment. I started to write down the birthdays of 23 random people within my circle of acquaintances. Actually, I could already stop at 18: within this small set I had identified 4 people who shared a birthday. Fascinating!

That egged us on to develop yet another script and flood one fresh exemplary person record with hundreds of automatically created documents.

The result was revealing:


Number of assigned codes for 1 person:     500   1000   1500   2000   2500
Number of identified duplicates (Run 1):     0      0      5      8     11
Number of identified duplicates (Run 2):     0      0      3      4      6

With these two test runs, we could prove that the new application produced duplicates, provided enough unique documents had been assigned to the person upfront.

What followed was a nicely documented internal ticket with all our analysis work. The fix was given the highest priority and made it into the next hotfix.

The resolution could be as simple as this:

  • When assigning the code, check for existing codes and re-generate if needed (this could be time-consuming, depending on the number of existing documents).
  • When generating the mailing, the system could check all selected attachments and automatically correct the duplicates by re-generating them, or warn the user about the duplicates so they can be corrected manually.
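The first option can be sketched in a few lines. This is only an illustration under the assumption of uniformly random codes; the names and the retry limit are mine, not taken from the real product:

```python
import random
import string

random.seed(1234)  # fixed seed so the sketch is reproducible
ALPHABET = string.ascii_uppercase + string.digits  # 36 characters

# Re-generate the code until it does not collide with the codes
# already assigned within the same person's file of documents.
def assign_unique_code(existing_codes: set, length: int = 4,
                       max_tries: int = 1000) -> str:
    for _ in range(max_tries):
        code = "".join(random.choice(ALPHABET) for _ in range(length))
        if code not in existing_codes:
            existing_codes.add(code)
            return code
    raise RuntimeError("code space exhausted for this person")

codes = set()
for _ in range(2000):
    assign_unique_code(codes)
print(len(codes))  # 2000 distinct codes, no duplicates
```

With 36⁴ possible codes and a few thousand documents per person, the expected number of retries per assignment stays tiny, so the performance concern mentioned above mostly applies to the lookup against the existing documents, not the re-generation itself.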

When people ask me what I find so fascinating about software testing, this story is a perfect example. Yes, sure, we often have to deal with boring regression testing or with repeatedly throwing pieces of code back to the developer because something was obviously wrong, but the really exciting moments for me are puzzles like this one: fuzzy customer ticket descriptions, obscure statements, contradictory or incomplete information, accusations, while nobody really has the time to dig deeper into it.

That is the moment when I love to jump in.

But the most interesting finding in this story has not been revealed yet. While looking at the duplicates, we noticed that all of them ended with the character Q.


And when looking more closely at the other, non-duplicated codes, we noticed that actually ALL codes ended with the character Q. This was even more interesting, as this fact reduced the number of possibilities from about 1.6 million variants down to only 46,656 and, with it, raised the probability of duplicates to more than 30%.
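Plugging the numbers into the exact birthday computation supports that figure. A quick Python check (the uniform randomness of the code generator is an assumption here):

```python
# Exact birthday-style probability of at least one duplicate when
# n codes are drawn uniformly at random from `space` possibilities.
def collision_probability(n: int, space: int) -> float:
    p_distinct = 1.0
    for i in range(n):
        p_distinct *= (space - i) / space
    return 1.0 - p_distinct

full_space = 36 ** 4     # 1,679,616 four-character alphanumeric codes
reduced_space = 36 ** 3  # 46,656 codes once the last character is fixed to Q

print(full_space, reduced_space)
# With the last character pinned to Q, even 200 codes in one person's
# file already push the duplicate probability well above 30%.
print(round(collision_probability(200, reduced_space), 3))
```

In other words, the bug that pinned the last character to Q shrank the code space by a factor of 36, which is what turned a theoretical curiosity into two customer incidents per day.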

See below the response from ChatGPT supporting the analysis.

(Source: Simply the Test)

Bug’s survival strategy

 



(Source: Simply the Test)

Astronomy for developers


 

(Source: Simply the Test)

AI and a confused elevator

A colleague recently received a letter from the estate agent, stating that several people had reported a malfunction of the new elevator. The reason, as it turned out after an in-depth analysis: the doors had been blocked by people moving into the building while hauling furniture. This special malfunction detection was claimed to be part of the new elevator system, which is based on artificial intelligence.
The agent kindly asked the residents NOT to block the doors anymore, as it confuses the elevator and makes it likely to stop working again.

I was thinking... “really?” I mean, if AI is put into elevator software, then I would first expect the elevator to learn at what times it is called on which floors most often, and then automatically move to that position when it makes most sense, based on what it has learned over time. For example, in our office at around noon, most of the employees go for lunch. An opportunity to do them a great favor would be to move back to the floor where people press the button, right after having delivered the previous passengers. But when an elevator gets irritated by blocked doors and cannot be settled within several days... then what benefit does such software provide to the people?

 

(Source: Simply the Test)

Software made on Earth


This is a remake of my original cartoon, which was published in the SDTimes (N.Y.) newsletter on April 1, 2008.

(Source: Simply the Test)

Testing under the Hood

Even when a particular test passes at first glance, things might still be going wrong. You may just not have noticed, because the user interface stays quiet, at least for the moment. Things can go wrong in a black box long after you executed the test: hours, days or even weeks later. The longer such problems remain undetected, the more effort it takes to fix them and repair the damage they caused, especially if the system is already LIVE in production. See also Cheerful Debugging Messages and its Consequences in this blog.

It is not enough to look only at the front end of an application. You should also watch carefully what’s going on behind the curtains. Give all testers a facility to query the underlying database. A lot of things can go wrong there and remain undetected for too long. It starts hurting only when such data is shared with, or passed to, other programs using a corresponding interface to read or exchange data. I have seen a lot of data stored inappropriately, only to hurt when it was later used by another program.

I developed an SQL query tool with some extra facilities, such as an analyzer to compare all tables before and after a triggered action.
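The core idea of such a before/after analyzer can be sketched with SQLite standing in for the real database (the table and column names below are illustrative, and the real tool is not shown here):

```python
import sqlite3

# Snapshot every table's rows; diffing two snapshots reveals what a
# triggered action actually changed in the database. Table discovery
# uses SQLite's sqlite_master catalog. Rows are stored in sets, so this
# sketch ignores exact-duplicate rows, which is fine for a quick diff.
def snapshot(conn):
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    return {t: set(map(tuple, conn.execute(f"SELECT * FROM {t}")))
            for t in tables}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (id INTEGER, code TEXT)")

before = snapshot(conn)
conn.execute("INSERT INTO documents VALUES (1, 'AB3Q')")  # the action under test
after = snapshot(conn)

for table in after:
    added = after[table] - before[table]
    removed = before[table] - after[table]
    if added or removed:
        print(table, "added:", added, "removed:", removed)
```

Running this prints the rows the action inserted into `documents`; in a real test, an unexpected entry in the diff is exactly the kind of silent background change worth reporting.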

How can testers live without such tools? They open up a whole new universe of potential problems just waiting to be reported.

(Source: Simply the Test)

Revise the Test Report

I thought I'd already published this cartoon in 2020, but couldn't find it, so I am doing it now, with a three-year delay... =;O)
 


(Source: Simply the Test)

Mutation Testing and why we don’t need it

 

(Source: Simply the Test)