On the 25th of January I went to meet some great folks from the testing industry at a testathon (a hackathon for testers).

Our purpose was to team up and find some bugs. We had a team within five minutes of meeting up, so we became Team 1.

We did plenty of bug finding, networking and overall good fun.

We were logging bugs as quickly as possible, and the rules were that if somebody else raised the same bug with an earlier timestamp, they had full rights to it.

Overall, the number of bugs per app went down as we went through the day. Not sure if it was because the apps were getting more mature or because our blood alcohol level was increasing.

There were lots of prize categories, and I managed to snag the one for "The Most Hacker Way of Finding a Bug", which came with a ticket to Test Bash from The Ministry of Testing and 50 pounds to spend on a luxury cab ride with Uber. Also, Team 1 got some King marketing prizes for being the most chatty (ahem... collaborative) team.

My strategy was to focus on a few heuristics and apply them to obscure areas of the mobile applications we were testing. All three apps had account management features that could be exercised from both the mobile app and the web app, so I started verifying how synchronising between the two worked, and I came across some interesting findings. Mostly it was around modifying my account details in one of the apps and checking for that update in the other, or removing something from my account in the web app and checking the side effects in the mobile app. The app provided by King had a limitation here: it uses Facebook for sharing game state (scores, lives, etc.) between devices, and Facebook doesn't allow third parties to make too many requests within a five-minute window, so getting different devices into different states was quite easy. The limitations of any third-party libraries employed by an app are always a good thing to know about.

Some impressions from the providers of the mobile apps: Anthony Rose from Zeebox admitted he was surprised at the many different ways people approached testing, while Dominic Assirati from King was impressed by the testers' attention to detail.

Pros:

- held on a Saturday, for a full day, in an excellent location
- abundant food
- prizes
- excellent organisation and presentation
- lots of interesting people from all over the globe

Cons:

- the apps were already known
- not a lot of interaction within the team in terms of strategising and learning new testing techniques/tools


I would definitely attend the next one. The organisers also promised a document summing up the participants' experiences; when that surfaces I'll add it here.

LMAX Exchange is looking for a new senior tester who enjoys technical challenges. If you're interested here are some of the things we're looking for:

  • you are doing exploratory testing with all sorts of tools,
  • you have been using Webdriver for quite some time now but you don't shy away from an API level test,
  • you understand web technologies and you know why Chrome/Firefox are better browsers than IE at the moment,
  • you understand how Javascript and HTML tie together,
  • you have dabbled in Linux and liked it for your back-end testing needs,
  • you like coding your own tools to uncover different emergent behaviours in the system.

If you join LMAX Exchange, you'll get to play with a very mature custom testing framework (think API clients and Webdriver) and interact with some of the best Java developers out there. Also you'll be part of a proper agile team where the business actually gets it.

Let me know if you're interested or check out this URL - http://www.lmax.com/careers#senior-test-analyst

It is not a job requirement, but when you are getting hired as a software tester your creativity will be assessed, either through some interview questions or in a practical task. The funny thing is that employers should be careful what they wish for, as an overly creative tester can sometimes be less productive at finding those tricky bugs.
This is because, as a creative tester, you need to understand that the mind plays tricks on you. It needs to do so to keep you sane. The brain is hardwired to draw conclusions from your experiences and store them for easy access, instead of recording every bit of information that your senses pick up. So you won't remember the layout of every page on a website, or what it said on page 4 of a 15-page wizard. You are a tester: you'll build trust in the system while exploring. For the benefit of your future self and the team, write down the details of your experience. Don't let your mind draw the conclusions on its own, as the only thing you will keep is how worrying it felt when you got a 404 but couldn't remember how you got there in the first place. Frustration ensues, another nice sentiment that your brain will store in association with the application.
Don’t over-engineer. Enthusiastic, creative testers will often attack problems head on. This is usually how an exploratory testing session works: your brain will brainstorm some more or less plausible thoughts, and you will start choosing one and then another, linking them together in what you hope is a good lead to a buggy area of the system. If you’ve identified a task that needs a clever solution to get it done, remember to take a step back and assess all of your options. It might not hit you straight away, but if your solution takes more than an hour to carry out then you might have to think of another one. If you still can’t find that elegant, clever, simple or just plain beautiful solution, then look around for answers. You will soon find that taking your time and discussing different options with your fellow testers might save you from spending almost a day over-engineering a solution.
I can’t get enough of quoting this little phrase, which has such a huge impact on understanding the world we live in:
“The brain and the eye may have a contractual relationship in which the brain has agreed to believe what the eye sees, but in return the eye has agreed to look for what the brain wants.”
[Stumbling on Happiness – Daniel Gilbert]
Now, in order to avoid ending up with mixed feelings about the application instead of what should have been a clear and concise mental model of the system, here is a recommendation:
Start out with a mind map and jot down all your creative thoughts. After finishing, take a step back and analyse which nodes of your mind map are too outrageous to pursue; it’s not that they will never happen, but let’s just say that it’s not in the user’s best interest to unplug the computer. Your rule must be: if it makes sense for the business, it stays in the mind map. Get a business analyst to help you focus and remove extraneous ideas from your mind map. Use a tool to make notes on what you observe while testing. Your mental model of the system will need a refresh from time to time; this usually happens when you think that there are no bugs left in the system. One of the best ways to refresh your ideas is to get rid of what you’ve already covered. With a repository you will more easily remember everything you did and all the conclusions you’ve drawn along the way.

I'll be joining Tony Bruce to present at the Agile Testing & BDD Exchange at Skills Matter in London, on the 22nd of November 2013. We'll be covering the subject "What do testers do?". Topics will include: what makes a tester tick, what sort of skills are useful to a tester and, more than anything else, how a tester brings value to a project, to the organisation and to the client.
For more details check out the official page of the talk
http://skillsmatter.com/podcast/agile-testing/what-do-testers-do

To find out more about the eXchange you can visit the official Skills Matter event page
http://skillsmatter.com/event/agile-testing/agile-testing-bdd-exchange-2013

Hope to see you there.

Conference finished, presentation done, time to reflect.

Monday's keynote - "Skeptical self-defense for the serious tester or, how to call a $37 billion bluff" by Laurent Bossavit - was about being bullied with false claims and metrics. Don't rely on hearsay. Apply science to your ways: measure, and back your claims with relevant data. At one point the speaker claimed that he was not speaking to an audience of Agile testers. Now how did he know that? Did he apply any scientific method to substantiate his claim?

Track session - "One more question..." by Tony Bruce - Tony's talk fell into the informative category. He talked about different types of questions that can be used to explore. Probably the one I heard mentioned most after his talk was the one around "quiet" or "intentional dead air", used mainly to make the other party uncomfortable enough to break the silence. He really tried not to make the presentation software-testing specific and instead turned it into a life lesson that can be applied to any context. I really enjoyed his personal touch, with examples from his own life. Of the talks I attended, this seemed like the only one where the audience was very engaged and having a laugh from time to time.

Then came my session, "Questioning Acceptance Tests", which you can have a look at here in case Prezi is down. Probably the main conclusion of my talk was that with property-based testing, even before considering what tools to use, you first have to ask yourself whether this is the right strategy for you. It can get pretty complex and time-consuming to try to model a big system when the foundation hasn't been laid through simpler models from when the SUT started emerging. One nice finding was that Kristian Karl from Spotify, in his talk on "Experiences of test automation at Spotify" (which unfortunately I missed), had used some of the tools I've used to extend my research into property-based testing: mainly the model-based testing approach of creating chains of states and transitions with Graphwalker and yEd. But that's a post for another day.
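Graphwalker models are normally drawn as graphs in yEd, but as a rough illustration of what a chain of states and transitions looks like, here is a toy sketch in Python. It is my own example of the general model-based testing idea, not Graphwalker's model format or API, and the states and actions are hypothetical:

```python
# Toy model-based testing sketch: a state machine and a random walk over it.
# States, actions and the model itself are hypothetical examples.
import random

# Each state maps to the transitions (edges) that may be taken from it.
MODEL = {
    "LoggedOut": {"log_in": "LoggedIn"},
    "LoggedIn": {"open_settings": "Settings", "log_out": "LoggedOut"},
    "Settings": {"save": "LoggedIn", "cancel": "LoggedIn"},
}


def random_walk(start: str = "LoggedOut", steps: int = 10) -> list[str]:
    """Walk the model, returning the chain of transitions a test would drive."""
    state, path = start, []
    for _ in range(steps):
        action, next_state = random.choice(list(MODEL[state].items()))
        path.append(f"{state} --{action}--> {next_state}")
        state = next_state
    return path


if __name__ == "__main__":
    for step in random_walk():
        print(step)
```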

After talking it over with James Lyndsay, I got the impression that some of the attendees felt I was trying to present the new and only way of testing. This couldn't have been further from the truth. Yes, in my simple case, plain old boundary analysis and equivalence partitioning would've covered the testing of the story, but what about the next story that comes down the line and wants to use the same type of objects as input? With QuickCheck you get reusability and increased coverage, because the generated input objects cover the combinations that can be found in production. Spock also made for a handy way of abstracting away almost half of the initial 50 integration tests I was trying to replace.
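To make the reusability point concrete, here is a minimal sketch of a property-based test in the same spirit, written with Python's Hypothesis library as a stand-in for the QuickCheck/Spock setup from the talk. The Order object, its fields and the invariant are hypothetical examples, not the actual system under test:

```python
# Minimal property-based testing sketch with Hypothesis (a QuickCheck-style
# library for Python). The Order domain object is a hypothetical example.
from dataclasses import dataclass

from hypothesis import given, strategies as st


@dataclass
class Order:
    quantity: int
    price: float


# A reusable generator: any future story that takes an Order as input can
# reuse this strategy instead of hand-picking boundary values again.
orders = st.builds(
    Order,
    quantity=st.integers(min_value=1, max_value=1_000_000),
    price=st.floats(min_value=0.01, max_value=1_000_000, allow_nan=False),
)


@given(orders)
def test_order_notional_is_never_negative(order):
    # The property under test: a hypothetical invariant that should hold
    # for every generated combination of quantity and price.
    assert order.quantity * order.price >= 0
```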

The second keynote of the day replaced the initially advertised one: "Using sociology to examine testing expertise" by Robert Evans. Robert is a doctor in social sciences at the Cardiff School of Social Sciences and talked about polimorphic and mimeomorphic actions: how humans distinguish themselves from machines through interactions in a social context, described by tacit and cultural knowledge. With the current level of artificial intelligence this is not attainable just yet, although he did say that if somebody had a positronic brain lying around he would like to hear about it. Some books he recommended: "The Shape of Actions: What Humans and Machines Can Do" by Harry Collins and Martin Kusch, and "Tacit and Explicit Knowledge" by Harry Collins. I'll be sure to check them out.

The second day's morning keynote was entitled "Creating dissonance: Overcoming organizational bias toward software testing" by Keith Klain. This one was a personal war story of how Keith fought his way to the top, becoming a manager of 800 testers at Barclays, and how he overcame biases against testing. His talk didn't resonate that much with my current context, so all I can wish is that the stories he presented become less and less prevalent.

I then went to attend "Specification by example with GUI tests - How could that work" by Emily Bache and Geoff Bache. They covered using TextTest to automate the testing of desktop applications. The tool allows you to define domain objects while automating, and thus avoid the pain of later refactorings. I really liked the way the tool allowed you to interact with the application under test, making a lot of the defining of domain objects quite easy. The ASCII art output was also amazing, and you can see there's been a lot of effort involved in creating the tool. Once a "screenshot" of the app was created as ASCII art, you could then diff that against a later version of the app. It had the option of defining filters on what to output, so you don't end up diffing everything under the sun; for instance, maybe you don't care that the font on a button label changed, so you could filter that out. My initial impression was "not another play/record tool", but that quickly got dispelled. Another thought I had was about cement: when they initially showed the ASCII output I was thinking that was way too much information being captured, which would act as cement against any future software changes. Imagine you had a lot of unit tests trying to test everything, you refactored a few classes, and suddenly 200 unit tests fail. But they introduced filters, which can be defined as regular expressions. Although filters are a good idea, I'm not so sure about regular expressions. So if you have a Java- or Python-based desktop GUI to test, you might want to give it a shot. Also, Emily's workshop on Thursday, "Readable, executable requirements: hands-on", on using Cucumber feature files, was a treat. We wrote Cucumber tests for the Gilded Rose using Emily's cyber-dojo, which she had set up here. I only wish more workshops were integrated within the conference itself rather than run as separately paid-for events.
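As a rough sketch of that filter-then-diff idea, here is my own illustration of the concept rather than TextTest's actual mechanics or API; the filter patterns are made up:

```python
# Sketch of diffing two text "screenshots" of a GUI after dropping lines that
# match filter regexes, so irrelevant detail can't cause spurious failures.
import difflib
import re

# Lines matching any of these (hypothetical) patterns are considered
# irrelevant detail, e.g. a font name on a button label or a timestamp.
FILTERS = [re.compile(r"font="), re.compile(r"timestamp: \d+")]


def filtered(capture: str) -> list[str]:
    """Drop lines that match a filter so they can't show up in the diff."""
    return [
        line
        for line in capture.splitlines()
        if not any(f.search(line) for f in FILTERS)
    ]


def diff_captures(baseline: str, current: str) -> str:
    """Diff two ASCII captures of the GUI, ignoring filtered lines."""
    return "\n".join(
        difflib.unified_diff(filtered(baseline), filtered(current), lineterm="")
    )
```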

Another good talk was from Jouri Dufour on "How about security testing". His talk was full of really useful tips on common-sense security testing. This is so vital to our trade as testers that you should just go now and have a look at the examples in his presentation. You'll probably come out more knowledgeable about security testing, and especially about how easy it is to think like a hacker.

I also went to Anna Baik's presentation on "The unboxed tester". I had a chance to chat with her about what drove her towards this subject, and it was quite a personal one: having returned to work after a long period of time, she was confronted with different mentalities and prejudices from her new peers.

After all this I went to check out the Test Lab, as last year's was the main attraction point for me. This time they had more stuff to do and try out. What I did notice is that when testers try to take on an app in the Test Lab, they'll eventually revert to doing security testing, some of them more prepared than others. While it's nice to see testers being concerned with security testing, I can't help but think about the other stuff they're missing out on: pairing with other testers when exploring the app, using different tools to record their findings, mind mapping, modelling their understanding of how the app should behave, brainstorming ideas. Some of the apps in the Test Lab didn't lend themselves to security testing - James's puzzles, for instance, one of which I managed to solve, earning the official Lab Rat badge and a kudos tweet from James. I only managed to solve it after modelling the states of the system on a piece of paper. I also had a chance to pair with Richard on one of the puzzles, but we didn't get too far with that: I joined midway through and, instead of taking notes on the behaviours of the app, I was interfering with Richard's train of thought, and it ended up being a "too many cooks..." kind of story. I later found out he solved the puzzle. Congrats!

The last keynote of the conference came from Martin Pol: "Questioning the evolution of testing: What's next". The presentation showcased the history of software testing all the way from the 70s, when anybody doing testing was a pioneer. But by working together in close collaboration and being flexible in meeting their goals, much like Agile, they managed to find the issues before reaching production. He associated this with a pioneering stage in the evolution of software testing. Later on, managers demanded more reproducibility of the testers' ways, and more process, so that new people didn't have to reinvent the wheel all the time; you might even compare this to a waterfall approach. According to Martin this was the maturation phase of software testing. Later came the optimising stage, through the Agile way of building software, when all the team members work in close collaboration to deliver software, building on the great techniques and tools developed over the decades.

The Android/iOS app for the conference was an absolute delight to use, although for some reason the sessions didn't have the room number on them. I don't think it was promoted to its full potential. It could've proved an excellent tool for people to interact with each other and get feedback on presentations, events, Test Lab sessions, you name it. Unfortunately not a lot of people used it for that purpose. One explanation might be the lack of anonymity when offering feedback or when engaging in discussions, so maybe next time the app could have an option for submitting feedback anonymously or through an avatar. Hopefully more and more conferences will start using such apps to engage with the audience. I could even see such an app being used for submitting questions from the audience after a session.

The Q&A sessions at the end of each talk seemed to help with organising the questions and thus felt a bit more efficient than at other conferences I've been to. Sadly, not all speakers were keen on keeping to the format and found it awkward to work with. If only they had relied more on the facilitators, who really did an excellent job of managing the stream of questions.

Overall I found some of the sessions useful, and when there wasn't something I wanted to attend, there were the discussions with fellow testers and, of course, the Test Lab.

And here's a tweet cloud of all esconfs tweets from Monday 4th of November to Sunday 17th of November.

(Tweet cloud created at TagCrowd.com.)