
Resilience Testing Workshop

At the end of May and during the first half of June, Mark Abrahams and I gave a workshop about Resilience Testing. This blog is for everyone who wants to continue experimenting after that workshop and/or wants to learn more about the subject.

Resilience testing: 

Why: to help you build more stable applications that perform well in real-life situations;
To avoid: service loss and data loss, and in the end, customer loss!

This workshop introduces the following: basic load testing & resilience testing via infra events. You will find all the materials on Google Drive.

Basic Load Testing

For load testing we used Gatling; to read more about Gatling, go to their website. I will cover the basics to get you started with it in our workshop.

In the presentation you will find the steps you have to take to set up the environment on slides 9 & 10. If you have questions, send me an e-mail. Once you have done that, we can start fixing up your load test.

For this you need your IP, and you need to open the load test called ‘load.scala’. This load script, written in Scala, needs the IP of the VM in order to start stressing it.

You need to put the IP of your VirtualBox VM in the load script in the place shown below.

val httpProtocol = http
  .baseUrl("http://{YOURIP}:8080")
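
For context, a minimal load.scala has roughly this shape (a sketch; the class name, request and pause are assumptions, the workshop script may differ):

import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class LoadSimulation extends Simulation {

  // Point the protocol at the VM under test
  val httpProtocol = http
    .baseUrl("http://{YOURIP}:8080")

  // One user journey: request the homepage, pause, done
  val scn = scenario("Basic load")
    .exec(http("home").get("/"))
    .pause(1.second)

  setUp(scn.inject(rampUsers(1400) during (5.minutes))).protocols(httpProtocol)
}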

The next step is to tune the load script to create enough load. We used the following setup:

setUp(scn.inject(rampUsers(1400) during (5 minutes))).protocols(httpProtocol)

This spawns 1400 users over a period of 5 minutes. With the standard VM settings, about 1500 is the limit before errors start occurring. You can also use other injection settings; read more about those on the Gatling wiki.
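For example, an open-model profile that controls the arrival rate instead of the total number of users looks like this (a sketch; all numbers are assumptions):

setUp(
  scn.inject(
    atOnceUsers(50),                              // initial burst
    constantUsersPerSec(5).during(2.minutes),     // steady arrival rate
    rampUsersPerSec(5).to(20).during(3.minutes)   // then grow the rate
  )
).protocols(httpProtocol)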

Stress Test

To find the boundary, there is a stress test included in the Gatling scenarios as well. If you run it, you will also get an idea of the resilience of our application, because the application doesn’t break and keeps responding to incoming requests. Even when it gets too much, you still get some responses.
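I will not spoil the exact profile of the included stress test, but a typical Gatling stress profile keeps raising the arrival rate in steps until the application starts to struggle, roughly like this (a sketch; step sizes and durations are assumptions):

setUp(
  scn.inject(
    incrementUsersPerSec(5)                // add 5 users/sec per step
      .times(10)                           // for 10 steps
      .eachLevelLasting(30.seconds)        // hold each level for 30 seconds
      .separatedByRampsLasting(10.seconds) // ramp between levels
      .startingFrom(10)                    // start at 10 users/sec
  )
).protocols(httpProtocol)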

Resilience Testing

The resilience tests we do focus on infrastructure failures. This is something you will encounter when you move to the cloud: infra-related events that cause short drops in CPU, memory or IO performance and/or network issues. By running a basic load test while introducing these events, you can start your resilience tests at a low level and build upon that.
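One way to make “low level” concrete (a sketch; the thresholds are assumptions, pick ones that fit your application) is to add Gatling assertions to the load test, so a run fails automatically when an infra event degrades the application too much:

setUp(scn.inject(rampUsers(1400) during (5.minutes)))
  .protocols(httpProtocol)
  .assertions(
    global.successfulRequests.percent.gt(95), // fail on more than 5% errors
    global.responseTime.percentile3.lt(2000)  // 3rd configured percentile (95th by default) under 2 s
  )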

The scenarios we provide are based on either Linux commands or a tool called stress, which you can install as part of your Linux distro.

The following command introduces a worker process for one CPU thread for 40 seconds:

stress -c 1 -t 40

Memory load for 30 seconds (5 workers):

stress -m 5 -t 30

IO load for 60 seconds:

stress -i 1 -t 60

To create a network timeout, we use the Linux command that reinitializes the network adapter. That creates a short network failure:

/etc/init.d/networking restart

Extra ways of stressing the CPU

There are of course other commands you can use as well. Because we noticed that stressing the CPU with stress did not work as effectively due to built-in OS/Docker components, you can also use the following command:

dd if=/dev/zero of=/dev/null

More about Chaos Monkey will follow in the next blog post.

Powerful Refinements at TestCon Moscow

This is the blog post for the workshop I did with Mehmet Sahingoz at TestCon Moscow 2019.  If you are looking for the slides, you can find them here.

In the morning we started out with an exercise about the three amigos, and saw how collaboration helped them make a better product.

After that we did an exercise with the first refinement technique: Specification by Example.

Step 1: We just start writing examples.


Step 2: We align on a certain model and create more examples.


Step 3: Go to key examples and reach final alignment on the model.

The second refinement technique we did was Example Mapping. These are good examples of the exercise.

There was also a less good solution, because there was so much documentation on the side. This again becomes complex and hard to understand. That is why you want all the information on the sticky notes and not in documentation on the side.

Feature Mapping is the third refinement technique we used.


After that we took the step of writing some Gherkin to create living documentation and your test automation (executable specs).
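As an impression of that step, this is roughly what the glue between Gherkin and test automation can look like with cucumber-scala (a sketch; the feature, step texts and shipping rule are invented for illustration):

// Hypothetical feature file (checkout.feature):
//   Scenario: Free shipping for large orders
//     Given a basket worth 120 euros
//     When the customer checks out
//     Then the shipping costs are 0 euros

import io.cucumber.scala.{EN, ScalaDsl}

class CheckoutSteps extends ScalaDsl with EN {
  var basketTotal: BigDecimal = 0
  var shippingCost: BigDecimal = -1

  Given("""a basket worth {int} euros""") { (amount: Int) =>
    basketTotal = BigDecimal(amount)
  }

  When("""the customer checks out""") { () =>
    // Toy rule standing in for the real pricing logic
    shippingCost = if (basketTotal >= 100) BigDecimal(0) else BigDecimal(5)
  }

  Then("""the shipping costs are {int} euros""") { (expected: Int) =>
    assert(shippingCost == BigDecimal(expected))
  }
}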

 

DevOps Pro Days – Vilnius

This is a short blog about my highlights from the DevOps Pro talks I visited. I picked out three talks to discuss: the two opening keynotes and a talk about business analysts. Besides the talks, I also highlight some items I found interesting during this conference.

A Practical Path towards Becoming a High Performance (IT) Organization

The Horizon 1 dilemma is like Kodak: they invented the first digital camera way before other companies thought of it, but kept investing in Horizon 1, which almost became their downfall. Great blog in Dutch: https://www.adformatie.nl/design/wie-wordt-de-kodak-jouw-branche

A great blog about the Three Horizons model: https://medium.com/frameplay/planning-for-future-growth-with-the-three-horizons-model-for-innovation-18ab29086ede

We need to experiment in Horizon 3 and not be afraid to kill our darlings, develop opportunities in Horizon 2, and optimise and lower running costs in Horizon 1.

Michel also referred to the State of DevOps report, which tells you about the current status of DevOps within companies.

Business Analyst activities in a CD Environment

A talk about the business analyst and what they can do in a DevOps team. According to the speaker, the BA has a support role in the team: helping with writing documentation, writing specs, working out requirements, writing Given-When-Then and getting customer feedback.

Question: Why not let the team self-organise to do these activities? What makes the BA special for this task?

Answer: Members of the team can focus on delivering value and the BA on things that are not as valuable.

My opinion: this is dangerous and an anti-pattern, because we have someone focused on jobs that do not produce value by themselves, and we might have someone looking for more work: adding work for the team, but not adding value.


Portable Pipelines

Takeaway 1: slow pipelines are killing us. Just imagine you have to deploy a new version to production because of a bug and you have to wait for an hour (someone in the audience even had a pipeline that took six hours to run).

Takeaway 2: a lot of companies are using pipelines with plugins, and with that they create stateful pipelines with a big vendor lock-in. If you don’t want this and want a vendor-portable pipeline, the best solution is moving to YAML & Bash, so you avoid vendor lock-in and can easily migrate to another CI/CD tool.

Extra: Google Go, an easy-to-use development language that works across OSes/platforms without any problem.

Other nice takeaways from the conference:

Security: “We cannot Prevent, We cannot Protect, We can only Detect.” Use honeypots to lure hackers to other environments instead of the production system your customers are on. More on honeypots: https://us.norton.com/internetsecurity-iot-what-is-a-honeypot.html

GitHub Actions: Bas Peters from GitHub gave a talk about using GitHub Actions to automate your workflow directly in GitHub. It was a really good demo of how Actions will help us automate right where our code lives. Read more about GitHub Actions: https://github.com/features/actions

DevOps Lets Change QA: my own talk was about how QA should change with the move to DevOps. You will find the slides of my talk here on SlideShare.

For me it was the first time at DevOps Pro Days. I was really surprised by the good mix between technical & non-technical subjects. All the technical subjects were easy to follow and I got a lot of new insights!

P.S. The videos of DevOps Pro Days will follow soon; I will add the links later.

Powerful Refinements with BDD @ DevOpsPro

On the 19th of March I gave the workshop Powerful Refinements together with Jos Punter at the DevOps Pro days. To give you an impression of the workshop, see the following pictures.

During the workshop people create their own documentation; below are the results of their exercises.

Slides: https://www.slideshare.net/GeoffreyvanderTas/powerful-refinements-with-bdd-19-03-2019/edit?src=slideview&type=privacy

Workshop Impressions

Any questions after reading this blog, or interested? Feel free to contact me.

Lessons from our Unicorn Detectives

At the Agile Testing Days 2018 we did the workshop ‘Be a Detective, use Forensic Sciences, Improve your skills’. This workshop was about using your agile & testing skills to help solve the mysterious disappearance of the Unicorn (the Agile Testing Days symbol).

This blog is dedicated to the top 5 lessons people can take from this workshop: things that helped during the workshop & things that could have helped. We had 3 sessions, all different, each with its own valuable lessons.

Structure

A lot of teams started to tackle the problem in an unstructured way, led by their emotions and going about it without thinking about the bigger picture. To make the investigation far more effective, one of the things they should have done is create a structure.

A structure that covers at least some basics: which teams there are and what each team will investigate, when to share information, and also where to leave the evidence.

We all learn about creating structure via the Scrum framework. Use the things we already know and apply them to this situation as well.

Two of the groups almost lost a valuable piece of information because of the way they handled the evidence.

Communication & Collaboration

Alignment of communication was a big challenge in this workshop. What we saw happening is that it’s not only important to share information, but also to consider with whom.

If they had created a collaboration structure with teams, which some groups did, they could have focused their investigation (Sprint Goal) and had one person report back in a group of team leads. Not all information always needs to be shared with everybody; 2 or 3 moments to share information with everyone would have been enough.

All groups tried to share all the information people found, but ask yourself: “is it important to share this information with everyone?” Some information had no value, or no value yet; why share it if it is not important?

Time boxes

Because of the massive number of things that can be investigated, it’s good to set a goal and a time box for certain investigations. Teams sometimes got stuck for a long time on minor details without a clear goal in mind.

One example was the password of the camera. This was an enormous waste for teams, with no real gain in information. If they had used a time box, they would not have lost hours of workshop time guessing a password.

None of the groups really worked in time boxes. Yes, they tried: some teams started with a time box, but they never kept to it or appointed a timekeeper. They never finished a time box.

Proper investigation

A lot of the time, clues were overlooked. Finding something you don’t know you are looking for is hard; it really requires you to investigate properly. This is something all teams had problems with. Once you have created a picture of the scene, take it apart and see what you can find. Do not stop when you have found a clue; there are probably more to find.

Like in testing: trace steps back, go forward, take things apart and strip them down. Do proper investigation! Think about what could have been hidden, and do not stop after finding one or two clues. There are so many more clues for you to find.

In total, the 3 groups found only 70% of the clues. We also needed to help groups find a clue when it was really needed to solve the mystery. There was way more fun stuff to discover.

Clues are like bugs: you don’t stop testing when you find one. You start testing more, because there might be more interesting things to find.

Scaling

When dealing with multiple teams or large groups of people, you need to scale. Create small teams, and not too many of them, to limit the number of communication lines within and between teams. Too many communication lines make communication and collaboration hard.

For the first two sessions, we would have suggested teams of 4 or 5 people, creating 3 or 4 teams. For the Friday session with 34 people, we would have suggested 5 or 6 people per team, creating 6 teams. This reduces the communication lines within and between teams to a minimum.

There was also a lesson for us, as facilitators. The last group was 34 people in size. That made it hard for the people attending, because there were only so many spots to investigate. The group size also posed even more difficulties when sharing information. This led to people sometimes not being part of the investigation.

When you set a maximum of 25 people, stick to it and say “no” to extra people that want to attend!

Concluding

We had somewhat foreseen these lessons. If you had watched the lessons in the Unicorn Police learning box, you would have found all these lessons and answers there already, on top of all the information you learned during the Agile Testing Days.

The videos of the ATD Unicorn Police box can be found on Google Drive, if you want to continue learning.
