Robot Framework, Basic Setup

Plug: Robot Framework is quick to set up, easy to write tests for, and super fast to triage failures in.
The last point really sets it apart: no other test framework I've used puts screenshots, the test steps executed and every variable assigned into the results output file. You can also switch seamlessly between Robot keywords and Python in tests. It's a dream.
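As a tiny illustration of that Robot/Python mixing (this snippet isn't part of the setup below, and the test name is made up), the BuiltIn Evaluate keyword runs an arbitrary Python expression mid-test:

```robotframework
*** Test Cases ***
Mix Robot And Python
    # Evaluate hands its argument straight to the Python interpreter
    ${upper}=    Evaluate    "hello".upper()
    Should Be Equal    ${upper}    HELLO
```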
Here's a quick Proof of Concept / Hello World.
*These instructions assume a Unix-based OS.
Setup:
Protip: use virtual envs to manage your Python environments. Google what this means if you need more help, as I won't go into it here.
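For the unfamiliar, a minimal sketch (the env name robot-env is arbitrary, and python3 -m venv is the stdlib equivalent of the virtualenv tool listed below):

```shell
# Create an isolated Python environment so the pip installs below
# don't touch the system Python, then activate it.
python3 -m venv robot-env
. robot-env/bin/activate
```

All the pip installs that follow then land inside robot-env.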
Pre-Requisites
virtualenv
python 3.x
pip
Install Instructions
pip install robotframework
pip install robotframework-selenium2library
And we want a chromedriver to run some tests, so I went for this one:
pip install chromedriver_installer --install-option="--chromedriver-version=2.46" --install-option="--chromedriver-checksums=e287cfb628fbd9f6092ddd0353cbddaf,f63b50301dbce2335cdd442642d7efa0,d498f2bb7a14216b235f122a615df07a"
If you want good test structure you should have Models, Keywords and Tests folders. The contents of each should represent page objects (Models), the actions you can perform on those pages (Keywords), and the actual Tests you are executing (in the Tests folder).
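For example, a Models file might hold nothing but the locators for a single page, which Keywords and Tests files can then Resource instead of repeating raw xpaths. The file name and variable names here are purely illustrative:

```robotframework
*** Variables ***
# Hypothetical page object, e.g. Models/google-search-page.robot
${SEARCH_INPUT}     xpath=//input[@name='q']
${RESULT_STATS}     xpath=//div[contains(text(), 'About') and contains(text(), 'results')]
```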
I've created some Keywords which are useful to have:
Keywords/test-completion-steps.robot
*** Settings ***
Library    Selenium2Library

*** Keywords ***
Complete Test Actions
    Log    Test execution completed
    Run Keyword If Test Failed    Complete Failed Test Actions
    Close All Browsers

Complete Failed Test Actions
    Log Source
    Log Location
    Log Title
    Capture Page Screenshot
    Get Window Titles
And I've created a Test just to run something:
Tests/matts-test.robot
*** Settings ***
Library    Selenium2Library
Resource    ../Keywords/test-completion-steps.robot
Test Teardown    Run Keywords    Complete Test Actions

*** Test Cases ***
Lets Test Google Works
    Create Webdriver    Chrome
    Go To    http://google.com
    Input Text    xpath=//input[@name='q']    Test Level Up Blogger
    Submit Form
    Wait Until Element Is Visible    xpath=//div[contains(text(), 'About') and contains(text(), 'results')]
Finally, run it! Type:
robot Tests/
Robot Framework will drop report.html, log.html and output.xml into the current directory; open log.html to see every step, assigned variable and screenshot from the run.
A blog following the arduous journey along test paths and details of the interesting, bizarre and inane encountered en route.
Tuesday, 25 June 2019
Monday, 24 June 2019
From Coding to Support - Which Tests at Which Stage
The art of testing lies in the ability to put the right checks in the right places to provide the right people with information at the right time. And a failing test is just one form of error.
This means that while building software we need to be questioning how this information gets surfaced.
In terms of writing code this should obviously take the form of unit tests. We automate these because the cost of running these manually all the time is ridiculous.
In terms of integrating these components we create integration tests which check integration points. We should also look at an End to End test to ensure everything can talk successfully on the infrastructure. Again, we should automate these because the cost of running these manually all the time is ridiculous.
Finally we should put monitoring in place. This is a form of testing - providing feedback to the team on whether the product/feature they own is providing the expected value to users.
We should always put the right level of testing in place at each stage (unit, integration, UI, monitoring) to give us the confidence we need. We should put as much effort as possible into the earlier testing phases, and avoid duplicating tests; by doing this a natural funnel appears (aka the Testing Pyramid), where the majority of tests are Unit based and only a minimal number are UI based.
Putting the logic-checks into earlier test stages (such as Unit tests) makes sense because tests further down the Software Development Life Cycle (i.e. UI tests) cost more in time and resources to run (think of the cost of running 1 unit test vs 1 UI test: milliseconds vs minutes).
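To make that cost difference concrete, here's a throwaway example (the discount function is invented purely for illustration): a pure-logic unit test like this finishes in milliseconds, whereas a UI test checking the same rule would first have to boot a browser.

```python
import unittest

def apply_discount(price, percent):
    """Return price reduced by the given percentage (illustrative logic only)."""
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    # Pure in-memory checks: no browser, no network, no infrastructure.
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

if __name__ == "__main__":
    # exit=False so a wrapping script can keep running after the tests
    unittest.main(exit=False, verbosity=2)
```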
We get one other added benefit: the right people are notified of breakages at the right time. We often see a delay between a code change and the UI tests being run, for a variety of very reasonable reasons. When Unit tests break, the developer is aware early on, while the code is still fresh in their mind.
We also need to consider testing after a product is released, looking at what checks are in place to provide us the "Four Golden Signals" (latency, traffic, errors and saturation) that Google has so kindly given us; we need to monitor our apps and ensure the team is notified of failures post-deployment.
The responsibilities and roles of the QA suddenly look very twisted; where does the Quality Analyst role end and where does the Sys-Ops role begin? More on the Cross Functional Developer role in another post.
About Testing
I'm in the process of changing countries (and sadly leaving my dream job at Xero behind) and I've been asked to provide some "What is Testing" handover docs. While I've been a developer in all but name for years, I was pleasantly surprised at the eagerness of the team to learn more about testing from me, so I may as well expound these thoughts here as well as at work.
What is Testing
Providing validation to ensure that a product is fit for intended purpose and all parties are made aware of unintended side effects.

How Much Testing Should I do?
You want confidence that logical flows work.
You want confidence that negative flows don't work.
You want to be creative in trying to break things.
What does a Tester need to think about?
A tester needs to use the application like a user. This allows you to form realistic expectations relating to what a feature/product should allow you to do.
A tester also needs to understand the technical implementation like a developer. This allows you to come up with creative ways to break a feature/product, and possibly to identify what shortcuts you can take in testing as well (it's very inefficient to test everything all the time, more on this later).
What to Check
Feature Testing
Identify some acceptance criteria and a few things which you want to know about as part of working on the story.

Regression Testing
These are tests which provide sufficient confidence that your new product/feature has no adverse effects elsewhere. What are some of the other functions/features of the component that has been changed? Maybe just check that some of them work.

Exploratory Testing
When a new feature/product is released it can be quite good just to have an exploratory session where you use the system and just play around with changing things almost at random. While normally everything will work as intended, sometimes you'll unearth some funky behaviour. Protip: follow anything that looks suspicious or broken. Bugs do tend to cluster together too, so it can be really good to check each feature or integration point briefly (can system A talk to system B in the right way) rather than getting bogged down with all conceivable edge cases on a single area.
How Much Testing to do
It's inefficient to test all the things all the time. People sometimes have rules (“If it failed once, put a test in place!”), but really it depends on how much testing is needed. It's easy to fall into Test-paralysis, where a release is held up as completely redundant tests are executed just so “everyone is doing their job”.
This is influenced by how many users there are (a few, vs millions), the effects of a bug existing (wrong text displayed, or a satellite will fall out of the sky), and the likelihood of there being bugs (is the change trivial, or has something been fundamentally altered).
We can guide our intuition on this by understanding the technical implementation, the code changes which have occurred since the last good release, what bugs have been identified in the past, what testing is already in place (unit, UI, other?).
As a tester you have to think about those things, and build an idea of what to check before you release. This isn't something I can teach, and should really be used to drive discussions with the team and wider company to reach consensus/understanding of the testing required at each stage.
Here we seem to have small release cycles, which means quality can never drop very far (if you have a lot of changes between releases then quality naturally drops quite a bit). Not having a big quality problem means we don't need to spend weeks performing massive regression cycles to catch/fix all the bugs. This seems trivial. Oh, how painful the old ways were.