Wednesday, 12 December 2012

Selenium xpath - issue locating WebElement using contains(text(), 'somevalue')

I've recently hit an issue where my XPath should have been easily expressed as
xpath=//div[contains(text(), accountId)]
but this did not work. I thought I'd share the problem here.

I am trying to identify an element by text which spans nested tags (screenshot omitted).

What does work

The alternative locator shown in the (omitted) screenshot will return the object.

Conclusion

So it appears that text() within an XPath locator will only match the text up until the start of the next nested tag, which is limiting.

I am not sure of a workaround yet, but will post one as soon as I've found it.
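For what it's worth, a workaround that usually applies is matching on the element's whole string value - contains(., 'somevalue') - rather than its individual text nodes, since text() only selects the element's own text nodes one at a time. The behaviour can be reproduced outside Selenium with nothing but Python's standard library (the markup below is made up for illustration):

```python
import xml.etree.ElementTree as ET

# hypothetical markup: the account id sits inside a nested tag
div = ET.fromstring("<div>Account <b>12345</b> details</div>")

first_text_node = div.text              # text up to the first child tag: "Account "
full_text = "".join(div.itertext())     # all descendant text: "Account 12345 details"

print("12345" in first_text_node)       # False  (what contains(text(), ...) sees)
print("12345" in full_text)             # True   (what contains(., ...) sees)
```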


Tuesday, 4 December 2012

Test-creators should maintain

I once had the opportunity to work on a project where test creation and test execution were two activities performed by two entirely different teams. Interestingly, this produced considerable test rework for the maintenance team on all newly created assets. The problem was that, while the tests were fit for purpose and passed on day one, there were always enough slightly fragile components to give newly created assets an embarrassingly short life-span. This was the result of some genuinely over-complex actions, reliance on particular dates/times, and assumptions about existing data.

For a long time the execution team was responsible for these assets, but it became increasingly obvious that the creation team were not improving on the quality front. If anything they/we were getting worse over time, as we continued to build test upon test using our slightly dodgy implementations.

We ended up changing the ownership of assets by introducing a warranty period. In order for the creation team to appreciate and learn, the onus of fixing broken assets remained with them for several months after a script was declared ‘ready’. While productivity initially took a hit, it wasn’t long before we saw considerable time-savings. Knowing what pain was in store resulted in a real shift in culture when creating assets: the amount of code-reuse increased considerably, and the overall maintenance effort decreased. The tests we created from then on were built with the knowledge of what had gone wrong previously; that makes for a pretty solid foundation.

Thursday, 29 November 2012

Selenium over QTP

Here is an excerpt from a recommendation made for my current project, where we chose to use Selenium rather than the established (and, in our case, mandated) QTP.

Reasons for:

  • Developers will be able to easily run tests in a local sandbox environment
  • Tests can be written in Java
    • Tests can be dev-friendly; developers could assist with tests
    • Java is tidier and easier to deal with than VBScript (QTP's language), and considerably more powerful in application
    • Social element, as dev and test have another thing in common
  • Tests will be faster to execute
    • QTP has a time lag on loading (when the libraries being loaded are sizeable, and also when acquiring a license from the license server)
  • Browser support includes the possibility of iOS and Android emulators
  • Eclipse is the better IDE
    • Tests will be easier to create and maintain
    • Debug time will likely be faster as a result
  • Integration with continuous build systems: Selenium tests can easily be initiated from the command line already; this is not quite so easy with QC/QTP.


Wednesday, 28 November 2012

Agile Automation - tooling overview


I've spent a lot of time working with QTP over the years, and have successfully driven tests using a combination of batch processes, VBS processes and QC OTA API calls. While QTP works nicely (it has its niche and so on), I have a personal preference for Selenium WebDriver[0] for integrating testing within an agile team (lightweight, cost, versatility, reporting, debugging, targeting tests, ease of writing and updating assets, scaling is actually affordable, it actually means working with developers, and so on).

[0] Full list of tools involved in the current project's implementation:
  • Selenium RC/Grid for distributing tests,
  • TestNG for driving the tests,
  • Subversion for assets,
  • Cruise Control at present, moving to Jenkins, for building all artefacts,
  • Python for serving up results, and
  • PsExec, VBS and batch scripts for deploying/initiating/plumbing everything that's needed.

Automation and Continuous Integration

Familiar story: a few devs commit some lovely code, one dev commits some lovely bugs. Testers deploy the latest code compiled from all devs that evening and begin running tests the next morning. A bug is located. Who made it requires a little bit of chasing. What change caused it requires a bit of thinking. Dropping current tasks to pick up bug-fixing incurs some context-switch time. That's inefficient. Finding bugs early is cheap; any delay and costs start ramping up.

What's pretty cool/sleek/shiny is the far-from-new-or-novel idea that we perform a bunch of tests against everything a developer creates. Immediately. For every change. Just run stuff. 

Amazon are typically cited as the prime example of the sleekest of processes: they are able to go from a developer making a change to seeing that change in production within half an hour[0], having completed a full regression in that time.
[0] This is from a conversation I had with some random person at some random pub, so credibility is definitely lacking. But the idea is there, captured in this (possibly fictitious) scenario: with the right testing in place we can provide however much feedback is needed.

So, what do we want? We want a range of tests which are easy to create, easy to maintain, that cover a fairly vast portion of the application, and detect defects quickly. This just requires having a healthy build process within a team, and plugging in a test automation framework that meets the requirements. It'll all need to be easy, largely (100%?) automated, and provide valuable feedback to the team.

There's nothing profound with any of this; it's a simple idea: test early and often. It's cheaper, and will allow everyone to spend their energies on making the best darn product they can. 

Tuesday, 27 November 2012

Grouping Tests - TestNG

One of the many brilliant TestNG features is the ability to group tests.

This is a high-level overview of how I structure this; I will try and flesh this out when I get some down time.

From the command line, invoke TestNG:
java org.testng.TestNG FullRegression.xml
(from memory; note that classpaths will probably need to be set up)

My file, FullRegression.xml just references other files.


<suite name="MyProjectTestSuite" verbose="3" >
 <suite-files>
  <suite-file path="./Regression_Story101.xml" />
  <suite-file path="./Regression_Story102.xml" />
  <suite-file path="./Regression_Story103.xml" />
  <suite-file path="./Regression_Story104.xml" />
  <suite-file path="./Regression_Story105.xml" />
  <suite-file path="./Regression_Story106.xml" />
  <suite-file path="./Regression_Story107.xml" />
  <suite-file path="./Regression_Other.xml" />
  <suite-file path="./Regression_Failing.xml" />
 </suite-files>
</suite>


An example of a subsequent file looks something like this:


<suite name="Story104" verbose="3" >
 <test name="Regression1">
  <groups>
   <run>  
    <exclude name="overkill" />
    <exclude name="failing" />
    <include name="story104" />
   </run>
  </groups>
  <packages>
   <package name="com.whiteboxit.mattTestA.tests" />
  </packages>
 </test>
</suite>
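As an aside, the per-story files are near-identical boilerplate, so with many stories they could be generated. A throwaway Python sketch - entirely my own illustration, not part of the original setup; the file names, group names and package follow the examples above:

```python
import pathlib
from xml.sax.saxutils import escape

def write_story_suite(folder, story, packages=("com.whiteboxit.mattTestA.tests",)):
    """Write Regression_Story<NNN>.xml, including only that story's group."""
    body = [f'<suite name="Story{story}" verbose="3" >', ' <test name="Regression1">',
            '  <groups>', '   <run>',
            '    <exclude name="overkill" />', '    <exclude name="failing" />',
            f'    <include name="story{story}" />', '   </run>', '  </groups>',
            '  <packages>']
    body += [f'   <package name="{escape(p)}" />' for p in packages]
    body += ['  </packages>', ' </test>', '</suite>']
    path = pathlib.Path(folder) / f"Regression_Story{story}.xml"
    path.write_text("\n".join(body))
    return path

def write_full_regression(folder, stories):
    """Write FullRegression.xml referencing one generated suite file per story."""
    files = [write_story_suite(folder, s) for s in stories]
    lines = ['<suite name="MyProjectTestSuite" verbose="3" >', ' <suite-files>']
    lines += [f'  <suite-file path="./{f.name}" />' for f in files]
    lines += [' </suite-files>', '</suite>']
    main = pathlib.Path(folder) / "FullRegression.xml"
    main.write_text("\n".join(lines))
    return main
```

Running write_full_regression(some_folder, range(101, 108)) would reproduce the story files shown above.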

Why have I done this?

Well, visibility. My auto-generated HTML run results are now broken down by story. This looks awesome, and it makes it really easy to keep on top of anything which has started failing.

How I've approached test result reporting with TestNG

This is a look at how I've approached result reporting on my current project. Results, current and historic, are served up on my company's intranet at a URL.

Using:
Selenium, TestNG, Java and Python

So, I run my TestNG tests and these automatically produce some cool-looking results (win). They look a bit like this (screenshot omitted):
Two limitations though:

  1. I want everyone to be able to view this easily, and
  2. I want a historical record of past runs to be present.

Solution: I'm saving these into a folder which is hosted by a simple HTTP server, built in to Python.

Python / simple HTTP server

Go to the directory and invoke the following:
c:\dev\opt\python-2.7\python -m SimpleHTTPServer 8000
Obviously I've put in the path to my Python install; you'll need to change the path to wherever yours is.
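On Python 3 the equivalent incantation is python -m http.server 8000. If you'd rather start the server from code (say, from the same script that copies results around), a minimal standard-library sketch - my own illustration, not part of the original setup - looks like this:

```python
import http.server
import socketserver
import threading
from functools import partial

def serve_results(directory, port=0):
    """Serve `directory` over HTTP in a background thread.

    port=0 asks the OS for a free port; returns (server, actual_port).
    """
    handler = partial(http.server.SimpleHTTPRequestHandler, directory=directory)
    server = socketserver.TCPServer(("127.0.0.1", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

Navigating to http://host:port/ then gives the same directory listing of result folders described below.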


Historical Record

This was achieved by simply creating a copy of the assets at the served-up location whenever I wanted.
By navigating to the root dir in a browser I could see all of my result folders.
Nice; now I just click on a link.

Batch Scripting a folder copy for historical run results

Because I am too lazy to copy a folder and rename the new one with a date stamp, I created the following batch script to do it for me.

@echo off
ECHO Generating timestamp to use in folder name
For /f "tokens=2-4 delims=/ " %%a in ('date /t') do (set mydate=%%c%%b%%a)
For /f "tokens=1-2 delims=/:" %%a in ("%TIME%") do (set mytime=%%a%%b)
echo %mydate%-%mytime%

SET newResultFolderName=%mydate%-%mytime%

ECHO New folder name will be %newResultFolderName%

SET "locationOfLatestRunResults=C:\dev\tests\DrivingXml\test-output"
SET "uriToPublishResults=\\myserver01\selenium-test-output\welc\%newResultFolderName%"

ECHO Location of latest run results: %locationOfLatestRunResults%
ECHO URI to publish results to: %uriToPublishResults%

XCOPY /S /V /Z "%locationOfLatestRunResults%\*" "%uriToPublishResults%\"

EXPLORER "%uriToPublishResults%"
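For what it's worth, since Python is already on the box serving the results, the same timestamp-and-copy job can be done with a few lines of the standard library. This is a hypothetical equivalent, not the script I actually ran:

```python
import datetime
import pathlib
import shutil

def publish_results(src, dest_root):
    """Copy the results folder `src` to `dest_root` under a timestamped name."""
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M")
    dest = pathlib.Path(dest_root) / stamp
    shutil.copytree(src, dest)  # fails loudly if the destination already exists
    return dest
```

e.g. publish_results(r"C:\dev\tests\DrivingXml\test-output", r"\\myserver01\selenium-test-output\welc").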




Wednesday, 14 November 2012

Executing TestNG Selenium tests via Command Line

My Java's a little rusty.
I'm new to Selenium.
This will be a guide for me: the 'above novice programmer' and 'below dev'.

Starting Assumptions

Java jre is installed.
Class path for Java has been set up.
You know how to add/edit class paths (for most (all?) Windows machines this is: Start > right click My Computer > Properties > Advanced tab > Environment Variables)

There is a Selenium project kicking around, already set up with TestNG. 
We're on a Windows computer.

I'm going to assume that you've been able to run these from your IDE - in my case, Eclipse. For help on this, or getting TestNG set up originally, refer to my older post on Implementing TestNG

What two things we need to set up

Two classpath entries are required. Please be aware that a new command window (or, at worst, a log-off) is generally needed before an environment-variable change takes effect.

Class path to Selenium jar

So I have a file on my computer: C:\Selenium\selenium-server-standalone-2.25.0.jar
So I add this to my classpath:
c:\Selenium\*
From memory this didn't work without the \* - most likely because a plain directory entry only picks up loose .class files, while the \* wildcard picks up the jars.

Class path to my project

And I have a project at C:\dev\Workspace\MyProject\
So I will add the bin folder to my classpath:
C:\dev\Workspace\MyProject\bin;
This threw a wobbly when I added \* - most likely the mirror of the above: the wildcard only matches jar files, whereas bin holds loose .class files.

Command to Run

Refer to the official documentation for examples on creating a driving xml file.
C:\Selenium\DrivingXml>java org.testng.TestNG myDrivingXmlFile.xml
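Pulling the pieces together: instead of editing the machine-wide Environment Variables, you can set CLASSPATH for the current cmd window only. The paths below are the examples from this post, so adjust them for your machine:

```shell
:: applies to this cmd session only - no reboot or log-off needed
set CLASSPATH=C:\Selenium\*;C:\dev\Workspace\MyProject\bin;%CLASSPATH%
cd C:\Selenium\DrivingXml
java org.testng.TestNG myDrivingXmlFile.xml
```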

Work log (and issues encountered)

Could not find or load main class org.testng.TestNG

Check your class paths. I had this because I failed to include my class path to my selenium jar. Or had an error in one of my class paths.

Exception in thread "main" java.lang.NoClassDefFoundError: org/testng/TestNG

Computer probably doesn't have the TestNG jar file on its classpath. Will test this on Monday though, as today is POETS day.

Tuesday, 13 November 2012

QTP/vbs - Dynamically loading executable code - pt II.

I very recently posted an article on loading VBS code dynamically, and noted a shortcoming: the functions loaded were not globally available.

Solution

ExecuteFile "c:\myFile.vbs"

Where myFile.vbs is a file filled with a bunch of executable commands and such. 

QTP/vbs - Dynamically loading executable code - pt I.

There is a cool function in VBS which allows you to execute a string as regular VBS code: Execute.
The downside is that the effects of Execute are not persistent; i.e. the variables declared in memory, and the functions 'registered', will not be available globally[0]. I didn't realise this in three years of using QTP until I came to reimplement it on another project.

[0] Example
strMyVbs = "strSomeText = ""Hello World"" "

f_loadMyVariable
msgbox (strSomeText) 'This does not print Hello World, because the executed code was/is local to the function

Function f_loadMyVariable()
  Execute strMyVbs
  msgbox (strSomeText) 'This will print Hello World to the screen
End Function
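As an aside, Python's exec has exactly the same trap: run inside a function, the executed names land in (and die with) that function's local scope unless you explicitly pass globals(). A quick illustration of the same scoping idea (nothing to do with QTP):

```python
code = 'str_some_text = "Hello World"'

def load_locally():
    exec(code)  # names land in the function's local scope and vanish on return

def load_globally():
    exec(code, globals())  # names land in the module's global scope

load_locally()
leaked_after_local = "str_some_text" in globals()    # False

load_globally()
leaked_after_global = "str_some_text" in globals()   # True

print(leaked_after_local, leaked_after_global)
```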

Hopefully, for the sake of my productivity today, there will be a post following showing how to make this a global thing. -- solved, QTP/vbs - Dynamically loading executable code - pt II. --

Implementation

A quick elaboration on implementation
Dim uri
uri = PathFinder.Locate("[QualityCenter\Resources] Resources\SomeOtherFolder\myFile.qfl")

Dim oFileSystem, oFile
Set oFileSystem = CreateObject("Scripting.fileSystemObject")
Set oFile = oFileSystem.OpenTextFile(uri)
Dim strInputText
strInputText = oFile.ReadAll
Execute strInputText
oFile.Close


Monday, 29 October 2012

Implementing TestNG

I'm starting with a Selenium test framework which I have hammered together over the last few weeks, involving Selenium + JUnit, all written in Java with the help of Eclipse. There are now a few tests performing over 300 validations, and I am starting to realize that maybe driving my tests through JUnit is just not good enough.

The immediate issues I am aiming to resolve are the ability to tag and execute specific tests, to report tests as 'not run' (rather than passed/failed), and to get some cool XML reporting going on. From a skim through the TestNG docs it looks like that's going to make my life easier.

So, what to do. 

Using TestNG in your code (for Java)

1) Get the TestNG jar
2) Save/extract it somewhere

In your project/build tool
3) configure your project's build path to include the TestNG Jar

In your code
4) Remove your JUnit references, just to force us into importing/using TestNG's stuff
//import org.junit.*;
5) Add in references to TestNG
import org.testng.annotations.Test;
import org.testng.Assert;
6) Replace your JUnit assertX(...) calls with TestNG's Assert.assertX(...)

Please note, TestNG goes
Assert.assertEquals(actual, expected)
rather than (expected, actual) as in JUnit.

Driving TestNG with eclipse

Get Eclipse/TestNG plugin (offline download)
And the link to download the plugin (not necessarily the most up to date, but at least a working one) is here: https://groups.google.com/forum/?fromgroups=#!topic/testng-users/v8mlca9oQoo
With this I simply placed the .jar in my Eclipse plugins dir (e.g. C:\dev\eclipse\plugins) and then started Eclipse, and I gained the ability to run TestNG tests.

And now the TestNG plug in / program runner is available within the Debug As and Run As context menu options. 

Alternative / Eclipse's easy-as 'Install' feature
You can do this cool thing where you download and install the software by going Help > Install New Software > and then choosing TestNG from http://beust.com/eclipse
I work under archaic, fear-based corporate policies, however, so this option is not available to me.



Monday, 3 September 2012

QTP Clean Up Script Perfection II

Following on from the last time I posted something like this I have largely overhauled my clean up procedure.  

I suppose this isn't really limited to QTP as it's mostly VBS code, and objects are identified through Windows/desktop calls.

So, it starts with a driving function. I think it's self explanatory thus far.


I don't want to go overboard explaining all the calls and stuff, I have pasted below what I believe to be a near-complete working copy of the code. You'll need to remove any custom functions which point to other things (e.g. fDone, shown in the above picture).

For each of fCleanUpDialogs(intIterations) and fCleanUpBrowsers(intIterations), objects are gathered and then I try to close each of them. Seems simple enough.

An example of fCleanUpDialogs(...)

Again, this makes subsequent calls to other functions. This is largely for tidiness, and allows for easy modification of this function without needing to think too much.

And here's my code (formatting and commenting has proven to be a bit mind-numbing, forgive the messiness):



Function fCleanUp(intIterations)
  fDone "fCleanUp() [CALLING]", "Function called. Parameter intIterations: " & intIterations
  Dim i
  i = 0
  Do While i < intIterations
    If (fCleanUpDialogs(1) = 0) AND (fCleanUpBrowsers(1) = 0) Then
      Exit Do
    End If
    i = i + 1
  Loop
  fDone "fCleanUp() [EXITING]", "Function exiting after " & i & " iterations."
  fCleanUp = i
End Function


'********************************************************************************************
'    Function    fCleanUpDialogs
'    Automater: Matt S
' Purpose: A function which will close all dialogs currently present
' Returns: The number of dialogs that the clean-up proc closed
' 0 means that there were no dialogs cleaned up
'********************************************************************************************
Function fCleanUpDialogs(intIterations)
  Reporter.ReportEvent micDone, "fCleanUpDialogs() [CALLING]", "Function called. Parameter intIterations: " & intIterations
  Dim oDialog
  Dim intLoopCount
  intLoopCount = 0
  Do While intLoopCount < intIterations
    oDialog = null
    On Error Resume Next
    Set oDialog = fGetADialog()
    On Error Goto 0
    If IsNull(oDialog) = False Then
      If fCleanUpDialogs_tryClickButtonWithName(oDialog, "Cancel") = False Then
        If fCleanUpDialogs_tryClickButtonWithName(oDialog, "OK") = False Then
          If fCleanUpDialogs_tryClickButtonWithName(oDialog, "Close") = False Then
            fDone "fCleanUpDialogs() [WARNING]", "Dialog detected, but unable to recognise which button closes the thing"
          End If
        End If
      End If
    Else
      Exit Do
    End If
    intLoopCount = intLoopCount + 1
  Loop
  fDone "fCleanUpDialogs() [EXITING]", "Function exiting after " & intLoopCount & " iterations."
  fCleanUpDialogs = intLoopCount
End Function


'Supports function fCleanUpDialogs
'********************************************************************************************
'    Function    fCleanUpDialogs_tryClickButtonWithName
'    Automater: Matt S
' Purpose: A function which will attempt to click on a dialog's button, by button name
' Returns: True if the click was successful, else false
'********************************************************************************************
Function fCleanUpDialogs_tryClickButtonWithName(oDialog, btnName)
  If Dialog("hwnd:=" & oDialog.GetRoProperty("hwnd")).WinButton("text:=" & btnName).Exist(1) Then
    Dialog("hwnd:=" & oDialog.GetRoProperty("hwnd")).WinButton("text:=" & btnName).Click
    Reporter.ReportEvent micDone, "fCleanUpDialogs()", "Attempted to close a dialog by (successfully) clicking on button: " & btnName
    fCleanUpDialogs_tryClickButtonWithName = True
  Else
    Reporter.ReportEvent micDone, "fCleanUpDialogs()", "Attempted to close a dialog but could not locate any button of name: " & btnName
    fCleanUpDialogs_tryClickButtonWithName = False
  End If
End Function


Function fGetADialog()
  fDone "fGetADialog()", "Calling function fGetADialog()"
  fGetADialog = null
  If Browser("index:=0").Dialog("index:=0").Exist(1) Then
    fDone "fGetADialog() [RETURNING]", "Successfully identified a dialog by searching for Browser(""index:=0"").Dialog(""index:=0"").Exist(1)"
    Set fGetADialog = Browser("index:=0").Dialog("index:=0")
  Else
    Dim oDialogDesc, oDialogs
    Set oDialogDesc = Description.Create
    oDialogDesc("Class Name").Value = "Dialog"
    Set oDialogs = Desktop.ChildObjects(oDialogDesc)
    If oDialogs.Count > 0 Then
      fDone "fGetADialog() [RETURNING]", "Successfully identified a dialog by searching for objects with Class Name = Dialog"
      Set fGetADialog = oDialogs(0)
    End If
  End If
End Function


Function fCleanUpBrowsers(intIterations)
  fDone "fCleanUpBrowsers() [CALLING]", "Function called. Parameter intIterations: " & intIterations
  Dim i
  i = 0
  Do While i < intIterations
    Dim oBrowser
    oBrowser = Null
    On Error Resume Next
    Set oBrowser = fGetABrowser()
    On Error Goto 0
    If IsNull(oBrowser) = True Then
      Exit Do
    Else
      oBrowser.Close
    End If
    i = i + 1
  Loop
  fDone "fCleanUpBrowsers() [EXITING]", "Function exiting after " & i & " iterations."
  fCleanUpBrowsers = i
End Function


Function fGetABrowser()
  fDone "fGetABrowser() [CALLING]", "Calling function"
  fGetABrowser = null
  If Browser("index:=0").Exist(1) Then
    fDone "fGetABrowser() [RETURNING]", "Successfully identified a browser by searching for Browser(index:=0)"
    Set fGetABrowser = Browser("index:=0")
  Else
    Dim oBrowserDesc, oBrowsers
    Set oBrowserDesc = Description.Create
    oBrowserDesc("Class Name").Value = "Browser"
    Set oBrowsers = Desktop.ChildObjects(oBrowserDesc)
    If oBrowsers.Count > 0 Then
      fDone "fGetABrowser() [RETURNING]", "Successfully identified a browser when searching by Class Name = Browser"
      Set fGetABrowser = oBrowsers(0)
    End If
  End If
End Function

QTP Troubleshooting – Objects not recognized


In ALM11 I’ve had an issue with QTP. When running a test, no objects were being recognised in the application. They should be; they have been before; the components work in other tests, but not in this one test, for no apparent reason. Using the Object Spy I can see QTP clearly doesn’t know what to make of the Browser, thinking it’s just a Windows object.


This is a problem I’ve been dealing with on and off for the last couple of months, and I am pleased to announce that the culprit was a certain Component making an appearance at the start of my BPT Test. Sadly I haven’t worked out exactly why this component sent everything into a spin, but I am happy enough knowing generally where the failure is, and that it is not my entire test.

It was a scripted component which did a very simple check on an input parameter.

I have since moved the function from the first to the second step in my test and now everything works again.

If anyone knows about this issue feel free to leave a note in the comments.

Thursday, 30 August 2012

QC: using Datatables within BPT Components

Here's a quick tutorial (with awesome mspaint screen-shot narrative assistance) on loading test data from an excel spreadsheet in to a QC/ALM BPT Component.

At a high level we add the data in to QC, associate the data with a test, and then use the data in the test. Read on for the breakdown on what this involves:

1) Upload data to QC > Test Resources area, as a new asset of type ‘Data Table’ (don't ping me on the lack of variable-naming consistency - this is just a rushed example).

 a. The data must be in the format where the data(/variable) names run across row 1, and the actual data values are contained in row 2.
 b. I assume this is data-driven (I have not tested this), meaning you could have data in rows 2, 3, 4, …, n, and this would just result in the test being called multiple times.


2) In QC > Test Plan > Some Test > Parameters tab
 a. In Data Resource, locate your recently uploaded spreadsheet
 b. Add new test parameters
 c. Map parameters

That looks like:


3) In your Test > Test Script tab
 a. Plug in your parameters here, by clicking on the iterations hyperlink




And now you’re done.


A note on QC/ALM11 behaviour: ensure ‘Automatic Runner’ is closed and re-launched from Test Lab for data changes to be applied.

Thursday, 16 August 2012

Communicating with SCQA

A crucial aspect of any job is communicating your point of view clearly and succinctly. Putting words together to form and support an idea is a learnt and practised art.

People often ask me how I am so awesome at communicating. I have a technique which I always rely on: SCQA. Simply break the message down into four points - the situation, complication, question and answer. This allows the reader (/listener) to follow your point of view easily.

So, what is the S, C, Q and A?

Situation
An overview/introduction in to what is going on

Complication
This is the climax: some tension or drama that really gets people going 'oh, that's not good'. The amount of drama to use obviously depends on the situation and your own agenda; don't overdo it and cry wolf.

Question
What can we do to resolve this?

Answer
You can lead the witness and help out with an answer. Or you could structure the above three so that there is no option but what you're suggesting.


I should mention, the Q and A parts are optional. 


And voila, messaging made easy for the sender and receiver.

In example
S As you're aware TesterA has been allocated to two projects, C and as a result they have been unable to complete testing on ProjectX. Q Should we do something about this? A I suggest hiring a Project Manager who can count. It would make sense to have them assigned full time to ProjectX.

Yes, No, I don't know

There are three right answers to any question; yes, no and I don't know. It's intriguing how some people seem intent on using the wrong one for their situation. Perhaps it's a mask to cover not knowing everything.

Pro Tip: feel comfortable using each of these answers, and always use the right one; don't be a douche.

Wednesday, 20 June 2012

A short rant on naming conventions

So today I am having to run automated scripts manually - part of the triage process - and I thought I had come to grips with all the major vague statements which make up the automated suite. That was until I encountered the instruction to "[click] lnk2".

Whatever is being created - declaring variables, reusable test scripts, or naming objects for an automation library - remember that a year from now someone else will be picking it up and trying to work out what on earth you were on about. The key - I am not going to preach my own flavour of conventions here - is to just think about it. Be proactive. Have a structure. Go back and update names which were fit for purpose but are now a bit rubbish; otherwise you will end up with a bigger pile of rubbish. Just do it.

Tuesday, 19 June 2012

QTP tests are not running from QC - Failed (!)

This is a rushed post as I collate my thoughts around why my tests are not running 2 1/2 days before a release.

So I've recently set the remote agent to restart QTP after 10 runs, in the hope of reducing the likelihood of QTP crashing mid-run.

My problem: suddenly every tenth test I run from QC crashes.
Hmm. 

Change remote agent restart value to 1

Now every test fails. 

Logs
With medium level logging I see: 
"Error...CRemoteAgent:get_value Pier is NULL", and
"Error...SetDisplayKeywordView : Automation is null"

I tried restarting the machine. I've also tried updating the Remote Agent settings (all values aside from 'restart testing tool after x'). The problem persists.

I've given up and set the restart-after count to 0. My tests are now running. I will have to carry on investigating when I am not 2.5 days out from a major release. If you've seen this or know how to fix it, please post a reply.

Notes on my environment
Windows XP
QC ALM 11 (mid-tier license, from memory)
QTP 11.00.0.0 Build 1018



Tuesday, 22 May 2012

QC - a cursory glance at Baselining

Launching right in - HP's Quality Center has the concept of Baselining, which is different to, but highly complementary with, Version Control. We'll have a quick look at what can be done with Baselining below, and at Version Control in a subsequent post.

When writing this my focus is on automated Quick Test Professional keyword-driven Business Process tests, but the findings below [should] transcend test types and apply to all QC assets.

Out of scope: I haven't explored behaviour around QC's Requirements or Reporting, but I don't anticipate there being any surprises there.

Baselining

"Hey, did you know I play bass?" - the bass-line, when used correctly, is a highly effective pun.
Baselining saves a copy of a bunch of assets (termed a Library) at a particular point in time. 

Should you fail to include some linked/required assets in your Library, don't worry, they will be automatically included. 

What it's not

  • Version Control
  • A branch/merge tool

It is a pure snapshot in time, nothing more, nothing fancy.

What can we do with a baseline

We can see how an asset once looked. This is a slow process.

We can compare between two different libraries of assets to spot the differences. This is a slow process and very click-intensive, nothing like using a real version-comparison tool.

Baselined assets can then be re-imported into the same project if you're on QC Enterprise (I'm not 100% sure about QC Starter edition), or into a different project if you've forked out for the conglomerate license.

Test Lab can be set up to execute tests from an historic/baselined snapshot. This is achieved easily by navigating to a Test Set in Test Lab, viewing the Details page, and then selecting the appropriate baseline. Be warned: doing so will clobber all existing test run information and will also remove any tests that are not associated with the baseline. This raises some concerns. [0] It makes me wonder how we can execute a test which was part of a baseline but is part of one no more. [1] I am also curious how this works when stepping through and triaging. [2] And I am guessing it is impossible to update a baselined version of a test. These are important questions, so I am going to start experimenting now.

[0] You can't, the test is gone forever - see below about deleting.

[1] So, take an asset which has been baselined, modified, and then the baselined test is added to a test set. Invoking the context menu for the test (in the test set) and selecting 'show test in test plan' takes us to the updated (current version of the) asset. On first glance there are few or zero clues to draw my attention to the baseline. Within Test Plan I am able to view the historic asset (from the Test in Test Plan, navigate to the History tab, then choose the Baselines tab). Using this for triage is not going to be straightforward.

[2] And I have attempted to update the historic test asset, and can confirm that I seem to be unable to. A guessed-at workaround workflow might be to re-import the library elsewhere, update the test and... well... mash it back into place, losing a tonne of historic run information and possibly all requirement/defect links for the original test in the process.

Switching between baselines

This is an extremely valuable workflow - the ability to roll-back.

There is no reverting back to a previous version of an asset with baselining. What can happen is the assets are re-imported into a new folder, and then you can copy these in to replace existing assets - but you wouldn't actually want to do this, as you will end up losing all Test Lab run results for a particular test, and possibly all links between items.

At best you can reference back to each test, one by one, and pick out what is going on in the test, and then manually re-write/update the existing asset based on this historic record. 

Importing a baseline

I imagine this is only useful once, when setting up a project. There are no other workflows I can think of where you would want to re-import.

Deleting a Test

Will remove the test from the baselines when referenced from Test Lab.
The test will still exist, but a re-import is the only way it will ever see the light of day again.

Deleting a Baseline

Will also delete all Test Sets associated with it. 

Closing

So we've had a look at the finer detail of what baselining does and what its limitations are. Next up we'll look at QC's built in Version Control capabilities and see whether that gives us any freedom around branching, merging and rolling back.

Tuesday, 13 March 2012

Single Fastest Form of Manual Testing

The title of this post promises big things, and I assure the reader that I am not blowing smoke when I boast of possessing the secret to the fastest form of manual testing under the sun.
We're all used to the model of the cost of fixing a bug versus the time elapsed since the code was created. It's extremely relevant to what I'm talking about, so here it is again.
 
Cost of fixing a bug
Well, enough of the theory; on to the massive time-saving secret.

It's simple actually.

All it is is sitting down next to a developer for 5-10 minutes before they commit code and having a play with it.

This Over the Shoulder test approach was used at my last company with great success. The benefits include:
    
  • Bugs will be found while sitting there, even if the developer 'tested it'. Testers will look with more tenacity than most developers; I suspect this is largely because a developer's primary goal is to create, while a tester's is to break
  • Bugs found at this stage are cheap to fix. Undoubtedly cheaper than if they'd been found a few hours later
  • Frustration felt by the developer can be vented in a healthy fashion (I've heard some interesting expletives relating to requirements and time pressures whilst doing this exercise...)
  • Feedback to developer is almost instantaneous / issues which would have undoubtedly arisen incur no context switching penalty for the developer
    • A subtle benefit: there is no bug-process for the developer to get used to shuffling work in and out of when the feedback is this quick; it reinforces a sense of accountability
  • Conversation on dubious behavior is brought to light quickly
  • The developer is educated about what they missed without it being a big deal
  • No one needs to go through bug reports
  • The developer and tester have a chance to talk about what's been done - this simple chat can trigger all kinds of 'ohh, I didn't think about that' type statements, from either party
  • The developer/tester relationship is strengthened just that little bit extra by talking (unless they fought, which is good to get out of the system)

The representative graph of cost to fix over time becomes:
Cost of fixing a bug with Over the Shoulder moment added

The Poem of the Good Tester, pt 1

Since there is an abundance of blogged opinion on what makes a good tester, I'm merely going to reiterate others' thoughts via the slightly more delightful medium of poetry.

Our tale starts off quite slow
With a project all down with woe
No manager was humming
As no releases were coming
Ah yes, a problem that we all know

Upon inspection there's functional delay
Because Test are barring, oi vhey
We ask 'what's the matter
As it's all in a tatter'
"The testers are wild!" you say

Well let's start off on this first thing
Listen closely, I'd like you to sing
Regardless management, dev, analyst
I don't give the faintest
It's to each other that you must Ping


Tune in for parts 1, 2, ..., n-1, n.

And if you're lucky, n+1 ...

The Fundamental Principles of a Good Test

Good tests are an oddity in so far as you will only appreciate the heuristics of a good test after encountering things which have gone wrong due to a 'bad' test.
Admittedly there aren't hard and fast rules to follow to make a test 'good', merely some tricks for young players to be wary of, to ensure the test they are creating is fit for purpose in the present and the future.

Somewhat idealized end goals:

  • Have a bug result in one and only one test failing
  • Have only one artifact to update when a single piece of functionality is changed
  • Have others be able to follow your work quickly and with great comfort

New terminology
Verificational and navigational steps: a test script really contains two types of step - those verifying the behaviour of the specific feature you are interested in, and those navigating you to a place from which you can verify. These are verificational steps and navigational steps respectively. For the latter it doesn't matter how the script navigates around or whether defects are encountered en route; the only thing a test script is concerned with is verifying what it was intended to verify. 
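To make the distinction concrete, here is a minimal Python sketch. All of the names (navigate_to_account_page, the account IDs, and so on) are invented for illustration, not taken from any real test suite:

```python
# Hypothetical sketch of the navigational/verificational split in a test.

def navigate_to_account_page(app, account_id):
    # Navigational steps: how we get here does not affect the test's verdict,
    # as long as we arrive somewhere the verification can take place.
    app["logged_in"] = True
    app["current_page"] = f"account/{account_id}"

def test_account_page_shows_account_id():
    app = {"logged_in": False, "current_page": "home"}
    navigate_to_account_page(app, account_id="ACC-42")
    # Verificational step: the single check this test exists to make.
    assert "ACC-42" in app["current_page"]

test_account_page_shows_account_id()
print("pass")
```

If a defect breaks navigation, many tests fail for one reason; keeping the navigational code shared and the verificational step singular makes the real failure easy to spot.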

In a nutshell, I'd put the 'good heuristics' as:
  • Scripts
    • Atomic - they shouldn't be larger than they need to be
      • A fail should represent one single piece of functionality failing
      • A single piece of functionality failing should result in one fail
      • A test shouldn't encompass so much that one missing piece of functionality prevents another twenty pieces of functionality from being clearly tested/reported/tracked
  • Steps
    • Clear - anyone should understand what you're saying
    • Succinct - it's quick to read, quick to understand
    • Contain one and only one action to perform per step
    • Contain at most one item to check in the expected result
    • No confusing logic, if possible
    • Strictly no branching in verification steps (a single test cannot do X or Y, test x1 can do X, test y1 can do Y)
    • Link to defects - if something is failing then link the test to a defect and vice versa
  • Identify repeated content
    • Repeated steps should be put in a central location and referenced, rather than written out repeatedly
    • A clear description of pre and post conditions
    • Avoid verification steps inside shared/referenced content where possible - this adds complexity, is disastrous should a shared step ever be excluded in the future (who will know to test it?), and will result in multiple 'fails' being reported
  • Data
    • Pull test data out of tests and place it in a central place
  • Have a script creation process
    • Ensure assets are reviewed
  • Have a script maintenance process
    • Ensure assets are up to date
  • Give tests attributes
    • Status is a good attribute, so we can see whether the test is 'ready', or 'in design', or 'awaiting review', or whatever
    • Priority - you are unlikely ever to have sufficient time to execute all tests, and when you need to choose which ones to run you are often already low on time. Decide beforehand
  • Execute the tests
    • !! it seems obvious, but it actually does need to be proactively scheduled and done
  • Have traceability
    • Meaningfully link tests' pass/fail statuses to functionality/requirements allowing pretty charts of the system's health to be generated
  • Store relevant knowledge (in steps, defects, or even wikis)
    • If someone has to ask a question about anything, realize that they aren't going to be the only person to ponder it, so, put the answer somewhere useful
    • If you have to ask a question, put the answer somewhere useful too
As I said, the above aren't hard and fast rules, but more of a guide as to what's worth considering when creating your own test scripts.
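As an aside on the 'pull test data out' point above, here is a rough Python sketch of what centralised test data can look like. The keys and values are hypothetical; the point is that scripts reference data by name rather than hard-coding it:

```python
# Hypothetical sketch: test data held in one central structure, referenced
# by name from every script, so a data change means a single edit.
TEST_DATA = {
    "valid_customer": {"account_id": "ACC-001", "region": "EU"},
    "closed_account": {"account_id": "ACC-999", "region": "EU"},
}

def get_test_data(name):
    # Scripts look data up by name; none of them hard-codes the values.
    return TEST_DATA[name]

customer = get_test_data("valid_customer")
assert customer["account_id"] == "ACC-001"
```

When the data changes (a new account ID, a different region), one edit updates every script that uses it, rather than a hunt through dozens of test steps.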

Robot Framework, Basic Setup

Plug: Robot Framework is quick to setup, easy to write tests for, and super fast to triage failures in. The last point really sets it apart...