Wednesday, 20 November 2013

Selenium and JBehave, a Hello World

This is intended to be the simplest JBehave hello world out there. And my implementation really is the bare minimum I can do and still sleep at night.

To get this going we must... 

Obtain the following

(version numbers are for my reference more than anything else; I'd always recommend getting the latest of everything)
Eclipse - my IDE of choice - (I've gone for Kepler Service Release 1, Build 20130919-0819)
Java SDK (1.6.0_23-b05)
JBehave - I downloaded the latest jbehave-web-3.5.5
Selenium Standalone Server (2.37.0)
Selenium-java (2.37.0)

Eclipse JBehave Plugin (1.0.0.20130304-2201) http://jbehave.org/reference/eclipse/updates/ 
FireFox (25.0)
Helpful stuff/FireFox add-ins: Firebug (1.12.4), Firefinder for Firebug (1.4), xPath Checker (0.4.4) 

Step 1 - Create a Project in Eclipse

Should be simple. 

Step 2 - Configure Build Path

I have included all of the jars located in jbehave-web-3.5.5\lib in my build path library.
I have included selenium in my build path library.
A selenium-firefox driver jar might also be needed in the library; if so, adding the standalone server jar should provide it, or search the web.


Step 3 - Create Assets

Three files are needed for this: the .story file which contains the Given|When|Then BDD steps, a Step-Implementer which will do all the work, and a Test-Configuration which will trick Eclipse into thinking we have a JUnit test sitting here, and that it knows how to run it.

My understanding of the Test-Configuration is limited - I gather it has something to do with telling our old wolf, JBehave, a bit about how to dress up in lamb's clothing. Or something like that.

Time to get into creating the example files. Please note the file names: google_hello_world.story, GoogleHelloWorld.java and GoogleSteps.java.
The first two are closely related: wherever the .java name has a capital letter, the .story name has an underscore. This is a simple implementation where the test simply tries to find a story with the 'same' name.
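
That name mapping can be sketched in a few lines. As I understand it, JBehave's default resolver (UnderscoredCamelCaseResolver) does something along these lines - StoryNameMapper and storyFileFor are my own illustrative names, not JBehave API:

```java
public class StoryNameMapper {
    // Mirrors the CamelCase -> under_scored mapping described above:
    // "GoogleHelloWorld" -> "google_hello_world.story".
    static String storyFileFor(String className) {
        StringBuilder sb = new StringBuilder();
        for (char c : className.toCharArray()) {
            if (Character.isUpperCase(c)) {
                if (sb.length() > 0) {
                    sb.append('_'); // a capital (other than the first) becomes '_' + lowercase
                }
                sb.append(Character.toLowerCase(c));
            } else {
                sb.append(c);
            }
        }
        return sb.append(".story").toString();
    }

    public static void main(String[] args) {
        System.out.println(storyFileFor("GoogleHelloWorld"));
    }
}
```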

inside google_hello_world.story I place the text:
Scenario: Perform a search for the almighty Book of the Faces
Given I navigate to google.co.nz
When I perform a search for "facebook"
Then A link "Welcome to Facebook - Log In, Sign Up or Learn More" exists in the results

inside GoogleHelloWorld.java I place the text:
package nz.co.whiteboxit.jbehaveselenium.demo;

import java.util.List;
import org.jbehave.core.configuration.Configuration;
import org.jbehave.core.configuration.MostUsefulConfiguration;
import org.jbehave.core.io.CodeLocations;
import org.jbehave.core.io.LoadFromClasspath;
import org.jbehave.core.io.StoryFinder;
import org.jbehave.core.junit.JUnitStory;
import org.jbehave.core.reporters.Format;
import org.jbehave.core.reporters.StoryReporterBuilder;
import org.jbehave.core.steps.CandidateSteps;
import org.jbehave.core.steps.InstanceStepsFactory;

public class GoogleHelloWorld extends JUnitStory{
   
    @Override
    public Configuration configuration(){
        return new MostUsefulConfiguration()
            .useStoryLoader(new LoadFromClasspath(this.getClass()))
            .useStoryReporterBuilder(new StoryReporterBuilder()
                .withCodeLocation(CodeLocations.codeLocationFromClass(this.getClass()))
                .withFormats(Format.CONSOLE, Format.TXT, Format.HTML, Format.STATS));
    }
   
    @Override
    public List<CandidateSteps> candidateSteps(){
        return new InstanceStepsFactory(configuration(),
            new GoogleSteps()) //can put in a comma separated list of Step implementers here
            .createCandidateSteps();
    }

    protected List<String> storyPaths(){
        return new StoryFinder().findPaths(
            CodeLocations.codeLocationFromClass(this.getClass()),
            "*.story", "");
    }
   
}


inside GoogleSteps.java I place:
package nz.co.whiteboxit.jbehaveselenium.demo;

import java.io.File;
import java.util.List;
import java.util.concurrent.TimeUnit;

import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.Then;
import org.jbehave.core.annotations.When;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxBinary;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxProfile;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.FluentWait;

public class GoogleSteps {
    private WebDriver driver;
    private FluentWait<WebDriver> fWait;
   
    public GoogleSteps() {
        File pathToBinary = new File("[RELEVANT PATH]\\AppData\\Local\\Mozilla Firefox\\firefox.exe");
        FirefoxBinary ffBinary = new FirefoxBinary(pathToBinary);
        FirefoxProfile firefoxProfile = new FirefoxProfile();
        // driver = new FirefoxDriver(); // simplest option, if Firefox is on the default path
        driver = new FirefoxDriver(ffBinary, firefoxProfile);
       
        fWait = new FluentWait<WebDriver>(driver).pollingEvery(500,
            TimeUnit.MILLISECONDS).withTimeout(10,  TimeUnit.SECONDS);
    }
    @Given("I navigate to google.co.nz")
    public void iNavigateToGoogleCoNz(){
        driver.get("http://www.google.co.nz");
        fWait.until(ExpectedConditions.visibilityOfElementLocated(By.id("gbqfq")));
    }
    @When("I perform a search for \"$query\"")
    public void iPerformASearchForQuery(String query){
        driver.findElement(By.id("gbqfq")).sendKeys(query);
    }
    @Then("A link containing \"$text\" exists in the results")
    public void aLinkContainingTextExistsInTheResults(String resultText){
        waitForElementToBePresent(By.partialLinkText(resultText), 5000);
    }
    @Then("A link \"$text\" exists in the results")
    public void aLinkWithTextExistsInTheResults(String resultText){
        waitForElementToBePresent(By.linkText(resultText), 5000);
    }
   
    public WebElement waitForElementToBePresent(By by, int waitInMilliSeconds)
    {
        int wait = waitInMilliSeconds;
        int iterations  = (wait/250);
        long startmilliSec = System.currentTimeMillis();
        for (int i = 0; i < iterations; i++)
        {
            if((System.currentTimeMillis()-startmilliSec)>wait)
                return driver.findElement(by);
            List<WebElement> elements = driver.findElements(by);
            if (elements != null && elements.size() > 0)
                return elements.get(0);
            try {
                Thread.sleep(250);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        return driver.findElement(by);
    }
}


This should run as is within Eclipse as a JUnit test. 

Future work/recommended improvements

My sync/wait/check mechanisms might need some improvement - I suspect there may be a better way.
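
For instance, the timeout/poll/return dance in waitForElementToBePresent could collapse into one generic polling helper. A standalone sketch with no Selenium dependency - waitFor and the other names here are hypothetical, not from any library:

```java
import java.util.function.Supplier;

public class PollingWait {
    // Polls 'probe' every intervalMs until it yields a non-null value or
    // timeoutMs elapses; returns null on timeout instead of throwing.
    public static <T> T waitFor(Supplier<T> probe, long timeoutMs, long intervalMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (true) {
            T result = probe.get();
            if (result != null) {
                return result;
            }
            if (System.currentTimeMillis() >= deadline) {
                return null;
            }
            Thread.sleep(intervalMs);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final long start = System.currentTimeMillis();
        // Stand-in for an element that becomes available roughly 300 ms in.
        String found = waitFor(
                () -> System.currentTimeMillis() - start > 300 ? "ready" : null,
                2000, 50);
        System.out.println(found);
    }
}
```

In the real steps class the probe would wrap driver.findElements(by) and return the first hit, which keeps the timeout bookkeeping in one place.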

The run results are rubbish at present - only one JUnit-looking 'test', however many tests actually exist.

The story must have the same name as the JUnitRunner test - again, this is only because I'm lazy/low on time/aiming for the bare minimum. 

This could be distributed/run using Selenium Server - I'll include this below since I've already implemented this in my demo. 

Selenium bits could be way tidier; ideally abstract away from objects and implementations a little to get something with a bit more reuse. 
  

Optional - Enhancing this to be distributed/run using Selenium Server

Launch a Selenium Server Hub
java -jar "[RELEVANT PATH]\JBehaveArea\selenium-server-standalone-2.37.0.jar" -role hub -trustAllSSLCertificates
Launch a Selenium Server Node
java -jar "[RELEVANT PATH]\JBehaveArea\selenium-server-standalone-2.37.0.jar" -role node -hub http://localhost:4444/grid/register -browser "browserName=firefox, version=25.0, firefox_binary=[RELEVANT PATH]\Local\Mozilla Firefox\firefox.exe, maxInstances=3, platform=WINDOWS"

Alter the driver used in the code from
driver = new FirefoxDriver(ffBinary, firefoxProfile);
to a RemoteWebDriver pointed at the hub, with some stuff that is going to look a bit like this:
DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setBrowserName("firefox");
WebDriver driver = new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), capabilities);
 

Further Reading

I have discovered a superb Selenium tutorial at http://www.seleniumhq.org/docs/03_webdriver.jsp 
On the JBehave front I am a fan of http://christophertownson.com/tag/bdd


Apology

I'd like to add the disclaimer that I have not actually compiled and run the above as a whole; I have put together, compiled and run something extremely, extremely similar, but the above I have merely mentally compiled - a little gedankenexperimenting - and it works fine (in my head). So feedback welcome. I'll even go out of my way to fix any bugs people might have.

Saturday, 9 November 2013

JUnit Hello World in its simplest form

Here is a very simple, very short example of running JUnit.


Assumptions: using Eclipse
Get: the JUnit jar - this might actually ship with Eclipse; I've never paid enough attention to this.
You might need to associate JUnit with the project's build path.
The reason for the vagueness is that I Ctrl+1 all of my errors to quick-fix them, and minimise my own learning at the same time.

So, what to do:

Write a test class
e.g.
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class HelloJUnit {
    @Before
    public void myBefore() {
        System.out.println("This is the before bit");
    }

    @After
    public void myAfter() {
        System.out.println("This is my after");
    }

    @Test
    public void makePie() {
        System.out.println("Make me some pie");
    }

    @Test
    public void moarPies() {
        System.out.println("Make me even moar pies");
    }
}
Then in Eclipse go Run > Run Configurations... and create a new JUnit configuration. Get it to run all tests, or whatever, and click Run.
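
For the curious, what that run ultimately drives is reflection: JUnit scans the class for annotated methods and wraps each @Test in the @Before/@After pair, on a fresh instance each time. A toy illustration of the mechanism - MyBefore/MyTest/MyAfter and the Fixture class are made-up stand-ins, not JUnit API:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class MiniRunner {
    // Hypothetical stand-ins for JUnit's @Before/@Test/@After annotations.
    @Retention(RetentionPolicy.RUNTIME) @interface MyBefore {}
    @Retention(RetentionPolicy.RUNTIME) @interface MyTest {}
    @Retention(RetentionPolicy.RUNTIME) @interface MyAfter {}

    public static class Fixture {
        @MyBefore public void setUp() { System.out.println("before"); }
        @MyTest public void makePie() { System.out.println("pie"); }
        @MyAfter public void tearDown() { System.out.println("after"); }
    }

    public static void main(String[] args) throws Exception {
        Class<Fixture> c = Fixture.class;
        for (Method test : c.getMethods()) {
            if (!test.isAnnotationPresent(MyTest.class)) continue;
            // A fresh instance per test, as JUnit does.
            Fixture fixture = c.getDeclaredConstructor().newInstance();
            for (Method m : c.getMethods())
                if (m.isAnnotationPresent(MyBefore.class)) m.invoke(fixture);
            test.invoke(fixture);
            for (Method m : c.getMethods())
                if (m.isAnnotationPresent(MyAfter.class)) m.invoke(fixture);
        }
    }
}
```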

Wednesday, 16 October 2013

QC unable to run QTP10 tests on Windows 7 - Test Run Scheduler opens, nothing happens

This is an issue I encountered at work. Fortunately I had one Alex Irvine, who is a boss at fixing things like this.

The exact problem is that from QC I would initiate a test running on my local machine.
This would start the Task Scheduler on my machine, but this would not show as having any tasks or do anything whatsoever.

The solution:

1.    Open windows explorer and navigate to
C:\Program Files\HP\Quality Center Client Side
2.    Find wexectrl.exe and right click on it to view Properties
3.    Change the Compatibility settings to “Run this program in compatibility mode for: Windows XP (Service Pack 2)”
4.    Click Apply
5.    Click OK

Tuesday, 3 September 2013

Bugs, Features, Egos

As a Test Analyst there have been a couple of times I have raised a defect against a story and been met with a surprising mix of hurt and resentment from the developer. On the outside it looks like a defect that the developer stuffed up, closer up it looks like something the developer missed in the specification, up very close it's something which was either too difficult or big to finish off within the allotted time.

This only occurred to me today, years after these events, as I sat pondering features for a game I'm making. The list goes like this:
"Make the existing UI menus look acceptable" feature.
"Make the enemy sprite not forget what he's doing randomly" feature.
"Make the zoom work smoothly" feature.
Once this list was complete I immediately fell from my Dev hat into my QA hat and realised something, that these are in fact three major defects. Gosh. I had not viewed these in this way before, things which I had missed because I was blinded by the goal. I suddenly felt a pang of shame and sadness - a bizarre response given I'm home alone with no one to judge me. But still, my pride was hurt and my job changed from creating exciting new features to fixing stuff I'd neglected to do properly earlier.

Until now I'd have insisted that these words, 'bug' vs 'feature', have no relevance. I mean really, we don't care, do we? They only count as metrics and measures which we present to management bodies to show progress, but it really should have no influence on ego. Bollocks! If I can feel shame at my bugs even when alone, then I have a new-found appreciation for preserving the public-facing ego of the project devs by using the right words and the right approach to tweaking their delightful undocumented features.

My view as at today:
Bug reports are records of investigation and associated decisions - but not tasks for devs.
Features are the full list of things the application needs, prioritised as musts and shoulds.

Thursday, 27 June 2013

The value of testing / how can we approach it

I'm constantly torn between having and not having test assets. On one hand, it's valuable to show what is being tested[0], be able to ramp up testing effort by delegating tests to a cheap execution source[1] (e.g. students), and to remind the tester what actually requires verification [2].

Also, should we require automation, it can be useful to define what we want covered as, from past experience, automation teams under time pressure will be forced to make sacrifices if they cannot show clearly what truly needs to be covered. Actually, knowing what needs to be covered also requires some domain knowledge, and it may not be efficient for that knowledge to reside within the automation team.

Also, by defining what is to be tested clearly allows us to know the scope and effort required. There are few organisations who will tolerate not having this information, and understandably so. The first question any Product Owner will ask is 'what did you test', and 'about three days worth, looks sweet' is rarely a satisfactory answer.

So tests seem like a good thing.

But on the other hand, they are out of date the moment they are written, they duplicate assets the developers already require and, when wrong, they can consume considerably more brain-time than working out the functionality from scratch.

Well the big question is should we use them then? And if so, how?

Fundamentally the answer is always that it depends on what's going on. How big the project is. The priorities and deadlines of the company. So many variables. Let's just run with assuming we want to be efficient and pragmatic, and I'll float my ideal scenario for my ideal team.

Maybe it requires us to look at what the role of testing is anyway. Most organisations I have encountered seem to have it as a check to ensure that software contains as few bugs as possible, and the role of the tester starts and ends at that. I feel like more is required though. We should be building in quality from the start. Our goal should actually be to assist the developers in creating code with confidence, at the present time and in the future. So how can we do this? Waterfall, where we test after development has been working for a few weeks - lol - no. Developers need regular, meaningful feedback fast. So this leads us to Continuous Integration, which I won't delve in to here. This does mean that we identify some scenarios in a new feature and get the developers to create unit sized tests to validate these.

We must have some tests which we must conduct to show that software is working. I don't think we can ever escape that. I just can't see a way out of it. We just need to verify the critical stuff before software goes out the door. So we should identify these as regression tests, and keep these to an absolute minimum. I'm loath to have documentation residing in multiple places (requirements, test assets, user manual), so I'd recommend marking a note inline in whatever the requirements doc is, saying that something needs testing, and what. This might sound an over-simplification, but if you have some form of document which explains what should happen then there really shouldn't be any need to copy this information elsewhere - which is what companies invest vast sums of money to do. Or make a mark elsewhere, but just think it through and avoid duplication.

So we have some manual tests owned by QA, and we have some unit sized automated tests owned by Dev. The final role would be to perform a bit of ad hoc testing as a feature is completed just to ensure that it is satisfactory, which doesn't necessarily have to be driven or conducted by a QA, and I think we're done. We will have some light documentation indicating what existing features are to be tested, how and when, and this should be sufficiently traceable for everyone that is so insistent on visibility (which is important). The only downside is that we're still duplicating the requirements in to some tests in a sense - the automated tests - which, being a duplication, annoys me. Perhaps BDD would be better (briefly mentioned below, but will be elaborated on in subsequent Blog post).

Since there are some assets in one form or another we need to track these. To help, it is useful to tag tests with how they're to be executed, so we'd want something to convey it's a one off feature test, a regular regression test, and whether or not it has been automated.

I have worked in a place which used Bugzilla for bugs and stories; basically for all of our requirements and manual test assets. The great thing about using this for stories is that we could insert Use Cases, which we did in the form of 'Given...When...Then...', and we made the developers execute these manually before handing a story over to QA - easy to do because they were right there in the very simple requirements doc. But let's focus on the cool point there... They ran our tests! This was great. If they missed something obvious then shame on them. It worked phenomenally well. We would then prepare some of these to become BDD/unit tests (actually the majority of these, though in hindsight a smaller subset could've been wiser). Sadly this was only 99.9%-of-the-way-there BDD, as we still copied our Use Cases over to a text file which was interpreted and used to drive the NUnit tests. Still, pretty cool, and a bit more efficient than having thousands of detailed manual tests.


Friday, 17 May 2013

What's wrong with Silk Central

Silk Central is quite possibly one of the worst tools I have encountered. Ever. If anything bad was to come from the years leading up to the new millennium it wasn't some fundamental date bug, or 90s pop, it was shoddy software projects like this that somehow made it mainstream. Worse yet, ended up getting companies on board to mandate that teams rely on these shoddy products. *Shudders*

It staggers me how a piece of software can be so full of truly unintuitive and useless rubbish.

The nearest rival, HP Quality Centre, is by no means without flaws (of which it has many), but it is light years ahead of where Silk is pottering, by simply being intuitive and obvious to a user. Sure they're adding slightly more complex concepts in there now, but your basic (and arguably advanced) projects are easy.

My plea: if you find yourself trialing a product, work out your present team workflows (think about these in great detail: for new assets you need review, for old assets you need maintenance, for executed assets you need triage) and see whether or not you can actually do these things easily.

I should apologise; I am aware that Silk probably has hints of useful behaviour buried in there that could be used to satisfy my example requirement workflows - my real issue is that it is convoluted and non-intuitive. My finest example: after clicking a number of non-intuitive buttons to initiate a test execution (latest version of Silk, early 2013) you then cannot seem to complete a test run via any visible means. If perchance you are into your car races - which, let's face it, how many testers are interested in someone driving around in circles, really - then you may notice a black and white chequered flag in the top left corner, which either conveys that we have some clipart slapped on the UI to make us feel like we're racing through our tests, or is actually a button to finish all. Top left corner. No text to guide. An intelligence test that I failed.

Having worked on creating a brand new trading platform (claim to fame), and seeing how easy it is as a QA to question usability, I am genuinely shocked at how such useless UI can be produced. It's not hard to think how users will find your product. Or, if all else fails, just ask them.

</rant>

Robot Framework, Basic Setup

Plug: Robot Framework is quick to setup, easy to write tests for, and super fast to triage failures in. The last point really sets it apart...