Archive: October 2015

Yet More Reasons Why You Don’t Have Good Automation

It has been a terrible two weeks for me. I lost my last living grandparent and my workplace drama culminated in the latest episode of humiliating and ham-fisted attempts to micromanage my work because The People Who Matter can’t seem to grasp the reality that test automation is really really hard.

I am demoralized today. I have hesitated to write about the ‘people’ element involved in developing a robust automation framework because this tends to be volatile and subjective content, but my work week totally blew, so I am going to do what has become a time-honored tradition since The InterWubs became a mainstream pastime — I am going to bitch about it online and talk some shit. When I joined my current workplace about three years ago, the state of UI automation was less mature than it is now. There were a lot of teams who were just starting out and there were some teams who had more mature tooling.

At this time, a complete overhaul of a big system was also just beginning, which meant disparate teams would have to collaborate in ways that had never occurred at this company before. There was a sense of urgency about getting this ‘Automation’ thing going because there was so much to test and manual regression testing is slow and expensive. The various teams of SQA engineers were trying to get a handle on the automation and some were succeeding, but many teams were not. I was newly hired and excited about finally getting a job where I could spend more time writing code than running the same old boring manual regression tests over and over again.

I was assigned to a complex web application that straddles many domains across the company. Both the UI and the data/business logic aspects of this application are complex. It’s the kind of meaty automation challenge that gets a weirdo nerd like me pumped up and ready to code. My boss had recruited a team of senior employees and we wrote a large suite of test cases for this application. Then we started to investigate how we could approach the automation. We decided to go with the Page Object Model design pattern that is the gold standard for Selenium UI test automation. We evaluated some other frameworks, but, surprisingly, none of them were actually using this design pattern, so we started writing our own framework. Although I am biased, I will state that the code was sophisticated and high-quality stuff that was up to the task of being a great API for writing automated tests against the rich and complex UI of the application we were responsible for.

Then along came a high-level ‘architect’ who decided that everyone had to use the same ‘framework’. I won’t go into the ridiculous lack of consensus about what the term ‘framework’ meant because it’s a distraction. Let’s just say, there was no consensus on what this term meant. He took a look at some of the various tools that some teams had built which they called ‘frameworks’ and then decided to declare one of them the ‘winner’. Then he decreed that all teams must use the one he had declared the winner. On the surface, this isn’t necessarily a bad idea, and if the person making the decision had actually known how to evaluate the candidate frameworks, this could have ended well. The losers would have felt slighted and upset, naturally, but technical people generally can be persuaded to accept standards and tools they didn’t originally choose or build for themselves if they are good.

The key ingredient in making the above scenario end well is the word ‘good’. In my opinion, the following things must be true for a test automation framework to be good:

  1. It should be built with industry standard tools, not someone’s pet bleeding edge thing that is new and sexy, but not mature and certainly not in widespread use
  2. It should be built with tools that you can easily recruit talent to use and extend, not tools which have few experts you have to struggle to find and hire
  3. It should be built with tools that your current employees are mostly familiar with
  4. It should be built with tools that are well-documented and well-supported
  5. It should be built with a well-designed and properly articulated architecture so that its components are maintainable, reusable, and scalable
  6. The separate components of the system should be cleanly separated and use APIs to communicate with each other
  7. There should be no hardened dependencies on specific test management or defect management systems

Now that we have this settled, let’s return to the high-level ‘architect’ who decided to require all the QA teams to use the same automation framework. A person without good programming or software design skills can evaluate the first four items on the list. The last three need to be evaluated by someone who has those skills. Guess which category homeboy belonged to.

The framework he declared the ‘winner’ was just Not. Very. Good. It was actually one of the frameworks my team evaluated when we were trying to figure out what we were going to do with our automation work. We quickly passed on this one because it was very poorly designed and had none of the characteristics we were looking for. It didn’t help that the person who wrote it clearly did not have much familiarity with the WebDriver API. When I investigated how other teams were using it after Sir Architect declared it the winner, I noticed that a lot of them included it as a dependency in their POMs, but didn’t actually use anything in it. I think this was mostly because the design was so bad that it was nearly impossible to understand unless you methodically walked through every line of code. I eventually did this myself, and I have to say, I am a stronger person for the experience.

My co-worker and I couldn’t bear to dump our system which worked great for us and adopt this other system which totally didn’t work and would require us to shoehorn some ugly-ass spaghetti code into our test code in order to ‘use’ a framework that did absolutely nothing for us. We went on writing our page objects and incrementally automating some steps in our suite of manual end-to-end tests. Meanwhile, my boss decided to become an individual contributor and a member of our team who never even wanted to manage was given responsibility for our whole team. She had recently been hired and had not even had time to settle in. After she took over, Sir Architect kept asking if we were doing our automation using The Chosen Framework.

I should stop here and explain that our new manager, unlike the one who decided to stop managing, was not someone who had strong programming and software design experience. So, she put a lot of pressure on us to deliver lots of real good automation real quick. The problem with people who don’t ‘do’ software is that they tend to think that test automation is easy and when they hear the word ‘framework’ they think, “Magic system which shits a pound of automation gold every hour with little to no effort and so little technical expertise required that even a preschooler could automate the whole internet with it.”

We tried to explain that in order to automate the regression suite for our application, we needed to build models for all the UI components so that we could interact with them programmatically. We also tried to explain that the data and business logic domain was also something that needed to be modeled so that we could define and feed test data into automated tests instead of hard-coding it right there in the test methods. We tried explaining that The Chosen Framework contained some badly designed mechanism for getting a reference to a WebDriver instance of some desired type and that was basically all there was. All the rest of the necessary tooling, namely the models, were not provided with The Chosen Framework. Not only that, the Chosen Framework required that we use a ‘base’ page object which seemed to be super confused about whether it was a generic page object or a misguided effort to wrap some basic WebDriver functionality in a way that was inferior to WebDriver’s own native interface.

It. Just. Did. Not. Sink. In. Part of the problem was that other teams had managed by some miracle to tease some automated tests out of The Chosen Framework. We couldn’t seem to make our manager or Sir Architect understand that these tests didn’t do very much and that they were for very simple applications with maybe a page or two and a very limited data domain. Our application is not that type of application. It has lots of dynamic UI elements, numerous dialogs, menus, and data entry forms along with some complicated multistep state changes that have to be tracked and verified along the way. Not only that, there are a lot of validation rules on the data in play, as well as a complicated user and role access permission domain to make it even more fun. And then there is the collection of features and behavior related to migrating customer artifacts from the older application that our application is supposed to replace. I also cannot neglect to mention the oh so very special test environments which have been featured in earlier episodes of “Why You Don’t Have Good Automation”.

A subset of these complications should be enough to send any self-respecting automation engineer over the edge, but throw in the need to periodically explain your failure to deliver some sweet, sweet automation for this mess to people who just refuse to understand that a badly designed mechanism for getting a WebDriver instance is not going to result in Automation Jesus marching triumphantly down our cubicle-lined aisles and you have yourself a heaping helping of Soul-Destroying Is This Really My Life Cognitive Dissonance. We had a test suite of some 40 manual tests with roughly 15-30 steps apiece that we were running on two versions of IE, the latest version of Firefox on Windows 7 and OSX, and the latest version of Chrome on Windows 7 and OSX, in addition to Safari. At one point, our manager just declared that if we could not complete the automation for this test suite in three weeks, we had to dump our work and switch over to The Chosen Framework. This was after we had explained, time and again, that The Chosen Framework provided no interfaces for interacting with the UI, nor any efficiencies for building those interfaces.

Fast forward two years. My team falls back under my original boss who has decided to manage again. My team now includes the person who built The Chosen Framework, who has been promoted to Principal SDET. It is announced that we will collaborate with some other QA engineers in our organization to build a new framework and that the designer of The Chosen Framework, V1, will ‘architect’ it. We spend a year and a half building various components while our Principal SDET regularly becomes very concerned about the directory structure of the repositories and the very large amount of code which some people are writing, because large amounts of code are ‘heavy’ and that is bad. Oh, and ‘the communication protocol will be a POM dependency’. Somehow we write something that some of us can use and there is an effort under way by our upper management to shove our framework down the throats of every other business unit in the company. A framework which none of them helped build or design and which does not include anything from their now mature and fully-featured frameworks that are mostly better than ours. You can imagine how fast they all are running from the directive to adopt The Chosen Framework, V2.

I have finally been given space and time to do the automation after two years of relentless and never-ending manual testing stints all the while hearing, “Where is all that good automation?!!!! We can’t afford to hire manual testers. Just do it in your spare time!” Now I have about 200 automated tests of which only a couple are flappers which need some tweaking to make stable. Nevertheless, the new QA director has decided that we need to ‘start making some progress’ and a close working relationship and some coding ‘help’ from the creator of The Chosen Framework, V1, is just the solution to this problem.

It almost made me want to quit and take up goat-herding as a profession after pouring acid over my laptop and taking a shit right on top of my workstation out of spite.

So, folks, this is the latest edition of “Why You Don’t Have Good Automation.” Automation frameworks are a serious technical challenge and you need to put technically sophisticated people in charge of them. Also, don’t try to shove a tool down the throats of people who know better than you do what they need. If you’re looking to have your entire company on a single framework, you need to involve all the teams in requirements gathering. You also need to incorporate the best of their tooling into it. Also, if you don’t want to drive your automation staff into the arms of your competitors, don’t keep asking why they aren’t done every two days.


Brixen Decorators: What’s Changed

The basic concept of the decorators hasn’t changed. There are just more of them. I have added decorators for state beans, configuration beans and builders in addition to the decorators for the component implementations. The decorators help reduce boilerplate code by using a new feature introduced in Java 8: default interface methods. A class can implement numerous interfaces, but extend only one class. Prior to Java 8, this meant implementing the same methods over and over again in every class that implemented a particular interface. Since Java 8, however, a decorator interface can extend another interface and provide default method implementations for the interface it extends.

Here is an example of the PolleableBeanDecorator, which provides default method implementations for the methods required by PolleableBean:
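In place of the full source, here is a minimal, self-contained sketch of the pattern. The Brixen type names are real, but the method signatures (a polling timeout and interval, in seconds) and the bare-bones provider are simplified assumptions rather than the actual API:

```java
// The state bean contract for a dynamic, polleable page object.
// NOTE: these method signatures are illustrative assumptions.
interface PolleableBean {
    void setPollingTimeout(int seconds);
    int getPollingTimeout();
    void setPollingInterval(int seconds);
    int getPollingInterval();
}

// The default implementation, which actually holds the state.
class PolleableBeanImpl implements PolleableBean {
    private int pollingTimeout = 30;
    private int pollingInterval = 1;
    @Override public void setPollingTimeout(int seconds) { pollingTimeout = seconds; }
    @Override public int getPollingTimeout() { return pollingTimeout; }
    @Override public void setPollingInterval(int seconds) { pollingInterval = seconds; }
    @Override public int getPollingInterval() { return pollingInterval; }
}

// A bare-bones stand-in for the provider that wraps the internal bean.
class LoadableBeanProvider<B> {
    private final B bean;
    LoadableBeanProvider(B bean) { this.bean = bean; }
    B getBean() { return bean; }
}

// The decorator: every PolleableBean call is deferred, via a default
// method, to the wrapped PolleableBeanImpl. Implementing classes need
// only supply the provider accessor.
interface PolleableBeanDecorator extends PolleableBean {
    LoadableBeanProvider<PolleableBeanImpl> getPolleableBeanProvider();

    @Override default void setPollingTimeout(int seconds) {
        getPolleableBeanProvider().getBean().setPollingTimeout(seconds);
    }
    @Override default int getPollingTimeout() {
        return getPolleableBeanProvider().getBean().getPollingTimeout();
    }
    @Override default void setPollingInterval(int seconds) {
        getPolleableBeanProvider().getBean().setPollingInterval(seconds);
    }
    @Override default int getPollingInterval() {
        return getPolleableBeanProvider().getBean().getPollingInterval();
    }
}
```

Any class can now pick up the whole PolleableBean contract just by implementing the decorator and exposing a provider.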

The decorator defers all the PolleableBean method calls to the default PolleableBean implementation — PolleableBeanImpl. This is achieved by requiring a ‘provider’ for an internal reference to a PolleableBeanImpl instance. So any class which implements PolleableBeanDecorator needs to define an accessor method for this provider. Let’s have a look at the provider for a state bean:
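A minimal sketch of such a provider, using an invented LoadableBean contract (the loadTimeout field is an example I made up, not the real API). The important detail is that the wrapped bean is reachable only through a protected accessor:

```java
// An invented, minimal bean contract for the sake of the example.
interface LoadableBean {
    void setLoadTimeout(int seconds);
    int getLoadTimeout();
}

class LoadableBeanImpl implements LoadableBean {
    private int loadTimeout = 30;
    @Override public void setLoadTimeout(int seconds) { loadTimeout = seconds; }
    @Override public int getLoadTimeout() { return loadTimeout; }
}

// The provider wraps the internal state bean. Because getBean() is
// protected, only sub-classes of the provider and classes in the same
// package (such as the decorator) can reach the wrapped reference.
class LoadableBeanProvider<B extends LoadableBean> {
    private final B bean;
    LoadableBeanProvider(B bean) { this.bean = bean; }
    protected B getBean() { return bean; }
}
```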

I didn’t want a method in a decorator interface that returned a reference to the internal state bean itself, because that would break encapsulation. So, I came up with the idea of a ‘provider’ which gives protected access to the internal state bean reference. The packaging structure I chose means that only the provider, its sub-classes, the decorator and other classes in their package can access that reference.

Here is an example of a state bean which extends PolleableBeanDecorator:

DynamicControllableBean is a state bean for specifying a page object that contains one or more web controls and that needs to be polled for a state change of some kind, usually after an interaction with one of its controls. A drop down menu, for example, would need to be polled after interacting with the control that expands or collapses it to determine if the menu has expanded or collapsed as expected. The default implementation of DynamicControllableBean can only extend a single class, so its parent is ControllableBeanImpl.

It would be a drag to have to provide implementations for the methods required by PolleableBean which are exactly the same as the implementations provided in PolleableBeanImpl. By implementing the PolleableBeanDecorator interface, DynamicControllableBeanImpl can satisfy all the requirements of PolleableBean by providing only an accessor to a LoadableBeanProvider that wraps an instance of PolleableBeanImpl. Lombok helps reduce the amount of source code even more because the provider field only has to be annotated with the @Getter annotation.
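A condensed sketch of how these pieces fit together. The Brixen class names are real, but every type is reduced to a one- or two-method stub, and a hand-written accessor stands in for Lombok’s @Getter:

```java
// Simplified stand-ins; real Brixen signatures will differ.
interface PolleableBean { void setPollingTimeout(int s); int getPollingTimeout(); }

class PolleableBeanImpl implements PolleableBean {
    private int pollingTimeout = 30;
    @Override public void setPollingTimeout(int s) { pollingTimeout = s; }
    @Override public int getPollingTimeout() { return pollingTimeout; }
}

class LoadableBeanProvider<B> {
    private final B bean;
    LoadableBeanProvider(B bean) { this.bean = bean; }
    B getBean() { return bean; }
}

interface PolleableBeanDecorator extends PolleableBean {
    LoadableBeanProvider<PolleableBeanImpl> getProvider();
    @Override default void setPollingTimeout(int s) { getProvider().getBean().setPollingTimeout(s); }
    @Override default int getPollingTimeout() { return getProvider().getBean().getPollingTimeout(); }
}

// Stub parent holding control state; the real class is much richer.
class ControllableBeanImpl { }

// The impl extends its one allowed parent class and picks up the whole
// PolleableBean contract for free by implementing the decorator. In the
// real code, Lombok's @Getter on the provider field would generate the
// accessor below.
class DynamicControllableBeanImpl extends ControllableBeanImpl implements PolleableBeanDecorator {
    private final LoadableBeanProvider<PolleableBeanImpl> provider =
            new LoadableBeanProvider<>(new PolleableBeanImpl());
    @Override public LoadableBeanProvider<PolleableBeanImpl> getProvider() { return provider; }
}
```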

The decorators for the configuration beans operate in the same way. Here is the PolleableConfigDecorator and the provider for a configuration bean:

And here is DynamicControllableConfigImpl, a configuration bean which implements PolleableConfigDecorator:

Decorators for the builders were a bit trickier to pull off, but I managed to find a way. Here is the PolleableBuilderDecorator:

And AbstractDynamicControllableBuilder, which implements it:

This decorator implementation works by declaring a provider which wraps the builder implementing the decorator. The same state bean must encapsulate all the state for the component, so declaring a builder with a separate state bean instance wouldn’t work.

The decorators for the components haven’t changed, so I won’t post any examples here since it would just duplicate what I presented at the conference. The source code for Brixen is here.


Brixen Configuration Beans: What’s Changed

The changes in the configuration bean package largely mirror the changes in the state beans and the builders. One change is that I added JSON type information to all of the configuration bean interfaces. When I originally conceived of the configuration bean idea, I foolishly didn’t consider the possibility that some configuration beans may contain other configuration beans as fields. So, I only added the type information to the LoadableConfig bean. Without this type information, Jackson cannot properly deserialize polymorphic types.

Another change is that LoadableConfig allows the definition of custom properties through JsonAnyGetter and JsonAnySetter methods. It is entirely conceivable that one might want to define a configuration option for a page object which doesn’t have general significance to a class of page objects, but which is important for a specific context:
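This is the Map-backed shape such a design typically takes. In the real code the two methods would carry Jackson’s @JsonAnySetter and @JsonAnyGetter annotations, so any unrecognized JSON field lands in the map during deserialization; the annotations and the Jackson dependency are omitted here, and the method and property names are invented for illustration:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class LoadableConfig {
    // Catch-all for configuration options that don't merit a dedicated field.
    private final Map<String, Object> customProperties = new LinkedHashMap<>();

    // @JsonAnySetter in the real code: Jackson routes unknown fields here.
    public void setAdditionalProperty(String name, Object value) {
        customProperties.put(name, value);
    }

    // @JsonAnyGetter in the real code: serialized back out as top-level fields.
    public Map<String, Object> getAdditionalProperties() {
        return customProperties;
    }
}
```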

The ControllableConfig is the most significant new addition. This configuration bean is for defining the dynamically configurable options for a page object which contains web controls. This configuration also encapsulates the configurable options for each one of its controls:

There is a marker interface for a dynamic Controllable that needs to be polled on intervals for a state change via a FluentWait:

Here is an example of what the configuration source for a ControllableConfig would look like:
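The fragment below is a hypothetical reconstruction: every field name and control name is invented for illustration, but it conveys the general shape of a configuration source for a page object with a couple of controls and polling options:

```json
{
  "type": "DynamicControllableConfig",
  "componentId": "accounts-menu",
  "controls": {
    "expand-button": {
      "type": "ClickControlConfig",
      "clickWithJavascript": true
    },
    "options-list": {
      "type": "HoverControlConfig",
      "hoverWithJavascript": false
    }
  },
  "pollingTimeout": 30,
  "pollingInterval": 1
}
```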

In the same fashion as the state bean and builders, there is a marker interface that is parent to all the control configuration beans:

The marker interface for a click control configuration bean:

It extends ClickableConfig which is a configuration bean for a wider class of clickable page objects besides controls:

The configuration bean for a hover control:

The configuration bean for a hover and click control:

And that’s a wrap for the changes in the configuration beans. The source for Brixen is available here.


Brixen Builders: What’s Changed

One of the awkward things about the original version of the API is how data from a page object’s JSON configuration source is retrieved and handled. I wanted to do something that was more elegant and less cumbersome than the multistep process of the original:

  1. Querying the service for a configuration
  2. Determining if a particular dynamically configurable option is defined in the configuration
  3. Determining if the value assigned to the option is null
  4. If the value is not null, then retrieving it from the Optional that wraps it in the configuration bean and setting that field for the page object through its builder

All of the builder interfaces now overload all the methods that specify a page object’s dynamically configurable options. One version of the method takes a value for the field, and the other takes a configuration bean. The builder implementations take care of all the steps listed above save the first step. They also handle the case where the configuration bean is null. All the client class has to do is query the configuration service for the page object’s configuration by String ID and pass the result to the builder. If there is no configuration defined for the current environment under test, then the service will return null. If the configuration bean which is then passed to the builder is null, the builder will do nothing with the bean and leave the default value for that field in the page object’s state bean unchanged.

Let’s look at a couple of examples. The LoadableBuilder, a builder for a basic page object, and PolleableBuilder, a builder for a dynamic page object which needs to be polled on intervals for a state change via a FluentWait, are basically unchanged since the conference except for the new methods which take a configuration bean.

The default implementations of the new setter methods do all the work of checking and retrieving the data from the configuration bean which was previously the responsibility of the class building the object:
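A simplified sketch of that division of labor, with assumed names and a single Integer-valued option (the real builders handle many options this way). The config-bean overload performs all the null and presence checking that client code used to do by hand:

```java
import java.util.Optional;

// Assumed shape: the config bean stores each option in an Optional;
// a null Optional means the option wasn't defined at all.
class PolleableConfig {
    private final Optional<Integer> pollingTimeout;
    PolleableConfig(Optional<Integer> pollingTimeout) { this.pollingTimeout = pollingTimeout; }
    Optional<Integer> getPollingTimeout() { return pollingTimeout; }
}

class PolleableBuilder {
    private int pollingTimeout = 30; // default value

    // Overload one: take the value directly.
    public PolleableBuilder setPollingTimeout(int seconds) {
        this.pollingTimeout = seconds;
        return this;
    }

    // Overload two: take the whole configuration bean. A null bean, an
    // undefined option, or an empty Optional all leave the default alone.
    public PolleableBuilder setPollingTimeout(PolleableConfig config) {
        if (config != null
                && config.getPollingTimeout() != null
                && config.getPollingTimeout().isPresent()) {
            this.pollingTimeout = config.getPollingTimeout().get();
        }
        return this;
    }

    public int getPollingTimeout() { return pollingTimeout; }
}
```

The client just passes whatever the configuration service returned, null or not, and the builder sorts it out.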

The other big change is related to the total refactoring of how web controls and the page objects that contain them are specified and built. Each of the three types of controls described in this post has a builder, but there is also a builder for a page object containing controls which has methods for specifying the controls.

The marker interface for a control builder:

The marker interface for a click control builder:

It extends ClickableBuilder which is a builder for a wider class of clickable page objects besides controls:

The builder for a hover control:

And finally, the builder for a hover and click control:

There is a _lot_ of duplicated code for the hover control and hover and click control, just as is the case for their state beans, which I acknowledged in my post about the changes in the state bean package. For now, I don’t see a good reason for extracting the common behavior into a parent interface for both of them to extend because the parent interface wouldn’t be reusable in other contexts. It’s a wart on the butt of this API, but I think I can live with it.

The interface for the builder of a page object which contains one or more web controls has a lot of syntactic sugar that allows you to build the whole page object, complete with all its controls if you don’t want to use the individual builders for the controls themselves. Each control is associated with a String ID that must be unique (which should go without saying, but I couldn’t help myself):


Yet More Reasons Why You Don’t Have Good Automation

I got so sidetracked with the Brixen posts that I forgot to spend some more time explaining Why You Don’t Have Good Automation.

I am tired today. I spent the day dividing my time between way too many tasks, one of them boring as hell, but one which should make it possible to dynamically put together a test run with Zephyr for Jira using keywords. The keywords correspond to group names on the automated tests for each test case. It’s a giant pain in the ass to add all these labels to the Zephyr test cases and then add them as group names to the test methods, but it’s necessary to make what I want to do possible.

In addition to this mind-numbing data entry horror, I also spent an hour or two trying to figure out how to run these automated tests on the production system. I have never set up automation for the production system. My suite runs every night on whichever of our two test systems that isn’t currently crapping its pants and painting the walls with its own feces. I have some configuration files containing a list of test users and test accounts and I have a neat little set up routine that pulls this test data out of the config files and delivers it to tests running in parallel so that no single user login is used by more than one thread at a time. Behavior on our awful test environments gets even more awful and unpredictable if a user is logged in from multiple browser instances.

I still get random errors resulting from a user being logged in more than once because I am not the only person running tests on the test system. There are hundreds of people who use these environments and everyone has access to the same user accounts that I do. So, shit happens. Shit happens a lot less since I set up my tests to ensure that at least they don’t inadvertently use the same login credentials, but I don’t control the accounts that other engineers are using for their tests.

But I digress. The production system is another can of worms. I need a way to store the login credentials and deliver them to the tests in a secure way. Storing it all in a plain text configuration file and checking it into source control is a pile of bad idea topped with steaming fresh dog turds. It would violate PCI security standards six ways from Sunday. I’m getting some help from the engineer who wrote the original and now retired automation suite that used to run on production. He had a database running on a managed VM that stored the credentials and used a config file to specify the database access credentials. The config file was actually a stub because checking the DB credentials into source control in a plain text file is no better than checking in the login credentials themselves. A human being had to check out the repo and add the DB credentials to the config file. Then they could trigger the tests which used the DB credentials from the config file to log into the database to retrieve the user login credentials.

I found out that I can’t use the same strategy because a DB running on a managed VM isn’t kosher anymore due to PCI security standards. I have to find some other way to deliver the production login credentials for automated tests to use. Did I mention that I am not an expert in security standards? And there is no documented process for dealing with this problem? A few months ago, our entire organization was tortured with a giant PCI training effort in which we had to watch lots of videos and take lots of quizzes to prove we had been trained in PCI security awareness. I guess developing procedures and systems for dealing with practical, real-life problems just like this one and training us in that was too much to ask for.

Another part of my workday was devoted to speaking to an account executive at Sauce Labs in order to get a cost estimate for their services for the four teams in my particular mini-organization. I think he was excited up until he found out that I had no budget power and that I actually don’t know what the management class here is willing to spend. I was handed the task for getting an estimate for using them in an offhand way in the middle of a meeting that was supposed to be about something else two weeks ago.

This Sauce Labs POC task has been a hot potato that got passed around for the last year. A couple managers had this task sitting on their plates for weeks and every week during our status meetings, it just kept getting pushed off. Eventually, we stopped talking about it. Then some engineers on one of our teams actually did a trial with them and did a great presentation on the experience. I was excited because I thought something was actually going to happen at that point, but then I never heard another word about it until I said we needed Sauce Labs in a meeting two weeks ago because I was pissed at the lack of infrastructure for Selenium test suites. Suddenly the task of doing the same thing that has already been done by others was given to me. So, I am doing what others already did, which, sadly, will lead to the same likely result. Nothing will happen and a year from now, some other over-worked person will be told to do it again.

So, you’re wondering, “What does this have to do with why I don’t have some of that real good automation?!” One thing I have learned in all my years of working in quality assurance is that ‘quality’ is not a thing. It’s actually a living system made up of processes, people and tools. It’s also an ungodly complex system in a large organization. Automation is only one piece of it. And unfortunately for you, it’s a piece that depends largely on the smooth functioning of almost every other piece of the system. How many times have you acknowledged that there is a problem somewhere in your system that relates to quality that resulted in thousands of hours of lost productivity? Shitty test systems? Shitty or non-existent documentation? Shitty or non-existent approach to generating and managing test data? Shitty, fractured approach to tools and infrastructure procurement and maintenance?

If your approach to any or all of these deficits has been to click your heels and wish for your own personal Automation Savior to swoop in and drop a load of some real good automation on you like Rapunzel spinning gold out of straw, I would like to point out that you need to provide the straw first. Providing the straw involves the procurement of arable land, seeds, fertilizer and labor to grow and harvest it, along with a transportation network to get it to you in good condition. Because Rapunzel ain’t gonna spin no gold out of moldy, wet, rotting straw with rat droppings all over it. ‘Kay?


Brixen State Beans: What’s Changed

At the conference, I presented source code for Accessible and Dismissable objects, that is, page objects which can be rendered visible or invisible by user interaction. Because some objects are dismissable, but not accessible, such as announcement dialogs and popup ads, I modeled them as separate entities with their own state beans and used a marker interface, ToggleableVisibilityBean, to specify the state for a component which is both. There are some limitations with this design approach. The first is the built-in assumption that there is only one control which toggles the visibility of the component, which is not true in some cases. A chooser dialog can actually have three such controls: Submit, Cancel and Close. The other is that the controls themselves are also page objects, but they are not modeled as standalone entities.

So, I decided to re-work the concept entirely. I created a state bean interface for a component which contains controls. It allows any number of controls to be added to the component’s state specification. It also makes no assumptions about the side effects of interacting with the controls. Therefore, it is generic and applicable to any such component, whether it is a component with toggleable visibility, or which has filterable content or with pagination behavior. Big win for reusability!

Here is the source for ControllableBean, the new state bean for a component which contains web controls. This is a terrible name, and I am open to suggestions for something better. At least it’s shorter than ToggleableVisibility:
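A heavily condensed sketch of the contract being described; the setter names and the per-control state bean are illustrative assumptions, not the real signatures:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Per-control state bean stub; the real one carries locators, workaround
// flags and so on. The type field is just for the sake of the example.
class ControlBean {
    private final String type; // "click", "hover", or "hover-and-click"
    ControlBean(String type) { this.type = type; }
    String getType() { return type; }
}

// Any number of controls can be registered by unique name, with no
// assumptions about the side effects of interacting with them.
interface ControllableBean {
    void addClickControl(String name);
    void addHoverControl(String name);
    void addHoverAndClickControl(String name);
    ControlBean getControl(String name);
}

class ControllableBeanImpl implements ControllableBean {
    private final Map<String, ControlBean> controls = new LinkedHashMap<>();
    @Override public void addClickControl(String name) { controls.put(name, new ControlBean("click")); }
    @Override public void addHoverControl(String name) { controls.put(name, new ControlBean("hover")); }
    @Override public void addHoverAndClickControl(String name) { controls.put(name, new ControlBean("hover-and-click")); }
    @Override public ControlBean getControl(String name) { return controls.get(name); }
}
```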

Each control is associated with a name, and the ControllableBean interface has syntactic sugar setter methods for defining the state of each of its controls. There are three distinct flavors of controls:

  • Controls that are visible by default and have meaningful behavior when they are clicked
  • Controls that are visible by default and have meaningful behavior when they are hovered, such as expanding a menu
  • Controls that are invisible by default, must be hovered to make them visible, have meaningful behavior when clicked and which also can have meaningful behavior when hovered

The first type is a vanilla, garden variety web control. The third type introduces some complicated test cases that I have to try out. There is a pretty complex test case I described at the beginning of this post that I have to test with this new version of the API. That same post also explains an additional shortcoming of my original design, having to do with the fact that in some environments you just can’t trigger the mouseover action through Selenium or through Javascript using the JavascriptExecutor. I spent some time thinking about possible dynamically configurable workarounds for interacting with the control in such a way that the side effects of the interaction can be triggered, allowing a tester to automate tests which rely on that workflow to test something else. Obviously, you’d have to manually test the hover action because you can’t automate it, but it would be great if that didn’t block the automation of other tests.

For the second type of control, you can often just click it to trigger the same side effects that the hover action does. So, I added two dynamically configurable options to click instead of hover. One for using native Selenium and one for using a Javascript click workaround through JavascriptExecutor in cases where the native Selenium click fails silently. For the third type of control, when you can’t hover the control either through native Selenium or the Javascript hover workaround, you just can’t make the element visible. So how do you click it? By using the Javascript click workaround through JavascriptExecutor, which will execute the click even if the element is not visible.
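A sketch of how those dynamically configurable options might resolve to a concrete interaction. The flag and type names are hypothetical; in real code, the Javascript click would go through JavascriptExecutor, e.g. executeScript("arguments[0].click();", element), which fires the click even on an invisible element:

```java
// Sketch of the dynamically configurable workaround options; the flag names
// are hypothetical, and the real Brixen configuration may differ.
public class HoverWorkaround {

    public enum Action { NATIVE_HOVER, JS_HOVER, NATIVE_CLICK, JS_CLICK }

    private boolean hoverWithJavascript;  // native Selenium hover fails silently
    private boolean clickInsteadOfHover;  // hover cannot be triggered by any means
    private boolean clickWithJavascript;  // native Selenium click also fails

    public void setHoverWithJavascript(boolean b) { hoverWithJavascript = b; }
    public void setClickInsteadOfHover(boolean b) { clickInsteadOfHover = b; }
    public void setClickWithJavascript(boolean b) { clickWithJavascript = b; }

    // Picks the interaction to use for triggering the control's side effects,
    // so page objects avoid hard-coded if-then-else boilerplate per environment
    public Action resolve() {
        if (clickInsteadOfHover) {
            return clickWithJavascript ? Action.JS_CLICK : Action.NATIVE_CLICK;
        }
        return hoverWithJavascript ? Action.JS_HOVER : Action.NATIVE_HOVER;
    }
}
```

The point of routing every interaction through a resolver like this is that the workaround lives in configuration, per environment, rather than being baked into the page object for all environments.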

Yes, I know this is whack. I know that when you must resort to this hack for a clickable control, regardless of whether it is the first type or the third type in my list, you probably should add a test for the click action to your manual test suite. But, the intent of the workarounds is to enable automation of the workflow for other tests without ugly-ass boilerplate if-then-else clauses, or worse, hard-coding the workaround by default for all environments into your page object which makes tests for the click action suspect for all environments.

The hoverable controls have what I call an ‘unhover’ element (another awful name, I admit). This is a WebElement to use for removing focus from a hoverable control. It should be an element in a safe location which itself has no meaningful behavior when it is hovered. This element has the same set of workarounds as the control — hover with Javascript when the native Selenium hover action fails silently and the focus is not removed from your control, as well as the two click workarounds for when you just can’t do a mouseover by any means. Just be sure that the element also has no meaningful behavior when it is clicked!

I have another state bean marker interface for components whose state changes dynamically when one interacts with their controls. This dynamic state change should be pollable at intervals with a FluentWait to determine whether the expected state change has been achieved:
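In plain Java, the polling contract looks roughly like this. Selenium’s FluentWait does the equivalent (plus exception handling and custom clocks); the helper name here is a hypothetical stand-in:

```java
import java.util.function.Supplier;

// Plain-Java sketch of polling a component for a dynamic state change, the
// way a FluentWait would poll a PolleableBean; the name is hypothetical.
public class StatePoller {

    // Polls the condition every pollIntervalMillis until it holds or the
    // timeout elapses; returns whether the expected state was reached.
    public static boolean pollUntil(Supplier<Boolean> condition,
                                    long timeoutMillis, long pollIntervalMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.get()) {
                return true;
            }
            try {
                Thread.sleep(pollIntervalMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return condition.get();  // one last check at the deadline
    }
}
```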

Let’s take a look at the new state beans for the controls themselves. ControlBean is a marker interface that mostly serves for type determination, and it’s the parent of all the control beans. You gotta love Java 8 for default interface methods:

ClickControlBean is a marker interface for the first type of control in the list.

It extends ClickableBean, which is for defining a wider class of clickable elements besides controls:

HoverControlBean specifies the second type of control on my list:

HoverAndClickControlBean specifies the third type of control on my list. It is a PolleableBean because this type of control has dynamic visibility, which means it should be possible to poll it at intervals with a FluentWait to determine whether it has been toggled visible or invisible after hovering it.
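Putting the hierarchy together, here is a minimal sketch. The interface names follow the post; the default-method bodies are hypothetical, purely to illustrate why Java 8 default methods make an abstract parent class unnecessary:

```java
// Sketch of the control state bean hierarchy described above; method bodies
// are hypothetical stand-ins for illustration.
public class ControlBeans {

    // Marker interface, mostly for type determination; the parent of all
    // control beans
    public interface ControlBean {
        default boolean isControl() { return true; }
    }

    // The wider class of clickable elements, not just controls
    public interface ClickableBean {
        default String interaction() { return "click"; }
    }

    // Dynamic state changes that can be polled with a FluentWait
    public interface PolleableBean {
        default boolean isPolleable() { return true; }
    }

    // First type: visible by default, meaningful behavior when clicked
    public interface ClickControlBean extends ControlBean, ClickableBean { }

    // Second type: visible by default, meaningful behavior when hovered
    public interface HoverControlBean extends ControlBean {
        default String interaction() { return "hover"; }
    }

    // Third type: invisible until hovered, then clickable; polleable because
    // its visibility is toggled dynamically
    public interface HoverAndClickControlBean
            extends ControlBean, ClickableBean, PolleableBean { }
}
```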

I am not happy with the amount of repeated code in the two hover control state bean interfaces. I don’t think it’s worth re-working the inheritance hierarchy here since a parent class that defines the shared methods probably isn’t reusable anywhere else.

So, there you have it. The source for Brixen is available here, and it has complete Javadoc comments for all but a small handful of classes.

Share This:

Brixen Page Object Configuration Service: What’s Changed

The version that I presented at the conference had the following interface:

I decided to strip it down and require only a method to retrieve a configuration by a String ID and a WebDriver reference:
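Roughly, the stripped-down contract is a single query method. The names below are hypothetical stand-ins, and the WebDriver type is stubbed so the sketch stands alone without Selenium on the classpath:

```java
// Sketch of the stripped-down configuration service contract; all names are
// hypothetical stand-ins for the real Brixen types.
public class ConfigServiceSketch {

    // Stand-in for org.openqa.selenium.WebDriver
    public interface WebDriverStub { }

    // Stand-in for whatever the configuration object looks like
    public interface PageObjectConfig { }

    // The entire public surface: look up a configuration by its String ID
    // plus the WebDriver reference, which identifies the environment
    public interface PageObjectConfigService {
        PageObjectConfig getConfig(String configId, WebDriverStub driver);
    }
}
```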

The default implementation in Brixen is still a singleton. Some people consider this an anti-pattern because singletons are difficult to unit test, which is true, but for now it serves its purpose for a prototype page object API. The nice thing about using interface types is that I can replace the default implementation anytime and nothing downstream is affected. It’s a thread-safe, lazy-initialization implementation that relies on the class loader to do all the synchronization. It also reads in all of the configuration profiles at once the first time it is accessed, which I figured was better than hitting the disk multiple times.
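The “relies on the class loader to do all the synchronization” bit is the initialization-on-demand holder idiom. A minimal sketch, with a hypothetical class name:

```java
// Sketch of a thread-safe, lazily initialized singleton using the
// initialization-on-demand holder idiom; the class name is hypothetical.
public class ConfigServiceSingleton {

    private ConfigServiceSingleton() {
        // In the real service, all configuration profiles would be read
        // from disk here, once, on first access.
    }

    // The JVM guarantees that Holder is initialized exactly once, on first
    // use, so no explicit locking is needed.
    private static class Holder {
        static final ConfigServiceSingleton INSTANCE = new ConfigServiceSingleton();
    }

    public static ConfigServiceSingleton getInstance() {
        return Holder.INSTANCE;
    }
}
```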

The service relies on a couple of conventions:

  • The configuration profiles should all be located in a folder named pageobject_config in resources
  • The configuration profiles need to follow a naming convention that allows the configuration service to identify which environments they pertain to

All of the environment information is derived from the WebDriver reference passed as a parameter to the query method. The naming convention is as follows:

So, for Firefox 38.3.0 on Mac, the name of the configuration file should be: firefox38.3.0-mac. The browser name is the value returned by DesiredCapabilities.getBrowserName(), and the OS name is the lowercase String name of the Platform enum for a given OS, as returned by DesiredCapabilities.getPlatform().
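The convention can be expressed as a tiny helper. The method itself is hypothetical, but it mirrors the firefox38.3.0-mac example:

```java
import java.util.Locale;

// Sketch of the profile naming convention: browser name + browser version
// + "-" + lowercase platform name. The helper is hypothetical.
public class ConfigNames {

    // e.g. buildConfigName("firefox", "38.3.0", "MAC") -> "firefox38.3.0-mac"
    public static String buildConfigName(String browserName,
                                         String browserVersion,
                                         String platformName) {
        return browserName + browserVersion + "-"
                + platformName.toLowerCase(Locale.ROOT);
    }
}
```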

Share This:

‘UXD Legos’ Has Been Reborn as ‘Brixen’…

‘Lego’ is a registered trademark after all. The source is here. It looks very different from what I presented at the Selenium conference. I did a ton of refactoring around how web controls are specified. I will do a write-up on the differences soon. Today, I am exhausted, and I’m just happy to get it posted to GitHub. There’s a half-assed example of usage in the org.brixen.example.priceline package. It is also largely untested at the moment. My next steps will be:

  •  Do a complete write-up on this new incarnation of the ‘UXD Legos’ concept and how it differs from its previous form
  • Start translating it into C#
  • In tandem with the C# translation, write tests for the Java classes and the C# classes as I do the translation
  • In tandem with the translation and testing, develop some full-assed usage examples

I am not sure how long this phase will take. Hopefully not too long because I still have the Python and Ruby incarnations to write. This should be fun. Way more fun than running boring manual regression tests on crappy test systems that seem to want to give me the middle finger every five seconds.

Share This:

Even More Reasons Why You Don’t Have Good Automation w/ Update on Generic UXD Legos Source Code Packages

Update on the example source code packages for UXD Legos:

I am writing Javadoc for the newly refactored source code. This is quite an undertaking, but one that I feel is important to do. I have also expanded the dynamically configurable options for hover controls. As Oren Rubin kindly pointed out to me after my presentation at the Selenium conference, the Javascript hover workaround doesn’t always work. Some hover controls use the CSS :hover pseudo-class, which cannot be triggered with Javascript. So, in these situations, there is nothing that can be done via Selenium or the JavascriptExecutor to trigger a hover action. He also said that some security protections can prevent this workaround from working properly.

I found a test case in a public website where the Javascript workaround doesn’t work for hovering and came up with some other ideas for workarounds that would allow a tester to trigger the side-effects that occur when the element is hovered if they cannot rely on WebDriver or JavascriptExecutor. The benefit is that they can still automate tests that rely on the ability to trigger the side effects, but which do not test the result of the hover action itself. Obviously, for the environments in which the hover action cannot be triggered, the result of the hover would have to be tested manually, but downstream functional tests don’t have to be blocked by the inability to trigger the hover in automation.

I have a test case, which is probably an uncommon edge case:

  1. The control is invisible and must be hovered before it can be clicked
  2. Clicking the control expands a menu
  3. The ‘Expanded’ state of the menu is sticky
  4. The menu becomes invisible if the focus leaves the control that expands it
  5. If the focus subsequently returns to the control, the menu will appear without clicking the control because the ‘Expanded’ state is sticky
  6. If the control is then hovered and clicked a second time, the menu is collapsed
  7. If the focus leaves the control and then comes back to it, the menu is not visible

This is quite a complicated test case, and it would definitely require some testing and experimentation to determine what, if any, dynamically configurable options can be used to handle cases where the hover cannot be triggered via Selenium or the JavascriptExecutor. I have some ideas, but unfortunately, I can’t try any of them out at the moment. The only example of this kind of component that I am aware of exists on the product I test. And if you read my previous post, you know about the vast array of test environments which are available to me (and hundreds of other engineers) in my workplace. There is a bug that prevents the page which has this control from loading at all. The estimated time-to-fix is like…. 5 days. And because I can’t just spin up an environment with an earlier, working version of the system, my ability to test and develop this part of the API has come to a screeching halt, along with my effort to develop a comprehensive UI automation suite for the brand new front end of the application I test, because NONE OF THE PAGES WILL LOAD. Anyway, this portion of my post isn’t supposed to be about Why You Don’t Have Good Automation, but unfortunately, the two are bleeding into each other this time.

So, what I probably will do is finish the Javadoc for what I have and post the source code. It’s not terribly well-tested in its new, much refactored form, but I will start writing unit tests for it when I start trying to translate it to C#.

And now on to Why You Don’t Have Good Automation:

I am puzzled today. I am puzzled by a phenomenon I have encountered in every QA job I have ever had. EVERY. SINGLE. ONE. The company states without equivocation that they want Good Automation. They acknowledge that manual test execution is costly and slow. They acknowledge that not having Good Automation severely limits the scope and coverage of the testing they can have. They acknowledge that they would benefit enormously from the fast feedback that Good Automation would give them. They acknowledge that the development cycle for their products would become much shorter from this quick feedback. They acknowledge they could quickly catch regressions if they had Good Automation that delivered results within an hour of every new build. They acknowledge that catching regressions quickly in complicated systems would help shape design improvements by surfacing unnecessary coupling between seemingly unrelated parts of it because Good Automation would catch regressions triggered in one part of a system by changes in another part.

How often have you seen the following in a SQA job post:

Need strong automation developer with at least 8 years of experience in the industry. Master’s degree in Computer Science desired. 5 years of experience in Java/C#/C++/Python/Ruby/Perl/COBOL/FORTRAN or some other object-oriented programming language. Selenium experience in all of the above is highly desired. Responsibilities include mentoring junior engineers, building automation and release engineering infrastructure and developing comprehensive test plans for an array of products. Job requires fifty percent manual testing.

Whenever I encounter one of these job posts, what I actually see is the following:

Desperately in need of a strong, young and healthy unicorn that never needs to sleep or take a vacation. Must be willing to subsist on a steady diet of dirt and impossible expectations.  We have no fucking clue what it is we need or want, so we threw everything we could possibly think of into the list of requirements for this position, including the release engineering function which is totally a separate role from manual testing and automation development. We want someone who is both an amazing software developer with an expensive and lengthy advanced education that has a first-year drop out rate exceeding fifty percent as well as an amazing quality assurance expert with the associated superb communication, writing and analytical skills. We also want someone who is amazing at dealing with the intense and demoralizing office politics that come with working for a company that has the same unreasonable and impossible expectations we have.

We want you to have the same level of magical thinking we engage in, because we wholeheartedly believe that we should get all of the skills, experience and talent of three strong professionals for just one salary. We want you to believe as much as we do in the fantasy that we really, truly want good automation, because the only way you will take this job is if you believe things that just aren’t true! And when you find that the fifty percent manual testing we said you would do is actually more like seventy-five percent, we want you to cheerfully and politely accept endless questioning of your abilities and talents, because we want to know where all that good automation is and why it is taking you so long. You must be doing something wrong!

Let’s just consider the problems inherent in expecting that an individual employee should agree to perform two functions with two different skill sets: testing strategy, test planning and designing good test cases on the one hand, and developing robust software systems to automate, execute and report results for those test cases on the other. THEY ARE NOT THE SAME THING. These are two distinct skill sets. I am not saying they don’t often coexist in the same person. In fact, I think it is not uncommon for a really good test automation expert to have both skill sets, because a lot of the time they started out in manual testing. I do not believe, however, that the path to Good Automation will be found in thinking you can have that person do both jobs at the same time, because they are both really hard jobs to do really well. If you try to hire a single person to do both, one or both of the functions you want them to perform will suffer.

I need to break some really unpleasant news to the modern tech workplace. Multitasking? IT’S BULLSHIT. Computers can have multiple processors. Human beings only have one processor and the quality of what it can do suffers when it is forced to divide its focus between multiple and competing tasks. Please stop smoking the crack that made you believe humans can perform multiple tasks at the same level of quality and speed and within the same amount of time that they could do each task individually.

Now, let’s talk about the meaning of ‘Manual’ testing. I think manual testing has gotten a really bad name. I have noticed a trend lately in job candidates who are applying for positions billed as ‘Developer In Test’. They don’t want to do ‘manual’ testing. They look down on it and feel that it is a lesser function than automation. They see it as less prestigious and less well-compensated. It is, in short, a deterrent to taking the job. It’s sort of like spraying automation developer repellent all over your position and seeing if you can find the developers with the right genetic makeup that makes them resistant to it. These job candidates are correct in many of their assumptions — salaries for traditional SQA employees are lower, the jobs are considered less desirable, and it became common sometime in the last 15 years or so for every traditional SQA employee to suddenly want to ‘get out of manual testing.’ Employers have contributed to the stigma of this role by requiring that most if not all of their SQA hires have some automation experience and often a computer science degree.

The problem with all this is that every software company really needs the skills that a talented traditional SQA engineer can bring to the table. The reason you need those employees is that they provide a necessary precondition to Good Automation: good test plans with well-designed test cases. This is not an easy thing to do, and if you find an SQA engineer who is really good at it, you should compensate them highly and treat them like the treasured and important asset that they are. Don’t insult them by acting like their skills are out-of-date artifacts of a bygone era, and don’t demand that they transform themselves into a software engineer in order to be considered valuable and desirable as an employee. Let me list the skills a really good SQA engineer needs to have:

  1. They need to write well
  2. They need to communicate well
  3. They need to have really good reading comprehension
  4. They need to be able to synthesize a lot of information from design specs and requirements documents into test plans with well-written test cases which can be run by someone who is not an expert in the system. THIS IS HARD.
  5. They need to write test cases that can be automated. THIS IS HARD.

Still think this person shouldn’t be treated with the same level of respect as a good software developer? Fine, you don’t deserve that employee, and I hope they leave you and your company in the rearview mirror as they get on the highway out of Asshole Town, where you are the self-elected mayor and village idiot.

I work on a team of 8 people. There is one person on my team who I feel has had the most positive impact on product quality. She just always seems to master her domain no matter what it is she is doing. She builds all the right relationships and somehow manages to extract information out of this crazy and chaotic environment where very little is documented in a testable fashion before it is coded into the products. This person does the traditional SQA function. Her expertise was invaluable in onboarding several of us and I still find myself going to her with questions after working here for three years myself.

Now, let’s get to the subject of what automation developers find so hateful about ‘Manual’ testing that they can’t run from it fast enough. Let’s get the simple reasons out of the way. Some of them just don’t have the chops to do the up-front work that a good traditional SQA engineer needs to do to pave the way to Good Automation, and they know it. But the more common reason is that ‘Manual’ testing frequently means there is a large, often poorly written, manual regression suite that you want to automate, but it is so big, tedious and time-consuming to run that the automation engineers you hired to give you Good Automation just don’t have the space and time to actually do any automation. Running the same test cases over and over again, release after release after release, is just awful. It’s awful no matter who you are or what you are good at. It’s the kind of job you should give to a temporary contractor or an offshore company that specializes in providing these kinds of services. It doesn’t make the work any more pleasant, but at least you aren’t paying top dollar to have it done and you aren’t bullshitting anyone about how rewarding it is. Because it’s not rewarding by any stretch of the imagination.

If you are having trouble getting good results with the temporary contractor or outsourcing strategy, let’s circle back to Step One, which is the person you hired to write that regression suite. Did you perhaps misjudge the talent and skills necessary to write a good regression suite you can effectively outsource? Because that just might have something to do with why you can’t seem to get satisfactory results with outsourcing it. Don’t treat the traditional SQA function as a lesser function than development. Take care to hire the right people to do it, treat them like the valuable asset that they are, and you will not be disappointed. Don’t make the mistake of thinking that their job is easy. Hire the right people to perform this function, don’t expect them to do a second full-time job, and you will find yourself on the road to Good Automation.

Share This: