
Leading an Elephant to Water and Convincing Him To Drink It

Large corporate organizations move very slowly and if you are a creative type who hates to coast along on ‘good enough’, you are probably frustrated and demotivated a lot of the time. The modern corporate workplace is a graveyard of good ideas that were quietly drowned in a washtub behind the woodshed. To survive this emotionally draining dependence on comfortable mediocrity, you have to be _really_ stubborn. I’ve been thinking over the origins of my greatest frustrations with my employer and most of them boil down to not being heard and having a lot of ideas shot down or otherwise smothered in a flood of boring, soul-sucking manual regression testing that crowds out all joy from the job. Sometimes, it seems that even just exploring or trying out a concept is offensive and upsetting to someone and therefore, It. Must. Be. Stopped.

In the interest of personal development and achieving greater success, I have to own my own part in this. My communication skills have been really lacking, and my messaging has been unfocused and too difficult to understand. I have also come to the conclusion that it is best to toil away privately on some ideas until you have something that can serve as a powerful demo before you reveal it to anyone. As part of my effort to resolve this conflict over the presence of lots of labels on Zephyr test cases in Jira, I actually developed a PowerPoint presentation to cover the problem statement, the challenges and the proposed solution. I feel like a little Agile warrior right now. I’m even thinking that this presentation could work as another conference presentation and I’m kind of excited about proposing it for some other conference in the future.

I saw a really great presentation by a developer from Uber at the Selenium conference in Portland last fall that summed up the kind of frustration that has generated my exit from every job I left that wasn’t due to a layoff or the sudden bankruptcy of the company. She said that most companies exist in the middling, unambitious area where there is no motivation or desire to do more than mediocre QA, but that there are organizations where QA is a first-order function and the tools, processes and infrastructure are elite, top-notch stuff. Those companies are where the future of testing is.

So, I could quit my job and go work for those companies, but the only problem with that is that I want to _be_ a trendsetter who brings that to a company that doesn’t already have it. I don’t want to show up late to the party after the waves of innovation have already passed through. Problem is, you can hardly be a trendsetter in the most common reality, where you work for a large corporation that is in the mediocre middle, because the scale of the task is so enormous that no one person can do it. This isn’t one of those lone cowboy kind of projects where one single heroic person toils away to solve the problem and emerges with a shiny, cool solution to the adoring cheers of their co-workers. This kind of paradigm shift takes coordination and cooperation across the organization and it tends to piss off a lot of people because it makes them uncomfortable, insecure and afraid. Lately, I have been having trouble getting buy-in just from my own little team.

So, I have to sharpen my communication skills. I have to learn more patience and I have to get my temper and frustration under control. And I need to start engaging a lot more with other people outside my own insular team.


Brixen Decorators: What’s Changed

The basic concept of the decorators hasn’t changed. There are just more of them. I have added decorators for state beans, configuration beans and builders in addition to the decorators for the component implementations. The decorators help reduce boilerplate code by using a new feature introduced in Java 8: default interface methods. A class can implement numerous interfaces, but extend only one class. Prior to Java 8, this meant implementing the same methods over and over again in different classes implementing a particular interface. Since Java 8, however, a decorator interface can extend another interface and provide default method implementations for the interface it extends.

Here is an example of the PolleableBeanDecorator, which provides default method implementations for the methods required by PolleableBean:
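
In rough outline it looks something like this; I’ve paraphrased the polling-related method names and the provider’s signature rather than pasting the exact source, so treat it as a sketch of the pattern, not the published interface:

```java
public interface PolleableBeanDecorator extends PolleableBean {

    // The only thing an implementing class has to supply: an accessor for the
    // provider that wraps its internal PolleableBeanImpl
    LoadableBeanProvider<PolleableBeanImpl> getPolleableBeanProvider();

    // Every PolleableBean method is a default method that simply delegates to
    // the wrapped PolleableBeanImpl (method names paraphrased)
    @Override
    default void setPollingTimeout(int pollingTimeout) {
        getPolleableBeanProvider().getBean().setPollingTimeout(pollingTimeout);
    }

    @Override
    default int getPollingTimeout() {
        return getPolleableBeanProvider().getBean().getPollingTimeout();
    }

    @Override
    default void setPollingInterval(int pollingInterval) {
        getPolleableBeanProvider().getBean().setPollingInterval(pollingInterval);
    }

    @Override
    default int getPollingInterval() {
        return getPolleableBeanProvider().getBean().getPollingInterval();
    }
}
```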

The decorator defers all the PolleableBean method calls to the default PolleableBean implementation — PolleableBeanImpl. This is achieved by requiring a ‘provider’ for an internal reference to a PolleableBeanImpl instance. So any class which implements PolleableBeanDecorator needs to define an accessor method for this provider. Let’s have a look at the provider for a state bean:
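
The provider is basically a thin wrapper whose accessor is deliberately not public; the generics here are my simplification:

```java
public class LoadableBeanProvider<B> {

    private final B bean;

    public LoadableBeanProvider(B bean) {
        this.bean = bean;
    }

    // Protected rather than public: only sub-classes of the provider and
    // classes in the same package (which is where the decorators live) can
    // get at the wrapped state bean, so encapsulation is preserved.
    protected B getBean() {
        return bean;
    }
}
```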

I didn’t want to have a method in a decorator interface that would return a reference to the internal state bean itself because that would break encapsulation. So, I came up with the idea of a ‘provider’ which gives protected access to the internal state bean reference, so that only a sub-class of the provider or a class within the same package can get at it. The packaging structure I chose allows only the provider and its sub-classes, the decorator and other classes in their package to access the provider’s internal state bean reference.

Here is an example of a state bean which extends PolleableBeanDecorator:

DynamicControllableBean is a state bean for specifying a page object that contains one or more web controls and that needs to be polled for a state change of some kind, usually after an interaction with one of its controls. A drop down menu, for example, would need to be polled after interacting with the control that expands or collapses it to determine if the menu has expanded or collapsed as expected. The default implementation of DynamicControllableBean can only extend a single class, so its parent is ControllableBeanImpl.

It would be a drag to have to provide implementations for the methods required by PolleableBean which are exactly the same as the implementations provided in PolleableBeanImpl. By implementing the PolleableBeanDecorator interface, DynamicControllableBeanImpl can satisfy all the requirements of PolleableBean by providing only an accessor to a LoadableBeanProvider that wraps an instance of PolleableBeanImpl. Lombok helps reduce the amount of source code even more because the provider field only has to be annotated with the @Getter annotation.
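
Which ends up looking roughly like this (the field and class wiring are paraphrased):

```java
import lombok.Getter;

public class DynamicControllableBeanImpl extends ControllableBeanImpl
        implements DynamicControllableBean, PolleableBeanDecorator {

    // This field is the entire PolleableBean "implementation" in this class.
    // Lombok's @Getter generates the accessor that PolleableBeanDecorator
    // requires; everything else comes from the decorator's default methods.
    @Getter
    private final LoadableBeanProvider<PolleableBeanImpl> polleableBeanProvider =
            new LoadableBeanProvider<>(new PolleableBeanImpl());
}
```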

The decorators for the configuration beans operate in the same way. Here is the PolleableConfigDecorator and the provider for a configuration bean:

And here is DynamicControllableConfigImpl, a configuration bean which implements PolleableConfigDecorator:

Decorators for the builders were a bit trickier to pull off, but I managed to find a way. Here is the PolleableBuilderDecorator:

And AbstractDynamicControllableBuilder, which implements it:

This decorator implementation works by declaring a provider which wraps the builder implementing the decorator. The same state bean must encapsulate all the state for the component, so declaring a builder with a separate state bean instance wouldn’t work.

The decorators for the components haven’t changed, so I won’t post any examples here since it would just duplicate what I presented at the conference. The source code for Brixen is here.


Brixen Configuration Beans: What’s Changed

The changes in the configuration bean package largely mirror the changes in the state beans and the builders. Beyond that, I added JSON type information to all of the configuration bean interfaces. When I originally conceived of the configuration bean idea, I foolishly didn’t consider the possibility that some configuration beans may contain other configuration beans as fields. So, I only added the type information to the LoadableConfig bean. Without this type information, Jackson cannot properly deserialize polymorphic types.
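
The mechanism is Jackson’s @JsonTypeInfo annotation on each configuration bean interface; the exact id and property settings below are illustrative rather than a copy of the Brixen source:

```java
import com.fasterxml.jackson.annotation.JsonTypeInfo;

// With type info declared on the interface, Jackson writes the concrete class
// into the JSON and can pick the right implementation when a configuration
// bean shows up as a field of another configuration bean.
@JsonTypeInfo(use = JsonTypeInfo.Id.CLASS, include = JsonTypeInfo.As.PROPERTY, property = "@class")
public interface LoadableConfig {
    // configuration options elided
}
```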

The other change is that LoadableConfig allows the definition of custom properties through JsonAnyGetter and JsonAnySetter methods. It is entirely conceivable that one might want to define a configuration option for a page object which doesn’t have general significance to a class of page objects, but which is important for a specific context:
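
The standard Jackson idiom for that looks like the following; whether Brixen wires it up in exactly this way is my assumption:

```java
import java.util.HashMap;
import java.util.Map;

import com.fasterxml.jackson.annotation.JsonAnyGetter;
import com.fasterxml.jackson.annotation.JsonAnySetter;

public class LoadableConfigImpl implements LoadableConfig {

    // Catch-all for options that only matter in one specific context
    private final Map<String, Object> additionalProperties = new HashMap<>();

    // Serializes every custom property as a top-level field of the JSON object
    @JsonAnyGetter
    public Map<String, Object> getAdditionalProperties() {
        return additionalProperties;
    }

    // Collects any unrecognized JSON field into the map on deserialization
    @JsonAnySetter
    public void setAdditionalProperty(String name, Object value) {
        additionalProperties.put(name, value);
    }
}
```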

The ControllableConfig is the most significant new addition. This configuration bean is for defining the dynamically configurable options for a page object which contains web controls. This configuration also encapsulates the configurable options for each one of its controls:

There is a marker interface for a dynamic Controllable that needs to be polled on intervals for a state change via a FluentWait:

Here is an example of what the configuration source for a ControllableConfig would look like:

In the same fashion as the state bean and builders, there is a marker interface that is parent to all the control configuration beans:

The marker interface for a click control configuration bean:

It extends ClickableConfig which is a configuration bean for a wider class of clickable page objects besides controls:

The configuration bean for a hover control:

The configuration bean for a hover and click control:

And that’s a wrap for the changes in the configuration beans. The source for Brixen is available here.


Brixen Builders: What’s Changed

One of the awkward things about the original version of the API is how data from a page object’s JSON configuration source is retrieved and handled. I wanted to do something that was more elegant and less cumbersome than the multistep process of the original:

  1. Querying the service for a configuration
  2. Determining if a particular dynamically configurable option is defined in the configuration
  3. Determining if the value assigned to the option is null
  4. If the value is not null, then retrieving its value from the Optional that wraps it in the configuration bean and setting that field for the page object through its builder

All of the builder interfaces now overload all the methods that specify a page object’s dynamically configurable options. One version of the method takes a value for the field, and the other takes a configuration bean. The builder implementations take care of all the steps listed above save the first step. They also handle the case where the configuration bean is null. All the client class has to do is query the configuration service for the page object’s configuration by String ID and pass the result to the builder. If there is no configuration defined for the current environment under test, then the service will return null. If the configuration bean which is then passed to the builder is null, the builder will do nothing with the bean and leave the default value for that field in the page object’s state bean unchanged.
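
In client code the whole dance shrinks to something like this; the service and builder method names here are placeholders for illustration, not the actual Brixen signatures:

```java
// Query the configuration service by String ID. The service returns null when
// no profile is defined for the environment under test.
LoadableConfig config = configService.getConfig("login-page", driver);

// Hand the bean (possibly null) straight to the builder. If the bean is null,
// or an option is undefined or null inside it, the builder keeps the default.
LoadableBuilder builder = new LoadableBuilderImpl()
        .setDriver(driver)
        .setLoadTimeout(config);
```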

Let’s look at a couple of examples. The LoadableBuilder, a builder for a basic page object, and PolleableBuilder, a builder for a dynamic page object which needs to be polled on intervals for a state change via a FluentWait, are basically unchanged since the conference except for the new methods which take a configuration bean.

The default implementations of the new setter methods do all the work of checking and retrieving the data from the configuration bean which was previously the responsibility of the class building the object:
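
Conceptually, each pair of overloads boils down to something like this, assuming the configuration bean exposes the option as an Optional (names paraphrased):

```java
public interface LoadableBuilder {

    // Plain overload: takes the value directly
    LoadableBuilder setLoadTimeout(int loadTimeoutInSeconds);

    // Config-bean overload: quietly does nothing when the bean is null, the
    // option is undefined, or the Optional wrapping the value is empty
    default LoadableBuilder setLoadTimeout(LoadableConfig config) {
        if (config != null
                && config.getLoadTimeout() != null
                && config.getLoadTimeout().isPresent()) {
            return setLoadTimeout(config.getLoadTimeout().get());
        }
        return this;
    }
}
```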

The other big change is related to the total refactoring of how web controls and the page objects that contain them are specified and built. Each of the three types of controls described in this post has a builder, but there is also a builder for a page object containing controls which has methods for specifying the controls.

The marker interface for a control builder:

The marker interface for a click control builder:

It extends ClickableBuilder which is a builder for a wider class of clickable page objects besides controls:

The builder for a hover control:

And finally, the builder for a hover and click control:

There is a _lot_ of duplicated code for the hover control and hover and click control, just as is the case for their state beans, which I acknowledged in my post about the changes in the state bean package. For now, I don’t see a good reason for extracting the common behavior into a parent interface for both of them to extend because the parent interface wouldn’t be reusable in other contexts. It’s a wart on the butt of this API, but I think I can live with it.

The interface for the builder of a page object which contains one or more web controls has a lot of syntactic sugar that allows you to build the whole page object, complete with all its controls if you don’t want to use the individual builders for the controls themselves. Each control is associated with a String ID that must be unique (which should go without saying, but I couldn’t help myself):
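
As a hypothetical usage example, with every class and method name below invented for illustration, building a menu and its single named control through one builder would read something like:

```java
// All class and method names here are invented for illustration.
DropDownMenu menu = new DropDownMenuBuilder()
        .setDriver(driver)
        .setContentContainer(menuElement)
        .addHoverAndClickControl("toggle")                // unique String ID
        .setControlWebElement("toggle", toggleElement)
        .setControlClickInsteadOfHover("toggle", true)    // workaround flag
        .build();
```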


Brixen State Beans: What’s Changed

At the conference, I presented source code for Accessible and Dismissable objects, that is, page objects which can be rendered visible or invisible by user interaction. Because some objects are dismissable, but not accessible, such as announcement dialogs and popup ads, I modeled them as separate entities with their own state beans and used a marker interface, ToggleableVisibilityBean, to specify the state for a component which is both. There are some limitations with this design approach. The first is the built-in assumption that there is only one control which toggles the visibility of the component, which is not true in some cases. A chooser dialog can actually have three such controls: Submit, Cancel and Close. The other is that the controls themselves are also page objects, but they are not modeled as standalone entities.

So, I decided to re-work the concept entirely. I created a state bean interface for a component which contains controls. It allows any number of controls to be added to the component’s state specification. It also makes no assumptions about the side effects of interacting with the controls. Therefore, it is generic and applicable to any such component, whether it has toggleable visibility, filterable content or pagination behavior. Big win for reusability!

Here is the source for ControllableBean, the new state bean for a component which contains web controls. This is a terrible name, and I am open to suggestions for something better. At least it’s shorter than ToggleableVisibility:
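
In rough outline, with the method names paraphrased, the interface looks something like this; the three add methods line up with the three flavors of control listed just below:

```java
import org.openqa.selenium.WebElement;

public interface ControllableBean {

    // Register a control under a unique name, one method per flavor of control
    void addClickControl(String name);

    void addHoverControl(String name);

    void addHoverAndClickControl(String name);

    // Per-control setters are keyed by the same name
    void setControlWebElement(String name, WebElement element);

    void setControlClickInsteadOfHover(String name, boolean clickInsteadOfHover);
}
```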

Each control is associated with a name, and the ControllableBean interface has syntactic sugar setter methods for defining the state of each of its controls. There are three distinct flavors of controls:

  • Controls that are visible by default and have meaningful behavior when they are clicked
  • Controls that are visible by default and have meaningful behavior when they are hovered, such as expanding a menu
  • Controls that are invisible by default, must be hovered to make them visible, have meaningful behavior when clicked, and can also have meaningful behavior when hovered

The first type is a vanilla, garden variety web control. The third type introduces some complicated test cases that I have to try out. There is a pretty complex test case I described at the beginning of this post that I have to test with this new version of the API. That same post also explains an additional shortcoming of my original design, having to do with the fact that in some environments you just can’t trigger the mouseover action through Selenium or through Javascript using the JavascriptExecutor. I spent some time thinking about possible dynamically configurable workarounds for interacting with the control in such a way that the side effects of the interaction can be triggered, allowing a tester to automate tests which rely on that workflow to test something else. Obviously, you’d have to manually test the hover action because you can’t automate it, but it would be great if that didn’t block the automation of other tests.

For the second type of control, you can often just click it to trigger the same side effects that the hover action does. So, I added two dynamically configurable options to click instead of hover. One for using native Selenium and one for using a Javascript click workaround through JavascriptExecutor in cases where the native Selenium click fails silently. For the third type of control, when you can’t hover the control either through native Selenium or the Javascript hover workaround, you just can’t make the element visible. So how do you click it? By using the Javascript click workaround through JavascriptExecutor, which will execute the click even if the element is not visible.
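
Stripped of the configuration plumbing, the two click options come down to standard Selenium calls like these (a generic illustration, not the Brixen source):

```java
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public final class ClickWorkarounds {

    private ClickWorkarounds() {}

    // Option one: the plain, native Selenium click
    public static void nativeClick(WebElement control) {
        control.click();
    }

    // Option two: dispatch the click through JavascriptExecutor. The element's
    // click handlers fire even when WebDriver considers the element invisible,
    // which is what makes this usable for the third type of control when the
    // hover cannot be triggered at all.
    public static void javascriptClick(WebDriver driver, WebElement control) {
        ((JavascriptExecutor) driver).executeScript("arguments[0].click();", control);
    }
}
```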

Yes, I know this is whack. I know that when you must resort to this hack for a clickable control, regardless of whether it is the first type or the third type in my list, you probably should add a test for the click action to your manual test suite. But, the intent of the workarounds is to enable automation of the workflow for other tests without ugly-ass boilerplate if-then-else clauses, or worse, hard-coding the workaround by default for all environments into your page object which makes tests for the click action suspect for all environments.

The hoverable controls have what I call an ‘unhover’ element (another awful name, I admit). This is a WebElement to use for removing focus from a hoverable control. It should be an element in a safe location which itself has no meaningful behavior when it is hovered. This element also has the same set of workarounds as the control — hover with Javascript when the native Selenium hover action fails silently and the focus is not removed from your control, as well as the two click workarounds when you just can’t do a mouseover by any means. Just be sure that the unhover element also has no meaningful behavior when it is clicked!

I have another state bean marker interface for components containing controls, where the component undergoes a dynamic state change when one interacts with its controls. This dynamic state change should be polleable on intervals with a FluentWait to determine if the expected state change has been achieved:
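
The marker presumably just combines the two parent beans; here it is sketched that way, together with a reminder of what the FluentWait polling it enables looks like (Selenium 2-era signatures, which take a TimeUnit rather than a Duration):

```java
import java.util.concurrent.TimeUnit;

import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.FluentWait;
import org.openqa.selenium.support.ui.Wait;

// My guess at the marker: a component state bean that is both controllable and polleable
interface DynamicControllableBean extends ControllableBean, PolleableBean {
}

// What the polling looks like in practice: check once a second, for up to 30
// seconds, that a menu has expanded after its control was clicked. The timeout
// and interval would come from the PolleableBean rather than being hard-coded.
final class PollingExample {

    static void waitForExpansion(WebDriver driver, WebElement menu) {
        Wait<WebDriver> wait = new FluentWait<>(driver)
                .withTimeout(30, TimeUnit.SECONDS)
                .pollingEvery(1, TimeUnit.SECONDS)
                .ignoring(NoSuchElementException.class);

        wait.until(ExpectedConditions.visibilityOf(menu));
    }
}
```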

Let’s take a look at the new state beans for the controls themselves. ControlBean is a marker interface that mostly serves for type determination and it’s the parent to all controls. You gotta love Java 8 for default interface methods:

ClickControlBean is a marker interface for the first type of control in the list.

It extends ClickableBean, which is for defining a wider class of clickable elements besides controls:

HoverControlBean specifies the second type of control on my list:

HoverAndClickControlBean specifies the third type of control on my list, and it is a PolleableBean because this type of control has dynamic visibility, which means it should be possible to poll it on intervals with a FluentWait to determine if it has been toggled visible or invisible after hovering it.

I am not happy with the amount of repeated code in the two hover control state bean interfaces. I don’t think it’s worth re-working the inheritance hierarchy here since a parent class that defines the shared methods probably isn’t reusable anywhere else.

So, there you have it. The source for Brixen is available here, and it has complete Javadoc comments for all but a small handful of classes.


Brixen Page Object Configuration Service: What’s Changed

The version that I presented at the conference had the following interface:

I decided to strip it down and require only a method to retrieve a configuration by a String ID and a WebDriver reference:
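
Something of this shape, with the interface and method names being my stand-ins rather than the actual Brixen names:

```java
import org.openqa.selenium.WebDriver;

// Stand-in for the stripped-down service: one lookup, keyed by the page
// object's String ID plus the WebDriver that identifies the environment.
public interface PageObjectConfigService {

    LoadableConfig getConfig(String pageObjectId, WebDriver driver);
}
```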

The default implementation in Brixen is still a singleton. Some people consider this an anti-pattern because singletons are difficult to unit test, which is true, but for now, it serves its purpose for a prototype page object API. The nice thing about using interface types is that I can replace the default implementation anytime and nothing downstream is affected. It’s a thread-safe, lazy-initialization implementation that relies on the class loader to do all the synchronization. It also reads in all of the configuration profiles at once the first time it is accessed. I figured this was better than accessing the files from disk multiple times.
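
That thread-safe, class-loader-driven lazy initialization is the classic initialization-on-demand holder idiom; a sketch, with the profile loading and lookup elided:

```java
import org.openqa.selenium.WebDriver;

public final class PageObjectConfigServiceImpl implements PageObjectConfigService {

    private PageObjectConfigServiceImpl() {
        // Read every profile under resources/pageobject_config exactly once,
        // here, so the files are never hit again after first access. (Elided.)
    }

    // The JVM loads the Holder class, and therefore builds the instance, only
    // when getInstance() is first called; the class loader supplies all of the
    // synchronization, so no explicit locking is needed.
    private static final class Holder {
        private static final PageObjectConfigServiceImpl INSTANCE =
                new PageObjectConfigServiceImpl();
    }

    public static PageObjectConfigServiceImpl getInstance() {
        return Holder.INSTANCE;
    }

    @Override
    public LoadableConfig getConfig(String pageObjectId, WebDriver driver) {
        // Find the profile whose name matches the driver's browser, version and
        // platform, then return the config registered under pageObjectId, or
        // null if the environment has no profile. (Elided.)
        return null;
    }
}
```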

The service relies on a couple of conventions:

  • The configuration profiles should all be located in a folder named pageobject_config in resources
  • The configuration profiles need to follow a naming convention that allows the configuration service to identify which environments they pertain to

All of the environment information is derived from the WebDriver reference passed as a parameter to the query method. The naming convention is as follows:

So, for Firefox 38.3.0 on Mac, the name of the configuration file should be: firefox38.3.0-mac. The browser name is the value returned by DesiredCapabilities.getBrowserName() and the OS name is the lowercase String name of the Platform enum for a given OS returned by DesiredCapabilities.getPlatform().
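
Deriving that file name from the driver boils down to something like this; casting the driver to HasCapabilities is my assumption about how the service gets at the capabilities:

```java
import org.openqa.selenium.Capabilities;
import org.openqa.selenium.HasCapabilities;
import org.openqa.selenium.WebDriver;

public final class ProfileNames {

    private ProfileNames() {}

    // e.g. Firefox 38.3.0 on a Mac maps to "firefox38.3.0-mac"
    public static String forDriver(WebDriver driver) {
        Capabilities caps = ((HasCapabilities) driver).getCapabilities();
        return caps.getBrowserName()
                + caps.getVersion()
                + "-"
                + caps.getPlatform().name().toLowerCase();
    }
}
```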


‘UXD Legos’ Has Been Reborn as ‘Brixen’…

‘Lego’ is a registered trademark after all. The source is here. It looks very different from what I presented at the Selenium conference. I did a ton of refactoring around how web controls are specified. I will do a write-up on the differences soon. Today, I am exhausted, and I’m just happy to get it posted to GitHub. There’s a half-assed example of usage in the org.brixen.example.priceline package. This is also largely untested at the moment. My next steps will be:

  •  Do a complete write-up on this new incarnation of the ‘UXD Legos’ concept and how it differs from its previous form
  • Start translating it into C#
  • In tandem with the C# translation, write tests for the Java classes and the C# classes as I do the translation
  • In tandem with the translation and testing, develop some full-assed usage examples

I am not sure how long this phase will take. Hopefully not too long because I still have the Python and Ruby incarnations to write. This should be fun. Way more fun than running boring manual regression tests on crappy test systems that seem to want to give me the middle finger every five seconds.


Even More Reasons Why You Don’t Have Good Automation w/ Update on Generic UXD Legos Source Code Packages

Update on the example source code packages for UXD Legos:

I am writing Javadoc for the newly refactored source code. This is quite an undertaking, but one that I feel is important to do. I also have expanded the dynamically configurable options for hover controls. As Oren Rubin kindly pointed out to me after my presentation at the Selenium conference, the Javascript hover workaround doesn’t always work. Some hover controls use the CSS :hover pseudo-class, which cannot be triggered with Javascript. So, in these situations, there is nothing that can be done via Selenium or the JavascriptExecutor to trigger a hover action. He also said that some security protections prevent this workaround from working properly.
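
For context, the Javascript hover workaround in question dispatches a synthetic mouseover through the JavascriptExecutor, along these lines; it fires Javascript event handlers, but styling driven purely by CSS :hover never sees it, which is exactly the limitation Oren described:

```java
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public final class HoverWorkaround {

    private HoverWorkaround() {}

    // Fires a synthetic mouseover on the element. Javascript mouseover handlers
    // run, but rules attached to the CSS :hover pseudo-class are not triggered.
    public static void javascriptHover(WebDriver driver, WebElement element) {
        String script =
                "var ev = document.createEvent('MouseEvents');"
              + "ev.initMouseEvent('mouseover', true, true, window,"
              + "  0, 0, 0, 0, 0, false, false, false, false, 0, null);"
              + "arguments[0].dispatchEvent(ev);";
        ((JavascriptExecutor) driver).executeScript(script, element);
    }
}
```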

I found a test case in a public website where the Javascript workaround doesn’t work for hovering and came up with some other ideas for workarounds that would allow a tester to trigger the side-effects that occur when the element is hovered if they cannot rely on WebDriver or JavascriptExecutor. The benefit is that they can still automate tests that rely on the ability to trigger the side effects, but which do not test the result of the hover action itself. Obviously, for the environments in which the hover action cannot be triggered, the result of the hover would have to be tested manually, but downstream functional tests don’t have to be blocked by the inability to trigger the hover in automation.

I have a test case, which is probably an uncommon edge case:

  1. The control is invisible and must be hovered before it can be clicked
  2. Clicking the control expands a menu
  3. The ‘Expanded’ state of the menu is sticky
  4. The menu becomes invisible if the focus leaves the control that expands it
  5. If the focus subsequently returns to the control, the menu will appear without clicking the control because the ‘Expanded’ state is sticky
  6. If the control is then hovered and clicked a second time, the menu is collapsed
  7. If the focus leaves the control and then comes back to it, the menu is not visible

This is quite a complicated test case and would definitely require some testing and experimentation to determine what dynamically configurable options, if any, can be used to handle cases where the hover cannot be triggered via Selenium or the JavascriptExecutor. I have some ideas, but unfortunately, I can’t try any of them out at the moment. The only example of this kind of component that I am aware of exists on the product I test. And if you read my previous post, you know about the vast array of test environments which are available to me (and hundreds of other engineers) in my workplace. There is a bug that prevents the page which has this control from loading at all. The estimated time-to-fix is like…. 5 days. And because I can’t just spin up an environment with an earlier, working version of the system, my ability to test and develop this part of the API has come to a screaming halt. Along with my effort to develop a comprehensive UI automation suite for the brand new front end for this application I test because NONE OF THE PAGES WILL LOAD. Anyway, this portion of my post isn’t supposed to talk about Why You Don’t Have Good Automation, but unfortunately, the two of them are bleeding into each other this time.

So, what I probably will do is finish the Javadoc for what I have and post the source code. It’s not terribly well-tested in its new, much refactored form, but I will start writing unit tests for it when I start trying to translate it to C#.

And now on to Why You Don’t Have Good Automation:

I am puzzled today. I am puzzled by a phenomenon I have encountered in every QA job I have ever had. EVERY. SINGLE. ONE. The company states without equivocation that they want Good Automation. They acknowledge that manual test execution is costly and slow. They acknowledge that not having Good Automation severely limits the scope and coverage of the testing they can have. They acknowledge that they would benefit enormously from the fast feedback that Good Automation would give them. They acknowledge that the development cycle for their products would become much shorter from this quick feedback. They acknowledge they could quickly catch regressions if they had Good Automation that delivered results within an hour of every new build. They acknowledge that catching regressions quickly in complicated systems would help shape design improvements by surfacing unnecessary coupling between seemingly unrelated parts of it because Good Automation would catch regressions triggered in one part of a system by changes in another part.

How often have you seen the following in an SQA job post:

Need strong automation developer with at least 8 years experience in the industry. Masters degree in Computer Science desired. 5 years experience in Java/C#/C++/Python/Ruby/Perl/COBOL/FORTRAN or some other object oriented programming language. Selenium experience in all of the above is highly desired. Responsibilities include mentoring junior engineers, building automation and release engineering infrastructure and developing comprehensive test plans for an array of products. Job requires fifty percent manual testing.

Whenever I encounter one of these job posts, what I actually see is the following:

Desperately in need of a strong, young and healthy unicorn that never needs to sleep or take a vacation. Must be willing to subsist on a steady diet of dirt and impossible expectations.  We have no fucking clue what it is we need or want, so we threw everything we could possibly think of into the list of requirements for this position, including the release engineering function which is totally a separate role from manual testing and automation development. We want someone who is both an amazing software developer with an expensive and lengthy advanced education that has a first-year drop out rate exceeding fifty percent as well as an amazing quality assurance expert with the associated superb communication, writing and analytical skills. We also want someone who is amazing at dealing with the intense and demoralizing office politics that come with working for a company that has the same unreasonable and impossible expectations we have.

We want you to have the same level of magical thinking we engage in because we wholeheartedly believe that we should get all of the skills, experience and talent of three strong professionals for just one salary. We want you to believe as much as we do in the fantasy that we really truly want good automation because the only way you will take this job is if you believe things that just aren’t true! And when you find that the fifty percent manual testing we said you would do is actually more like seventy-five percent, we want you to cheerfully and politely accept endless questioning of your abilities and talents because we want to know where all that good automation is and why it is taking you so long. You must be doing something wrong!

Let’s just consider the problems inherent in expecting that an individual employee should agree to perform two functions with two different skill sets: testing strategy, test planning and designing good test cases on the one hand, and developing robust software systems to automate, execute and report results for these test cases on the other. THEY ARE NOT THE SAME THING. These are actually two distinct skill sets. I am not saying they don’t often coexist in the same person. In fact, I think it is not uncommon for a really good test automation expert to have both skill sets because a lot of the time, they started out in manual testing. I do not believe, however, that the path to Good Automation will be found in thinking you can have that person do both jobs at the same time. Because they are both really hard jobs to do really well. If you try to hire a single person to do both, one or both of the functions you want them to perform will suffer.

I need to break some really unpleasant news to the modern tech workplace. Multitasking? IT’S BULLSHIT. Computers can have multiple processors. Human beings only have one processor and the quality of what it can do suffers when it is forced to divide its focus between multiple and competing tasks. Please stop smoking the crack that made you believe humans can perform multiple tasks at the same level of quality and speed and within the same amount of time that they could do each task individually.

Now, let’s talk about the meaning of ‘Manual’ testing. I think manual testing has gotten a really bad name. I have noticed a trend lately in job candidates who are applying for positions that are billed as ‘Developer In Test’. They don’t want to do ‘manual’ testing. They look down on it and feel that it is a lesser function than automation is. They see it as less prestigious and less well-compensated. It is, in short, a deterrent to taking the job. It’s sort of like spraying automation developer repellent all over your position and seeing if you can find the developers with the right genetic makeup that makes them resistant to it. These job candidates are correct in many of their assumptions — salaries for traditional SQA employees are lower, the jobs are considered less desirable, and it became common sometime in the last 15 years or so for every traditional SQA employee to suddenly want to ‘get out of manual testing.’ Employers have contributed to the stigma of this role by requiring that most if not all of their SQA hires have some automation experience and often a computer science degree.

The problem with all this is that every software company really needs the skills that a talented traditional SQA engineer can bring to the table. The reason you need those employees is that they will provide you with a necessary precondition to Good Automation, which is good test plans with well-designed test cases. This is not an easy thing to do, and if you find an SQA engineer who is really good at it, you should compensate them highly and treat them like the treasured and important asset that they are. Don’t insult them by acting like their skills are out-of-date artifacts of a bygone era and demand that they transform themselves into a software engineer in order to be considered valuable and desirable as an employee. Let me list the skills a really good SQA engineer needs to have:

  1. They need to write well
  2. They need to communicate well
  3. They need to have really good reading comprehension
  4. They need to be able to synthesize a lot of information sources from design specs and requirements documents into test plans with well-written test cases which can be run by someone who is not an expert in the system. THIS IS HARD.
  5. They need to write test cases that can be automated. THIS IS HARD.

Still think this person shouldn’t be treated with the same level of respect as a good software developer? Fine, you don’t deserve that employee and I hope they leave you and your company in the rearview mirror as they get on the highway out of Asshole Town where you are the self-elected mayor and village idiot.

I work on a team of 8 people. There is one person on my team who I feel has had the most positive impact on product quality. She just always seems to master her domain no matter what it is she is doing. She builds all the right relationships and somehow manages to extract information out of this crazy and chaotic environment where very little is documented in a testable fashion before it is coded into the products. This person does the traditional SQA function. Her expertise was invaluable in onboarding several of us and I still find myself going to her with questions after working here for three years myself.

Now, let’s get to the subject of what automation developers find so hateful about ‘Manual’ testing that they can’t run from it fast enough. Let’s get the simple reasons out of the way. Some of them just don’t have the chops to do the up front work that a good traditional SQA engineer needs to do to pave the way to Good Automation and they know it. But the more common reason is that ‘Manual’ testing frequently means that there is a large, often poorly written, manual regression suite that you want to automate, but you can’t seem to get around the fact that it’s just so big, tedious and time-consuming that the automation engineers you hired to give you Good Automation just don’t have the space and time to actually do any automation. Running the same test cases over and over again, release after release after release is just awful. It’s awful no matter who you are or what you are good at. It’s the kind of job you should give to a temporary contractor or an offshore company that specializes in providing these kinds of services. It doesn’t make the work any more pleasant, but at least you aren’t paying top-dollar to have it done and you aren’t bullshitting anyone about how rewarding it is. Because it’s not rewarding by any stretch of the imagination.

If you are having trouble getting good results with the temporary contractor or out-sourcing strategy, let’s circle back around to Step One, which is the person you hired to write that regression suite. Did you perhaps have some poor judgment about the necessary talent and skills to write a good regression suite you can effectively outsource? Because that just might have something to do with why you can’t seem to get satisfactory results with outsourcing it. Don’t treat the traditional SQA function as a lesser function than development. Take care to hire the right people to do it and make sure you treat them like the valuable asset that they are, and you will not be disappointed. Don’t make the mistake of thinking that their job is easy. Hire the right people to perform this function and don’t expect them to do a second full-time job and you will find yourself on the road to Good Automation.


Some More Reasons Why You Don’t Have Good Automation w/ Update on Generic UXD Legos Source Code Packages

Update on the example source code packages for UXD Legos:

I am writing the sample page objects to demonstrate how to use the API and its components. I have refactored a lot of things since I presented the API at the conference. The configuration service implementation has been refactored to auto-initialize itself by reading in all the available configuration profiles. It has also been updated to be more robust in a multi-threaded environment.

I have significantly re-worked the model for controls on UI components. The new version is more flexible. Controls are treated as separate entities from the components they control. Since some interactions, like dismissing a UI component, are actually achievable by multiple controls, it makes sense to do it this way. My original design didn’t robustly handle this. For example, a chooser dialog can be submitted, or cancelled, or closed via three different controls. All three actions result in the dialog becoming hidden from view, but my abstract model for components which can be toggled visible and invisible didn’t handle the fact that there could be multiple controls that dismiss a component from view. The new model can handle all three of those scenarios in a generic way. I am working out a good interface for the builder of the UI component to handle specification of all of its controls. Each type of control I modeled has a specific builder implementation, which is great if you are just focused on specifying and building a control, but not so great if you just want to specify and build the whole component, along with all of its possible controls. It’s clunky to have to interact with more than one builder. I think I have found the right design for this, and I am implementing it right now, along with the sample page objects.

I think, in addition to delivering the C#, Python, and Ruby versions of this API, I will also need to write an updated slide show about it.

And now on to Why You Don’t Have Good Automation:

I am grumpy today. I had a meeting this morning to discuss the automation strategy for a major product. We ended up talking about how the test suite is growing in size by 5-15 tests a day and I pointed out that I have a GIANT bottleneck in the form of inadequate infrastructure for executing Selenium test suites. It was revealed that this is a big problem for many other teams as well. There is no company-wide Selenium grid system available. We also do not have the green light for using Sauce Labs. I was under the impression that other teams had their own private Selenium grids which they jealously guarded because they are a limited resource and building a comprehensive Selenium grid is a big task. If you read my previous post on this subject, it’s not particularly easy to acquire the necessary resources and IT support to build one. I suspect that the reason we do not use Sauce Labs is that we would need to scrub all data in the test environments of personally identifying information. Which… will probably not happen anytime soon, so let’s backtrack to Square One: there is a giant bottleneck related to the lack of test infrastructure for running large suites of Selenium tests. It’s just crazy that there are tests that could be running with every build, that are not running because we only have the infrastructure for getting quick results for the highest priority tests.

If your company, like my company, spends a lot of time recruiting engineers with Teh Awesome Mad Skillz, but they do not budget resources to handle all those awesome automated tests, they are wasting a lot of money and a lot of talent. They would be better off hiring a cheap team of off-shore manual testers to run the tests manually. Otherwise, you have to force those amazing engineers to run tests manually in lieu of writing automation you can’t support in order to ensure that your products are tested. It’s expensive to hire development engineers to run manual tests all day. Plus, it pisses them off and then they leave for jobs where they get to write code.

The other issue that came up is that my tests take a long time to run because we have _TWO_ whole test environments available for an engineering organization of hundreds. TWO. WHOLE. ENVIRONMENTS. And they are unstable because there are too many people using them. Hence, all my tests have to do a lot of checking in order to capture failures related to environment craziness as opposed to real failures so that they can be re-tried until a verifiable true pass or fail can be determined. Other teams are running destructive tests all day long on these environments, so I can make absolutely no assumptions about the availability of test data on the system. Therefore, I have to make each test method search for test data that meets the necessary conditions for the test first.

The end result is that the bottleneck in Selenium grid infrastructure is doubly painful because the terrible test environment situation requires lots of retries and querying that would be unnecessary if each engineer had an isolated environment only they were using. Tests take more time to write because of all the extra overhead of dealing with the environment, so I’m not as productive as I could be. Not that it matters because…. Square One: Inadequate selenium grid infrastructure to run all the tests.

To summarize:

Automation is not magic dust that is gathered by tiny little fairy hands and deposited at your doorstep overnight for free. It takes a lot of planning and infrastructure. So, you like… need to invest in it and stuff. If you don’t, you won’t have good automation. You will have shitty automation that mostly functions as an Automated Smoke Test Teddy Bear that you can cuddle up to at release time for that warm feeling of false security that it gives you after forcing your resentful automation engineers to run manual tests until they are ready to grab the nearest fork and dig out their eyeballs from the sheer boredom of it all.


Why You Don’t Have Good Automation w/ Update on UXD Legos Source Code Packages

Update on the example source code packages for UXD Legos:

I’m almost there! I could not resist the temptation to refactor the code from my presentation to handle weaknesses I was hoping no one would notice at the conference. The design for Accessible/Dismissable components didn’t really address the components which are accessed and dismissed by a combination of hovering and clicking on accessor and dismisser WebElements. That was my secret plan all along — present far too much content and speed through slides full of complicated Java source code to prevent anyone from comprehending it enough to notice that my abstract interaction model for these components was lacking something.

Putting together this example package really highlights just how much I overshot the bounds of a reasonable amount of content for a 45 minute presentation. Not only was there way too much source code, there were also too many different design concepts to cover. I think I could have easily broken this up into three 45 minute presentations. Sorry again, conference-goers who sat through my presentation. We’ve all had those college professors who did this to us, right? Start nice and slow with something simple and digestible for the first 10 minutes, then jam a semester’s worth of content into the next 30 minutes. The good news is that there’s no final exam at the end of this.

The task of providing example source code in other languages is going to be a longer term effort. Some of the design tricks I used are reliant on concepts and features that are really Java-centric, and more specifically, Java 8-centric. It will be an interesting exercise in figuring out how to translate it to C#, Python and Ruby.

Now on to the actual subject of this post.

I am grumpy today. I work for a large, multinational technology company. I came here after working a couple of start-up gigs at small companies with fewer than 25 employees. When I took this job, I was SOOOOOO glad to be joining an organization that I was sure would have resources I could only dream about at the cash-strapped employers I had been working for. It hasn’t been anything like that. In my last job, where the CEO regularly cursed out creditors over the phone for not realizing just _who_ he was when they dared to call him and ask for repayment of funds they should be glad to not be getting from someone as important as he was, I had three monitors. I could have my test plan open on one screen, a JIRA screen open on another and my Eclipse IDE open on a third. All I had to do was ask the IT guy (and there was only one of them, so I knew who to talk to), and it happened. When I wanted a VM, I asked the same IT guy and it was done within a day.

At my current employer, I have been trying to get a second monitor since I started working here 3 years ago. Every time I go through official channels, I am told that it is not company policy to provide a second monitor. Apparently, increasing employee productivity via inexpensive and trivial perks like additional computer monitors is just not one of those things they want to concern themselves with. I am now reduced to thieving the second monitor one of my former team members somehow finagled through unofficial channels. It’s just sitting there at his now-empty desk, left behind when he transferred to another team a few weeks ago. If I were feeling bold, I could take both his monitors and hide one in my file cabinet as a backup in case one of mine died.

When I want a VM, I have to enter a ticket in an impossible-to-navigate system, fill out a long questionnaire and…. wait. For a really long time, particularly if I want a Windows VM. I requested some VMs before the conference because I had these big ambitions of building this amazing Selenium grid with TWO WHOLE NODES during the Selenium Grid workshop on the first day of the conference. I couldn’t install anything on them because the process to grant me sudo privileges failed to work for my VMs. Before the conference, I entered a ticket about this problem in the vain and naive hope that it would be resolved before the conference started.

When the ticket sat untouched for over a week, I resorted to looking for people who might tangentially be related to this part of the IT support staff to pester via email about solving this problem. As the conference grew near, I finally got a response from someone who said that it wasn’t their job to deal with this even though his name is listed on the web page for requesting sudo permissions on lab machines. He forwarded the email to someone else, who solved one problem for me, but not the remaining issue I had with adding a repository to apt-get so I could install Java on my hub VM. It worked perfectly on my two node VMs. My subsequent email inquiries failed to generate a response. Today, I got an email from the help desk system, almost a month after I entered my original ticket complaining that I had no sudo privileges on my VMs. Apparently, the ticket was finally assigned to someone today.

So, this first edition of Why You Don’t Have Good Automation has to do with the demoralizing effect of tedious and silly barriers to the acquisition of reasonable resources to do a good job. If you make your employees feel like they are extras on the set of Candid Camera, Kafka Edition as they engage in seemingly futile efforts to achieve small gains in the face of an absurd and meaningless bureaucracy, you will not have good automation. You probably won’t have any at all.
