
Even More Reasons Why You Don’t Have Good Automation w/ Update on Generic UXD Legos Source Code Packages

Update on the example source code packages for UXD Legos:

I am writing Javadoc for the newly refactored source code. This is quite an undertaking, but one that I feel is important to do. I have also expanded the dynamically configurable options for hover controls. As Oren Rubin kindly pointed out to me after my presentation at the Selenium conference, the JavaScript hover workaround doesn’t always work. Some hover controls use the CSS :hover pseudo-class, which cannot be triggered with JavaScript, so in those situations there is nothing that can be done via Selenium or the JavascriptExecutor to trigger a hover action. He also said that some security protections prevent this workaround from working properly.
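
For anyone who hasn’t seen it, the workaround in question is the familiar trick of dispatching synthetic mouse events through the JavascriptExecutor. A bare-bones sketch of the general idea (placeholder URL and locator, not the UXD Legos code itself) looks something like this:

    import org.openqa.selenium.By;
    import org.openqa.selenium.JavascriptExecutor;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class JsHoverSketch {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://example.com/page-under-test"); // placeholder URL
                WebElement target = driver.findElement(
                        By.cssSelector("#hover-target")); // hypothetical locator

                // Fire synthetic mouse events so JavaScript hover handlers run.
                // This does NOT activate the CSS :hover pseudo-class, which is
                // exactly the limitation described above.
                ((JavascriptExecutor) driver).executeScript(
                    "var el = arguments[0];"
                  + "['mouseenter','mouseover','mousemove'].forEach(function (type) {"
                  + "  el.dispatchEvent(new MouseEvent(type,"
                  + "      {bubbles: true, cancelable: true, view: window}));"
                  + "});",
                    target);
            } finally {
                driver.quit();
            }
        }
    }

The synthetic events reach any JavaScript listeners on the element, but the browser never considers the element hovered, which is why a pure :hover rule stays dormant.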

I found a test case on a public website where the JavaScript workaround doesn’t work for hovering, and I came up with some other ideas for workarounds that would let a tester trigger the side effects that occur when the element is hovered, even when they cannot rely on WebDriver or the JavascriptExecutor. The benefit is that they can still automate tests that depend on those side effects but do not test the result of the hover action itself. Obviously, in environments where the hover action cannot be triggered, the result of the hover has to be tested manually, but downstream functional tests don’t have to be blocked by the inability to trigger the hover in automation.
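
To make that a little more concrete, here is a rough, untested sketch of one workaround in that spirit: if the hover would normally reveal a hidden menu item, force the element visible and carry on with the downstream steps. The locator is whatever the page under test needs, and none of this is the actual configurable option I’m building:

    import org.openqa.selenium.By;
    import org.openqa.selenium.JavascriptExecutor;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;

    public class HoverSideEffectSketch {
        // Force-reveal an element that a CSS :hover would normally make visible,
        // then click it, so downstream functional steps are not blocked by the
        // inability to trigger the hover itself. The locator is supplied by the
        // caller; nothing here is tied to a real page or to the UXD Legos API.
        public static void revealAndClick(WebDriver driver, By hiddenElementLocator) {
            WebElement hidden = driver.findElement(hiddenElementLocator);
            ((JavascriptExecutor) driver).executeScript(
                "arguments[0].style.display = 'block';"
              + "arguments[0].style.visibility = 'visible';",
                hidden);
            hidden.click(); // the hover's own visual result still gets checked manually
        }
    }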

I have a test case, which is probably an uncommon edge case (a rough automation sketch follows the list):

  1. The control is invisible and must be hovered before it can be clicked
  2. Clicking the control expands a menu
  3. The ‘Expanded’ state of the menu is sticky
  4. The menu becomes invisible if the focus leaves the control that expands it
  5. If the focus subsequently returns to the control, the menu will appear without clicking the control because the ‘Expanded’ state is sticky
  6. If the control is then hovered and clicked a second time, the menu is collapsed
  7. If the focus leaves the control and then comes back to it, the menu is not visible
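
Purely to make those steps concrete, here is roughly how the happy path might be driven in an environment where the hover can be triggered at all. Everything in it is a placeholder: the locators are invented, the visible wrapper that reveals the control is my own assumption, the menu is assumed to remain in the DOM when hidden, and the bare asserts just mark what would have to be verified. None of this is the UXD Legos API.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.interactions.Actions;

    public class StickyMenuFlowSketch {
        // Walks the seven steps above in an environment where the hover CAN be
        // triggered. Every locator is an invented placeholder, including the
        // visible container that I am assuming reveals the hidden control on hover.
        public static void exerciseStickyMenu(WebDriver driver) {
            By containerBy = By.cssSelector("#toolbar");      // hypothetical visible wrapper
            By controlBy   = By.cssSelector("#menu-control"); // hypothetical hidden control
            By menuBy      = By.cssSelector("#flyout-menu");  // hypothetical menu
            By elsewhereBy = By.cssSelector("#page-header");  // somewhere else to move focus

            // Steps 1-2: hover to reveal the control, then click it to expand the menu.
            new Actions(driver)
                    .moveToElement(driver.findElement(containerBy))
                    .moveToElement(driver.findElement(controlBy))
                    .click()
                    .perform();
            assert driver.findElement(menuBy).isDisplayed();

            // Steps 3-4: move focus away; the menu hides, but 'Expanded' stays sticky.
            new Actions(driver).moveToElement(driver.findElement(elsewhereBy)).perform();
            assert !driver.findElement(menuBy).isDisplayed();

            // Step 5: return to the control; the menu reappears without another click.
            new Actions(driver)
                    .moveToElement(driver.findElement(containerBy))
                    .moveToElement(driver.findElement(controlBy))
                    .perform();
            assert driver.findElement(menuBy).isDisplayed();

            // Steps 6-7: click again to collapse, leave, come back: no menu this time.
            new Actions(driver).moveToElement(driver.findElement(controlBy)).click().perform();
            new Actions(driver).moveToElement(driver.findElement(elsewhereBy)).perform();
            new Actions(driver)
                    .moveToElement(driver.findElement(containerBy))
                    .moveToElement(driver.findElement(controlBy))
                    .perform();
            assert !driver.findElement(menuBy).isDisplayed();
        }
    }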

This is quite a complicated test case and would definitely require some testing and experimentation to determine what, if any, dynamically configurable options can be used to handle cases where the hover cannot be triggered via Selenium or the JavascriptExecutor. I have some ideas, but unfortunately, I can’t try any of them out at the moment. The only example of this kind of component that I am aware of exists on the product I test. And if you read my previous post, you know about the vast array of test environments which are available to me (and hundreds of other engineers) in my workplace. There is a bug that prevents the page which has this control from loading at all. The estimated time-to-fix is like… 5 days. And because I can’t just spin up an environment with an earlier, working version of the system, my ability to test and develop this part of the API has come to a screeching halt. So has my effort to develop a comprehensive UI automation suite for the brand-new front end of the application I test, because NONE OF THE PAGES WILL LOAD. Anyway, this portion of my post isn’t supposed to be about Why You Don’t Have Good Automation, but unfortunately, the two of them are bleeding into each other this time.

So, what I probably will do is finish the Javadoc for what I have and post the source code. It’s not terribly well-tested in its new, much refactored form, but I will start writing unit tests for it when I start trying to translate it to C#.

And now on to Why You Don’t Have Good Automation:

I am puzzled today. I am puzzled by a phenomenon I have encountered in every QA job I have ever had. EVERY. SINGLE. ONE. The company states without equivocation that they want Good Automation. They acknowledge that manual test execution is costly and slow. They acknowledge that not having Good Automation severely limits the scope and coverage of the testing they can have. They acknowledge that they would benefit enormously from the fast feedback that Good Automation would give them. They acknowledge that the development cycle for their products would become much shorter with this quick feedback. They acknowledge they could quickly catch regressions if they had Good Automation that delivered results within an hour of every new build. They acknowledge that catching regressions quickly in complicated systems would help shape design improvements by surfacing unnecessary coupling between seemingly unrelated parts of the system, because Good Automation would catch regressions triggered in one part by changes in another.

How often have you seen the following in an SQA job post?

Need strong automation developer with at least 8 years of experience in the industry. Master’s degree in Computer Science desired. 5 years of experience in Java/C#/C++/Python/Ruby/Perl/COBOL/FORTRAN or some other object-oriented programming language. Selenium experience in all of the above is highly desired. Responsibilities include mentoring junior engineers, building automation and release engineering infrastructure, and developing comprehensive test plans for an array of products. Job requires fifty percent manual testing.

Whenever I encounter one of these job posts, what I actually see is the following:

Desperately in need of a strong, young and healthy unicorn that never needs to sleep or take a vacation. Must be willing to subsist on a steady diet of dirt and impossible expectations. We have no fucking clue what it is we need or want, so we threw everything we could possibly think of into the list of requirements for this position, including the release engineering function, which is a totally separate role from manual testing and automation development. We want someone who is both an amazing software developer with an expensive and lengthy advanced education that has a first-year dropout rate exceeding fifty percent, and an amazing quality assurance expert with the associated superb communication, writing and analytical skills. We also want someone who is amazing at dealing with the intense and demoralizing office politics that come with working for a company that has the same unreasonable and impossible expectations we have.

We want you to have the same level of magical thinking we engage in because we wholeheartedly believe that we should get all of the skills, experience and talent of three strong professionals for just one salary. We want you to believe as much as we do in the fantasy that we really, truly want good automation, because the only way you will take this job is if you believe things that just aren’t true! And when you find that the fifty percent manual testing we said you would do is actually more like seventy-five percent, we want you to cheerfully and politely accept endless questioning of your abilities and talents, because we want to know where all that good automation is and why it is taking you so long. You must be doing something wrong!

Let’s just consider the problems inherent in expecting that an individual employee should agree to perform two functions with two different skill sets: testing strategy, test planning and designing good test cases on one hand, and developing robust software systems to automate, execute and report results for those test cases on the other. THEY ARE NOT THE SAME THING. These are actually two distinct skill sets. I am not saying they don’t often coexist in the same person. In fact, I think it is not uncommon for a really good test automation expert to have both skill sets, because a lot of the time they started out in manual testing. I do not believe, however, that the path to Good Automation will be found in thinking you can have that person do both jobs at the same time, because they are both really hard jobs to do really well. If you try to hire a single person to do both, one or both of the functions you want them to perform will suffer.

I need to break some really unpleasant news to the modern tech workplace. Multitasking? IT’S BULLSHIT. Computers can have multiple processors. Human beings only have one processor, and the quality of what it can do suffers when it is forced to divide its focus between multiple, competing tasks. Please stop smoking the crack that made you believe humans can perform multiple tasks at the same level of quality and speed, and in the same amount of time, as they could do each task individually.

Now, let’s talk about the meaning of ‘Manual’ testing. I think manual testing has gotten a really bad name. I have noticed a trend lately in job candidates who are applying for positions billed as ‘Developer In Test’. They don’t want to do ‘manual’ testing. They look down on it and feel that it is a lesser function than automation. They see it as less prestigious and less well-compensated. It is, in short, a deterrent to taking the job. It’s sort of like spraying automation-developer repellent all over your position and seeing if you can find the developers with the right genetic makeup to be resistant to it. These job candidates are correct in many of their assumptions: salaries for traditional SQA employees are lower, the jobs are considered less desirable, and it became common sometime in the last 15 years or so for every traditional SQA employee to suddenly want to ‘get out of manual testing.’ Employers have contributed to the stigma of this role by requiring that most if not all of their SQA hires have some automation experience and often a computer science degree.

The problem with all this is that every software company really needs the skills that a talented traditional SQA engineer can bring to the table. The reason you need those employees is that they provide a necessary precondition to Good Automation: good test plans with well-designed test cases. This is not an easy thing to do, and if you find an SQA engineer who is really good at it, you should compensate them highly and treat them like the treasured and important asset that they are. Don’t insult them by acting like their skills are out-of-date artifacts of a bygone era and demanding that they transform themselves into software engineers in order to be considered valuable and desirable as employees. Let me list the skills a really good SQA engineer needs to have:

  1. They need to write well
  2. They need to communicate well
  3. They need to have really good reading comprehension
  4. They need to be able to synthesize a lot of information sources from design specs and requirements documents into test plans with well-written test cases which can be run by someone who is not an expert in the system. THIS IS HARD.
  5. They need to write test cases that can be automated. THIS IS HARD.

Still think this person shouldn’t be treated with the same level of respect as a good software developer? Fine, you don’t deserve that employee, and I hope they leave you and your company in the rearview mirror as they get on the highway out of Asshole Town, where you are the self-elected mayor and village idiot.

I work on a team of 8 people. There is one person on my team who I feel has had the most positive impact on product quality. She just always seems to master her domain no matter what it is she is doing. She builds all the right relationships and somehow manages to extract information out of this crazy and chaotic environment where very little is documented in a testable fashion before it is coded into the products. This person does the traditional SQA function. Her expertise was invaluable in onboarding several of us and I still find myself going to her with questions after working here for three years myself.

Now, let’s get to the subject of what automation developers find so hateful about ‘Manual’ testing that they can’t run from it fast enough. Let’s get the simple reasons out of the way first. Some of them just don’t have the chops to do the up-front work that a good traditional SQA engineer needs to do to pave the way to Good Automation, and they know it. But the more common reason is that ‘Manual’ testing frequently means there is a large, often poorly written manual regression suite that you want to automate, but it’s just so big, tedious and time-consuming that the automation engineers you hired to give you Good Automation don’t have the space and time to actually do any automation. Running the same test cases over and over again, release after release after release, is just awful. It’s awful no matter who you are or what you are good at. It’s the kind of job you should give to a temporary contractor or an offshore company that specializes in providing these kinds of services. It doesn’t make the work any more pleasant, but at least you aren’t paying top dollar to have it done and you aren’t bullshitting anyone about how rewarding it is. Because it’s not rewarding by any stretch of the imagination.

If you are having trouble getting good results with the temporary contractor or outsourcing strategy, let’s circle back around to Step One, which is the person you hired to write that regression suite. Did you perhaps show some poor judgment about the talent and skills necessary to write a good regression suite you can effectively outsource? Because that just might have something to do with why you can’t seem to get satisfactory results from outsourcing it. Don’t treat the traditional SQA function as a lesser function than development. Take care to hire the right people to do it, treat them like the valuable asset that they are, and you will not be disappointed. Don’t make the mistake of thinking that their job is easy. Hire the right people to perform this function, don’t expect them to do a second full-time job, and you will find yourself on the road to Good Automation.
