Tag: testing

Treading Water Is Too Much Work

I honestly meant to post about the daily insanity of my workplace about a dozen times over the last two months. The only problem was that I was working all the time. Something amazing happens every time I step off the Endless Manual Testing Train To Perdition. I always have a giant pile of technical debt to address and I have to work overtime for weeks to catch up on the deferred maintenance. The current task on the pile is implementing an interface for a REST API to query the customer configuration management application for configurations meeting necessary criteria for automated tests. I didn’t want to use the UI to search for these configurations because the test systems are still a steaming pile of crap. Using the UI introduces a ton of new opportunities for their horrible performance to blow a test execution run to smithereens. Plus, these tests already take long enough to run that any means I have at my disposal to optimize the execution time should be put to work.

The API belongs to an ecosystem of APIs the company developed for programmatically interacting with our services and systems. There is a tool available in The Chosen Framework v2.0 for executing requests against these APIs, but the design is not to my liking. Given the political shit storm that always surrounds any attempt to alter the design or function of any of the services in The Chosen Framework v2.0 and the total lack of staffing to support and maintain the framework, I decided to secretly write my own tool. There is an overly complex authentication system for using any of these APIs, so I had to build out my model for the access control group and user admin business domain to include the credentials needed for authentication. I have a penchant for wanting all my test data to live outside my test suite source code, so everything needed to be serializable/deserializable from both XML and JSON because who am I to dictate which storage format a person should use in order to employ my tools for their own purposes?

One thing led to another, and suddenly, I had 6 different repositories for tools I have written to model parts of the business domain, as well as services I built on top of those models to automate interactions with the products I test. Not to mention that the number is set to grow because I have plans for additional services for even more automation goodness. In short, this collection of tools and services has become a total beast. It is really an automation framework in its own right. I finally managed to get a Stash project dedicated to them, so I moved everything there. As soon as I did this, I started to get nervous about getting caught for not using the services in The Chosen Framework v2.0. QA Director has been on the warpath against people who don’t want to use the services in The Chosen Framework v2.0. I believe that the failed effort to shove it down the throats of other business units in the company is a sour and somewhat embarrassing memory for him, especially because several people on our team have turned their noses up at various tools in the framework. I can only imagine what the reaction will be if I get caught building a rogue framework.

I confessed to my boss today that I was guilty of coding without a license. He asked me why I didn’t use the existing tool and I explained my reasons, and he said I should just add my tool to The Chosen Framework v2.0. This meant I had to explain that this would mean adding other tools to it, including the business objects and page objects that I had been asked to remove last year. He seemed confused and said he would address the issue in Some Very Important Discussions taking place next week among QA Director, the other managers and Principal SDET regarding the future of The Chosen Framework v2.0. I don’t think he realizes yet what a clusterfuck this whole thing has been. I would prefer to have nothing to do with The Chosen Framework v2.0 and continue building out my secret framework, but I know this is never going to happen.

The summer was a busy affair. It started out with a massive fail by the new organization that has taken over support for development tools and infrastructure. They officially took over maintenance of Stash in June and immediately buggered the system the first time they did routine maintenance on it. For some reason, the most recent backup available was a month old. Thankfully, I had the most recent version of all my work saved locally, but some teams lost work. I am very happy to see that we are getting something that looks a lot like a real IT support staff, but these handoffs are a bitch. At the very least, before doing anything to the source control system, one should make a backup and then make a couple copies of that backup and store each of them in different locations. They handled the recovery process well, though, with good communication and organization of the effort to glue Humpty Dumpty together again.

As part of my job duties over the summer, I helped Junior SDET Who Should Be Senior mentor our summer intern from MIT. This was one of the most pleasant and enjoyable experiences I’ve had. It’s a lovely experience to work with one of the smart ones who learns quickly and is able to extrapolate abstract principles from a few basic examples and work independently. I highly recommend hiring MIT students as interns. I think a lot of them could easily drop out of school and start working as developers or even start their own companies and be wildly successful.

QA Director has been on everyone’s case about work from home policies and has lately taken up the cause of monitoring what we are doing on our computers at any given time. At one point, he asked Junior SDET Who Should Be Senior to talk to our intern about not playing games or watching any videos while he is in the office. QA Director claimed that ‘someone’ had complained to him about it, but I wouldn’t put it past him to lie about that so that he doesn’t come off as the ‘bad guy’ with a bad case of Hall Monitor Syndrome. I personally think that people who are productive and hit their milestones should be left alone, but I’m not the boss. Right after this, he claimed that ‘someone’ complained about the whole QA team being absent from the office on the first Friday after a giant release date. This was, first and foremost, incorrect because I was in the office that day. I’m just too short for anyone to see my head over the top of my cube. Second, several people took vacation after that horrific release cycle, one of us couldn’t work that day because their new H1B visa wouldn’t be in effect until the following Monday, another of us was out sick, and the rest were working from home on Friday because that was the day they had chosen for their one-day-a-week allowance.

This, of course, ignited a tense and difficult team meeting in which many of us expressed anger and resentment because developers seem to do whatever they want when they want and we are the only team that ever gets called to account for our whereabouts. QA Director became very defensive and sort of pissed at us, and then he said he was going to tell the person who complained to mind their own business. Which is what he should have done in the first place. There’s nothing more demoralizing than a boss or director who won’t stand up for their team. Our team manager seems to be getting really tired of all this and has said he is not going to get into the habit of taking attendance because we are all grown adults and he would prefer to focus on the work and whether it is getting done or not. Although I am really pissed that my former boss was fired, I like the new manager a lot. He’s focused on the work and getting the developers to work together with us as a team, which is nice.

I found a senior developer role elsewhere in the company that is with one of the business units that is turning a profit. I have an informational interview with the hiring manager next week. Hopefully, this will be the right fit because it would be really great if I could continue working with the company in a part of it that is actually making some money. I really like the technology here and working with a core piece of it would be a great opportunity for me to gain some new experience and skills. I am still working on acquiring the knowledge and skills to get into data science or machine learning. I underestimated the amount of time that would take. There’s a lot of math. SO MUCH MATH. I built an Anki flashcard deck for the MIT discrete math class I studied through MIT OCW. After finishing up the book and the lectures, I started using it to study the material. For the first two weeks, it felt like it wasn’t doing anything for me, but now I have started to see cards which are easy for me to answer. The other benefit is that there are many proofs that I ‘know’, but understand more fully as a result of repeatedly reviewing them. Our summer intern is taking the class this fall, so I shared the deck publicly, and it appears to have taken off like wildfire because it has been downloaded 41 times so far. Hopefully, Anki will catch on at MIT and students will make decks for their other classes and post them publicly so I can use them myself.

I also took on the task of identifying the root cause of some TestNG defects related to parallel test execution. These defects have been making it impossible to implement the ability to log my tests to a separate log file by thread without advanced jujitsu configuration and hacking in Logback. If the tests would just freaking execute in the right thread in the expected fashion, this would be trivial, but all these random extra threads get spawned because a new thread worker is inappropriately spawned when you have test method dependencies. TestNG works as expected when test methods don’t depend on each other. Actually, group and test method dependencies are notoriously buggy when trying to use the various parallel execution modes offered by TestNG. It was quite a ride debugging that code base, but I found the bug in the source code and now I am trying to figure out how to fix it. It seems to be difficult to get a pull request accepted in TestNG (for good reason), but I want this bug fixed so bad, I am willing to jump through whatever hoops they want.
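For the record, the standard Logback route to per-thread log files is a SiftingAppender keyed on an MDC value. This is a minimal sketch, assuming a TestNG listener populates a `testThread` MDC key (e.g. `MDC.put("testThread", Thread.currentThread().getName())`) at the start of each test method; the key name and file layout are my own inventions:

```xml
<!-- logback.xml sketch: one log file per distinct MDC value.
     Assumes a TestNG listener sets MDC key "testThread" per test thread. -->
<configuration>
  <appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
    <discriminator class="ch.qos.logback.classic.sift.MDCBasedDiscriminator">
      <key>testThread</key>
      <defaultValue>main</defaultValue>
    </discriminator>
    <sift>
      <!-- A fresh FileAppender is instantiated for each new discriminator value -->
      <appender name="FILE-${testThread}" class="ch.qos.logback.core.FileAppender">
        <file>logs/${testThread}.log</file>
        <encoder>
          <pattern>%d %-5level %logger{36} - %msg%n</pattern>
        </encoder>
      </appender>
    </sift>
  </appender>
  <root level="INFO">
    <appender-ref ref="SIFT" />
  </root>
</configuration>
```

Of course, this whole scheme only works cleanly if each test method actually runs on the worker thread it is supposed to, which is exactly what the dependency bug breaks.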

Basically, I have a lot of irons in the fire all at once. Hopefully, switching to a role in a profitable part of the company will put me in a less over-burdened role. I am sick of working on an absurdly understaffed team with people in positions of technical leadership who shouldn’t be there because they can’t code or understand basic software architecture design principles. Also, I really want to work for an organization that doesn’t treat me or my team like a bunch of middle schoolers who need a hall pass and a reason to be absent for a bathroom break.


Preparing for the Coding Interview and Other Tales

Today, I am playing the long game. The one that involves intensive and thorough preparation to get a job where I will have complex and difficult challenges to enjoy. Challenges that involve writing code and not long, involved test cases that take a day to design and debug before I can move on to the next test case that is going to take a day to design, debug and write. I contacted a recruiter who specializes in L33t Coding Jobs and he said I would have to get comfortable writing code on a whiteboard. I am not totally certain of the value of these whiteboard coding exercises over just giving the candidate a computer and pair programming through a bite-size problem with them, but I am getting with the program anyway. If this is the hazing ritual I must endure to obtain my dream job, then I will endure it.

My chosen guide for preparation is the one that everyone uses — Cracking the Coding Interview. I snagged a 5th edition copy and started working through the first two chapters. There’s one thing I can say for solving programming puzzles with paper and pencil. It’s slow. Okay, I can say some other things too. It does force you to know by heart the core APIs of your language of choice. I got very familiar with the String and Character methods for handling the entire Unicode character set while solving the problems in the chapter about strings and arrays. Most of the problems are actually pretty decent at working my CS student brain back into shape. Some of the problems are just silly and require you to do something that is dumb, and the solution they want you to implement is one that wouldn’t pass code review in a decent company. Like — delete a node from a list given only access to that node. The answer is “Copy the data from the next node in the list and delete that node.” Of course, this doesn’t work if the node you are given access to is the last node in the list. Normally, I would look at this and think, “Either this linked list API is shit and needs to be re-written because this doesn’t fucking work or the person who wrote this clearly doesn’t have a clue how to use a linked list API.”
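For reference, here is the trick in question as a small, self-contained Java sketch (class and method names are mine, not the book’s), including the tail-node case where it falls apart:

```java
// The CtCI "delete a node given only a reference to that node" trick.
// It copies the successor's data and unlinks the successor -- which means
// it cannot delete the tail, the exact flaw complained about above.
public class DeleteMiddleNode {
    static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    static boolean deleteGivenNode(Node node) {
        if (node == null || node.next == null) {
            return false; // tail (or null): the trick simply doesn't work here
        }
        node.data = node.next.data;   // overwrite with the successor's payload
        node.next = node.next.next;   // unlink the successor
        return true;
    }

    public static void main(String[] args) {
        Node head = new Node(1);
        head.next = new Node(2);
        head.next.next = new Node(3);

        deleteGivenNode(head.next); // delete the "2" node in place
        System.out.println(head.data + " -> " + head.next.data); // 1 -> 3
        System.out.println(deleteGivenNode(head.next)); // false: now the tail
    }
}
```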

Given that there are 150 problems to solve in the book, my interview preparation is going to take a couple months. It is worth the effort to escape the manual testing monster that is slowly devouring all my joy. Hopefully, they will replace me with a traditional QA type, so the other SDETs can focus on their automation when I leave. The downside to this is that I haven’t had any spare time to work on Brixen, which is making me have a major sad right now. My next step is to write an EnhancedPageFactory class and some annotation classes so that declaring one of the component wrappers in a page object basically works the same way as declaring a WebElement with a FindBy annotation. Of course, specifying all the data for the component wrapper is a lot more involved than a simple locator for a WebElement, but I figured that could be handled with a configuration file which is declared by the annotation. The intent is to make all that messiness with having to build the component in your page object go away and become completely invisible.
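To make the idea concrete, here is a minimal sketch of what such a factory could look like. Every name here is hypothetical — this is not Brixen’s real API, just the @FindBy-style pattern applied to component wrappers, with the config path passed through instead of actually deserialized:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

// Hypothetical sketch: a field-level annotation points at a config file,
// and a factory reflectively builds the component wrapper, the way
// Selenium's PageFactory populates @FindBy-annotated WebElements.
public class EnhancedPageFactoryDemo {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface ComponentSpec {
        String config(); // path to a JSON/XML spec describing the component
    }

    // Stand-in for a real component wrapper (e.g. a dropdown menu wrapper)
    static class DropDownMenu {
        final String configPath;
        DropDownMenu(String configPath) { this.configPath = configPath; }
    }

    static class EnhancedPageFactory {
        static void initComponents(Object page) throws Exception {
            for (Field f : page.getClass().getDeclaredFields()) {
                ComponentSpec spec = f.getAnnotation(ComponentSpec.class);
                if (spec == null) continue;
                f.setAccessible(true);
                // A real implementation would deserialize spec.config() and
                // assemble the wrapper; here the path is just passed through.
                f.set(page, f.getType()
                             .getDeclaredConstructor(String.class)
                             .newInstance(spec.config()));
            }
        }
    }

    // Declaring a component now looks like declaring a @FindBy WebElement
    static class LoginPage {
        @ComponentSpec(config = "specs/language-menu.json")
        DropDownMenu languageMenu;
    }

    public static void main(String[] args) throws Exception {
        LoginPage page = new LoginPage();
        EnhancedPageFactory.initComponents(page);
        System.out.println(page.languageMenu.configPath); // specs/language-menu.json
    }
}
```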

The other thing that bogged me down with the Brixen work was discovering just how painful and difficult C# is when you are trying to assert equality between collections. By default, C# uses reference equality, which is totally ridiculous in my opinion. Who the hell thought it made sense not to compare the objects they contain for equality? This really pisses me off. There is an interesting library that does this for C#, but I ran into problems comparing values like 64-bit integers and 32-bit integers. This library doesn’t have any mechanism for upcasting the 32-bit value and then doing the comparison. It just returns false. I realized this when the library I was using to deserialize data from JSON was taking values like integers and returning them as the type with the largest possible values. So, if I serialize an object with a 32-bit field and then deserialize it, I get a 64-bit value for that field. VERY ANNOYING.

QA Director has become a mite touchy on the subjects of under-staffing and our unstable test environments. Apparently, the California team is even more under-staffed than we are. This is supposed to make us less unhappy that we can’t accomplish our objectives without working nights and weekends, and the logical fallacy of this line of reasoning is totally lost on most people. Be happy about suffering unreasonable expectations because other people are being crushed harder by expectations that are even crazier? Of course, we just made an offer to a manager candidate to replace my boss, who was fired two months ago as the sacrificial lamb on the altar of blame for our lack of success, instead of hiring more people who would actually do some of the work. Unfortunately, this candidate seems to suffer from the same delusion that it’s better to have a team of people who are doing both the traditional QA work and the automation. This does not bode well.

In case it isn’t clear how little the SQA function is valued, I’ll share a small anecdote about our upcoming monster release. There was a feature that was supposed to be ready by code freeze. It totally wasn’t ready by then, but we said we’d give it the old college try and test it if they could get it done within a few days. A few days later, the engineering team responsible for writing it said it was all done and ready to test. The person testing this feature started the testing and found that none of the REST endpoints were reachable. The engineering team is in India, so there followed a back-and-forth dialog between her and the engineering team with 12-hour delays in between communications. It was working for them, so she must be doing something wrong. As the release date grew closer, the testing could not proceed because the feature just wasn’t accessible to the test team. A meeting was called to make the call about this feature, which went like this:

“So, can we release this feature?” the product managers asked earnestly.
“No, we haven’t tested it,” answered the QA team.
“Great! Let’s release this feature then,” replied the product managers.
“We said no. We weren’t able to test it,” interjected the QA team.
“We all know that when QA says no they mean yes,” laughed the product managers. “So, we’re good to go.”
“No. We are not good to go. The feature hasn’t been tested at all,” replied the QA team. “Not even one little bit.”
“Why are you being so difficult? I thought you said it worked,” barked the product managers.
“We didn’t say that it worked. The developers said it was working on their machines,” said the QA team who are now texting their favorite recruiters on their phones under the table.
“The customers really want this feature,” whined the product managers.
“We understand that. They probably also want it to work too,” said the QA team.
“Fine. Let’s talk about this again in a few days. Maybe the customers can test it for us.”

This happens all the time.


Wherein I Give Up Due To Fatigue and Not Giving a Shit Anymore

I am exhausted today. I have been consumed by a tide of work that a team twice our size would not be able to competently manage. It started with a monster new feature that cannot be quickly tested via our functional UI regression suite. It requires real end-to-end verification. Normally, we only need to create configurations that exercise a feature in the web application for configuration management. For this feature, we have to verify that the configuration is applied correctly once it goes active. This involved a journey of discovery that highlighted the insane lack of coordination and communication between our particular organization and the rest of the company at large.

First, we did not know how to do this end-to-end testing because we have never done it before. Second, we were under the mistaken impression that there was no apparatus for doing this kind of testing in our two test environments. We believed this because we have been told repeatedly, since we were hired, that there was no such thing. The entire development team and their managers also believed this to be true. A co-worker and I began trying to master this unknown art of end-to-end testing by contacting the team which develops and tests that part of the network. During this first meeting, in which the two of us probably sounded as intelligent as brain-damaged turkeys riding the short bus to school, we discovered that both of our test environments did in fact have the networks for doing this end-to-end testing. Not only that, they have had them for FIFTEEN YEARS. It’s impossible to understand how our entire organization could share the delusion that this part of the system was not present in either test environment for so long. I still don’t have an explanation for it. The only thing I can say is that inter-group and inter-department communication is abominable.

We also discovered that another business unit was nearly done building a system which allows them to go to a website, fill out a simple data entry form and press a button which spins up test VMs on demand that function as a nice little sandbox environment for testing this part of the end-to-end pipeline in isolation. It is well on its way to being part of a continuous deployment testing apparatus where there is little to no human intervention required to spin it up and run tests. I am astonished that my entire organization seemed to be under the delusion that this was an impossible pipe dream. The sheer embarrassment of talking to one team after another about their testing, which is so much better than ours, their systems knowledge, which is so much more comprehensive than ours, and their test automation, which is so much better than ours, has broken something in me. It didn’t help that I calculated that it was going to take a month to design, write and execute these test cases. A month in which I would not be writing any code.

Also weighing on me is the task of writing all the functional test cases for the entire UI. There is a hiring freeze, and getting new requisitions requires an act of god. We have two principal-level employees who want nothing to do with this task. And who would? It’s tedious and unrewarding work to write functional UI test cases. Since I have been tagged as the person who couldn’t get any automation done for three years, I figured it was better to bite the bullet and be the SQA lead even if I am not getting paid to do that role. I’m not even getting paid enough to do my current role. This decision was the genesis for an unpleasant discovery regarding my company’s IT support infrastructure, which is that there really isn’t one. You’d think that a company with 6000 employees would have gotten around to hiring a 24/7 team to support and administer things like the build system, the source code control system and the defect tracking and project management systems. But, sadly, this is not true.

Our JIRA installation started out as a rebel application running off the director of engineering’s desktop computer because a cadre of new employees refused to use Bugzilla. Naturally, because there was no formal introduction of JIRA as a tool, there were no knowledgeable professionals who controlled the adoption. The result is an explosion of project templates with a ton of totally unnecessary custom fields and overly complicated workflows. An organization full of supposedly amazing engineers should be able to design a system that cleanly separates the work of product design, development, testing and customer support, but what we got was a system full of templates and workflows that was designed by a drunken squirrel on acid. Eventually, it got too big to be a rebel application running on the director of engineering’s desktop computer. It did not get big enough, however, to merit a 24/7 dedicated support team. It had gotten big enough to have a part-time support team who were all doing other jobs before they got stuck with the responsibility for managing and administering this monstrosity.

This is where I entered the picture with my quixotic notions of quickly configuring three projects — one for the APIs I wrote for our test automation framework, one for the domain models I have written using the APIs, and one for the QA team to use for defining and managing work related to testing the customer configuration application. I had dreams of using these three projects to define and manage the workload of writing all the formal test cases for the application and the catalog of work that must be done to automate them. This seemingly simple task has been going on since early February. I am in the final stages of getting the correct workflow defined for the Zephyr test tickets. This customized workflow has been the most painful part of the process. For everything else, I was able to use the default, out-of-the-box Jira workflow and fields or the agile-style workflow and fields that the team building the new front end uses. The Zephyr test cases were another matter. I wanted a workflow that would accommodate test case design and development for manual tests and automated tests.

I wanted a design phase followed by a review phase. Then, if the test case is to be automated, there is a development phase followed by a code review phase, followed by a testing phase. If manual, the ticket is ‘complete’ after review. If an automated test case successfully gets through testing, it is ‘complete’. I also wanted a workflow that could accommodate sending the ticket back for re-work if review, code review or testing revealed issues, as well as a workflow that could handle flagging a test case for needed updates because of changes in the behavior of the application. Granted, this is not a simple workflow design, and it didn’t help that I did not initially understand the JIRA terminology or means of representing a workflow. The other problem was that the number of custom issue statuses and fields had completely gotten out of control during the wild west period for our JIRA installation. Apparently, this can cause serious performance degradation in JIRA, so they were no longer allowing any new statuses to be defined. I was thankfully able to choose good statuses from the list of existing ones. The fact that there are about four different statuses signifying that something is ready for development or testing just shows how important it is for a company to get out in front of the problem of building IT support infrastructure instead of chasing after the cowboys who just get tired of waiting for a modern tool chain.

In the midst of this mess, I also took on the task of implementing localization support for my page object library. Our California team requested this feature before Christmas. There are eleven possible languages that a user can select when browsing our customer portal. It took 3 weeks of late evenings and working weekends to pull off locale-agnostic page objects. I also chose to do a major refactor of the DSL API I had written because it badly needed to be simplified and scaled down. I think the system works well. Now all the page objects refer to enumerated constants which represent these localized identities. There is some under-the-hood tooling for mapping locale-sensitive identifiers to the enumerated constants. It was the best I could do given the short time frame for building it. It was a huge task that burned me completely out.
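A stripped-down sketch of the pattern, with hypothetical names and only two locales instead of eleven: the page object touches only the enum constant, and a lookup table owns every locale-sensitive string:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of locale-agnostic page objects: page objects refer
// only to enum constants, and a lookup table resolves them to whatever
// locale-sensitive text the UI actually renders. Names and strings are
// illustrative, not the real library's API.
public class LocalizedIdentityDemo {

    enum PortalLabel { SIGN_IN, SIGN_OUT }

    static final Map<String, Map<PortalLabel, String>> LABELS = new HashMap<>();
    static {
        LABELS.put("en", Map.of(PortalLabel.SIGN_IN, "Sign In",
                                PortalLabel.SIGN_OUT, "Sign Out"));
        LABELS.put("fr", Map.of(PortalLabel.SIGN_IN, "Connexion",
                                PortalLabel.SIGN_OUT, "Déconnexion"));
    }

    // The page object calls this instead of hard-coding any display text;
    // unknown locales fall back to English.
    static String resolve(String locale, PortalLabel label) {
        return LABELS.getOrDefault(locale, LABELS.get("en")).get(label);
    }

    public static void main(String[] args) {
        System.out.println(resolve("fr", PortalLabel.SIGN_IN)); // Connexion
        System.out.println(resolve("xx", PortalLabel.SIGN_IN)); // Sign In
    }
}
```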

So, I have been looking at this job, and I have come to the conclusion that nothing is going to change. I have been complaining about the fact that the traditional SQA work and the automation work need to be separate roles since I joined this company nearly four years ago. Nothing has changed. I’m still stuck with work I am not interested in doing and having to burn the candle at both ends to do the work I actually want to do. So, in short, I am looking for a new job as we speak. I finally realized that this company, which is supposedly full of amazing engineers, is not actually interested in applying solid engineering principles to the test automation. They aren’t even interested in applying solid SQA practices because they severely under-resource the function to the point of absurdity. I keep thinking back to my first real SQA job with a medical device company. The size and scope of the application was smaller, yet we had two managers, four people on the backend and six people on the front end. That’s half again the size of my current team. When I think about this, I realize how totally delusional my company is about the work burden and the complexity of not only testing these applications, but also building robust test automation for them.


Some More Reasons Why You Don’t Have Good Automation

It has been a while since our last edition of Why You Don’t Have Good Automation. I am hopeful today. On Monday, I did a presentation on the reasoning and purpose behind my detailed labeling of test cases. The gist of the presentation is that placing emphasis on end-to-end testing as if it were the only way to do automated testing was going to lead us to a limited return on the investment we put into writing all that sweet, sweet automation infrastructure. Once I replaced the older smoke test suite, I was happy to see the dev manager and my boss go all gooey-eyed over it, but I am bothered by the false sense of security that it gives. I am not saying that it is useless, but it has limited potential.

I am disillusioned where end-to-end testing is concerned. It is hideously difficult to build everything you need to implement the plumbing to even get to the point where you can do one end-to-end test. I have been through a couple iterations of this nightmare. The first time was during the Frameworks Wars, when my co-worker and I toiled mightily to automate a suite of forty or so manual tests. There was just so much to do, and we were slammed over and over again every few weeks with an avalanche of manual testing work for the application. We were also new to the company and not all that familiar with the surrounding, flaky systems that our application interacts with for each of these end-to-end scenarios. Combine that with our total lack of organized training and some very unreasonable expectations, and we were set up to fail and fail hard.

At the time, both my co-worker and I were operating under the same limited perspective as everyone else. End-to-end testing was where it was at, and if you weren’t doing that, you were not doing Good Automation. So, we coded and coded, all the while being asked, “Where is all that sweet, sweet good automation?” We actually built a fairly extensive library of page objects which later became the basis of the current page object library, but because we didn’t have the plumbing to get all the way from A to Z, it amounted to nothing in the eyes of the non-coding people around us. We made some mistakes, such as thinking we could not throw our code out there unless it was ‘done’ and leaving the results reporting to the end. We should have had a parade up and down the hallways once a week, given tech talks and otherwise tooted our horns relentlessly. Our unpolished code was better than most of the test code I had seen up to that point. If we had a simple HTML report with green and red pass and fail indicators, I think the non-coding cadre would have believed we were actually doing something.

Our biggest mistake, however, was trying to do end-to-end testing first. If we had broken those long end-to-end test cases up into smaller, atomic test cases that could be strung together in end-to-end scenarios or run as standalone tests, we would have been able to report each week that we had automated twenty-five new test cases. It’s very easy to automate a test that verifies that a dialog opens when you click its accessor. It’s easy to automate a test that verifies that the title on the dialog is correct. It’s easy to automate a test that verifies that the dialog is dismissed if you close, cancel or submit it. Since our application had a gazillion dialogs, we could have written an entire test suite for verifying the basic behavior of dialogs. It’s not easy to verify that a given type of user can log in, access the appropriate accounts, create a configuration, fill out a complicated data entry form, save their configuration and then make it active. There are just so many interactions with the UI along the way, the chances that some flaky thing will happen to short-circuit the test are too high. Maybe one of the dialogs gets stuck, or a page gets stuck and doesn’t load. Given the sorry state of our test environments, this kind of problem arises every other test run, especially during periods of active and intense development. I won’t go into the details, but one of the systems that the application must interact with at the end of the long process for making a configuration go live is the bane of my existence. Imagine what it is like to kick off a test run that has a lower bound on it of ninety minutes to go from start to finish and it fails at the VERY. LAST. STEP. when that final sub-system in the whole pipeline chokes because of a common environmental issue.

The advantage of writing lots and lots of small tests that get into your app quick, verify a few small things and get out quick is that the chain of interactions with the UI is so much shorter. There are fewer opportunities for an unstable test environment to screw up your test results. And if a test does fail? WHO CARES. You just retry it a few times to see if it passes on a subsequent attempt. It’s easy to retry because this isolated little test lacks all those dependencies that a step in a fifty-step end-to-end scenario would have. And why do you have to verify everything in one end-to-end chain? Why is it not sufficient to find a way to verify all the little steps in isolation? I’m not saying that you shouldn’t have some end-to-end test automation. There is a place for this kind of test automation. It’s just not terribly comprehensive. I know that sounds crazy. It’s an end-to-end test. By definition, that’s comprehensive, right? Nope. Nope. Nope. It’s anything but comprehensive. It represents one, very narrow and defined pathway through your system. If you want a comprehensive test suite and you want it to deliver results sometime in the next decade, you need a test suite that is composed of thousands of atomic little tests that are scattered across your system’s functionality.

This brings me back to the beginning: why all these labels on the test cases? First, they are a way of categorizing all the tests so that they are easy to search for. When you write a test suite of atomic tests, you will end up with hundreds, if not thousands, of tests. Hence, you need to be able to easily find test cases. Second, these labels make it possible to dynamically specify at commit time which tests the developer would like to run. The end-to-end tests, given their limited coverage, long running time, and relative instability, are not helpful for this scenario. The developer wants relevant, fast feedback. We also don’t have a giant Selenium grid at our fingertips, so we have to be conservative about how we use these resources. If there are eight commits in a day, the suite of end-to-end tests would overwhelm our resources very easily.
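Label-driven selection doesn’t need anything fancy. A toy sketch in plain Python, with every name made up — in a real suite this job is done by something like pytest markers (`-m "dialogs and not slow"`) or the Zephyr labels themselves:

```python
# Toy label registry: tag tests with labels, then pick a subset at commit time.

REGISTRY = []  # list of (test function, set of labels)

def labels(*tags):
    """Decorator that registers a test under one or more labels."""
    def decorator(test_fn):
        REGISTRY.append((test_fn, set(tags)))
        return test_fn
    return decorator

def select(*wanted):
    """Return the registered tests carrying every requested label."""
    wanted = set(wanted)
    return [fn for fn, tags in REGISTRY if wanted <= tags]

@labels("dialogs", "smoke")
def test_dialog_opens():
    pass

@labels("dialogs")
def test_dialog_title():
    pass

@labels("end-to-end")
def test_full_configuration_flow():
    pass

# A developer committing dialog code runs only the fast, relevant subset
# instead of burning the whole grid on the end-to-end suite:
fast_feedback = select("dialogs")
```

The selection expression is the thing the commit hook or CI job supplies; the labels on the test cases are what make that expression possible in the first place.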

So, in short, if you are a regular worshipper at the idol of the end-to-end test automation god, you won’t have good automation. This god is a fickle liar who tucks you into bed at night with his minion, the Smoke Test Teddy Bear. Smoke Test Teddy Bear whispers sweet nothings into your ear, telling you if the end-to-end test suite passes, all is good. Don’t worry about the thousands of other possible user interactions with your application that it _didn’t_ cover. They are inconsequential seeds of evil doubt that the faithful believers must banish from their thoughts. Shut the closet door on those demons!

Leading an Elephant to Water and Convincing Him To Drink It

Large corporate organizations move very slowly, and if you are a creative type who hates to coast along on ‘good enough’, you are probably frustrated and demotivated a lot of the time. The modern corporate workplace is a graveyard of good ideas that were quietly drowned in a washtub behind the woodshed. To survive this emotionally draining dependence on comfortable mediocrity, you have to be _really_ stubborn. I’ve been thinking over the origins of my greatest frustrations with my employer, and most of it boils down to not being heard and having a lot of ideas shot down or otherwise smothered in a flood of boring, soul-sucking manual regression testing that crowds out all joy from the job. Sometimes, it seems that even just exploring or trying out a concept is offensive and upsetting to someone and therefore, It. Must. Be. Stopped.

In the interest of personal development and achieving greater success, I have to own my own part in this. My communication skills have been really lacking, and my messaging has been unfocused and too difficult to understand. I have also come to the conclusion that it is best to toil away privately on some ideas until you have something that can serve as a powerful demo before you reveal it to anyone. As part of my effort to resolve this conflict over the presence of lots of labels on Zephyr test cases in Jira, I actually developed a PowerPoint presentation to cover the problem statement, the challenges and the proposed solution. I feel like a little Agile warrior right now. I’m even thinking that this presentation could work as another conference talk, and I’m kind of excited about proposing it for some other conference in the future.

I saw a really great presentation by a developer from Uber at the Selenium conference in Portland last fall that summed up the kind of frustration that has generated my exit from every job I left that wasn’t due to a layoff or the sudden bankruptcy of the company. She said that most companies exist in the middling, unambitious area where there is no motivation or desire to do more than mediocre QA, but there are organizations where QA is a first-order function and the tools, processes and infrastructure are elite, top-notch stuff. Those companies are where the future of testing is.

So, I could quit my job and go work for those companies, but the only problem with that is that I want to _be_ a trendsetter who brings that to a company that doesn’t already have it. I don’t want to show up late to the party after the waves of innovation have already passed through. Problem is, you can hardly be a trendsetter in the most common reality, where you work for a large corporation that is in the mediocre middle, because the scale of the task is so enormous that no one person can do it. This isn’t one of those lone-cowboy projects where one single heroic person toils away to solve the problem and emerges with a shiny, cool solution to the adoring cheers of their co-workers. This kind of paradigm shift takes coordination and cooperation across the organization, and it tends to piss off a lot of people because it makes them uncomfortable, insecure and afraid. Lately, I have been having trouble getting buy-in just from my own little team.

So, I have to sharpen my communication skills. I have to learn more patience and I have to get my temper and frustration under control. And I need to start engaging a lot more with other people outside my own insular team.
