
Everything I Learned From CAST 2016

And oh boy it's a lot.

Day 0

I didn't do any of the pre-conference workshops, retreats or tutorials this time around, so Monday was all about exploring Vancouver and rehearsing my speech. Thank you SO much to my friends, coworkers and most especially my wife who probably sat through 5? run-throughs of the speech in various versions.

Speakers / Facilitators Meeting

When I signed on to be a speaker I also had the chance to volunteer to facilitate other talks as well. I'm all for helping out, so I figured what the heck. They held a little meeting for all the speakers and facilitators, mostly to go over "Open Season" and "K-Cards". What are these? Well, Open Season is one of the most intimidating names they could have come up with for post-talk questions. As a listener you're supposed to hold your questions until Open Season unless you have a clarifying question (what does that abbreviation mean, what does that term mean, etc.). Open Season is also where facilitation and K-Cards come in.

K-cards were developed as a way of handling the chaos that can erupt when questions get asked in group environments. They're kind of brilliant. The basic idea is that everyone in the audience keeps three cards on them, and uses them to signal not only that they have something to say, but also what kind of thing they have to say.

  • Green - New thread / different question
  • Yellow - I have something to say / contribute to the current thread / question
  • Red - I need to talk RIGHT NOW. This usually gets thrown down when someone in the room is personally referenced. It can also be used to signify things like an audio problem during the talk.

As a facilitator you keep track of the green cards and call on yellows. You facilitate the threads. You also have the power to kill a thread or part of a conversation if you feel it's eating up too much time. This ends up being surprisingly smooth. After a talk you'd see a few green cards go up, the facilitator would write down all their numbers, then call on the first one. That person would talk with the speaker, any yellows would join the thread, and then we'd move along to the next green card.
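
To make the flow concrete, here's a minimal sketch of the facilitation logic as I understood it. This is purely my own illustration in Python; the class, names and behavior are invented for this post, not anything official from AST.

```python
from collections import deque

# My own toy model of K-card facilitation, not an official AST artifact.
# Greens open new threads, yellows join the current thread, reds interrupt.

class Facilitator:
    def __init__(self):
        self.green_queue = deque()   # attendees waiting to open a new thread
        self.current_thread = []     # attendees contributing to the open thread

    def raise_card(self, attendee, color):
        if color == "red":
            # Reds jump the line: personal reference, audio problem, etc.
            print(f"{attendee} speaks immediately (red card)")
        elif color == "green":
            self.green_queue.append(attendee)     # note the number, call later
        elif color == "yellow":
            self.current_thread.append(attendee)  # joins the open thread

    def next_thread(self):
        # The facilitator can also kill a thread that's eating up time.
        self.current_thread.clear()
        if self.green_queue:
            opener = self.green_queue.popleft()
            self.current_thread.append(opener)
            print(f"{opener} opens a new thread (green card)")

session = Facilitator()
session.raise_card("attendee 12", "green")
session.raise_card("attendee 7", "green")
session.next_thread()                       # attendee 12 opens a thread
session.raise_card("attendee 3", "yellow")  # attendee 3 joins that thread
```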

I really loved this, and it was a joy not to hear people ask the terrible "let me ask a question that is all about me feeling smart" conference questions. I actually want to figure out some way of bringing K-cards back to work. Our grooming sessions in particular just become "loudest person wins" and I'd like to find a way to let everyone have their voice heard.

Day 1

Keynote: Nicholas Carr

The first speaker of the day was Nicholas Carr, who gave a fascinating talk on mankind's interactions with, and dependencies on, automated technology. One of the big themes was "automation complacency". This is all about how often we reject our human intuition and assume that the computer is right. It's a computer! He talked about the grounding of the Royal Majesty cruise ship, where despite losing its GPS connection, the GPS continued to tell officers the ship was on course. An officer noticed that they hadn't crossed a crucial buoy, but assumed HE was wrong, because computers, right? The ship ended up running aground and cost millions in rescue and damage fees. This is also relevant in things like autocorrect (especially for me), where we stop paying attention to how we spell, assuming that autocorrect will do the right thing. Next thing you know, you've sent a huge gaffe to your whole company.

He also brought up the case of Pablo Garcia, who, due to a series of human errors with computers and dismissals of warnings, was given 38x his intended dosage of antibiotics. Don't worry, he's ok now. We ignore warnings. We also just gloss over computer interfaces too easily sometimes. This becomes scary when thinking about things like autopilot systems. Turns out pilots only have to do about 5 minutes of manual work in a plane these days. Crazy.

That's why Nicholas talks about something he refers to as "Human Centered Automation". The basic principle is that you shouldn't hand a task over to automation before the human has mastered the subject. In his own slide's words:

  • Automate after mastery (e.g. calculator)
  • Transfer control between computer and operator (e.g. aviation)
  • Allow professional to assess situation before providing algorithmic aid (e.g. radiology)
  • Don't hide feedback / alarms (e.g. Boeing)
  • Allow friction to learning for the human (e.g. videogames)

I'm super curious about Nicholas' work now, and plan on picking up his latest book.

My Talk!

Thanks to Andreas for the great picture.

Oh hey, I gave my first big conference talk! It went really, really well. I should probably write a post on this whole process alone. There was a really great turnout, tons of great questions, and I think I only said um about 700 times.

Keynote: Anne-Marie Charrett

Another great keynote. This one was all about the place of managers in testing. I'm reasonably new to the testing blogosphere, but apparently there have been many pieces about the death of test management over the past few years, and many companies do seem to have done away with Test Managers altogether. Anne-Marie was here to tell us why the various leadership and management roles in test are important.

Her talk was really relevant for me. It's fairly impossible for me not to be a leader. It's just something I've fallen into my whole life - whether it was playing games at school, leading projects in college, or taking charge in my professional career. I only became an official "people manager" a year ago.

One of the themes of Anne-Marie's talk was that "management" doesn't have to be filled by someone with a specific test background, as long as strong test leadership is in place. She quoted John P Kotter a few times (a good summary here):

Managers promote stability, leaders press for change.

She mentioned that test leaders should essentially be coaches for their peers and junior testers, and that every tester should be striving toward lead-like skills, even if they have no desire to be a lead in their career. Every tester should be able to confidently work and communicate for their organization.

What does a tester do? What skills should they be developing?

She also spent a little time speaking to how testers need to be independent, and need to develop their own unique skills, and almost more importantly, need to be able to fail on their own and learn. This is super difficult for me. I always want to swoop in and tell people how to do things exactly like me, tell them how to prevent mistakes, etc. Failing (in safe ways) is important to learning, and to growth. As a leader you need to give them that space, and be ok with the choices they make, even if they don't match your own.

She also mentioned a few good skills to focus on while coaching:

  • Test strategy (with Mind Maps)
  • Risk based testing
  • Analysing a story 
  • Exploratory testing
  • Modelling
  • Oracles

Lots of good stuff for me to explore here. Everyone was all about mind maps at CAST. I use them for life things occasionally, but have rarely used them during testing.

Cooperating in Exercise Judgement and Skill: Requirements

My first attempt at facilitating was at my new friend Julie Lebo's talk. Julie walked us through her experience of joining a university setting that had no testers and no test strategy, and would essentially hand her monolithic projects to verify before they were deployed to satellites. What could go wrong?

Julie's main struggle was determining the requirements of what she was actually testing when things were handed off to her. How is this supposed to work? Is it documented somewhere? Is the documentation right? Is the code right? When they are different, which one wins?

What she had to do was continually push her way earlier into the development cycle: get involved in earlier conversations, help create the requirements, help document correct behaviors. From her own education she knew which types of requirements are best documented in words, in flow charts, in graphics, etc., and she was able to educate the team with that.

"If we don't start on the same page, we're not going to end on the same page" - Julie Lebo

The earlier she got into the development process the more involved she felt in the team, and the more interested in test/quality the team became.

I'm 100% on board with Julie here. She mentioned that as testers we should be testing the initial requirements that our products get built on, in order to get involved as early as possible. I'll take it a step further and say that, if possible, get involved even before requirements are being made, in the idea generation and brainstorming phase. A voice from test can be crucial here in helping course-correct wrong assumptions.

How I Used My "Mindset Toolkit" to Develop a Tester's Mindset

Credit to @AST_News, who had on-point coverage throughout the conference.

Holy crap was this talk great. One of the challenges we face as testers is that it's very, very difficult to quantify what we do, or how we do it.

How on earth did you find that bug!?
I dunno...I kind of did my standard thing, did this, did that and then I thought hmm...maybe I'll try this and then hey! There it was!

Vivien Ibiyemi has come up with a great way of thinking about testing that peels back part of this mystery. She calls them her "mindset tweaks". In my own lingo these are probably "hats". Basically, each is a mindset that you force yourself into during a test run, so that you find bugs by thinking like that mindset. You're emulating / imitating someone else.

Some of my favorites from Vivien's list:

  • User Mindset (average user)
  • Already Tested" (when a dev says there is nothing to test)
  • "Lazy Tester" (super slow, as little work as possible)
  • Analytical (being super nerdy based on code / requirements, etc)
  • Curiosity (what happens if I do this...)
  • Project Phase (what's important / new to this phase)
  • Bug Reporting (find everything)
  • Business (what are the business specific needs?)

Vivien also had some gems of wisdom about dev/test relationships. When you find a ton of bugs in code, make sure that you communicate that you trust the developer. You just don't trust the code. Related is how sometimes you need to befriend developers and casually talk about "all the weird things (you are) seeing!" so that they understand the code is massively buggy, and start a reassessment, rather than you writing 100 bugs against them.

She also talked about how it's our responsibility to not give up on bugs that significantly impact quality, even if we end up being wrong. Being wrong is ok, and is worth the "embarrassment." You're fighting for quality, and sometimes you will be wrong, and that's ok.

Vivien's work here has really gotten me thinking what kinds of mindsets we can be utilizing at my own company as testers, and how we can emulate the behaviors of our coworkers. Some things have come to mind:

  • Brand new user
  • Long time user
  • Everything makes them angry user
  • Your grandma
  • Hasn't upgraded in 5 years
  • Massively slow internet
  • Blind user
  • Super fast user
  • Color blind user
  • Acrobatic user (way too many rotations, flips, etc)
  • Words Per Minute users (way too many touches)

Basically, as testers we drift in and out of these kinds of behaviors all day long, but the idea of sticking with one for a 30 minute run or more is really interesting. I'm curious what will fall out.
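
Just to make that concrete, here's a trivially small sketch of how I might force the commitment: a hypothetical charter picker (my own toy, not anything Vivien presented) that assigns one persona per timeboxed session.

```python
import random

# Hypothetical sketch: pick one mindset ("hat") and stick with it for a
# timeboxed exploratory session. The persona list is mine, not Vivien's.
PERSONAS = [
    "brand new user",
    "long time user",
    "hasn't upgraded in 5 years",
    "massively slow internet",
    "color blind user",
    "acrobatic user (rotations, flips)",
]

def next_charter(minutes=30):
    """Return a session charter committing to a single persona."""
    persona = random.choice(PERSONAS)
    return f"Explore the product for {minutes} minutes as: {persona}"

print(next_charter())
```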

Day 2

Keynote: Sallyann Freudenberg

This was a fascinating talk and a much-needed departure from the standard conference talk on testing. Sallyann focused her keynote on a discussion of neurodiversity, and how it really benefits tech teams in particular.

So what is neurodiversity? It's basically the range of neurological differences that can occur across the human genome. Think autism, depression, anxiety disorders, bipolar disorder, ADHD, Asperger's, etc. Sallyann talked about how these different ranges in neurology can be massive assets to teams, in some cases almost like superpowers. Just like you want a diversity of opinions, and diversity in race, gender, sexual orientation, etc., having neurodiversity gives your team and org particular skillsets and strengths. Oh, and the thing is, you already have this going on in your org.

She focused on how her experiences with raising an autistic son taught her how people of different neurological backgrounds viewed the world around them, how they communicate to others, and how they cope with difficulty.

Some of the highlights:

  • Knowing when to stop talking is an expert skill
  • Teams put together randomly (rather than by putting together the most skilled people) end up performing better
  • The range of autism is massive, but those affected tend to excel in repetitive tasks
  • Be aware of sensory overload in office surroundings (noise, smell, touch, etc)
  • Those with ADHD tend to have terrible self care
  • People with depression are "spookily good" at assessing reality, and have far better empathy
  • Try not to surprise people when possible. Agendas, plans, and foreshadowing help massively.

This ended up being a really deep talk for a lot of attendees, and I can't wait for it to be posted on YouTube. The biggest takeaway here is that you already work with people in this range, and you're already reading a post by someone in that range (long term depression, hi!). Be aware of that. Think of how their brains might be functioning and how you can support that. Some people need visuals. Some need words. Some people want to think on their feet. Some need preparation time.

Alpha Testing as a Catalyst for Organizational Change

Steven Woody's great talk was my second opportunity to facilitate. What Steven wanted to show the audience was how important it is to alpha test your products, not just beta test them. Some people don't even know that alpha testing is a thing, and that's the problem :)

So why should we alpha test? All of the testing that we do in-lab is crucial. We simulate user scenarios. We try to break things in insane ways. We try to ensure that everything is flawless so that users in the real world never experience any issues. But it's not real world testing. And it never will be. And shouldn't be? That's where alpha comes in. Here's what Steven lays out as expectations for his alphas:

  • Field test of the "80 percent solution"
  • Friendly users who were self selected, expecting some issues, and willing to provide feedback
  • Actually using the product on a day to day basis for its intended purpose
  • Feedback report & weekly monitoring
  • By definition an end-to-end systems test
  • Long-running, continuous operational test of the product

This was great to see on screen. One thing that we've struggled with is not our external alphas, but our employee-based ones. I'm thinking there is some optimization of our employee pool we can do based on feedback and usage data.

What alpha does is bring all of the parts of the product together into a real world situation, with real users. Oftentimes in software, teams are hyper-focused on their small piece of the puzzle, and forget to step back to test (and experience) how that puzzle piece fits into the big picture. Alpha is a great place to do this, and a safe place to do this. When all the pieces come together it ends up educating everyone: the siloed teams (about each other's work), the PMs (about the real experience, and product readiness), the testers (about cases they should be running), customer support (on what they should expect in the future), and even other stakeholders outside of the software organization.

Lightning Talks

Credit to Neil Studd, one of the many Lightning Talkers.

The last official session I made it to was a round of lightning talks. For those unfamiliar, lightning talks are usually a group of talks limited to 5 minutes or so where the presenter has limited time (and often limited preparation) to passionately deliver on their topic. Usually greatness occurs.

Some of the highlights:

  • Neil Studd
    • People on the internet are jerks, even in the testing community. We really need to stop that. Stop with the angry internet arguing that goes nowhere and actually engage in progressive, thoughtful debate. Also, don't be rude to each other. We are humans. We are humans with feelings in the same field, for Christ's sake.
  • Karis Van Valin
    • Karis walked us through some basic strategies that she's learned over time about how to approach testing data. I can't remember if it was during Karis' talk or somewhere else at the conference, but somewhere it got mentioned that "good" data and "bad" data should test the same, in that both should be stored properly and not cause issues (there's a little sketch of this idea after this list).
  • Nancy Kelln
    • After years of being right from the beginning, Nancy started keeping a "Prediction Notebook" where she writes down what she knows is going to go wrong, despite people telling her otherwise. Oh man, I need this. Eventually she started writing down the situation, her prediction, the people she tried to convince, the meetings they had, etc. It's basically an "I told you so!" book. It sounds like she might have some fun with it too, and whip it out when the bad thing eventually happens. OMG I predicted this 3 months ago!! I must be psychic! People end up learning that maybe she knows what she's talking about, based on her years of experience...
  • Karlo Smid
    • Karlo talked to us about why UI automation is a bad strategy. Our users are humans. Our automation of the interface should mimic our users. Using computers to run through user interface tests is flawed logic. Users are able to do things / find things that you can't predict and properly account for in UI automation. UI automation can easily pass on things that real users would immediately fail. Your efforts are better spent automating lower level things that computers are skilled at. If you want to focus your human testing, diff what is different between builds / code and assess what should be focused on.
  • Richard Bradshaw
    • It was great to see Richard talk in person. I've been a big fan of his for a while. Richard focused a bit on the phrase "automated testing". This is a bit of a misnomer. No tests are automated; some human at some level is involved somewhere in the chain: writing the tests, ensuring they are run, evaluating the results, etc. The phrase also limits our mindset to the tests themselves being automated. Instead we should move to the phrase "automation in testing". If there is some automated process that helps me as a human test better, we should be chasing that. That could be loading builds, getting screenshots, generating data, etc. Automation in testing.
  • Brendan Connolly
    • We all (for the most part) can read a Stephen King book. We see the words, we can decipher them, we comprehend them, and we have the ability to be moved by them, too. We don't all have the ability to write great novels, and we aren't hard on ourselves about that at all. That's actually kind of expected. Reading is a skill; writing is a different one. Why have we lumped them into being equally valued when it comes to code? This was Brendan's main point. This was really important for me to hear, and I imagine it is for many testers. I am terrible at writing code. I can cobble things together with StackOverflow, duct tape, crazy glue and a little spit, and it'll kinda work. It's been a bit of a shame for me. Reading, though: I can make my way around diffs, understand logic, etc. pretty well, and I know what questions to ask when I don't understand. I can speak roughly the right lingo and talk in pseudo-code. I should keep working on that skill and not feel bad about it.
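
On Karis' good-data/bad-data point, here's roughly how I picture it, as a hypothetical parameterized test. The in-memory store and every name in it are made up for illustration; the point is just that any input, friendly or hostile, should be stored faithfully and never crash anything.

```python
import pytest

# Sketch of "good data and bad data should test the same": every input
# round-trips through storage cleanly. InMemoryUserStore stands in for
# a real persistence layer; all names here are my own inventions.

class InMemoryUserStore:
    def __init__(self):
        self._users = {}

    def save_user(self, name):
        user_id = len(self._users) + 1
        self._users[user_id] = name
        return user_id

    def load_user(self, user_id):
        return self._users[user_id]

@pytest.mark.parametrize("name", [
    "Ada Lovelace",           # "good" data
    "",                       # empty string
    "O'Brien; DROP TABLE--",  # hostile-looking data
    "名前",                    # non-ASCII data
])
def test_data_round_trips_cleanly(name):
    store = InMemoryUserStore()
    user_id = store.save_user(name)           # must not raise
    assert store.load_user(user_id) == name   # stored faithfully
```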


Overall:

Holy crap, this was a great conference. I've been to a few conferences over the years in various industries and almost all of them have been mostly fluff and poor attempts at motivation. They throw some current buzzwords at you, say agile and automation about 500 times, give you some stickers and send you on your way. Not so with CAST. Every session I went to was detailed and full of great information that I wanted to bring back to my organization. All of the people I met were smart, passionate people who are incredibly dedicated to the craft of test. As with many things in life, I immediately made friends on Twitter, and my software testing list is getting pretty long now. I will definitely be back next year (in Nashville!), and this time as a member of AST.

Further Reading: