DoTestersResonateWithModels

See also AssumptionsAboutModeling

Becky Winant's column on StickyMinds, July 15, 2002, titled "Modeling Practice and Requirements," drew surprisingly few feedback responses from the StickyMinds readers. (See
Sticky Minds Weekly Column, July 15, 2002.) --BobLee 2002.07.17

The most productive models I use when testing are models of the mental processes of the developer of the code. For example, if the developer makes a certain mistake in one place, I'm likely to find the same mistake elsewhere, or a mistake that's related to the same sort of misunderstanding. - JerryWeinberg 2002.07.31

Bob, I use playthroughs a lot to discover what works and - especially - what doesn't. One client of mine liked this idea so much they put on "a show" when their customer came to visit for requirements sessions. Imagine an organization of PhDs, chief scientists, and talented engineers meeting with the U.S. Army (their customer). The project manager put on a tux and acted as ringleader to have his team "play out the system" as they understood it. The Army spotted all sorts of things to question. The project team kept "score" of the new information and cheered the client for each score. The experience was one of those common reference points for everyone involved. The project manager said it was the best thing he ever did to improve customer relations. - BeckyWinant 2002.07.30

Shannon - I had a similar situation with a Fidelity home-grown middleware thread-safety bug. I got 3 managers and 3 leads in a conference room with a whiteboard. I drew out 3 competing threads being unsafe on a shared resource - the equivalent code path threaded through 5 DLLs and 12 objects and was very hard to hold in your head. Doing a concurrency model, with each participant "being a thread" and the whiteboard serving as state memory, showed the doubters that "it always worked before..." wasn't adequate in the face of new concurrent market pressure on this weak spot. I found that live whiteboard modeling kept them critically engaged, and it got me the go-ahead to refactor the modules involved. --BobLee 2002.07.25

I was having trouble understanding the specification for a set of processes that we were supposed to test. At some point I realized I could model what happens to the records as finite state machines. From a picture version of the finite state machine, it was obvious that the specification had errors: transitions were allowed that shouldn't be, and transitions that needed to happen weren't specified. The model and picture also allowed better discussion with development about what should happen in the non-obvious cases. My contribution to the discussion was "asking the ugly questions," as Sherry puts it. (That, and explaining finite state machines, finite state diagrams, and how they fit with what we were doing.) After we had a finite state machine that everyone agreed would work, test used it as a guide to testing. - ShannonSeverance 2002.07.25
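To make the state-machine idea concrete, here is a minimal sketch in Python of the kind of transition table a tester might write down. The record states and events below are invented for illustration; none of the names come from Shannon's actual project.

    # Hypothetical record-handling state machine (invented states and events).
    ALLOWED_TRANSITIONS = {
        ("new", "validate"): "validated",
        ("validated", "post"): "posted",
        ("posted", "archive"): "archived",
        # Nothing here for ("validated", "reject") - is that a spec gap?
    }

    def next_state(state, event):
        """Return the next state, or flag a transition the spec never mentions."""
        key = (state, event)
        if key not in ALLOWED_TRANSITIONS:
            raise ValueError("spec gap or illegal transition: %s + %s" % key)
        return ALLOWED_TRANSITIONS[key]

    print(next_state("new", "validate"))  # validated

    # Walking every (state, event) pair turns the picture into a list of
    # "ugly questions" for the analysts and developers:
    states = {s for s, _ in ALLOWED_TRANSITIONS} | set(ALLOWED_TRANSITIONS.values())
    events = {e for _, e in ALLOWED_TRANSITIONS}
    for state in sorted(states):
        for event in sorted(events):
            if (state, event) not in ALLOWED_TRANSITIONS:
                print("What should happen when a %s record gets '%s'?" % (state, event))

Each printed question is exactly the kind of non-obvious case that the picture made discussable with development.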
Shannon, I wasn't involved then either, but I suspect that you are right. Becky, if I couldn't be a tester, I would want to be an analyst. Actually, the company I used to work for called their senior testers "Test Analysts." But testing lets me follow a project from start to finish, and I still get to play with the toys. (Some testers may be strong J's, but every tester has a strong streak of the three-year-old who throws his toys against the wall to see what happens.) Also, I am not required to have technical skills at the level an analyst or designer needs. You, Steve, Bob, Keith, and a few other people have several threads going where the only thing I can read is the title. But if you put me on a project with a good analyst/designer, we can do remarkable things while having a lot of fun. SherryHeinze 2002.07.24

I do not have any direct experience with software development practices from the 1960s. However, every time I read Fred Brooks's The Mythical Man-Month I get the sinking feeling that we have not made much progress. - ShannonSeverance 2002.07.24

Sherry, you would make a great analyst and designer :) All the questions and observations you talk about are so important at the start. I've often thought that testers need to be involved at the beginning, but have never heard of situations where this has been done. "If there was no analysis, no design, no requirements, I guess and ask the developer, if the developer is still available." No comment. I just wonder if people think about what they should really get for whatever time and money they have invested. Listening to responses like yours, I wonder how far commercial software (or other software?) management has progressed from 1960s practices. - BeckyWinant July 24, 2002

There's an open-ended question if I ever heard one. My strongest inclination is to quote Johanna's "It depends" line and run. In my personal ideal world, I really did start on the project the first day, and so did the analyst and the technical architect. That happens here sometimes. When it does, I usually get to go to all the requirements meetings, make notes on everything, especially what needs to be tested, share notes with the analyst and architect, and start asking ugly questions the first week. We find that more perspectives are better. Sometimes I draw models/pictures, but usually I just use the analyst's models and ask about whatever I can't find or follow. I review everything from the proposal on, if I am involved early enough, for clarity, consistency, testability, usability, completeness, and the things that make me nervous for no evident reason. Mostly, I am looking for things that aren't consistent, like a data model that won't allow a situation described in my notes - or your notes - things that are out of scope, things that won't work that way, unclear requirements, etc. Everything makes sense to the person who wrote it; anything is workable to the person who designed it. I have a 4-page document and several slides here that you can have copies of, if you like. If I didn't start until development finished, I don't do any of this. If I start anywhere in between, I do what I can. If there was no analysis, no design, no requirements, I guess and ask the developer, if the developer is still available. SherryHeinze 2002.07.24

Sherry, what approach do you use in testing the design and analysis? - BeckyWinant 7/24/02

This thread grew so fast that it's hard to know what to respond to. Definitions of "tester" vary from project to project. Some projects hire one tester after development is complete. Some projects are tested by developers, analysts, or users already on the project. Some projects start a Test Lead with the Project Manager, the Business Analyst, and the Technical Architect on the first day of the project. I find it hard to generalize for all situations. I don't usually draw models, but I do use any that exist for test planning. If I start on the project early enough, the models are the first thing I test. If only 30% of the errors come from the code, which is the statistic I keep finding, the models and the requirements are much more important to test, and much cheaper to fix, than code will ever be. I don't think all analysts are dolts, any more than all testers are. Testers use abstraction when we plan, if we plan.
Often, we start so late on a project that we don't do much planning. In some companies, it is difficult to get money for more than one tester, for starting before development is complete, for any planning time, or for time for discussion with anyone. In that sort of environment, creating anything that can be shared now or reused later is rare. But the problem is not so much the attitude and skills of the more experienced testers as the attitude of the managers who don't understand what we do. I am starting a new contract next week, and two testing contractors I know are working on a different project at the same company. We are already identifying what we can share, and the manager is encouraging it. Unfortunately, his attitude is unusual. Testers aren't all J's - I'm a fairly strong P. I am also more interested in testing the design and analysis, which is seldom true/false, than the code. Bob, we are indeed seeing teams of testers, more testers, testers sooner, and more communication between testers and other team members, at least in some places. Along with this goes the concept that testers aren't anti-anybody - we are the ones who make the developers look good, and some of them know it. Some of us are also learning not to say "your baby is ugly." SherryHeinze 2002.07.23

Bob, I'm curious. What were your assumptions? - BeckyWinant 2002.07.19

See the AssumptionsAboutModeling page.

Becky, would you like to post that article on the AYE 2002 Articles? I think it touches some interesting preconceived notions - our discussion exposed several assumptions that I hadn't aired before. "Doesn't everybody know that..." Red Flags Waving In The Breeze! --BobLee 2002.07.18

Esther, thanks for shifting us to latest-response-at-top. The challenge in the illustrations is that the difference is in the content. The form or notation - say, Unified Modeling Language (UML) - is the same. Now that I write that, perhaps it would have been helpful to say that a model might be a data flow diagram, an entity relationship diagram, a UML model, or a visual model of those types. I could do examples. In the article I was already over word count. Elisabeth had pointed out that she thought my article seemed to be a model of models, rather than relating it to testing. While my intent was to support a requirements viewpoint - and any flowthrough from there - perhaps I need to consider that what I wrote speaks more to analysts and designers than to testers. Certainly I could discuss how certain models could be useful to a tester, and what limitations apply. - BeckyWinant 7-18-02

I read Becky's article this morning (Thursday). I wonder if the absence of comment is connected to the absence of illustrations in the article. From the discussion below, I gather that testers may be less experienced using visual models than people working in some other aspects of software (e.g., analysis or design). I can imagine that if I've never seen a model of a particular sort, I won't be able to visualize it from reading a very short description of it. I suspect that there are some limitations around illustrations with the medium. Becky, do you have illustrations of the models you describe in your article? (And can we see them?) EstherDerby 071802

Bob: Testers are modeling --> in Artisan models (all in the head, why would someone else need to share?). This is kind of like Jerry's maturity level 0 developer culture: they aren't aware that they're modeling, it's personal, and the egoless thing hasn't happened to these models yet.

Yes. That is how I saw it.
- BeckyWinant 7-18-02

From Bob: All, I think that this is the most inclusive of the three responses in my inbox. Suggestion: shall we transplant to a Wiki page on AYE? I think the threading would get clearer. I'll start a page there and we can cut-and-paste into it.

Thoughts from Elisabeth's response: Testers are modeling --> in Artisan models (all in their heads, why would someone else need to share?). This is kind of like Jerry's maturity level 0 developer culture: they aren't aware that they're modeling, it's personal, and the egoless thing hasn't happened to these models yet. Think of models as one's personal edge in the field and you get discomfort about sharing and disclosing. Moving up the tester maturity model, they become able to share, but they aren't terribly aware in this corner - the literature hasn't awakened to how shared models shorten the way. If the above is so, then they're not comfortable introspecting on how their own modeling plays out, especially since it might map to Jerry's maturity level 0 - not nice to contemplate. I see testers emerging from "lone-wolf tester" [as in "cowboy coder"] into team testing. Management support for anything other than "do your own work" seems spotty, currently. As the team-versus-individual awareness increases, the growing need for cross-team communication points toward sharable models to communicate better. Mind you, the above is all my own cotton-candy thinking, but it might map some of the territory. BobLee 2002.07.17

PS: Diane Gibson described Nynke and me as "a nest of NFs," and I'm a "P" as well. Just imagine the possibilities... ;-))

Original emails follow:

Becky, I'm interested that by Wednesday morning, I'm the only one to respond to your StickyMinds column on types of modeling. [Modeling Practice and Requirements]
I think that precision and abstraction are opposite forces - moving toward or away from details. Any ideas here? Bob 2002.07.17

Bob, thanks for your message.

Bob: I'm not sure that testers use visual models. Models are conceptual - they sacrifice precision for visibility.

Becky: I'm not sure I agree with that, but I do appreciate that that view prevails. Modelers who work at the "high end" of modeling - Kinetic and vanguard - build models with the same sort of precision as software (also an abstraction!). In fact, executable modeling was devised with the notion of creating a higher-level, visual "programming" language. These models capture knowledge - from business and engineering to system principles or functions. I do not think I am very mainstream. When people talk about testable requirements, I sigh and wonder how many times we have to discover something like models (visual, prototype, or simulation) to test our understanding of what someone wants or needs. When I talked to James Bach one time about executable models, he was astounded. He thought that was so cool, and he was totally unaware of what a GOOD requirements analyst could do. (I discovered that some testers believe all analysts are dolts.) I recall in the 70s how everyone treated software developers like anti-social geniuses or freaks. This view persisted as long as it was useful to "hide behind" the label or difference. Testing versus everything-else seems to create a similar cultural difference. Or I might be missing something, sitting here alone in my office. :)

Bob: Testers are precision seekers. What part of their job utilizes abstracting, stepping back?

Becky: Testers, like software developers and modelers, look at operations or behaviors and figure out the pattern. I'd call that abstraction ability.

Bob: I think that precision and abstraction are opposite forces - moving toward or away from details.

Becky: Hmmm. I think of precision and abstraction as complementary. One supports or confirms the other. A model is useless if it isn't anchored in reality. Precision is tedious detail if we can't spot patterns that provide a context. (Or at least that is my opinion.) My view may be jaded by years of drawing and painting. In visual art, abstractions are based on precision. Some abstractions are transformed reality (Picasso, for example). Some abstractions are based on the science of color and form (Mondrian, for instance). Abstraction doesn't pop up out of nowhere. It was a wonderful day when, as a software developer, I recognized the same principles of composition at work. Does this make any sense to you? What do you think? Am I far afield? Your email reminded me to go to StickyMinds and see what's going on! Thanks :)

To: Becky

What I mean is that the presented level of detail differs - testers seek the concrete "fact == {TRUE | FALSE}" level of representation. Models seek to hide irrelevant details to fit more of the whole idea into at least one viewpoint slice {time, dependency, service, threading, subsystem, ...}. A tester's need to seek boolean test conditions shuns generality in favor of the concrete. (I wonder what percentage of testers by choice are MBTI "J"s?) Relentless closers. A metaphor I picked up when I went from Boeing to Aetna: the scientific folks use floating-point numbers to prevent overflow, while the accountants use fixed-point decimals to preserve the exact penny value, even at the risk of unnoticed overflow. Neither could make sense of the other's practices. In engineering, the leading 3-4 digits and the magnitude are important. Slide rules were only good for 2-3 digits max.
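(Aside: a minimal Python sketch of the two habits Bob describes. The figures are invented; float stands in for the scientific side, Decimal for the accountants.)

    # Invented figures: floating point keeps the magnitude and leading digits,
    # fixed-point decimal keeps the exact penny value.
    from decimal import Decimal

    total_float = sum(0.10 for _ in range(3))             # scientific habit
    total_fixed = sum(Decimal("0.10") for _ in range(3))  # accounting habit

    print(total_float)  # 0.30000000000000004 - right magnitude, inexact pennies
    print(total_fixed)  # 0.30 - exact to the penny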
Modeling supports magnitude visibility; code details support precise-digits verification. I think those are the mental viewpoints involved.

Bob: I think that precision and abstraction are opposite forces - moving toward or away from details.

Becky: Hmmm. I think of precision and abstraction as complementary. One supports or confirms the other. A model is useless if it isn't anchored in reality. Precision is tedious detail if we can't spot patterns that provide a context. (Or at least that is my opinion.)

I think we're on a different page here: abstraction (in the D. L. Parnas sense) is about information hiding and generalization. Precision is the exactness that can be read from the artifact. While precision does go into abstraction in modeling, what comes out shows less readable precision. (My viewpoint - your mileage may vary. ;-) ) Bob 2002.07.17

Bob, Elisabeth and I are planning a chapter on models as used by testers in our new book, The Psychology of Software Testing. I think Elisabeth would like to be in on this conversation. (I haven't yet read the original article, but apparently we need to refer to it.) JerryWeinberg 2002.07.17

To: Jerry, Becky, Elisabeth: I thought that the experience might be noteworthy. Could someone poll STQE's readers about the usefulness of models? Rate each of Becky's categories of models against the needs of readers, and also capture the job/role of each responder: Tester, Mgr. of Testing, Developer, Analyst, Mgr., Project Mgr. ... Interesting area - I hadn't thought about it until I noticed the [non]behavior.

From Elisabeth Hendrickson:

Hi all, I'm delighted to be included in this conversation. It's funny how the universe converges sometimes. I just finished an article on modelling the software under test for STQE titled "A Picture's Worth 1000 Words." I'm told it will be in the September issue. The article came on the heels of two Los Altos Workshop on Software Testing meetings: LAWST 12 was on modelling the software under test, and Sim-LAWST 1 was on UML for testers. And, of course, Jerry and I spent a fair bit of time talking about models last week. So I have (at least) two cents I'd like to chip in here.

Personally, I've been using a variety of models to figure out how to test for years. Mostly I use data flow diagrams, entity relationship diagrams, flow charts, state charts, and architectural diagrams (by which I mean block diagrams of the pieces and parts of the system, like UML deployment diagrams). In all these cases, I rely on pictures, or visual abstractions, as models. I haven't used executable models, but now that Becky's mentioned it, I can imagine how such a thing could be extremely useful.

I've learned that not all testers model explicitly. In the last 10 years, I've worked with and interviewed hundreds of testers. Some testers grokked what I was doing with pictures immediately. Others appreciated it but had no idea how they might apply it or construct pictures themselves. Still others had no idea what I was doing drawing all these pictures when there was testing to be done. (Of course, I should note I've gotten the same continuum of reactions from developers, managers, etc. I think it's a human thing, not a tester thing.) In the last year, I started teaching testers to model in my workshops. I've had mostly positive results. I can recall only a couple of workshop participants who just didn't get it. And even they seemed to have a better appreciation for modelling by the end of our time together.
Like Bob, I've been surprised by how many of the participants in my workshops have never drawn a picture of their systems. However, I see signs that interest in modelling is on the rise. Harry Robinson has had a site and email list dedicated to model-based testing for a while. The OMG recently announced a testing profile extension to the latest draft of the UML spec (see http://www.omg.org/cgi-bin/doc?ad/2002-04-03). Also, I've been reading Whittaker's recent book, _How to Break Software_, in which he talks about fault models.

Recent discussions with a QA Director colleague (Dave Liebreich, for those of you who know him through SHAPE) have been quite illuminating as well. He's been coaxing his staff to articulate the mental models they have about the software they're testing. He reported that the conversations have been going something like: "So, if I told you we had to test X in a very short period of time, what would you test first?" "Oh, that's easy. I'd test the floobitz configuration, because if anything is gonna break, that is." "OK, so what does that tell you about how the software handles configurations and, in particular, the floobitz configuration?" It's taking a while, and a lot of prodding and questioning, but he's making progress with them. They aren't drawing pictures like I am accustomed to doing. Rather, they're articulating dependencies in a way that will allow them to design workable test strategies for quick-turnaround projects. By the way, Dave has also written about models in testing. See: (I'm especially fond of his "Rube Goldbergian" models.)

After talking with Dave, talking with Jerry, and thinking about this a lot more, I'm now convinced that all testers form mental models of the software they're working with, even if they don't draw pictures. How else can a tester predict what the software will do and where the bugs will be? Elisabeth 2002.07.17 P.S. For the record, I'm a tester and a "J." :-)

Elisabeth: Personally, I've been using a variety of models to figure out how to test for years. Mostly I use data flow diagrams, entity relationship diagrams, flow charts, state charts, and architectural diagrams (by which I mean block diagrams of the pieces and parts of the system, like UML deployment diagrams).

Becky: I think that I might be about to learn something very useful about models. I have also used these and other models for years. What is really clear to me is that a model used at the start - before you have a product - will be different from a model employed once you have a product and need to test it.

Elisabeth: I haven't used executable models, but now that Becky's mentioned it, I can imagine how such a thing could be extremely useful.

Becky: My personal experience has been that models which have been carefully constructed (whether for execution or not) are the ones which will prove useful later down the road. That being said, beware that "model" is a term used to describe a variety of communication experiences - all valid, but not all testable.

Elisabeth: Still others had no idea what I was doing drawing all these pictures when there was testing to be done.

Becky: I suspect that Bob's insight explains why this is so.

Elisabeth: However, I see signs that interest in modelling is on the rise.

Becky: I'm curious what these signs are and where you have found them. Can you expand on this?

Elisabeth: Harry Robinson has had a site and email list dedicated to model-based testing.

Becky: Thanks for this reference! If you wish to know more about executable modeling and that philosophy, look at http://www.projtech.com and http://www.kc.com.
Kennedy Carter (kc.com) has an article referenced on Harry Robinson's site. I also like the "Rube Goldbergian" models described on Dave's site, though I admit to some perplexity: the positive description seems to conflict with a name suggesting "bizarre complexity." (I must have missed something.)

Elisabeth: After talking with Dave, talking with Jerry, and thinking about this a lot more, I'm now convinced that all testers form mental models of the software they're working with even if they don't draw pictures. How else can a tester predict what the software will do and where the bugs will be?

Becky: Sure! They have to have a mental model. The modeler's question is, "Could they predict more if they used sophisticated models to help organize their mental concepts?" Looking forward to more discussion. Becky 2002.07.17

Much later: I think that's the gist of the email. I tried to unscramble the responses-to-responses. Rearrange as necessary! --BobLee 2002.07.18

WhyPeopleRespondToArticles