Forum for Academic Software Engineering
Volume 3, Number 8, Fri Dec 3 13:27:37 CST 1993 (FASE # 18)

Topics:
  Solicitation of authors for IEEE Press monograph on object-oriented simulation
  Re: Using CASE Tools
  CASE Study/SE Coursework
  Re: Getting students to plan for testing (several articles)

A-------------------------------------------------------
From: George Zobrist
Subject: Solicitation of authors for IEEE Press monograph on object-oriented simulation

+++++++++++ CHAPTER AUTHORS SOLICITED +++++++++++++++++++++++++++++++++
+++++++++++ IEEE PRESS MONOGRAPH ++++++++++++++++++++++++++++++++++++++
+++++++++++ EDITOR: G. W. ZOBRIST, UNIVERSITY OF MISSOURI-ROLLA +++++++
+++++++++++ J. V. LEONARD, CONSULTANT +++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

IF YOU HAVE AN INTEREST IN AND THE EXPERTISE TO WRITE AN INTRODUCTORY CHAPTER ON OBJECT ORIENTED SIMULATION, PLEASE CONTACT G. W. ZOBRIST. THE MATERIAL SHOULD COVER THE BASIC CONCEPTS OF OBJECT ORIENTED SIMULATION AND BE A LEAD-IN TO OTHER CHAPTERS ON APPLICATIONS, LANGUAGES, MODELING TECHNIQUES, ETC.

IF INTERESTED, SEND A TITLE, AUTHORS, ADDRESSES, EMAIL, PHONE, AND A ONE- TO TWO-PAGE ABSTRACT/DETAILED OUTLINE TO:

DR. GEORGE W. ZOBRIST
DEPARTMENT OF COMPUTER SCIENCE
UNIVERSITY OF MISSOURI-ROLLA
ROLLA, MO 65401
PHONE: 314-341-4492
FAX: 314-341-4501
EMAIL: C2816@UMRVMB.UMR.EDU

CHAPTERS IN FINAL FORM ARE NEEDED BY FEBRUARY 1, 1994; THE MONOGRAPH IS TO BE PUBLISHED BY IEEE PRESS, NEW JERSEY. LENGTH: ABOUT 50 PAGES (8 1/2 BY 11, DOUBLE SPACED).

A-------------------------------------------------------
From: kwc@elden.cse.nau.edu (Ken Collier)
Subject: Re: Using CASE Tools

The following appeared in Volume 3, Number 7 (FASE # 17):

>From: "David Budgen"
>Subject: Using CASE Tools
>
>We have something of a debate going on at present about whether or not it is
>useful (and financially viable) to use CASE tools with both undergraduate and
>postgraduate courses in software analysis and design. (So I am talking here
>of `upper-CASE' tools, possibly used to draw DFDs, ERDs etc.)
>
>To summarise as fairly as I can:
>
>  Camp A argues that:
>  * students should be exposed to CASE tools as a part of their
>    learning experience, and should be encouraged to make use of them;
>  * ideally, these CASE tools should be fairly state of the art, and
>    should be representative of those that they might later encounter
>    in industry.
>
>  Camp B argues that:
>  * students should be learning about the principles of analysis and
>    design, and while it may be useful to have CASE tools as diagram
>    drawing aids, the added baggage that real CASE tools carry with them
>    (in terms of configuration control etc) impedes the learning process;
>  * it takes too long for students to learn to use most commercial CASE
>    tools, and the return is not commensurate with the effort expended;
>  * industrial-standard CASE tools tend to be expensive to licence, and
>    we risk being tied to a vendor.

First, my position: I have a foot firmly planted in each of camps A and B. While I agree that students should primarily be learning about SE principles, we have found it useful for them to be able to talk intelligently about the state of the art in CASE. Moreover, it is useful for students to see for themselves that CASE is NOT a "silver bullet".

At Northern Arizona University we have put a good deal of effort into the development of a software engineering laboratory that is intended to provide the students with an exposure to the current state of the art in software development environments (or at least our perception of the SOTA).
We have a network of Sun Sparc 10's for the students to use. Software availability includes:

  IDE's Software Through Pictures Family of Tools (Analysis/Design)
  Cadre's Teamwork Tools (Analysis/Design)
  Centerline's ObjectCenter (C++ Graphical Development Environment)
  Rational Apex (Ada Graphical Development Environment)
  Interleaf (Documentation)
  DevGuide (GUI Building Tool)

We are working on getting some project management tools and some testing tools as well.

I used these tools in two undergraduate SE courses this Fall. The first course is a Junior level course in Software Analysis and Design. The students developed specifications documents and architectural design documents using Software Through Pictures, Teamwork and Interleaf. The second course is a classical Senior project course in which teams of 5 carry a project through all phases of a complete development cycle. In this course students are programming in either C++ or Ada (their choice), and all documents from Requirements through Testing are required to be completed using the available tools.

I did not take very much time during class to teach the tools, because I wanted to devote our class discussions to principles rather than tools. So I provided a very brief introduction to the capabilities of each tool, how to launch the tool, and where to find the manual sets. Then I turned them loose to learn on their own. I suspect there were very mixed feelings about this approach. (See below.)

This is the first semester that the lab has been fully functional, so I'm still working out the bugs in the idea of integrating CASE into the teaching of principles and methodologies. Furthermore, I am still learning each of the tools myself, which has been a bit of a drawback. However, I can give you some preliminary thoughts and observations.

1. Students are frustrated with the learning curve required of full-blown CASE tools. Although I can give them an introduction, the real learning comes from using the tools in the generation of deliverable artifacts. This is a time consuming activity on a project which is, in itself, time consuming.

2. Students are merely scratching the surface of the power that these tools contain. This is to be expected, since one generally only discovers powerful features of any application by using and reusing the application. One of the features of both Teamwork and StP is automatic code generation. Of course we are only talking about templates for C++ classes or Ada Packages, but I fully expected student enthusiasm about this ability. As it turns out, they are unenthused. In fact, they are unwilling to explore this feature. I believe that this is due to the added stress of learning something new when there is so little time left in the semester.

3. Students grouse about the user interfaces of the tools. I can't say that I blame them. Interleaf has one of the most nonintuitive interfaces I have seen. StP has a reasonably decent albeit nonstandard UI, whose complexity is compounded by the fact that the tool provides so many services.

Well, I've probably made it sound like the approach has been a dismal failure. At this point in the semester the students are about as stressed as can be. If you ask them how they feel, they'll probably tell you that we should just scrap CASE altogether. However, my feeling is that as the development and usage of the SE lab evolves, the SE students will be introduced to the tools more gradually, alleviating some of the pains and possibly encouraging more in-depth exploration.
Our plan for the future is the gradual introduction of these tools into other programming courses. Already, my CS1 class is using the StP structure chart editor to create decomposition diagrams of their project designs. We hope to have students in all programming classes using bits and pieces of the tools in (ideally painless) ways so that they will be fully familiar with the UI and basic capabilities of the tools. This will allow the SE courses to stress the integration between tools as well as the configuration management capabilities and other impressive, but as yet unused, features of the software. We'll see how it works.

Oh yes, the cost of the tools. Cadre and IDE both donated all of their tools plus annual maintenance. I had to practice a bit of my rusty salesmanship to make this happen. We purchased ObjectCenter. Rational recently announced a very progressive plan for donating their Apex environment to universities (I have more info if you are interested). Interleaf was purchased for us by Honeywell as part of another project that we are involved in. Other tools come as part of the basic Sun software package. The hardware was purchased through an equipment grant that is aimed at electronic design automation. We just piggy-backed on that grant.

Workstation software is ridiculously overpriced. If you aren't bent on your students developing software in a Unix environment (which we are), then a PC environment would be much more affordable, and there is a lot of instructional CASE stuff available.

Hope there is something in all of this that is useful.

--
Ken Collier                              Internet: kwc@elden.cse.nau.edu
College of Engineering and Technology    Phone: 602-523-5412
Northern Arizona University              Fax: 602-523-2300
Flagstaff, Arizona 86011-1560

A-------------------------------------------------------
From: "Donald Day (Syracuse University)"
Subject: CASE Study/SE Coursework

I'm currently involved in two projects which relate to this group. First, I am collecting data for a multi-year study of user responses to constraints implemented in CASE tools. I'm looking for respondents to a questionnaire. Target subjects include CASE tool users, CASE tool developers, and experienced application developers who do not use CASE tools (a control group). If anyone in this group is interested in completing a questionnaire, I'd like them to contact me at D01DAYXX@SUVM.ACS.SYR.EDU.

Secondly, I'm developing a series of software engineering courses, including object-oriented methods and formal specification (among others). I'd like to be included in distribution for the group, so I can gather citations and comments in the areas of these courses. Thanks.

+--------------------------------------------------------------------+
| Donald L. Day                   D01DAYXX@SUVM.ACS.SYR.EDU           |
| IST, 4-282 Ctr for Sci & Tech   Donald_Day.chi@xerox.com            |
| Syracuse University             1-315-443-5611 or -2911 (voice)     |
| Syracuse, NY 13244-4100 USA     1-315-443-5806 (fax)                |
| [user responses to constraints in computerized design tools]        |
+--------------------------------------------------------------------+

A-------------------------------------------------------
[ED: The remaining articles are responses to the last issue's article "Getting students to plan for testing," together with selected responses to the same posting on comp.edu and comp.software-eng.]

From: Brian Marick
Subject: Getting students to plan for testing

kpierce@andre.d.umn.edu (Keith Pierce) writes:
>It seems to me that what's missing is any test planning. What should we
>test for?
>What cases are likely to find flaws? What is the anticipated output of a
>test case? How do we verify that actual output matches anticipated
>output? Is the coverage complete?

I don't wish to toot my own horn, but these questions are exactly those that I address in my forthcoming book, _The Craft of Software Testing_. If all goes well, it should be available for the fall semester. I can have Prentice-Hall send you an evaluation copy, if you like.

For an overview, see cs.uiuc.edu:pub/testing/subsystem.ps (anonymous FTP). You'll see some differences between system, integration, and unit test; the reasons are given. The book doesn't cover system test.

My tool GCT has been used for coverage analysis in testing classes. (Details in the same location - see GCT.README.)

>Any views on implementing independent test teams within a student
>project?

I would argue against it. The trend in industry seems to be for developers to test class-project-sized chunks of code. Independent test teams are reserved for system testing and larger-scale integration testing. There are, I think, sound reasons for that distinction. An independent test team would be artificial, so probably not educational, and would deprive some of the students of true experience with testing.

Brian Marick, Testing Foundations
Testing consulting, training, and support
marick@testing.com for business, marick@cs.uiuc.edu for recreation

A-------------------------------------------------------
Subject: Re: FASE V3 N7
From: Lorraine Johnston

Mynatt has a reasonable section on testing - better than any of the other texts we've seen. In fact our students usually comment on how useful Mynatt is, particularly wrt doing a SE (team) project. We tend to use Sommerville, Pressman and Mynatt, but Mynatt seems to address issues at the right level for our undergraduates. The same may not be true for a graduate class.

Lorraine
--------
Lorraine Johnston                            Phone : +61 3 344 9107
Lecturer - Dept of Computer Science          Fax   : +61 3 348 1184
University of Melbourne, Parkville 3052      email : ljj@cs.mu.oz.au
AUSTRALIA

A-------------------------------------------------------
From: ajs@Euclid.DnE.WVNET.EDU (Anthony Schaeffer)
Subject: Re: Getting students to plan for testing

As part of an NSF-sponsored workshop on software engineering, I designed a testing exercise that I am using in our first programming course. The students fight it, but ... We gave a panel talk at the SIGCSE meeting in Indy last winter (Jan or Feb). I can send you what I have done. Each time I use it, I learn more about how to use it.

Tony Schaeffer

A-------------------------------------------------------
From: judy@basser.cs.su.OZ.AU (Judy Kay)
Subject: teaching testing

I have been running a CS2 course that requires students to complete a challenging project. I ran it this year for the first time, and at least the testing aspects went very well. What I did was to require students to plan, because they had to hand in:

  a user manual            - week 2
  a design spec            - week 3
  their test strategy      - week 4
  prototype working system - week 7
  fully functional system  - week 13

and the marking scheme made it clear that correctness would be assessed exclusively on the basis of the output of a demo script that they provided, which was to:

  - clearly present each test case;
  - clearly state the purpose of each test;
  - present the output in such a way that it was clear whether or not it
    was what was expected.
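
[ED: For readers who want a concrete picture of such a demo script, here is a minimal, hypothetical sketch of a test driver in the style Judy describes. The function under test (word_count) and the cases are invented for illustration; they are not from her course.]

    // demo_script.cpp - minimal test driver: each case states its
    // purpose, the expected result, the actual result, and whether
    // the two match.
    #include <iostream>
    #include <sstream>
    #include <string>

    // Hypothetical function under test: counts whitespace-separated words.
    int word_count(const std::string& s) {
        std::istringstream in(s);
        std::string w;
        int n = 0;
        while (in >> w) ++n;
        return n;
    }

    // Run one test case and report it in demo-script form.
    void run_case(const std::string& purpose, const std::string& input,
                  int expected) {
        int actual = word_count(input);
        std::cout << "TEST: " << purpose << "\n"
                  << "  input:    \"" << input << "\"\n"
                  << "  expected: " << expected << "\n"
                  << "  actual:   " << actual << "  "
                  << (actual == expected ? "[as expected]" : "[NOT as expected]")
                  << "\n";
    }

    int main() {
        run_case("empty input yields zero words", "", 0);
        run_case("single word", "hello", 1);
        run_case("multiple spaces between words", "a   b", 2);
        return 0;
    }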
A number of my students actually handed in tests that demonstrated weaknesses (and bugs) in their programs - and I was really very pleased at that, both because it was often evidence of very thorough testing and because they clearly trusted me not to penalise them for this, compared to omitting such a test. Most students felt a commitment to either stay with their week 4 tests or to explain why they didn't.

Hope this is useful.

A-------------------------------------------------------
From: tcubed@mcs.com (James Hanlon)
Subject: your comp.sw-eng post

Keith: Sorry for the confusion; new OS, new editor, mailer confused. Ignore abrupt messages.

: I just finished grading team projects produced in our senior software
: engineering class, and, once again, I'm not happy with the products.
: Once again, despite my constant protests in class, most teams skimped on
: specifications and design, and rushed to coding to "get something
: running".

I think this is unavoidable. Seeing stuff run is so much fun that folks just naturally gravitate towards that aspect of the job. Another factor is that structure is simply not perceived to the extent that action is: many literally can't see it at all, much less value it.

: I post this to ask for help in getting students to view testing more
: importantly. The conventional attitude of students is "lets run it a few
: times to see if it works." When it works a few times, they think they
: are done, and hand in the product.

Amateurs notoriously underestimate combinational complexity and its impact on executing programs. Even people who should know better act as if programs have a few inputs that occur at strictly predictable times. My least favorite "root cause" of software defects is "Developer failed to consider all possible combinations of events and program states" -- it could just as easily be "Management failed to allocate infinite budget to exercise the cross-product of events and states".

My supervisor, when I was at Bell Labs, got frustrated by the lack of coupling between requirements definition and requirements verification. [Different bureaucracies were in charge of definition, implementation, and verification -- and you wondered why your phone bill was so high...] His solution was to define a single position, Requirements/Test Engineer, that both:

  1. judged the verifiability of proposed requirements, and
  2. verified them (personally, in the lab).

I was an RTE for a time, and I found it to be an effective combination of abstract principles and manual labor. If there is any moral to be gained, I'd say: it takes a stakeholder to care. Conclusion: make your students stakeholders.

: It seems to me that what's missing is any test planning. What should we
: test for? What cases are likely to find flaws? What is the anticipated
: output of a test case? How do we verify that actual output matches
: anticipated output? Is the coverage complete?

Testing is largely concerned with verifying either time-constrained behavior or text transformations (input-process-output). Planning therefore starts with enumeration of events, states, and actions (in the first case) or static characterization of textual relationships (in the second). Very quickly, one sees the value of finite state automata and relational algebra in the specification of system behavior.
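
[ED: Hanlon's point about finite state automata can be made concrete: if behavior is specified as an FSM, a test planner can enumerate every (state, event) pair and demand an expected outcome for each, before any code exists. A minimal hypothetical sketch; the machine (a turnstile) and its events are invented for illustration.]

    // fsm_plan.cpp - enumerate every (state, event) pair of a tiny FSM
    // as a test-planning checklist: each pair must have a planned
    // expected next state before implementation testing begins.
    #include <iostream>

    enum State { LOCKED, UNLOCKED, NUM_STATES };
    enum Event { COIN, PUSH, NUM_EVENTS };

    const char* state_name[] = { "LOCKED", "UNLOCKED" };
    const char* event_name[] = { "COIN", "PUSH" };

    // Specified behavior of a turnstile: the transition table *is* the spec.
    State next[NUM_STATES][NUM_EVENTS] = {
        /* LOCKED   */ { UNLOCKED, LOCKED },  // coin unlocks; push is refused
        /* UNLOCKED */ { UNLOCKED, LOCKED }   // push admits one person, relocks
    };

    int main() {
        // The cross-product of states and events is the minimum test plan:
        // one test case per cell, each with a stated expected outcome.
        for (int s = 0; s < NUM_STATES; ++s)
            for (int e = 0; e < NUM_EVENTS; ++e)
                std::cout << "case: in " << state_name[s]
                          << ", on " << event_name[e]
                          << ", expect " << state_name[next[s][e]] << "\n";
        return 0;
    }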
: Test planning ought to start at the specification phase with acceptance
: and system test plans, continue through architectural design with
: integration test plans, and on to detailed design with unit test plans.

General principle: have no unverifiable expectations.

: There ought to be tremendous educational value in requiring test planning
: of students. In particular, early test planning can clarify
: specifications, interfaces, etc. Next quarter, I will require test
: planning in the course project.

: Where can I find guidance for my students in going about test planning?

A famous paper by Gourlay, "A Mathematical Framework for the Investigation of Testing" (IEEE TSE, Nov. 83), lays out the somewhat discouraging details.

: Standard texts don't seem to help. Our present text, by Schach, never
: mentions test planning even once. Pressman includes a section on test
: planning in his outline for a specification document, and says it's
: important, but then never mentions it again. Beizer, in "System Testing
: and Quality Assurance", is silent on planning, and seems to imply that
: serious thinking about testing only begins after implementation (this is
: inferred from a very quick scan of the text. My apologies if I'm
: wrong).

: Any views on implementing independent test teams within a student
: project?

In industry, independent test teams succumb to management's quest to quantify all human activity, and drift into a numbers game. This forces them into a code-cop role, and generates nothing except bad feelings and lots of graphs. My prejudice would be to spare innocent youngsters this experience. More helpful would be to introduce them to the notion of formality in all areas of software practice -- then they won't have to spend so much time verifying. There is no book on the topic of cost-effective formality for practicing software engineers, and, as near as I can tell, no interest in publishing one (the issues addressed are "too abstract").

: Any help would be appreciated. I'll summarize responses to this
: newsgroup.

Hope this helps,

Jim Hanlon
tcubed@ddsw1.mcs.com

A-------------------------------------------------------
From: tama@misty.anasazi.com
Subject: Re: Getting students to plan for testing

Why don't you require a Unit Test Plan, with documented test cases and expected output? You could require this when they turn in their design. You could use many models as the basis for this plan, but if you want them to learn more about Quality Assurance, have them use the IEEE model for QA Test Plans (you could modify it to include or emphasize whatever you feel is appropriate). Then, when they turn in the completed product, they would have to turn in the actual test output along with the documentation that proves that they fully implemented their unit test plan.

Just a suggestion, and I understand the frustrations. I get the same output from professional software engineers. :-) Gotta meet that schedule and get something running. There really has to be a balance...

--
Tama S. McBride, CQA    W: 602/861-7679           Don't mistake the end of
tama@anasazi.com        Anasazi/FDC, Phoenix AZ   an illusion for the start
tama@world.std.com      Personal Account          of a crisis.  - Weinberg

A-------------------------------------------------------
From: mark@hubcap.clemson.edu (Mark Smotherman)
Subject: Re: Getting students to plan for testing

I wholeheartedly agree with your posting. I teach an OS implementation course where we build a micro-kernel on a PC. I stress testing as part of any "implementation", but I still get comments on my course evaluations like "This course shouldn't be about testing!"

My current approach is to have weekly team meetings where a team comes in, demos the software, and presents a written "Testing Strategy". For early milestones, I give a list of path testing considerations and a general handout. I also discuss the usefulness of an automated stress tester. As the semester progresses, I change over from giving paths to giving some test cases, and I ask them to deduce the paths tested by the given cases and to identify other important paths. They should list all identified paths in their testing strategy document and also develop other test cases that exercise the additional paths.

Their response has been less than wonderful; most teams bring in only the specified test cases (and don't even do path-exercised analysis, much less identify other paths). I try to stress that I prefer a well-developed testing strategy with lots of identified _but untested_ paths over several ad hoc test codes. I tell them that such a use of their limited hours would at least awaken them to the potential latent bugs remaining in their software, and guide later debugging when they are building on top of the early milestone codes and experience problems.
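
[ED: To illustrate the path-deduction exercise Mark describes, a small hypothetical example (not from his course materials): a routine with two independent decisions has four paths, and deducing which paths given cases exercise shows which remain untested.]

    // paths.cpp - deduce the paths exercised by given test cases, then
    // identify the remaining (and infeasible) paths for the strategy
    // document.
    #include <iostream>

    // Hypothetical routine: clamp a value into [lo, hi].
    int clamp(int v, int lo, int hi) {
        if (v < lo)        // decision 1
            v = lo;
        if (v > hi)        // decision 2
            v = hi;
        return v;
    }

    int main() {
        // Given cases: clamp(5, 0, 10) takes (false, false);
        // clamp(-3, 0, 10) takes (true, false).
        std::cout << clamp(5, 0, 10) << "\n";   // path FF -> 5
        std::cout << clamp(-3, 0, 10) << "\n";  // path TF -> 0
        // Path analysis reveals two more paths: (false, true), exercised
        // by the case below, and (true, true), which is infeasible when
        // lo <= hi - worth recording in the testing strategy document
        // even though no test case can exercise it.
        std::cout << clamp(42, 0, 10) << "\n";  // path FT -> 10
        return 0;
    }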
I am including below some edited material from my first two milestones.

[ED: I have excised this lengthy discussion of test cases and testing hints. Write for a copy if you're interested.]

A-------------------------------------------------------
From: Mike Overstreet
Subject: student testing

Keith:

I just read your posting in comp.software-eng. I have had the same frustration. I teach a junior level software engineering course which is project oriented. We have a senior level SE course which is survey oriented (using Sommerville, Pressman or something similar). But the junior course is different. As it has evolved, the emphasis on testing has increased each semester.

Students are divided into 4-person teams. Each team member designs and implements some modules and tests some other modules developed by other team members. Grades are based as much on testing as on coding. Students turn in test plans early in the semester, then turn in test data and write test drivers and stubs as necessary. So each person's code is tested by another group member, and each person tests another group member's code. Testing is done first within the group -- to get the ``bugs'' out of the test data as well as out of the code -- then the test data is put in a public area. The pooled test data is then used in unit testing of all groups' modules.

But there are some disadvantages in what I'm doing, so it may not help you: here all groups build a complete version of the system (though I provide several routines in a library, since the project required more background in numerical methods than these students have), but each version must contain identical modules. Students start with a document from NASA Langley which is called a requirements document but in fact is a high level design document -- so all modules, module interfaces, and data structures have been specified for them. I can send you a copy if you would like to see it. It's about 100 pages in length.

This semester I've got about 65 students in this class, and 16 groups. This is a 4 credit course. It has 3 hours of lecture and a 2 hour recitation each week (four of these, with 15 to 20 people per recitation). The recitation focuses on the project and covers tools -- students must use make, RCS, dbx. Each student also gives an oral presentation in the recitation presenting his/her test plan for one module: what test cases they plan and why they selected them. Note that test data -- in the form of input/output pairs -- are due before anyone has started coding that module.
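
[ED: For readers unfamiliar with the format, input/output pairs can be written down from a module's specification alone - and later fed to a test driver - before the module exists. A minimal hypothetical sketch; the module (median-of-three) and its cases are invented for illustration.]

    // pairs.cpp - table-driven unit test: the (input, expected output)
    // pairs constitute the test data, and can be authored before coding.
    #include <iostream>

    struct Case { int a, b, c; int expected; };

    // Test data, written from the specification alone.
    Case cases[] = {
        { 1, 2, 3, 2 },   // already ordered
        { 3, 1, 2, 2 },   // rotated
        { 2, 2, 5, 2 },   // duplicates
    };

    // Module under test: median of three integers.
    int median3(int a, int b, int c) {
        if (a > b) { int t = a; a = b; b = t; }  // now a <= b
        if (b > c) b = c;                        // b = min(b, c)
        return (a > b) ? a : b;                  // max(a, min(b, c))
    }

    int main() {
        for (const Case& k : cases)
            std::cout << "median3(" << k.a << "," << k.b << "," << k.c
                      << ") expected " << k.expected
                      << ", got " << median3(k.a, k.b, k.c) << "\n";
        return 0;
    }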
Mike Overstreet

[ED: Mike sent a lengthy syllabus and schedule of project activities, which I omit here. I'll send it to anyone interested.]

A-------------------------------------------------------
From: Pamela Lawhead
Subject: Students and Testing

Keith,

I too teach a software engineering class and have come up with an accountability method that works well. Each term we "contract" with an external industry partner to do a project for them, according to their standards. This requires that the students provide the industry partner with all required documents (including test plans etc.).

For three years we worked with IBM FSD in Boulder, writing an X Window statistical plot package in Ada using DOD-STD-2167A and the FSD style guide. Since IBM was setting the requirements and the methodology, I became an "enabler" rather than a "requirer". The change in my role allowed me to help them see how critical planned testing was. 2167A outlined for them a test plan and a series of accountability measures which they were required to follow. Using the FSD style guide required that they Specify, Design and Code in Ada. By doing the designs in Ada we were able to write compilable specifications. They seemed always to be "chomping at the bit" to get on with the coding, but because of all of the industry requirements they were prevented from doing this until the very end. By using a Rational promotion tool we could compile the specifications and trace them back to the requirements, write the design as Ada code, and finally code at the very end. It was a really good experience.

This year we are contracting with our Medical School to provide a paperless office for the dental school. It will be a several-semester project, and this term we will just do the voice recognition and synthesis steps required for the examination done by the dental hygienist. Again, though, I will be an interpreter of requirements, not a maker, so I will be able to "suggest" steps that they take to !FULLY! design, code and test the project.

This may sound weird but it really works.

Sincerely,
Pam Lawhead
The University of Mississippi

A-------------------------------------------------------
From: Bill McCarty
Subject: Testing in SE courses

Prof. Pierce,

I saw your Nov. 19 Internet posting re testing in SE classes. I struggle with the same issue in our graduate SE classes. My best shot is to hand students who are responsible for testing the following references:

  Humphrey, Managing the Software Process, chapters 8 & 11
  Hetzel, The Complete Guide to Software Testing, chapter 8
  Gelperin & Hetzel, "The growth of software testing", CACM 31 (1988),
    no. 6, pp. 687-695

Royer's book Software Testing Management also has some useful ideas, but I haven't used it with students (yet). Also, the IEEE standards (e.g., 829, Standard for Software Test Documentation; 1008, Standard for Software Unit Testing; etc.) are helpful in providing "templates" for test documentation. I concede that these are probably too detailed for undergraduate use.

If you come up with anything that works, please publish and let the rest of us know!

Bill McCarty
Azusa Pacific University

A-------------------------------------------------------
From: bertrand@eiffel.com (Bertrand Meyer)
Subject: Re: Getting students to plan for testing

Thanks to Prof. Pierce for this thoughtful discussion.
Here is a pitch for two aspects of the Eiffel approach, assertions and seamlessness, which I believe address part of the issue. I do not claim, of course, that they completely solve it; but they do help significantly.

1. Assertions
-------------

Assertions as they exist in Eiffel are not just testing tools; they help to write correct software, and to document it. But if you include assertions you can also monitor them at run time, and this provides the best approach to testing I have seen: because software elements are equipped with specifications (however partial) of their formal properties, you know what you are looking for during the testing phase. But assertions are not written during that phase: they are designed at the same time as the software itself, addressing one of Prof. Pierce's major concerns.

Of course you cannot force students to write assertions. However:

- People who develop software in Eiffel (at least in the ISE Eiffel implementation) almost always rely on the EiffelBase libraries (kernel, data structures etc.), which are loaded with assertions all over. Assertions are also the primary tool in documenting these library classes. In Eiffel as in other forms of software development, people learn mostly by imitating existing examples; the presence of the libraries and their assertions is a powerful incentive for students to use the same style, heavily reliant on assertions, in developing their own software.

- Pedagogically, it is much easier to state and enforce a precise rule such as ``Use assertions!'' than a vague injunction such as ``Do not forget to plan for testing!'', to which students may nod in principle while treating it in practice as just another piece of professor's motherhood-and-apple-pie homily.

- Because the injunction is precise, teachers can investigate how well students apply it: you can check students' software for the presence of assertions. This is much easier than trying to assess whether students have ``planned for testing''. (I would not, however, suggest using a software tool to assess assertion usage automatically, since this would be too easy to fool, and would also measure quantity, not quality. Teachers who have tried to assess students' use of comments in traditional approaches have run into this pitfall. For assertions, visual inspection can pretty quickly reveal whether or not students are making good use of the mechanism.)

One note for anyone not familiar with Eiffel, although this is not the place to go into the details: Eiffel assertions are quite different from C-like ``assert'' instructions, which are just conditional print-and-stop instructions. Assertions are closely related to the software structure, in particular to routines thanks to preconditions and postconditions. Particularly important (and expressible only in an O-O context) are class invariants, which express key semantic properties of classes, and should play a fundamental role in testing, quality assurance, and regression testing.
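
[ED: The ideas of preconditions, postconditions, and class invariants can be roughly approximated in other languages, though - as Meyer notes - far less elegantly than Eiffel's native require/ensure/invariant clauses, which are part of the language and its documentation tools. A crude hypothetical sketch in C++; the class and names are invented for illustration.]

    // contracts.cpp - a crude C++ emulation of contract-style checks:
    // a precondition and postcondition around a routine, and a class
    // invariant checked on entry to and exit from the public routine.
    // (In Eiffel these would be native require/ensure/invariant clauses.)
    #include <cassert>
    #include <iostream>

    class Account {
    public:
        explicit Account(int opening) : balance_(opening) {
            assert(invariant());                 // establish the invariant
        }

        void deposit(int amount) {
            assert(invariant());                 // invariant on entry
            assert(amount > 0);                  // precondition
            int old_balance = balance_;          // snapshot for 'old'
            balance_ += amount;
            assert(balance_ == old_balance + amount);  // postcondition
            assert(invariant());                 // invariant on exit
        }

        int balance() const { return balance_; }

    private:
        // Class invariant: a key semantic property of every Account.
        bool invariant() const { return balance_ >= 0; }

        int balance_;
    };

    int main() {
        Account a(100);
        a.deposit(50);
        std::cout << a.balance() << "\n";  // 150
        // a.deposit(-5);  // would violate the precondition and abort
        return 0;
    }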
2. Seamlessness
---------------

Prof. Pierce writes:

>Once again, despite my constant protests in class, most teams skimped on
>specifications and design, and rushed to coding to "get something
>running".

I differ from conventional software engineering wisdom here in thinking that this haste in going to ``coding'' is not all bad. Software, after all, *is* code, and one can sympathize with the student or programmer who may listen politely to speeches about the importance of design but knows that bubbles and arrows have never been known to produce a paycheck, do a spreadsheet calculation, or control an airplane's engine. What does these things is code.

The Eiffel approach here is to say that code itself should be of a sufficiently high level to enable developers to have, almost right from the start, the very important feeling that they have produced something concrete (not just pie in the sky, which is what most analysis and design methods yield), without renouncing abstraction. Eiffel texts can be quite abstract (deferred classes with no implementation yet, but of course with assertions and the inheritance structure) and then evolve seamlessly towards a full-fledged implementation. This is the only way I know of reconciling the programmer's or student's thirst for getting ``real stuff'' early on with the manager's or professor's natural concern for making sure that the concepts are right and that the proper amount of thinking has been performed.

These two aspects - assertions and seamlessness - are part of the reason why I believe that Eiffel, viewed here as the combination of method, language and environment, provides the right basis for teaching programming at various levels (including introductory) and software engineering. A growing number of university departments are applying these concepts, which I have also had the opportunity to practice myself in various short stints at universities over the past couple of years.

-- Bertrand Meyer, ISE, Santa Barbara

A-------------------------------------------------------
From: mikeh@ssd.fsi.com (Mike Hann)
Subject: Test Planning

I can't direct you to a text that discusses test planning. Perhaps a prior issue to test planning, however, is understanding software testing itself. There is a poor understanding of what the purpose of software testing is and of what the psychology of testing is. Not infrequently, I find problems which are based solely on the notion "I wrote this and it has to work" or "I wrote this, and this ugly problem could not be the result of this trivial error, so I won't check for that problem."

For a very, very good introduction to software testing (psychology, nature of, fundamental techniques, etc.) see Glenford J. Myers's _The Art of Software Testing_ (ISBN 0-471-04328-1). First published in 1979, this book is a true classic. I've got Beizer's books; they strike me as being primarily technique-directed, failing to provide the same necessary psychological and conceptual insights into testing. I think that if you could introduce your students to the concepts of software testing, they would do a better job of planning their tests.

---------------------------------------------------------------------
"He who excels at resolving difficulties does so before they arise.
He who excels in conquering his enemies triumphs before threats
materialize."  - Tu Mu, in Sun Tzu's The Art of War
mikeh@ssd.fsi.com
---------------------------------------------------------------------

A-------------------------------------------------------
From: weyuker@inet.research.att.com

In response to Keith Pierce's comments about students' notions of Software Engineering projects and their lack of adequate attention to specification, design, and especially test planning: I agree 100%. My solution, for several years now, has been to have their team project include NO IMPLEMENTATION at all.
Each team selects a project from a list of roughly 25 that I prepare. In each case I give them a one-paragraph description of the product. Each team is responsible for 3 deliverables during the semester (and I typically have 3 students per team). They write a functional specification, a design document, and a test plan for their product. I tell them that I expect each document to be roughly 10 pages. Most are stunned at the prospect of having to write 30 pages of English during the semester. Almost invariably, their documents wind up being substantially longer.

While they are writing each document, I give lectures describing what goes into the appropriate document and different types of specification, design, or testing techniques. I try to give them a feel for an appropriate level of detail, but I do not give them a completed document to use as a template. Each team also gives a 10 minute presentation of each of their documents, with the rest of the class acting as the "customer" -- where the customer means whoever will be using that document. In that way, each team gets feedback (and each student has a chance to give an oral presentation, which is frequently a sorely-lacking skill). They can make final modifications to their document, which they submit the following week.

I have used the same format for both the graduate and undergraduate versions of the course. In the undergraduate course, at their urging, I let them do a prototype implementation at the very end. It has been extremely popular. Many students have later told me that it was the only course they took that they felt they directly got to use once they went out into the workplace, and many said that simply having had a course like this was viewed as a real asset by many employers.

As for test planning in particular, I stress that it is a life-cycle activity that should take place throughout and should begin as soon as a requirements document is available. Unfortunately, I can't give a good reference on test planning, since there is none, to my knowledge.

This way of organizing the course is much more work than a lecture course on software engineering. In the latter type of course you tell them about what software engineers do. In this type of course you teach them to do software engineering. Last semester I personally read roughly 3000 pages of deliverables (and more than half of my undergraduate course and 85% of my graduate course were not native English speakers, which I assume is much more of an issue in New York than Minnesota). It is a lot of work, but it really is worth the effort.

Elaine Weyuker
weyuker@cs.nyu.edu
weyuker@research.att.com

A-------------------------------------------------------
From: calliss@asuvax.eas.asu.edu
Subject: Should we allow incorrect work to pass? (for FASE)
Date: Tue Nov 23 13:23:10 1993

Keith, you say

> I just finished grading team projects produced in our senior software
> engineering class, and, once again, I'm not happy with the products.
> Once again, despite my constant protests in class, most teams skimped on
> specifications and design, and rushed to coding to "get something
> running".

This is the recurring problem with SE projects. Below are some solutions that we have used:

1) Change the project towards the end

We changed a project towards the end by saying that the customer had cancelled the project and that the students were being reassigned to another project (in this case it was a maintenance project).
While this made students realise that they should have paid more attention to what they were told in class, and added the "project cancelled" feel to the course, the students did not benefit from the change; it merely added extra punishment for those who did not work properly.

2) No incorrect work

This semester I told the students that I would not accept incorrect work. I set only one deadline, which is when the project is due. It has been the responsibility of the groups to schedule the deliverables. There have been 4 deliverables scheduled:

  A  The Requirement Document
     A System Level Test Plan
  A  The High Level Design Document
  A  The Code

The documents in column A have to be submitted (and approved) in the given order. If a document is not up to standard, it is rejected and a 10% penalty is imposed. The follow-on documents will not be accepted until that document is accepted. Using this system, I have finally gotten the students to pay attention to the documents that they produce.

The test plan document has to be delivered before the end of the project, and all the groups have submitted the test plan document before the High Level Design document, although I had not made this a requirement. The grade for the test plan document is based on its quality (again, a bad document will not be accepted) and on the number of seeded errors that the test plan finds in a version of the program that we have written.

Although this is the first time I have offered the course this way, the results are very positive.

Frank

Frank W. Calliss
Department of Computer Science and Engineering
Arizona State University
Box 875406
Tempe, AZ 85287-5406
Phone: (602) 965-2804
Fax: (602) 965-2751
Frank.Calliss@asu.edu

E-------------------------------------------------------------------
FASE V3 N8

Send newsletter articles to fase-submit@d.umn.edu or fase@d.umn.edu
Send requests to add, delete, or modify a subscription to fase-request@d.umn.edu
Send problem reports, returned mail, or other correspondence about this
newsletter to fase-owner@d.umn.edu or kpierce@d.umn.edu

Keith Pierce, Editor                    Laurie Werth, Advisory Committee
Department of Computer Science          Dept. of Computer Science
University of Minnesota, Duluth         Taylor Hall 2.124
Duluth, MN 55812-2496                   University of Texas at Austin
Telephone: (218) 726-7194               Austin, Texas 78712
Fax: (218) 726-6360                     Telephone: (512) 471-9535
Email: kpierce@d.umn.edu                Fax: (512) 471-8885
                                        Email: lwerth@cs.utexas.edu