QualityTesting
Quality Assurance & Testing
Birds of a Feather
AYE 2003
November 5, 2003, 7:30pm-9:30pm
Moderators: RickHower and ModestoHernandez
Attendees: JohnBenton, MalcolmCurrie, SherryHeinze, ModestoHernandez, RickHower, MikeMelendez, RonPihlgren.
The purpose of this session was to provide an opportunity to share thoughts and ideas among people who have a common interest in Quality Assurance and Testing.
Since the topics of the discussion were not set prior to the meeting, I captured what was actually discussed with the intent of identifying some areas that people would like to expand on in this wiki.
Topics:
# Testing remotely: What are the issues that test groups face when they are not co-located with the development group? What are some ideas to make it work?
# Agile development and testing: What's the role of a test group in agile development? (e.g., developing test automation, defining user acceptance tests, ...)
# Testing critical vs. non-critical systems.
# Comparing cleanroom and agile software engineering practices.
# Build Verification Tests: What are they good for?
# Test automation and tools: GUI vs. API, home-grown vs. commercial, keyword-driven (a.k.a. action-based) testing.
# Configuration Management: Version control systems, change management, etc.
# Code coverage tools.
# How to explain the purpose of testing. (The "marble simulation" from the "Quality vs. Speed" session at AYE was used.)
Modesto - 2003/11/11
Some folks requested that I post some info re the 'Structured Differential Testing' automated testing methodology I mentioned.
Definition:
A structured automated method for testing complex systems where part or all of the system is being modified, yet the end-to-end functionality is expected to remain unchanged.
Characteristics of situations where this method is typically appropriate:
Legacy systems
Clearly definable subsystem interfaces
Controlled differences between 'baseline' and new system
Controlled environment/repeatable processing
Available baseline data
Available methodology for data selection (this is often a challenge)
Repeatable method available for collecting ordered test output data
Available methods for pre-processing output data
Available tools to compare and analyze results (such as UNIX diff or Windiff)
General method:
1) Obtain input and output data from the appropriate system interfaces for the baseline system; use very large data sets.
2) Run the same input data through the 'new' system or subsystem.
3) Collect output data.
(You typically need to build some simple custom tools, appropriate to the system, to handle the above steps.)
4) Compare baseline vs. 'new' output data using comparison tool(s).
5) Process the comparison results, if needed, via awk/sed/Perl/etc. or custom-built tools.
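To make the general method concrete, here is a minimal, hypothetical sketch in Python of steps 1-5. It assumes (purely for illustration) that the baseline and new systems are command-line programs ('./baseline' and './new_system') that read input data on stdin and write ordered output to stdout, and that timestamps are the only field expected to differ; a real project would substitute its own interfaces, data selection, pre-processing, and comparison tooling (diff, Windiff, etc.).

#!/usr/bin/env python3
"""Sketch of a structured differential testing pipeline (steps 1-5 above).
Hypothetical assumptions: './baseline' and './new_system' read stdin and
write ordered output to stdout; timestamps are the only expected noise."""
import difflib
import re
import subprocess
import sys

def run_system(command, input_path):
    """Steps 1-3: feed the same input data through a system and capture its output."""
    with open(input_path, "rb") as f:
        result = subprocess.run(command, stdin=f, capture_output=True, check=True)
    return result.stdout.decode()

def preprocess(text):
    """Pre-process output data: mask fields expected to differ (here, timestamps)."""
    return [re.sub(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}", "<TIMESTAMP>", line)
            for line in text.splitlines()]

def main(input_path):
    baseline_out = preprocess(run_system(["./baseline"], input_path))
    new_out = preprocess(run_system(["./new_system"], input_path))
    # Step 4: compare baseline vs. 'new' output (difflib stands in for diff/Windiff).
    diff = list(difflib.unified_diff(baseline_out, new_out,
                                     fromfile="baseline", tofile="new"))
    # Step 5: process the comparison results; here, just report pass/fail and the diff.
    if diff:
        print("\n".join(diff))
        print("DIFFERENCES FOUND between baseline and new output.")
        return 1
    print("No differences: end-to-end functionality appears unchanged.")
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))

In practice the pre-processing step carries much of the work: anything that legitimately varies between runs (timestamps, IDs, ordering) must be masked or sorted out before the comparison, or the diff will be too noisy to be useful.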
RickHower 2003.11.10
Also see TestingBof2004 for a summary of the Testing Birds of a Feather get-together at the 2004 AYE conference.
RickHower 2004.11.15
Updated: Monday, November 15, 2004