2019-07-05

Testing Types Ontology v1.0.1907.1

Please find below a checklist I've created to describe different testing strategies.



1/ The hidden rationale behind the word "testing"


• Testing vs. Checking
◇ [ ] "Testing" is exploratory: probing, open-ended, and learning-oriented.
◇ [ ] "Checking" is confirmatory (verification and validation of what we already know). The outcome of a check is simply a pass or fail result that requires no human interpretation. Hence checking should be the first target for automation (see the sketch below).

REMARK: Seen this way, “UnitTests” is not the best name we could have come up with; we should really be saying “UnitChecks”. But never mind.
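
To make the distinction concrete, here is a minimal Python sketch; the add function is a trivial stand-in for any system under test:

def add(a, b):
    return a + b  # trivial stand-in for the system under test

# A "check": the expected outcome is encoded up front, so the verdict
# is a mechanical pass/fail. This is the natural first target for automation.
def check_addition():
    assert add(2, 3) == 5  # no human interpretation required

# "Testing" in the exploratory sense: probe the system, record what
# happens, and leave the interpretation to a human.
def explore_addition():
    for a, b in [(2, 3), (-1, 1), (10**18, 1)]:
        print(f"add({a}, {b}) -> {add(a, b)!r}")

if __name__ == "__main__":
    check_addition()
    explore_addition()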


2/ High level characteristics


• Interfaces
◇ [ ] GUI (Requires a dedicated Desktop Session)
◇ [ ] CLI-style (Tests can be run in parallel within a single User Session)
◇ [ ] or Hybrid CLI+GUI (Makes extensive use of the CLI; access to the Desktop resource is arbitrated through locking or distribution. See the sketch after this list.)
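
For the hybrid case, a common arrangement is to let pure CLI tests run in parallel while any test that touches the GUI must first acquire a lock on the desktop. A minimal sketch, assuming a file-based lock; the lock path and the two placeholder test parts are illustrative:

import os
import time
from contextlib import contextmanager

LOCK_PATH = os.path.join(os.environ.get("TEMP", "/tmp"), "desktop.lock")

@contextmanager
def desktop_lock(poll_seconds=1.0):
    # Serialize access to the single desktop session via a lock file.
    # O_EXCL makes creation atomic: only one test holds the lock at a time.
    while True:
        try:
            fd = os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            break  # lock acquired
        except FileExistsError:
            time.sleep(poll_seconds)  # another test owns the desktop
    try:
        yield
    finally:
        os.close(fd)
        os.remove(LOCK_PATH)

def hybrid_test():
    run_cli_part = lambda: None  # placeholder for the parallel-safe CLI work
    run_gui_part = lambda: None  # placeholder for the GUI-touching work
    run_cli_part()               # runs freely alongside other tests
    with desktop_lock():
        run_gui_part()           # exclusive access to the desktop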

• Scopes
◇ [ ] Unit Tests (Method testing, mock testing; see the sketch after this list)
◇ [ ] Component Tests (e.g. API Tests)
◇ [ ] System Tests (e.g. Integration Tests, Coded UI Tests)
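
At the smallest scope, a "unit check" isolates one method and replaces its collaborators with mocks. A minimal sketch using Python's standard unittest.mock; the OrderService and its mailer are invented for illustration:

import unittest
from unittest.mock import Mock

class OrderService:
    # Illustrative system under test: notifies a mailer when an order ships.
    def __init__(self, mailer):
        self.mailer = mailer

    def ship(self, order_id):
        self.mailer.send(f"Order {order_id} shipped")
        return True

class OrderServiceTest(unittest.TestCase):
    def test_ship_notifies_mailer(self):
        mailer = Mock()                    # the mock replaces the real mailer
        service = OrderService(mailer)
        self.assertTrue(service.ship(42))  # the method under test
        mailer.send.assert_called_once_with("Order 42 shipped")

if __name__ == "__main__":
    unittest.main()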

• Areas of Concern
◇ [ ] Functional
◇ [ ] Acceptance
◇ [ ] Security
◇ [ ] Performance
▪ [ ] Load Testing: measure behavior under a known, quantified load (throughput, response times).
▪ [ ] Stress Testing: observe how the system behaves under exceptional or escalating load (see the sketch below).
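
The load/stress distinction can be sketched as follows; hit_endpoint, the simulated latency, and the escalation figures are all invented for illustration:

import time

def hit_endpoint():
    # Placeholder for one request against the system under test.
    time.sleep(0.001)  # simulate a 1 ms response

# Load testing: measure throughput under a known, quantified load.
def load_test(requests=100):
    start = time.perf_counter()
    for _ in range(requests):
        hit_endpoint()
    elapsed = time.perf_counter() - start
    print(f"{requests} requests in {elapsed:.2f}s "
          f"({requests / elapsed:.1f} req/s)")

# Stress testing: escalate far beyond the expected load and observe
# how (not whether) the system degrades or recovers.
def stress_test():
    rate = 100
    while rate <= 100_000:
        load_test(rate)
        rate *= 10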

• Backends
◇ [ ] None (i.e. the tests generate their own data from scratch, dynamically; see the sketch after this list)
◇ [ ] Snapshot-driven (with upgrade of existing on-site data)
◇ [ ] Import-driven (e.g. using an import feature to perform testing on legacy or fresh data)
◇ [ ] or Hybrid
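
For the "None" backend, each test builds every piece of data it needs on the fly. A minimal sketch where an in-memory SQLite database stands in for the backend:

import sqlite3
import unittest

class SelfContainedBackendTest(unittest.TestCase):
    # "None" backend: the test creates all of its data from scratch,
    # so it never depends on a snapshot or an imported data set.

    def setUp(self):
        self.db = sqlite3.connect(":memory:")  # fresh, empty backend
        self.db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
        self.db.execute("INSERT INTO customers VALUES (1, 'Alice')")

    def tearDown(self):
        self.db.close()

    def test_lookup(self):
        row = self.db.execute(
            "SELECT name FROM customers WHERE id = 1").fetchone()
        self.assertEqual(row[0], "Alice")

if __name__ == "__main__":
    unittest.main()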

• Mechanics
◇ [ ] Data-driven (the same scenario is repeated over and over, each time with different data; see the sketch after this list)
◇ [ ] Scenario-based (each test requires a dedicated piece of software, typically a script)
◇ [ ] Exploratory (e.g. Bug Bounty, Hallway Testing, Usability Testing, Monkey Testing)
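
A data-driven run repeats one scenario over a table of inputs, for example with unittest's subTest; the parse_price function and its cases are invented for illustration:

import unittest

def parse_price(text):
    # Illustrative function under test: accepts "." or "," as decimal mark.
    return round(float(text.replace(",", ".")), 2)

class DataDrivenPriceTest(unittest.TestCase):
    # One scenario, many rows: the table drives the test.
    CASES = [
        ("1.50", 1.50),
        ("1,50", 1.50),
        ("0", 0.0),
        ("999.999", 1000.0),
    ]

    def test_parse_price(self):
        for text, expected in self.CASES:
            with self.subTest(text=text):
                self.assertEqual(parse_price(text), expected)

if __name__ == "__main__":
    unittest.main()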

• Dependencies
◇ [ ] Application (depends on another process)
◇ [ ] Input Data (depends on flat files or streams of data)
◇ [ ] or Hybrid

• Privileges
◇ [ ] Local Administrator (e.g. the tests require write access to Program Files)
◇ [ ] Custom Local (e.g. the tests access protected local resources)
◇ [ ] Any Local Standard User (the tests access only a limited range of resources: attached drives or the C:\Users\[UserName] directories. See the probe sketch after this list.)
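
A quick way to classify a test's privilege needs is to probe the resources it will touch before running it. A sketch; the paths are typical Windows defaults and may differ on a given machine:

import os

def can_write(directory):
    # Return True if the current user can create a file in `directory`.
    probe = os.path.join(directory, "~privilege_probe.tmp")
    try:
        with open(probe, "w"):
            pass
        os.remove(probe)
        return True
    except OSError:
        return False

# Typical locations for each privilege tier (illustrative paths).
print("Local Administrator needed:", not can_write(r"C:\Program Files"))
print("Standard user is enough:", can_write(os.path.expanduser("~")))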


3/ Practical categories


• Operating
◇ [ ] Revert-based (each session starts from a well-known state, typically a virtual server snapshot)
◇ [ ] or Rolling (when nothing goes wrong, the testing server may accumulate an ever-increasing uptime)

• Scheduling
◇ [ ] Triggered (e.g. Continuous Integration)
◇ [ ] Scheduled (e.g. Regular/Nightly)
◇ [ ] On-Demand (more suitable for very heavy, long, and expensive tests – e.g. load testing, security testing, etc.)

• Reporting
◇ [ ] Fully Automated (PASS/FAIL)
◇ [ ] Computer Assisted (WARNINGS)
◇ [ ] Manual (Analytical)
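
The first two tiers can be sketched as verdict logic; the Manual tier is whatever raw output is left over for a human analyst. The response-time thresholds below are invented:

def report(response_ms):
    # Classify one measurement into a reporting tier.
    if response_ms > 2000:
        return "FAIL"     # fully automated verdict: the build goes red
    if response_ms > 1000:
        return "WARNING"  # computer-assisted: flagged for human triage
    return "PASS"

for sample in (350, 1200, 2500):
    print(sample, "ms ->", report(sample))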


4/ Volatile categories



The categories below are fairly volatile: tests may move from one category to another as the system under test matures.

• Branches
◇ [ ] Internal/Staging
◇ [ ] Release

• Stages
◇ [ ] Development Tests (No backend/legacy data)
◇ [ ] Non-Regression Tests (Snapshot- or Import-driven; see 'Backends' above)


