
2019-07-05

Testing Types Ontology v1.0.1907.1

Please find below a checklist I've created to describe different testing strategies.



1/ The hidden rationale behind the word "testing"


• Testing vs. Checking
◇ [ ] "Testing" is explorative, probing and learning oriented.
◇ [ ] "Checking" is confirmative (verification and validation of what we already know). The outcome of a check is simply a pass or fail result; the outcome doesn't require human interpretation. Hence checking should be the first target for automation.

REMARK: Seen this way, "unit tests" is not the best name we came up with; "unit checks" would be more accurate. But never mind.
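
To make the distinction concrete, here is a minimal Python sketch of a "check" (purely illustrative, not tied to any particular project): the verdict is strictly pass or fail and is computed entirely by the machine.

import unittest

class RoundingCheck(unittest.TestCase):
    # A "check" in the sense above: the verdict is PASS or FAIL,
    # computed by the machine, with no human interpretation involved.
    def test_round_half_to_even(self):
        self.assertEqual(round(2.5), 2)  # Python 3 rounds halves to even
        self.assertEqual(round(3.5), 4)

if __name__ == "__main__":
    unittest.main()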


2/ High level characteristics


• Interfaces
◇ [ ] GUI (Requires a dedicated Desktop Session)
◇ [ ] CLI-style (Tests can be run in parallel within a single User Session)
◇ [ ] or Hybrid CLI+GUI (makes extensive use of the CLI; access to the desktop resource is shared through locking or distribution)

• Scopes
◇ [ ] Unit Tests (method testing, mock testing; see the sketch after this list)
◇ [ ] Component Tests (e.g. API Tests)
◇ [ ] System Tests (e.g. Integration Tests, Coded UI Tests)
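
To illustrate the unit scope, here is a small Python sketch (the function and its collaborator are hypothetical) where the dependency is replaced by a mock, so the check never leaves the unit boundary.

import unittest
from unittest import mock

def fetch_greeting(http_get):
    # Hypothetical method under test: formats whatever the backend returns.
    return "Hello, " + http_get("/name") + "!"

class FetchGreetingUnitTest(unittest.TestCase):
    def test_uses_backend_response(self):
        # Mock testing: the real backend is replaced by a canned stand-in.
        fake_get = mock.Mock(return_value="World")
        self.assertEqual(fetch_greeting(fake_get), "Hello, World!")
        fake_get.assert_called_once_with("/name")

if __name__ == "__main__":
    unittest.main()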

• Areas of Concern
◇ [ ] Functional
◇ [ ] Acceptance
◇ [ ] Security
◇ [ ] Performance
▪ [ ] Load Testing: measure behavior under a quantified, expected load.
▪ [ ] Stress Testing: check behavior under exceptional circumstances.

• Backends
◇ [ ] None (i.e. the tests generate their own data from scratch, dynamically)
◇ [ ] Snapshot-driven (with upgrade of existing on-site data)
◇ [ ] Import-driven (e.g. using an import feature to perform testing on legacy or fresh data)
◇ [ ] or Hybrid

• Mechanics
◇ [ ] Data-driven (the same scenario is repeated over and over, each time with different data; see the sketch after this list)
◇ [ ] Scenario-based (each test requires a dedicated piece of software, typically a script)
◇ [ ] Exploratory (e.g. Bug Bounty, Hallway Testing, Usability Testing, Monkey Testing)
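
As a sketch of the data-driven mechanics, here is a small Python example using pytest's parametrization; the function under test and its data rows are made up for illustration. One generic scenario is replayed once per data row.

import pytest

def add_vat(net, rate):
    # Hypothetical function under test.
    return round(net * (1 + rate), 2)

# The data rows: the same scenario is repeated once per row.
VAT_CASES = [
    (100.0, 0.20, 120.0),
    (50.0, 0.00, 50.0),
    (200.0, 0.10, 220.0),
]

@pytest.mark.parametrize("net, rate, expected", VAT_CASES)
def test_add_vat(net, rate, expected):
    assert add_vat(net, rate) == pytest.approx(expected)

A scenario-based test, by contrast, would hard-code one dedicated end-to-end script per use-case instead of a table of rows.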

• Dependencies
◇ [ ] Application (depends on another process)
◇ [ ] Input Data (depends on flat files or streams of data)
◇ [ ] or Hybrid

• Privileges
◇ [ ] Local Administrator (e.g. the tests require write access to Program Files; see the probe sketch after this list)
◇ [ ] Custom Local (e.g. the tests access protected local resources)
◇ [ ] Any Local Standard User (the tests only access a limited range of resources, attached drives, or the C:\Users\[UserName] directories)
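
The following Python sketch shows how one could probe the privilege level a test machine offers before scheduling tests on it; the paths and environment variables are Windows-centric assumptions.

import os
import tempfile

def can_write(directory):
    # Return True when the current user may create files in `directory`.
    try:
        with tempfile.TemporaryFile(dir=directory):
            return True
    except OSError:
        return False

# Illustrative probes only.
program_files = os.environ.get("ProgramFiles", r"C:\Program Files")
user_profile = os.environ.get("USERPROFILE", os.path.expanduser("~"))

print("Needs Local Administrator:", not can_write(program_files))
print("Standard user is enough:", can_write(user_profile))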


3/ Practical categories


• Operating
◇ [ ] Revert-based (each session starts from a well-known state, typically a virtual server snapshot)
◇ [ ] or Rolling (when nothing goes wrong, the testing server may have an ever-increasing uptime)

• Scheduling
◇ [ ] Triggered (e.g. Continuous Integration)
◇ [ ] Scheduled (e.g. Regular/Nightly)
◇ [ ] OnDemand (more suitable for very heavy, long and expensive tests – e.g. Load testing, Security testing, etc.)

• Reporting
◇ [ ] Fully Automated (PASS/FAIL)
◇ [ ] Computer Assisted (WARNINGS; see the sketch after this list)
◇ [ ] Manual (Analytical)
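
A small Python sketch of the difference between a fully automated verdict and a computer-assisted one; the metric and the thresholds are invented for illustration.

import sys

# Hypothetical thresholds on a measured response time (milliseconds).
FAIL_ABOVE_MS = 500  # fully automated verdict: slower than this fails outright
WARN_ABOVE_MS = 300  # computer-assisted: flagged for human interpretation

def report(measured_ms):
    if measured_ms > FAIL_ABOVE_MS:
        print("FAIL:", measured_ms, "ms")
        return 1  # non-zero exit code, i.e. an automated FAIL
    if measured_ms > WARN_ABOVE_MS:
        print("WARNING:", measured_ms, "ms (needs human analysis)")
        return 0  # still passes, but surfaced in the report
    print("PASS:", measured_ms, "ms")
    return 0

if __name__ == "__main__":
    sys.exit(report(measured_ms=350))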


4/ Volatile categories



The categories below are fairly volatile: the tests may move to a different category as the system-under-test grows older.

• Branches
◇ [ ] Internal/Staging
◇ [ ] Release

• Stages
◇ [ ] Development Tests (No backend/legacy data)
◇ [ ] Non-Regression Tests (Snapshot or Import-driven, see 'Backends' above.)



2019-06-19

Bots and automated commits – Continuous Integration for Quality Assurance and Performance


The idea of having bots write code is just the continuation of code analysis. In this mindset, 1/ one creates levels of abstraction and lets automated systems write the actual code (e.g. through snippets, custom DSLs, etc.), and 2/ one associates one's code with tests that run on automated systems and lets the "bots" auto-merge features when the tests pass.
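
As a deliberately naive Python sketch of the second point, here is what an auto-merge "bot" boils down to; the branch names, the pytest invocation and the git workflow are assumptions, and real systems are far more elaborate.

import subprocess
import sys

def run(command):
    # Run a shell command and return its exit code (0 means success).
    return subprocess.call(command, shell=True)

def auto_merge(feature_branch, target_branch="main"):
    # Merge the feature branch only when the whole test suite passes.
    if run("git checkout " + feature_branch) != 0:
        sys.exit("cannot check out " + feature_branch)
    if run("python -m pytest") != 0:
        print("checks failed:", feature_branch, "is NOT merged")
        return False
    run("git checkout " + target_branch)
    run("git merge --no-ff " + feature_branch)
    print("checks passed:", feature_branch, "merged into", target_branch)
    return True

if __name__ == "__main__":
    auto_merge(sys.argv[1])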

For those who are interested, this information comes from the following article (a very long but interesting read, by the way): "Why Google Stores Billions of Lines of Code in a Single Repository", Communications of the ACM, July 2016: https://cacm.acm.org/magazines/2016/7/204032-why-google-stores-billions-of-lines-of-code-in-a-single-repository/fulltext


2016-07-07

A word about false-positive and false-negative test-results, and why having a 'negative test-result' means 'passing the test'



Both “false-positive” and “false-negative” test-results exist.


1/ Positive and Negative test-results


First, notice that the usual phrasing is “one is taking a test”, or “one is being tested for something”. One is “positive” for anomalies just as one is “positive” for alcohol, narcotics, drugs, disease or pregnancy, or is over the speed limit or the customs allowance: i.e. typically some measured value is above or below a threshold. In everyday life, tests usually measure the concentration of a given chemical compound, a quantity of goods, or a speed; it is the same for software, except that instead of chemical compounds or goods we have file-system objects, plus durations, speeds, loads, etc.



So, when software is taking a test, the software is tested “for” regressions. Just as a chemical test might reveal the presence of a molecule in your blood, a regression test might reveal some incoherent or wrong behavior coming from the system-under-test. Therefore a “positive” test-result is a result that makes the test appear, at first sight and without further investigation, as “failed”.

By symmetry, a “negative” test-result is a result that did not detect any anomaly; the test is then usually considered “passed”.


2/ False test-results


A “false-positive” test is a test that first appeared as “failed” but was later shown to be insignificant (e.g. just as you might first test positive for something on a saliva or urine test, and later be cleared by a more precise blood test).

Finally, “false-negative” tests are much rarer in practice: these are tests that should have caught an anomaly but did not. One can tell a test is a false-negative whenever one finds a problem that was not reported by the test covering that use-case.


In a nutshell:


Positive test-result = Anomaly detected = Failed test
Negative test-result = No anomaly detected = Passed test
False-positive test-result = Anomaly detected but no problem found after further investigation.
False-negative test-result = No anomaly detected and yet there was a bug.
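
The same vocabulary, expressed as a tiny Python sketch (illustrative only):

def classify(anomaly_detected, real_problem_confirmed):
    if anomaly_detected and real_problem_confirmed:
        return "true positive: failed test, genuine bug"
    if anomaly_detected and not real_problem_confirmed:
        return "false positive: the test 'failed', but no actual problem"
    if not anomaly_detected and real_problem_confirmed:
        return "false negative: the test 'passed', yet a bug slipped through"
    return "true negative: passed test, nothing to report"

print(classify(anomaly_detected=True, real_problem_confirmed=False))
# false positive: the test 'failed', but no actual problem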