2019-07-05

Testing Types Ontology v1.0.1907.1

Please find below a checklist I've created to describe different testing strategies.



1/ The hidden rationale behind the word "testing"


• Testing vs. Checking
◇ [ ] "Testing" is exploratory: it is about probing and learning.
◇ [ ] "Checking" is confirmative (verification and validation of what we already know). The outcome of a check is simply a pass or fail result that requires no human interpretation, which makes checking the first target for automation.

REMARK: In that light, “UnitTests” is not the best term we could have come up with; strictly speaking we should be saying “UnitChecks”. But never mind.
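
To make the distinction concrete, here is what a typical "check" looks like in code. This is only a minimal sketch, using xUnit and a hypothetical PriceCalculator class; the point is that the expected outcome is fully specified up front, so the result is a machine-readable PASS/FAIL.

    using Xunit;

    public class PriceCalculatorChecks
    {
        // A "check": the expected outcome is specified up front, so the result
        // is PASS or FAIL and requires no human interpretation.
        [Fact]
        public void AddVat_Applies20PercentToNetPrice()
        {
            // PriceCalculator is a hypothetical class under check.
            double gross = PriceCalculator.AddVat(100.0);

            Assert.Equal(120.0, gross, precision: 2);
        }
    }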


2/ High level characteristics


• Interfaces
◇ [ ] GUI (Requires a dedicated Desktop Session)
◇ [ ] CLI-style (Tests can be run in parallel within a single User Session)
◇ [ ] or Hybrid CLI+GUI (Makes extensive use of the CLI; access to the Desktop resource is arbitrated through locking or distribution)

• Scopes
◇ [ ] Unit Tests (Method testing, Mock testing)
◇ [ ] Component Tests (e.g. API Tests)
◇ [ ] System Tests (e.g. Integration Tests, Coded UI Tests)

• Areas of Concern
◇ [ ] Functional
◇ [ ] Acceptance
◇ [ ] Security
◇ [ ] Performance
▪ [ ] Load Testing: measure quantitative load.
▪ [ ] Stress Testing: check behavior under exceptional circumstances.

• Backends
◇ [ ] None (i.e. the tests generate their own data from scratch, dynamically)
◇ [ ] Snapshot-driven (with upgrade of existing on-site data)
◇ [ ] Import-driven (e.g. using an import feature to perform testing on legacy or fresh data)
◇ [ ] or Hybrid

• Mechanics
◇ [ ] Data-driven (the same scenario is repeated over and over again, each time with different data; see the sketch after this list)
◇ [ ] Scenario-based (each test requires a dedicated piece of software, typically a script)
◇ [ ] Exploratory (e.g. Bug Bounty, Hallway Testing, Usability Testing, Monkey Testing)
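
As an aside, the Data-driven mechanic maps naturally onto parameterised tests. Here is a minimal sketch, again with xUnit and the hypothetical PriceCalculator from section 1: the scenario is written once and re-run for each row of data.

    using Xunit;

    public class PriceCalculatorDataDrivenChecks
    {
        // Data-driven: one scenario, many data rows. Each InlineData row
        // becomes a separate PASS/FAIL entry in the report.
        [Theory]
        [InlineData(100.0, 0.20, 120.0)]
        [InlineData(50.0, 0.10, 55.0)]
        [InlineData(0.0, 0.20, 0.0)]
        public void AddVat_AppliesTheGivenRate(double net, double rate, double expectedGross)
        {
            // Hypothetical overload of AddVat taking an explicit rate.
            double gross = PriceCalculator.AddVat(net, rate);

            Assert.Equal(expectedGross, gross, precision: 2);
        }
    }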

• Dependencies
◇ [ ] Application (depends on another process)
◇ [ ] Input Data (depends on flat files or streams of data)
◇ [ ] or Hybrid

• Privileges
◇ [ ] Local Administrator (e.g. the tests require write access to Program Files)
◇ [ ] Custom Local (e.g. the tests access protected local resources)
◇ [ ] Any Local Standard User (the tests access only a limited range of resources, such as attached drives or the C:\Users\[UserName] directories)


3/ Practical categories


• Operating
◇ [ ] Revert-based (each session starts from a well-known state, typically a virtual server snapshot)
◇ [ ] or Rolling (when nothing goes wrong, the testing server may have an ever-increasing uptime)

• Scheduling
◇ [ ] Triggered (e.g. Continuous Integration)
◇ [ ] Scheduled (e.g. Regular/Nightly)
◇ [ ] OnDemand (more suitable for very heavy, long, and expensive tests, e.g. Load testing or Security testing)

• Reporting
◇ [ ] Fully Automated (PASS/FAIL)
◇ [ ] Computer Assisted (WARNINGS)
◇ [ ] Manual (Analytical)


4/ Volatile categories



The categories below are fairly volatile: the tests may move to a different category as the system-under-test grows older.

• Branches
◇ [ ] Internal/Staging
◇ [ ] Release

• Stages
◇ [ ] Development Tests (No backend/legacy data)
◇ [ ] Non-Regression Tests (Snapshot or Import-driven, see 'Backends' above.)



2019-07-04

Diffy and Thrift [Bookmarks]

Hey, I came across an interesting piece of technology. I don't have time to experiment with it immediately (or to fully understand how we could make use of it), but I wanted to share it with you right away, in case it rings a bell for something you are working on:


1. Diffy


First, there is this thing called Diffy (developed by Twitter), which tackles some Non-Regression Testing problems I am actually familiar with:

"Diffy: Testing services without writing tests"
https://blog.twitter.com/engineering/en_us/a/2015/diffy-testing-services-without-writing-tests.html

Diffy finds potential bugs in your service by running instances of your new and old code side by side. It behaves as a proxy and multicasts whatever requests it receives to each of the running instances. It then compares the responses, and reports any regressions that surface from these comparisons.

The premise for Diffy is that if two implementations of the service return “similar” responses for a sufficiently large and diverse set of requests, then the two implementations can be treated as equivalent and the newer implementation is regression-free.
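
To give a feel for the mechanics, here is a very small C# sketch of the "multicast and compare" idea. This is not Diffy itself: the two base URLs are made up, and the real tool does a structural diff that subtracts non-deterministic noise rather than a raw string comparison.

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class MulticastCompareSketch
    {
        static readonly HttpClient Client = new HttpClient();

        // Hypothetical endpoints: the known-good build and the candidate build.
        const string OldBase = "http://old-service.example.local";
        const string NewBase = "http://new-service.example.local";

        static async Task Main()
        {
            string path = "/api/orders/42"; // one request out of a large, diverse sample

            // Multicast the same request to both running instances...
            string oldBody = await Client.GetStringAsync(OldBase + path);
            string newBody = await Client.GetStringAsync(NewBase + path);

            // ...then compare the responses and report anything that differs.
            Console.WriteLine(oldBody == newBody
                ? path + ": responses match, no regression surfaced"
                : path + ": responses differ, potential regression");
        }
    }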


Illustration from Twitter's blog:




2. Apache Thrift


Second, when you look at how Twitter implements WebServices, they go through something called Thrift (developed by Facebook and "given" to the Apache Software Foundation)... I've looked up how this could interoperate with IIS, and it appears it can be used through IIS Handlers:

"Apache Thrift - Home"
https://thrift.apache.org/


"Thrift over HTTP with IIS | codealoc"
https://codealoc.wordpress.com/2012/04/06/thrift-over-http-with-iis/

The Apache Thrift software framework, for scalable cross-language services development, combines a software stack with a code generation engine to build services that work efficiently and seamlessly between C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, JavaScript, Node.js, Smalltalk, OCaml and Delphi and other languages. 

The Thrift .Net library includes an HttpHandler THttpHandler which implements the IHttpHandler interface for hosting handlers within IIS. Documentation of how to use the handler is very sparse. Below are instructions of how to create a Thrift service using THttpHandler.
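
From what I can tell, the pattern described in that article boils down to deriving from THttpHandler and mapping the class in web.config. Below is a rough sketch only, assuming a Thrift-generated Calculator service and our own CalculatorImpl implementing Calculator.Iface (both names are made up, and the exact constructor overloads should be checked against the Thrift version in use).

    using Thrift.Protocol;
    using Thrift.Transport;

    namespace ThriftIisDemo
    {
        // THttpHandler implements IHttpHandler, so once this class is mapped
        // in the <handlers> section of web.config, IIS routes Thrift-over-HTTP
        // requests straight to the generated processor.
        public class CalculatorHttpHandler : THttpHandler
        {
            public CalculatorHttpHandler()
                : base(new Calculator.Processor(new CalculatorImpl()),
                       new TBinaryProtocol.Factory())
            {
            }
        }
    }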


Preview of a sample IIS config:





Summary


Both Diffy and Thrift are interesting since they tackle problems I am familiar with:
  • Diffy addresses the problem of comparing past and present versions of a piece of software while filtering the random/noisy content out of the output.
  • Thrift addresses the problem of generating the client plumbing code for "talking" to web services. Server and client code differ so much that writing them in the same language doesn't actually help the slightest bit.



Anyway, I just wanted to say these are things we should look into and understand for when we design testing strategies...