The part of the Pythoscope system responsible for gathering information about a legacy system.
The source code of the system should be the main reference point.
We should avoid relying on information that is not in the source code, if possible. When requesting information from the user, it's better to point out where and how to annotate the source than just to ask for an answer. That way most of the information stays in one place.
On the other hand, putting things into the source code only for the sake of testing is bad practice. It all depends on the kinds of annotations we'll require. As a supplement to source code analysis, we may allow users to keep a .hints file containing annotations about the project's source code. Being separate from the code, it risks going out of sync, but it solves the problem of test-specific information ending up inside the application source code.
Don't run any code unless explicitly allowed by the user through points of entry.
This also applies to importing modules (remember: they may not be import-safe). Legacy code is, by definition, not safe to run in a testing environment, so we have to be very careful.
Static analysis of code modules
We statically analyze modules of application code, gathering information on defined classes, methods and functions.
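As a sketch of this step, Python's standard library ast module can collect defined classes, methods and functions without importing (and thus without executing) the module; the sample module below is invented for illustration:

```python
import ast

def inspect_module(source, filename="<module>"):
    """Collect top-level classes (with their methods) and functions
    by walking the AST -- no code from the module is executed."""
    tree = ast.parse(source, filename)
    info = {"classes": {}, "functions": []}
    for node in tree.body:
        if isinstance(node, ast.ClassDef):
            # Record the names of methods defined directly in the class body.
            info["classes"][node.name] = [
                item.name for item in node.body
                if isinstance(item, (ast.FunctionDef, ast.AsyncFunctionDef))
            ]
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            info["functions"].append(node.name)
    return info

# Hypothetical application module, passed in as a string.
source = """
class Account:
    def deposit(self, amount): ...
    def withdraw(self, amount): ...

def helper(): ...
"""
print(inspect_module(source))
# {'classes': {'Account': ['deposit', 'withdraw']}, 'functions': ['helper']}
```

This is only the safe skeleton; a real analyzer would also record argument lists, nesting and source locations.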
- Easy to write: quite a few tools have already explored Python source code analysis.
- To reveal usage scenarios, we would have to make the system "understand" the program, which in the general case is hard.
- Won't cover dynamically generated code. We can probably cope with decorators, properties, metaclasses and other magic, as long as their usage is fairly standard.
- Deriving variable types (or flow graphs and other information, for that matter) from static information is possible (Wing IDE does this with its "likely type" hints), but limited in scope: Python relies heavily on its dynamic environment, so the end results may not be worth the effort.
- Can local/global variable differentiation be done reliably via static analysis? The Python interpreter uses different byte codes for accessing local and global variables, so that may well be the case. This is important for testing, because functions that use only local variables and passed parameters ("pure" functions, disregarding other side effects) are very good candidates for unit testing.
- We could leverage information from decorators or docstrings to guess what kind of arguments a function/method accepts. Many projects already use documentation conventions for functions and classes (e.g. epydoc encourages the use of fields), so why not use this for test generation?
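The local/global question above can be explored without executing any application code: compile() parses and compiles a module but does not run it, so we can inspect the resulting code objects for global-lookup byte codes. A sketch (the sample functions are made up):

```python
import dis
import types

def local_only(code):
    """True if the code object never loads a global, builtin or
    closure variable -- a rough "pure function" candidate check."""
    return not any(
        ins.opname in ("LOAD_GLOBAL", "LOAD_NAME", "LOAD_DEREF")
        for ins in dis.get_instructions(code)
    )

source = """
def add(a, b):
    total = a + b
    return total

def log(msg):
    print(msg)   # 'print' is a global (builtin) lookup
"""

# compile() only builds code objects; no module-level code runs.
module_code = compile(source, "<example>", "exec")
for const in module_code.co_consts:
    if isinstance(const, types.CodeType):
        print(const.co_name, local_only(const))
```

Here add comes out as a candidate while log does not, because calling print compiles to a LOAD_GLOBAL instruction.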
Dynamic coverage analysis of code using the points of entry provided by the user
The user may have:
- regression tests that represent some high-level functionality,
- administration/developer scripts that perform some tasks, calling the application code in the process.
She may use those as points of entry for a dynamic analyzer.
We "attach a code stethoscope", run those points of entry, and collect coverage information for them. This information includes:
- number of times a given line of code was executed,
- input & outputs of all function calls,
- control flow graphs of all function and method calls.
- Collects a lot of information, which is both beneficial and harmful. We have a lot of data to work with, but we risk losing important information in the noise. We won't know until we try to do it.
- High-level test runs will tend to exercise only typical scenarios, so edge cases won't be covered. On the one hand we will lack information on those edge branches; on the other, those branches are great candidates for new unit tests (so we provide useful information either way).