# Data-Test-Executer Framework: specifically for testing data processing, with data generation,
# system preparation, data loading, and a holistic, diversifying comparison of the results.

# abstract class for instance components
# ---------------------------------------------------------------------
from datetime import datetime
import components.sysmonitor
import components.testrun
import components.report
import components.maintain
import components.catalog
import ulrich.message
import ulrich.program
import inspect
import threading

class CompData:
    def __init__(self):
        self.name = ""
        self.m = None
        self.conf = None

class Component(components.sysmonitor.SystemMonitor, components.testrun.Testrun, components.report.Report,
                components.maintain.Maintainer, components.catalog.Catalog, threading.Thread):
"""
A component represents an application of the system-under-test or a data-artifact which is created from the system-under-test.
As the representation it has to knowlegde of the url, which other components depends on this component.
During a test-run the component must be checked, prepared, appfiles has to be collected, etc. For this doing there are some standard-methods implemented.
"""
    def init(self):
        """
        The range of the test-system is defined by the dependency graph, which starts at the application
        and includes each component.
        In this way the same component can be used in different test-systems, so the application-knowledge
        of this component has to be defined only once.
        A component can be a software-instance or a data-artifact.
        The initialisation of the concrete component is controlled by its configuration.
        """
        job = ulrich.program.Job.getInstance()
        verify = job.getDebugLevel(self.name)
        self.m.debug(verify, "--- " + str(inspect.currentframe().f_code.co_name) + "() " + str(self.name))
    def run(self):
        job = ulrich.program.Job.getInstance()
        if job.program == "check_environment":
            self.check_Instance()
        elif job.program == "init_testset":
            self.prepare_system("testset")
        elif job.program == "init_testcase":
            self.prepare_system("testcase")
        elif job.program == "test_system":
            self.test_System("test")
        elif job.program == "finish_testcase":
            self.finish_Test("test")
        elif job.program == "finish_testset":
            self.finish_Test("test")
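
    # Sketch (assumption, not part of this module): a program such as init_testcase would
    # typically start one Component thread per configured component, so that the dispatch
    # in run() above is executed in parallel for every component:
    #
    #     job = ulrich.program.Job.getInstance()
    #     for comp in job.components:          # hypothetical list of concrete Component instances
    #         comp.start()                     # threading.Thread.start() -> run()
    #     for comp in job.components:
    #         comp.join()
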
    def collect_TcResult(self):
        """
        collects the result from the folder {tcresult}.
        the content is stored internally for the comparison
        :return:
        """
        job = ulrich.program.Job.getInstance()
        verify = job.getDebugLevel(self.name)
        self.m.logInfo("get files for " + self.name + " in tcresult")
        self.m.debug(verify, "--- " + str(inspect.currentframe().f_code.co_name) + "() " + str(self.name))
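
    # Sketch (assumption): a concrete component could read the files of the {tcresult}
    # folder into an internal dict for the later comparison, e.g.:
    #
    #     import os
    #     tcresult = job.par.tcresult          # hypothetical path attribute
    #     self.result = {}
    #     for name in sorted(os.listdir(tcresult)):
    #         with open(os.path.join(tcresult, name), encoding="utf-8") as f:
    #             self.result[name] = f.read()
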
    def collect_Target(self):
        """
        pre: only for components which are collected at the end of the test-set
        collects the result from the folder {rsresult}.
        post: a further contact to the test-system is not necessary
        :return:
        """
        job = ulrich.program.Job.getInstance()
        verify = job.getDebugLevel(self.name)
        self.m.debug(verify, "--- " + str(inspect.currentframe().f_code.co_name) + "() " + str(self.name))
    def compare_TcResults(self):
        """
        compares the result with the target
        (1) the elements of both sides are assigned
        (2) looks for differences in the assigned pairs
        (3) tries to accept the differences with inherent rules
        (4) writes the result of the comparison as html into the folder {diff*}
        (5) the summary result is returned
        :return:
        """
        pass
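
    # Sketch (assumption) of the five steps described above, written with hypothetical
    # helper and attribute names; a concrete component decides how elements are assigned:
    #
    #     pairs = self.assignElements(self.result, self.target)       # (1)
    #     diffs = [(rs, tg) for rs, tg in pairs if rs != tg]          # (2)
    #     diffs = [d for d in diffs if not self.acceptedByRule(d)]    # (3)
    #     self.writeDiffHtml(diffs)                                   # (4) into {diff*}
    #     return len(diffs) == 0                                      # (5) summary result
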
    def getHitscore(self, typ, rs, tg):
        """
        calculates the difference between the given elements.
        the score is a number [00000-99999] with the prefix h_ - 00000 is the best, 99999 the worst
        :param typ:
        :param rs:
        :param tg:
        :return:
        """
        pass
    def report_TcResults(self):
        """
        reports the result-code
        :return:
        """
        pass
    def finish_Testset(self):
        pass
    def collect_TsArtifact(self):
        """
        collects the artifacts from the test-system.
        the result is written as original into the subfolder {tsorigin}
        :return:
        """
        pass
    def split_TsResult(self):
        """
        transforms the result which is collected from the test-system.
        the result is written as utf-8-readable parts into the specific subfolder {tcparts}
        the relevant testcases will be called
        :return:
        """
        job = ulrich.program.Job.getInstance()
        verify = job.getDebugLevel(self.name)
        self.m.debug(verify, "--- " + str(inspect.currentframe().f_code.co_name) + "() " + str(self.name))
        # for tc in testcases:
        #     self.fix_TcResult(self)
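
    # Sketch (assumption): the collected origin file could be cut into utf-8-readable
    # parts per testcase and stored below {tcparts}, e.g.:
    #
    #     with open(origin, encoding="utf-8") as f:                   # origin: a file in {tsorigin}
    #         for tcname, part in self.splitByTestcase(f.read()):     # hypothetical helper
    #             with open(os.path.join(tcparts, tcname), "w", encoding="utf-8") as out:
    #                 out.write(part)
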
    def compare_TsResults(self):
        """
        controls the comparison of the result with the target
        :return:
        """
        job = ulrich.program.Job.getInstance()
        verify = job.getDebugLevel(self.name)
        self.m.debug(verify, "--- " + str(inspect.currentframe().f_code.co_name) + "() " + str(self.name))
        # for tc in testcases:
        #     self.collect_TcResult
        #     self.collect_Target
        #     self.compare_TcResults
    def report_TsResults(self):
        """
        (1.2) extraction of the diff-files (only not accepted differences) as html-file in the test-set - result-report -
        (2.3) visualization of the final result-code of each component and each testcase in the test-set - result-report -
        (2.4) visualization of the statistical result-codes of each component and each test-set in the test-context - result-report -
        :return:
        """
        job = ulrich.program.Job.getInstance()
        verify = job.getDebugLevel(self.name)
        self.m.debug(verify, "--- " + str(inspect.currentframe().f_code.co_name) + "() " + str(self.name))
        reportheader = '<head>'
        reportbody = '<body>'
        testreport = ""
        # if job.par.context == "tset":
        #     for tc in testcases:
        #         header = utils.report_tool.getTcHeader()
        #         body = utils.report_tool.getTcExtraction()
        # if job.par.context == "tcontext":
        #     for ts in testsets:
        reportheader = reportheader + '</head>'
        reportbody = reportbody + '</body>'
        testreport = reportheader + reportbody
        return testreport
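
    # Sketch (assumption): the returned html string would typically be written into the
    # result-report of the test-set, e.g.:
    #
    #     report = comp.report_TsResults()
    #     with open(os.path.join(job.par.report, "result.html"), "w", encoding="utf-8") as f:
    #         f.write(report)              # job.par.report is a hypothetical path attribute
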
    def report_result(self):
        """
        When you finish your test-run you have to report the result to your customer.
        1 Your testers have to research the results which are not acceptable.
          They need detailed information about all test results in order to prove whether a difference
          has its cause in the test technique or in the system under test. They should be able to declare the cause.
        2 Your testmanager wants to know what has been done - the percentage of the necessary test metric
          and the status of each test - which principal faults have been found and what causes these faults have.
        3 Your projectmanager wants to know whether the system is working correctly - the percentage of the
          necessary test metric and the found system errors.
        :return:
        """
        pass
    def maintain_tests(self):
        """
        :return:
        """
        pass
    def declare_Target(self):
        job = ulrich.program.Job.getInstance()
        verify = -1 + job.getDebugLevel(self.name)
        self.m.logInfo("--- " + str(inspect.currentframe().f_code.co_name) + "() started at "
                       + datetime.now().strftime("%Y%m%d_%H%M%S") + " for " + str(self.name).upper())
        self.m.logInfo("something in " + self.name)
        self.m.setMsg("checkInstance for " + self.name + " is OK")
        self.m.logInfo("--- " + str(inspect.currentframe().f_code.co_name) + "() finished at "
                       + datetime.now().strftime("%Y%m%d_%H%M%S") + " for " + str(self.name).upper())
    def catalog_tests(self):
        """
        It is not only a nice-to-have to know exactly which tests are running and which special cases they test.
        Each test runs a special case in a special context, and each of these cases is defined by some attributes.
        But often nobody knows in which testcases these attributes are tested. If you want to prove your test metric
        for some very special cases, a database of these attributes can be very helpful!
        Otherwise you always have to specify new test cases for a correction of such a special case;
        the existing test cases must be found afterwards and must be corrected in their expectations.
        :return:
        """
        pass