Lecturer(s)
Course content
1. Introduction. Purpose of modelling and of performance and reliability testing; basic terms (mistake, error, failure, availability, performance, safety, security, reliability).
2. Systems modelling. Queuing networks, Markov chains, temporal logic.
3. Basic reliability models (redundant systems). Random number generators.
4. Basics of software simulation. Basic techniques, the event calendar, discrete event simulation, time and events in a simulation, design and parameterization of a simulation model (a short sketch follows this list).
5. Use of simulation for modelling queuing networks and other systems, simulation of multithreaded applications, simulation of the system environment.
6. Fundamentals of performance measurement, usable both in simulation and on a real system. Kinds and examples of metrics. Ensuring test repeatability.
7. Best practices for creating reliable software: availability levels, basic methods for ensuring reliability, runtime error handling, use of reliability modelling. Standards and architectures for reliable software systems (AUTOSAR, MARTE, EN 50128 and similar).
8. Benchmarking, performance testing of real hardware and software, workload preparation, workload clustering.
9. Debugging. Using a debugger and a profiler for error detection, error isolation and supervision of a running application. Using records of application execution for simulation models.
10. Result analysis and presentation. Statistics, result visualization, risks in result interpretation.
11. Static software analysis: existing tools and methods (the Spin model checker, Java PathFinder and similar), their suitability, use and limitations in specific situations.
12. Dynamic software analysis: existing tools and methods (gcov, Glassbox, Cobertura and similar), their suitability, use and limitations in specific situations.
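To make the calendar technique of item 4 concrete, here is a minimal sketch in Python: a discrete event simulation of a single-server (M/M/1) queue in which the calendar is a priority queue of timestamped events. The function and parameter names (simulate_mm1, lam, mu, horizon) are illustrative only, not taken from the course materials.

```python
import heapq
import random

def simulate_mm1(lam, mu, horizon, seed=42):
    """Calendar-based discrete event simulation of an M/M/1 queue.

    Events live in a min-heap ordered by time (the "calendar"); the
    simulation clock jumps from one event to the next.  Returns the
    time-averaged number of customers in the system, which for a
    stable queue approaches rho / (1 - rho) with rho = lam / mu.
    """
    rng = random.Random(seed)
    calendar = [(rng.expovariate(lam), "arrival")]   # first arrival event
    clock, in_system, area = 0.0, 0, 0.0

    while calendar:
        time, kind = heapq.heappop(calendar)
        if time > horizon:
            break
        area += in_system * (time - clock)           # integrate queue length over time
        clock = time
        if kind == "arrival":
            in_system += 1
            heapq.heappush(calendar, (clock + rng.expovariate(lam), "arrival"))
            if in_system == 1:                       # server was idle: start service
                heapq.heappush(calendar, (clock + rng.expovariate(mu), "departure"))
        else:                                        # departure
            in_system -= 1
            if in_system > 0:                        # next customer enters service
                heapq.heappush(calendar, (clock + rng.expovariate(mu), "departure"))

    return area / clock if clock else 0.0

if __name__ == "__main__":
    # rho = 0.5, so the analytical mean number in the system is 1.0
    print(simulate_mm1(lam=1.0, mu=2.0, horizon=100_000))
```

For rho = lam/mu = 0.5 the analytical mean number in the system is rho/(1 - rho) = 1.0, so comparing the simulated estimate against that value is a simple instance of the simulation-versus-analytical cross-check that items 2-5 build towards.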
Learning activities and teaching methods
Lecture supplemented with a discussion, Students' portfolio, Skills demonstration, Task-based study method, Individual study, Self-study of literature, Lecture, Lecture with visual aids, Practicum
- Preparation for laboratory testing; outcome analysis (1-8): 6 hours per semester
- Contact hours: 60 hours per semester
- Preparation for an examination (30-60): 40 hours per semester
- Graduate study programme term essay (40-50): 50 hours per semester
Prerequisite

Knowledge
to understand basic issues of network communication and remote request processing
to understand basic concepts of mathematical analysis
to understand the means of describing computer systems
to understand basic issues of parallel programming and the means of solving them
to understand the basic architecture of computer systems
Skills
to use object-oriented programming and suitable development tools
to search independently in electronic resources such as IEEE Xplore
to perform object-oriented design and problem decomposition
to work with basic methods of statistics and probability, including tools for the calculations
Competences
N/A
Learning outcomes

Knowledge
to understand methods of designing reliability models
to know how to interpret benchmark experiment results
to understand methods of designing analytical models of queuing systems
to understand the properties and limitations of simulation and analytical models
to understand the properties and limitations of (pseudo)random number generators
to understand how analytical and simulation models of Markov systems work (a worked example follows this list)
to understand the basic means of static and dynamic analysis of software reliability
to explain and illustrate the means of analysis, design and implementation of reliable software systems that work with large data and are composed of many different components
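As a worked illustration of the analytical treatment of Markov systems mentioned above (a standard textbook result, not something specific to this course), consider a single repairable component that fails with rate \lambda and is repaired with rate \mu; solving the two-state continuous-time Markov chain in steady state gives its availability:

```latex
% Balance between the up->down flow and the down->up flow,
% plus normalization of the two state probabilities:
\pi_{\mathrm{up}}\,\lambda = \pi_{\mathrm{down}}\,\mu, \qquad
\pi_{\mathrm{up}} + \pi_{\mathrm{down}} = 1
\quad\Longrightarrow\quad
A = \pi_{\mathrm{up}} = \frac{\mu}{\lambda+\mu}
  = \frac{\mathrm{MTTF}}{\mathrm{MTTF}+\mathrm{MTTR}}
% using \lambda = 1/\mathrm{MTTF} and \mu = 1/\mathrm{MTTR}
```

The same quantity can also be estimated by simulating the failure/repair process, which is exactly the analytical-versus-simulation comparison this outcome refers to.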
Skills
to implement some types of simulation models (discrete event simulations, simulations with a regular time step, cellular automata) and analyse the obtained results
to design and create a benchmark experiment and measure the desired characteristics of the tested system (a sketch follows this list)
to design and create an experiment to measure the characteristics of a system under load
to design and create an analytical model of the tested system and interpret the calculated characteristics
to present thoroughly the results of measurements in simulation or on the real system
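As a sketch of the skeleton such a benchmark experiment usually has (a hypothetical Python harness; the benchmark function and its parameters are illustrative, not part of the course materials), the essential points are discarding warm-up runs and reporting the spread across repetitions:

```python
import statistics
import time

def benchmark(workload, warmup=3, repeats=10):
    """Time `workload` repeatedly; report mean and stdev in seconds.

    Warm-up runs are discarded so cache and JIT effects do not distort
    the figures; the standard deviation over the remaining runs is a
    first check that the experiment is repeatable.
    """
    for _ in range(warmup):
        workload()                                   # discarded warm-up runs
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        workload()
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)

if __name__ == "__main__":
    mean, sd = benchmark(lambda: sum(i * i for i in range(100_000)))
    print(f"mean {mean * 1e3:.2f} ms, stdev {sd * 1e3:.2f} ms")
```

Reporting the standard deviation alongside the mean is the minimal repeatability check referred to in item 6 of the course content.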
Competences
to evaluate the reliability and performance properties of the tested software, based on static analysis or dynamic testing
Teaching methods

Knowledge
Practicum |
Lecture with visual aids |
Lecture supplemented with a discussion |
Lecture |
Self-study of literature |
Skills |
Individual study |
Self-study of literature |
Skills demonstration |
Task-based study method |
Competences |
Lecture with visual aids |
Assessment methods

Knowledge
Combined exam |
Seminar work |
Skills |
Skills demonstration during practicum |
Competences |
Combined exam |
Recommended literature

- Bernardi, Simona; Merseguer, José; Petriu, Dorina C. Model-driven dependability assessment of software systems. Heidelberg: Springer, 2013. ISBN 978-3-642-39511-6.
- Hamlet, Dick. Composing Software Components: A Software-testing Perspective. Springer, 2010. ISBN 978-1441971470.
- Hlavička, Jan. Architektura počítačů. Praha: ČVUT, 1994.
- Hlavička, Jan. Číslicové systémy odolné proti poruchám. Vyd. 1. Praha: ČVUT, 1992. ISBN 80-01-00852-5.
- Lyu, Michael R. Handbook of Software Reliability Engineering. McGraw-Hill, 1996. ISBN 978-0070394001.
- Racek, Stanislav; Roubín, Miroslav. Pravděpodobnostní modely počítačů. 1. vyd. Plzeň: ZČU, 1996. ISBN 80-7082-300-3.