| Preface | 6 |
|---|---|
| A message from the sponsors | 6 |
| THE SAFETY-CRITICAL SYSTEMS CLUB | 7 |
|---|---|
| Safety-critical Systems Symposium | 7 |
| What is the Safety-Critical Systems Club? | 7 |
| Objectives | 7 |
| History | 7 |
| The Club’s activities | 7 |
| Education and communication | 8 |
| Influence on research | 8 |
| Membership | 8 |
| Contents | 9 |
|---|---|
| Safety Cases | 11 |
|---|---|
| A New Approach to Creating Clear Safety Arguments | 12 |
| 1 Introduction | 12 |
| 2 The difficulties with a single argument | 15 |
| 3 Constructing assured safety arguments | 15 |
| 3.1 Asserted inference | 18 |
| 3.2 Asserted context | 18 |
| 3.3 Asserted solution | 21 |
| 3.4 Confidence argument structure | 22 |
| 3.5 The overall confidence argument | 26 |
| 4 Example assured safety argument | 27 |
| 5 Conclusions | 31 |
| Acknowledgments | 32 |
| References | 32 |
| Safety Cases – what can we learn from Science? | 33 |
| 1 Introduction | 33 |
| 2 How science works | 34 |
| 2.1 Some history | 34 |
| 2.2 Knowledge through science | 35 |
| 2.3 The practice of science | 36 |
| 3 Some comments on safety case fundamentals | 37 |
| 4 Safety cases from a scientific viewpoint | 38 |
| 4.1 Safety cases, hypotheses and challenges | 39 |
| 4.1.1 The safety case hypothesis | 39 |
| 4.1.2 Challenging the safety case hypothesis | 39 |
| 4.1.2.1 While the safety case is being developed | 40 |
| 4.1.2.2 Independent assessment of the completed safety case | 40 |
| 4.1.2.3 After the system has entered service | 41 |
| 4.2 ‘Normal science’ and paradigm shift | 41 |
| 4.3 Implications | 42 |
| 5 Compatibility with standards and regulatory requirements | 44 |
| 5.1 Def Stan 00-56 Issue 4 | 44 |
| 5.2 IEC 61508 | 45 |
| 5.3 CAP 670/SW01 | 46 |
| 6 Conclusions | 47 |
| References | 48 |
| Accounting for Evidence: Managing Evidence for Goal Based Software Safety Standards | 49 |
| 1 Introduction | 49 |
| 2 Managing the argument | 50 |
| 3 Managing the processes that create evidence | 52 |
| 4 Assessing the evidence | 54 |
| 4.1 Overall process of assessment | 54 |
| 4.2 Limitations, counter-evidence and assurance deficits | 56 |
| 4.3 Safety case evidence report | 57 |
| 5 Conclusion | 58 |
| Acknowledgments | 59 |
| References | 59 |
| Projects, Services and Systems of Systems | 60 |
|---|---|
| Distinguishing Fact from Fiction in a System of Systems Safety Case | 61 |
| 1 Introduction | 61 |
| 2 Hazard assessment approach | 63 |
| 2.1 Feature modelling | 66 |
| 2.2 Configuration space structure | 67 |
| 2.3 Pre-deployment hazard assessment | 68 |
| 2.4 Post-deployment hazard assessment | 69 |
| 2.5 Safety case | 70 |
| 3 Analysis of the human element | 72 |
| 3.1 Towards human factors methods for SoS hazard identification | 72 |
| 3.2 Human factors differential analysis for IAT | 73 |
| 4 Validation | 75 |
| 5 Summary and future work | 76 |
| References | 77 |
| A Project Manager’s View of Safety-Critical Systems | 79 |
| 1 Introduction | 79 |
| 2 The commercial reality | 80 |
| 3 Doing what has to be done | 82 |
| 4 Requirements, acceptance criteria and constraints | 84 |
| 5 Testing in a smart way | 87 |
| 5.1 Did we test the product in the right way? | 87 |
| 5.2 Did we introduce other defects that were not found? | 89 |
| 6 Project management of product defects | 89 |
| 6.1 The fundamental question | 90 |
| 6.2 Relating cost of failure to cost of testing | 91 |
| 6.3 Benefit, cost and risk | 92 |
| 6.4 Design for testing | 92 |
| 7 Concluding remarks | 93 |
| References | 94 |
| System Safety in an IT Service Organization | 95 |
| 1 Introduction | 95 |
| 1.1 About Logica | 95 |
| 1.2 About Logica in the UK | 96 |
| 1.3 Logica services | 96 |
| 1.3.1 Safety-related services | 96 |
| 1.4 Service model description | 97 |
| 1.4.1 Service delivery lifecycle | 97 |
| 1.4.1.1 Bid | 97 |
| 1.4.1.2 Due diligence | 98 |