
Serially Concatenated Trellis Coded Modulation

Paul K. Gray

B.Eng., M.Eng.

Doctor of Philosophy Dissertation

The University of South Australia, Faculty of Information Technology, School of Physics and Electronic Systems Engineering

March 1999

Contents

List of Figures
List of Tables
List of Acronyms
Summary
Declaration
Acknowledgements

1 Introduction
   1.1 Summary of Contributions

2 Concatenation of Trellis Codes and Scramblers
   2.1 Introduction

   2.2 Error Probability Bounds for Trellis Codes
      2.2.1 Distance Spectra
      2.2.2 Event Error and Bit Error Probability Bounds
      2.2.3 Quasi-regular Signal Sets
   2.3 Gray Scrambled TCM
   2.4 Gray Mapped TCM



   2.5 Gray Scrambling and Gray Mapping Are Equivalent
   2.6 Search for Better Scramblers
   2.7 Memoryless Scramblers Are Best
   2.8 Summary

3 Markov Chain Decoding of Concatenated Codes
   3.1 Introduction
   3.2 MC Model of a Viterbi Decoder
   3.3 Coding for the Gilbert–Elliott Channel
      3.3.1 Bounds for Convolutional Codes on Gilbert–Elliott Channels
      3.3.2 Decision Feedback Decoding on Gilbert–Elliott Channels
   3.4 Markov Chain Decoding
   3.5 Summary

4 Iterative Decoding of SCCC
   4.1 SCCC and PCCC Background
      4.1.1 PCCC Encoder Structure
      4.1.2 PCCC Decoder Structure
      4.1.3 SCCC Encoder Structure
      4.1.4 SCCC Decoder Structure
      4.1.5 SISO Module
   4.2 SCCC Bounds
      4.2.1 The Weight Enumerating Function of the Equivalent Block Code
   4.3 Design Criteria for Binary SCCCs
      4.3.1 Maximum Exponent of N
      4.3.2 The Exponent of N for the SCCC Free Distance
      4.3.3 Design Criteria



      4.3.4 SCCC Design Criteria Summary
   4.4 New Binary SCCCs
      4.4.1 Search Criteria
      4.4.2 Search Results
      4.4.3 Binary SCCC Performance
   4.5 Summary

5 Serially Concatenated Trellis Coded Modulation
   5.1 Introduction
   5.2 SCTCM Bounds
      5.2.1 The Distance Spectrum of the Equivalent Block Code
   5.3 Design Criteria For SCTCM
      5.3.1 Maximum Exponent of N
      5.3.2 The Exponent of N for the SCTCM Free Distance
      5.3.3 SCTCM Design Criteria
   5.4 Search For New SCTCM
      5.4.1 SCTCM Search Criteria
      5.4.2 Search Results
   5.5 SCTCM Performance
   5.6 Comparison With Existing Codes
   5.7 Summary

6 Conclusions
   6.1 Summary of Results
   6.2 Areas for Further Research
   6.3 Concluding Remarks

A Generator Polynomial Notation


   A.1 Non-systematic Feedforward Convolutional Encoders
   A.2 Systematic Recursive Convolutional Encoders

B Distance Spectra of SCCC

C Performance Bounds of SCCC

D Simulated Performance of SCTCM

Bibliography


List of Figures

1.1 Serially concatenated trellis coded modulation
2.1 Gray mapped signal set examples

2.2 Naturally mapped signal set examples
2.3 TCM block diagram
2.4 Gray mapped QPSK and naturally mapped 8PSK
2.5 Gray scrambled 8PSK TCM block diagram
2.6 BER bounds and simulations for 8 state 8PSK TCM with and without Gray scrambling
2.7 Gray mapped 8PSK TCM block diagram
2.8 Equivalence of Gray scrambling and Gray mapping
3.1 Two-state Markov Chain model
3.2 Viterbi decoder error burst
3.3 Gilbert–Elliott channel
3.4 Coding system for the Gilbert–Elliott channel
3.5 Decision-feedback decoder system
3.6 Decision-feedback decoder
3.7 Serially concatenated convolutional code
3.8 BER bounds for rate 1/3 concatenated example
3.9 BER simulations for rate 1/3 concatenated example
3.10 ACS choices when start of burst is missed



4.1 Soft-output decoded serially concatenated convolutional codes
4.2 Iteratively decoded serially concatenated convolutional codes
4.3 PCCC encoder block diagram
4.4 PCCC encoder with three constituent codes
4.5 PCCC decoder block diagram
4.6 SCCC encoder block diagram
4.7 HCCC encoder block diagram
4.8 SCCC decoder block diagram
4.9 The SISO module
4.10 PCCC decoder block diagram showing SISO modules
4.11 SCCC decoder block diagram showing SISO modules
4.12 The trellis encoder
4.13 An edge of the trellis section
4.14 The calculation of At(s)
4.15 The calculation of Bt-1(s)
4.16 The calculation of Pt(c; O)
4.17 The calculation of Pt(u; O)
4.18 Rate 1/3 binary SCCC encoder
4.19 Rate 1/2 binary SCCC encoder
4.20 Rate 3/5 binary SCCC encoder
4.21 BER bounds for the best rate 1/3 binary SCCC
4.22 BER bounds for the best rate 1/2 binary SCCC
4.23 BER bounds for the best rate 3/5 binary SCCC
5.1 SCTCM encoder block diagram
5.2 Comparison of rate 1/3 8PSK SCTCM with Gray mapped and naturally mapped signal sets



5.3 Rate 1/3 8PSK SCTCM encoder
5.4 Rate 1/2 16QAM SCTCM encoder
5.5 Performance bounds of rate 1/3 8PSK SCTCM and rate 1/3 SCCC with Gray mapped signal sets
5.6 Performance bounds of rate 1/2 16QAM SCTCM and rate 1/2 SCCC with Gray mapped signal sets
5.7 Summary of performance bounds of rate 1/2 16QAM SCTCM
5.8 Comparison of rate 1/3 8PSK SCTCM with the same number of states in the inner and outer codes at 8 iterations
5.9 Simulation of rate 2/3 8PSK constituent code with varying encoder complexity
5.10 Expanded view of Figure 5.9
5.11 Comparison of rate 1/2 16QAM SCTCM with the same number of states in the inner and outer codes at 8 iterations
5.12 Comparison of rate 1/2 16QAM SCTCM with different number of states in the inner and outer codes at 8 iterations
5.13 Comparison of simulated performance (8 iterations) and performance bound for rate 1/2 16QAM SCTCM with 2 state inner and 8 state outer code
5.14 Comparison of coding schemes all with bandwidth efficiencies of 2 bits per symbol, encoding delays of 16384 bits, and 6 iterations
5.15 Comparison of coding schemes all with bandwidth efficiencies of 2 bits per symbol, encoding delays of 32768 bits, and 8 iterations
B.1 BER bounds for 4 state SCCC decomposed by spectral line
B.2 BER bounds for 8 state SCCC decomposed by spectral line
B.3 BER bounds for 16 state SCCC decomposed by spectral line
C.1 BER bound for rate 1/3 binary SCCC with 4 state inner code and 4 state outer code
C.2 BER bound for rate 1/3 binary SCCC with 8 state inner code and 8 state outer code



C.3 BER bounds for rate 1/3 binary SCCC with 16 state inner code and 16 state outer code
C.4 BER bounds for rate 1/3 binary SCCC with 32 state inner code and 32 state outer code
C.5 BER bound for rate 1/2 binary SCCC with 4 state inner code and 4 state outer code
C.6 BER bound for rate 1/2 binary SCCC with 8 state inner code and 8 state outer code
C.7 BER bounds for rate 1/2 binary SCCC with 16 state inner code and 16 state outer code
C.8 BER bound for rate 3/5 binary SCCC with 8 state inner code and 8 state outer code
C.9 BER bounds for rate 3/5 binary SCCC with 16 state inner code and 32 state outer code
D.1 Simulation of rate 1/3 8PSK SCTCM with 4 state inner code and 4 state outer code
D.2 Simulation of rate 1/3 8PSK SCTCM with 8 state inner code and 8 state outer code
D.3 Simulation of rate 1/3 8PSK SCTCM with 16 state inner code and 16 state outer code
D.4 Simulation of rate 1/2 16QAM SCTCM with 4 state inner code and 4 state outer code
D.5 Simulation of rate 1/2 16QAM SCTCM with 8 state inner code and 8 state outer code
D.6 Simulation of rate 1/2 16QAM SCTCM with 16 state inner code and 16 state outer code
D.7 Simulation of rate 1/2 16QAM SCTCM with 2 state inner code and 4 state outer code
D.8 Simulation of rate 1/2 16QAM SCTCM with 2 state inner code and 8 state outer code
D.9 Simulation of rate 1/2 16QAM SCTCM with 2 state inner code and 16 state outer code



D.10 Simulation of rate 1/2 16QAM SCTCM with 4 state inner code and 8 state outer code
D.11 Simulation of rate 1/2 16QAM SCTCM with 2 state inner code, 8 state outer code, and 16384 bit encoder delay
D.12 Simulation of rate 1/2 16QAM SCTCM with 4 state inner code, 4 state outer code, and 16384 bit encoder delay
D.13 Simulation of rate 1/2 16QAM SCTCM with 2 state inner code, 8 state outer code, and 32768 bit encoder delay
D.14 Simulation of rate 1/2 16QAM SCTCM with 4 state inner code, 4 state outer code, and 32768 bit encoder delay


List of Tables

2.1 Gray scrambler input/output relationship
2.2 Truncated distance spectrum of 8 state 8PSK TCM code
2.3 Information labelings of 8 state 8PSK TCM error paths at Eb/N0 = 6 dB
2.4 Average number of information bit errors per symbol for 8PSK
2.5 Average number of information bit errors per symbol for 16QAM
3.1 16 state rate 2/3 8PSK Gilbert–Elliott channel model parameters
4.1 Constituent codes of rate 1/3 SCCCs used in Figures B.1, B.2, and B.3
4.2 Rate 1/2 outer constituent codes [1, Table 11.1]
4.3 Rate 2/3 outer constituent codes [1, Table 11.1]
4.4 Rate 3/4 outer constituent codes [1, Table 11.1]
4.5 Rate 2/3 inner constituent codes for binary SCCC
4.6 Rate 3/4 inner constituent codes for binary SCCC
4.7 Rate 4/5 inner constituent codes for binary SCCC
5.1 Sets of distances for the Gray mapped 8PSK signal set of Figure 2.1
5.2 Rate 2/3 8PSK inner constituent codes for SCTCM with the same number of states in inner and outer codes
5.3 Rate 3/4 16QAM inner constituent codes for SCTCM with the same number of states in inner and outer codes
5.4 Rate 3/4 16QAM inner constituent codes for SCTCM with different number of states in inner and outer codes



5.5 Complexity metrics for the rate 1/2 16QAM codes of Figure 5.12
5.6 Complexity metrics for the codes of Figure 5.14
5.7 Complexity metrics for the codes of Figure 5.15
B.1 Distance spectrum ACs(W, H) for 4 state SCCC
B.2 Distance spectrum ACo(W, L) for 4 state outer code
B.3 Distance spectrum ACi(L, H) for 4 state inner code
B.4 Distance spectrum ACs(W, H) for 8 state SCCC
B.5 Distance spectrum ACo(W, L) for 8 state outer code
B.6 Distance spectrum ACi(L, H) for 8 state inner code
B.7 Distance spectrum ACs(W, H) for 16 state SCCC
B.8 Distance spectrum ACo(W, L) for 16 state outer code
B.9 Distance spectrum ACi(L, H) for 16 state inner code


List of Acronyms

Eb/N0 - ratio of energy per information bit to noise spectral density
16QAM - sixteen quadrature amplitude modulation
8PSK - octal phase shift keying
ACS - add-compare-select
AWGN - additive white Gaussian noise
BCJR - Bahl, Cocke, Jelinek, and Raviv
BER - bit error rate
HCCC - hybrid concatenated convolutional codes
MAP - maximum a posteriori
MC - Markov Chain
PCBC - parallel concatenated block codes
PCCC - parallel concatenated convolutional code
PCTCM - parallel concatenated trellis coded modulation
QPSK - quaternary phase shift keying
S-type - S-type random interleaver
SCBC - serially concatenated block codes
SCCC - serially concatenated convolutional code
SCTCM - serially concatenated trellis coded modulation
SISO - soft-input, soft-output
SNR - signal to noise ratio
SOVA - soft output Viterbi algorithm
SRCC - systematic recursive convolutional codes
TCM - trellis coded modulation


Summary

This dissertation deals with a number of variants of serially concatenated trellis coded modulation (SCTCM). It commences with the examination of a trellis code concatenated with a Gray code scrambler. It has been observed that the bit error rate of a TCM scheme can be improved by using a Gray code scrambler, and also by using Gray mapped signal sets. It is shown that these two methods are equivalent, and the extension of these results to scramblers with memory is also investigated.

Next the concatenation of a trellis code with an outer convolutional code is investigated. It has been observed that the error events of a Viterbi decoder can be modelled as a two state Markov Chain, and a new test is developed which confirms this hypothesis. Other authors have developed a decision feedback decoder for use on Gilbert–Elliott channels, a time varying channel modelled as a two state Markov Chain. These two concepts are combined to develop a low complexity decoder for SCTCM which models the inner trellis decoder as a Gilbert–Elliott channel.

In an effort to further improve the performance of SCTCM, iteratively decoded serially concatenated convolutional codes (SCCC) were also investigated. These codes have drawn interest recently as the serial analogues of Turbo Codes, which are parallel concatenated convolutional codes (PCCC). The distance spectra of SCCC are examined and it is confirmed that the performance bound is dominated by terms with an input Hamming weight from the inner code equal to the free distance of the outer code, as has previously been reported. Other authors have concluded that these terms are made up of the concatenation of inner decoder error events with information weight 2, and that it is the Hamming distance of these error events, the "effective free distance", which should be minimised. However, it is observed that there are other error events from the inner decoder which contribute to these dominant terms. This information is used to develop new design criteria for SCCC, and a search for new SCCC codes is conducted. This search results in the discovery of better SCCC than previously reported.

Finally the new design criteria for SCCC are modified for SCTCM. Performance bounds for SCTCM are derived and used to generate design criteria for SCTCM. These are used to conduct a search for new rate 1/3 8PSK and rate 1/2 16QAM SCTCM, for a number of different constraint lengths. Monte Carlo simulations of these new codes were conducted, which revealed the interesting result that codes constructed from short constraint length constituent codes outperform those constructed from longer constraint length codes. The best of these new rate 1/2 16QAM SCTCM were compared with the best published SCTCM and parallel concatenated trellis coded modulation (PCTCM) schemes. It was found that these new codes have comparable performance to the existing codes, and significantly lower complexity.
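The equivalence between Gray scrambling and Gray mapping summarised above can be illustrated with a small sketch. All names here are illustrative, not taken from the dissertation: for the binary-reflected Gray code, scrambling a label through the inverse Gray code and then naturally mapping it selects the same constellation point as Gray mapping the label directly.

```python
def gray(k):
    # binary-reflected Gray code: adjacent indices differ in exactly one bit
    return k ^ (k >> 1)

def inv_gray(g):
    # inverse of the binary-reflected Gray code
    k = 0
    while g:
        k ^= g
        g >>= 1
    return k

M = 8  # 8PSK: point k sits at angle 2*pi*k/M
gray_point = {gray(k): k for k in range(M)}   # Gray mapping: label -> point index

for u in range(M):
    # scrambling label u with the inverse Gray code, then naturally mapping
    # it (point index = label value), lands on the Gray mapped point for u
    assert inv_gray(u) == gray_point[u]

for k in range(M):
    # neighbouring points carry labels that differ in exactly one bit
    assert bin(gray(k) ^ gray((k + 1) % M)).count("1") == 1
```

The second loop checks the defining property of the Gray mapped signal set: most likely symbol errors (to adjacent points) cause only a single bit error.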


Declaration

I declare that this thesis does not incorporate without acknowledgement any material previously submitted for a degree or diploma in any university; and that to the best of my knowledge it does not contain any materials previously published or written by any other person except where due reference is made in the text.

Paul K. Gray


Acknowledgements

Thanks go to all the principal supervisors I have had during the course of my studies: Dr Christian Schlegel, Dr Lars Rasmussen, Dr Phil Whiting, and Dr Mark Rice. Also thanks go to my associate supervisor, Dr Steven Pietrobon, who made it right through to the end. All of them have made valuable contributions to the finished product seen here. I would also like to thank both the Department of Employment, Education and Training (DEET) and the Institute for Telecommunications Research (ITR) at the University of South Australia for the scholarships which made this study possible. Special thanks also go to all my colleagues who have contributed by way of participation in numerous discussions over the years, particularly Dr Adrian Barbulescu, Wade Farrell, Mark Ho, Mark Reed, and Dr Weimin Zhang.


Chapter 1 Introduction

The first uses of error correcting codes for communications were in deep space applications, where the motivation was overcoming the free space loss between the spacecraft transmitter and the earth based receiver. Spacecraft sent beyond earth orbit were by necessity small in size and could not support large antennas or powerful transmitters, and there are practical limits to the size of the earth based receive antennas. The coding gain offered by these error correcting codes allowed deep space communications links to be maintained over greater and greater distances. Bandwidth was not a great concern on these links, so the bandwidth expansion of these codes did not cause problems. Also, with a single earth based receiver located in a large facility, the additional complexity required to decode these codes was not a significant problem. On the other hand, earth orbit and terrestrial communications links suffered a relatively small free space loss which, coupled with low data rates, meant that uncoded transmissions were sufficient for these applications.

Today, however, the situation has changed dramatically. The demand for high data rates and mobility, and hence small antennas and low transmit powers, in wireless communications has made the coding gains offered by error correcting codes essential. At the same time this increase in demand has placed great pressure on spectrum allocations, making

[Figure 1.1: Serially concatenated trellis coded modulation. Block diagram: Information Bits → Outer Encoder → Interleaver → Inner Encoder → Channel → Inner Decoder → Deinterleaver → Outer Decoder → Decoded Bits.]

the bandwidth expansion suffered by these codes undesirable.

Consider a power limited system, such as a fixed satellite service. A 3 dB improvement in the coding gain of each channel will allow the system either to double the number of channels supported or to double the data rate of each channel. Now consider a bandwidth limited system, such as a mobile terrestrial service. A doubling of the bandwidth efficiency of each channel, such as going from rate 1/2 coded QPSK to rate 2/3 coded 8PSK, will either halve the amount of spectrum occupied by the service or double the number of carriers supported in the same bandwidth. Also, in the case of this mobile service, terminals must be both small and cheap, making low complexity desirable. Therefore there are three important qualities required of the error correcting codes used in modern communications systems: large coding gain; high bandwidth efficiency; and low decoder complexity.

This dissertation deals with a number of variants of serially concatenated trellis coded modulation (TCM). These schemes consist of an outer code cascaded with an inner TCM code, as depicted in Figure 1.1. Often the inner code and the outer code are separated by an interleaver, whose purpose is to permute in time the encoded symbols of the outer code. The aim of such schemes is to generate a powerful code from the concatenation of two simpler constituent codes, while admitting a simple decoding algorithm that decodes the constituent codes separately. The use of an inner TCM code also allows the higher bandwidth efficiencies of TCM schemes to be exploited.
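The transmitter chain of Figure 1.1 can be sketched as follows. This is a toy illustration under stated assumptions, not one of the dissertation's codes: a rate-1/2 repetition code stands in for the outer convolutional code, and a Gray labelled QPSK mapper stands in for the inner TCM encoder; all function names are invented for the example.

```python
import random

def outer_encode(bits):
    # toy rate-1/2 outer code: repeat each bit (a stand-in for the
    # convolutional outer code)
    return [b for bit in bits for b in (bit, bit)]

def interleave(symbols, perm):
    # permute the outer codeword in time before the inner encoder
    return [symbols[i] for i in perm]

def inner_modulate(bits):
    # toy inner "TCM" stage: map bit pairs straight onto Gray labelled QPSK
    points = {(0, 0): 1 + 1j, (0, 1): -1 + 1j, (1, 1): -1 - 1j, (1, 0): 1 - 1j}
    return [points[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

random.seed(0)
info = [random.randint(0, 1) for _ in range(8)]
coded = outer_encode(info)                    # 16 coded bits
perm = list(range(len(coded)))
random.shuffle(perm)                          # the interleaver permutation
tx = inner_modulate(interleave(coded, perm))  # 8 channel symbols
```

Note the overall bandwidth efficiency: 8 information bits become 8 QPSK symbols, i.e. 1 bit per symbol for this rate-1/2 toy scheme. The receiver of Figure 1.1 simply runs the chain in reverse, with decoders in place of encoders.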



Forney, in his classic work Concatenated Codes [2], investigated the concatenation of an inner convolutional code with an outer Reed–Solomon code. Hard decisions were passed from the inner Viterbi decoder to the outer Reed–Solomon decoder, and an interleaver was used to break up the error bursts produced by the Viterbi decoder. This form of concatenated code has proven very effective at achieving very low bit error rates (BER) with low complexity decoders, and has been incorporated into international standards [3] and extended to use TCM schemes as the inner code [4].

The use of hard decisions at the output of the inner decoder means that the performance is limited. In [5] it is shown that the capacity of a concatenated coding system is increased if some form of reliability information is passed to the outer decoder. The lack of practical soft-decision Reed–Solomon decoders means that these classical concatenated schemes are therefore limited. However, there do exist practical soft-input, soft-output (SISO) algorithms for convolutional codes, the most notable being the maximum a posteriori (MAP) algorithm [6] and the soft-output Viterbi algorithm (SOVA) [7]. The development of these algorithms led naturally to the proposal of the concatenation of two convolutional codes [8].

A significant breakthrough in concatenated coding occurred in 1993 with the introduction of Turbo Codes [9]. These are iteratively decoded parallel concatenated convolutional codes (referred to here as PCCC) which consist of two convolutional encoders, one of which encodes the information bits directly, while the other encodes the information bits following interleaving. These codes achieved quite astonishing performance: the codes of [9] came within 0.7 dB of the Shannon Limit, and PCCC schemes have since been found whose performance is within 0.35 dB of the Shannon Limit [10].
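For reference, the Shannon Limit figures quoted above are measured against a capacity limit of this kind. A minimal sketch, assuming the unconstrained-input AWGN channel (the binary-input limit often used in turbo code comparisons sits slightly higher):

```python
from math import log10

def shannon_limit_ebno_db(R):
    # unconstrained-input AWGN channel: capacity per dimension is
    # C = (1/2) * log2(1 + 2*R*Eb/N0); setting C = R and solving for Eb/N0
    ebno = (2 ** (2 * R) - 1) / (2 * R)
    return 10 * log10(ebno)

print(round(shannon_limit_ebno_db(0.5), 2))   # 0.0 dB for a rate 1/2 code
```

So a rate 1/2 scheme operating 0.7 dB from the Shannon Limit is running at roughly Eb/N0 = 0.7 dB; as R approaches 0 the limit falls towards the well-known minimum of about -1.59 dB.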
Maximum likelihood decoding of these concatenated codes is prohibitively complex; for example, the PCCC of [11] is equivalent to a time-varying convolutional code with 2^1030 states [12]. The key to solving the decoding complexity of PCCC schemes is the existence of a sub-optimal decoding algorithm which achieves performance very close to that of a maximum likelihood decoder. This algorithm iteratively decodes each code separately using SISO algorithms such as MAP or SOVA [13, 14].

In [15] the serial analogues of Turbo Codes were introduced. These iteratively decoded serially concatenated convolutional codes (SCCC) are constructed from the same constituent codes and interleaver elements as PCCC, but are concatenated in a serial rather than a parallel fashion. Again, an iterative decoding algorithm is used which achieves near-optimum results. SCCC achieve comparable performance to PCCC, and in some cases can offer superior performance [15–18].



One drawback of both PCCC and SCCC is decoder complexity. The MAP algorithm is approximately 4 times as complex as the Viterbi algorithm for the same code, and usually between 4 and 8 iterations of the algorithm are needed to achieve the desired performance. For some applications, such as hand-held terminals, it may be desirable to trade off some of the high coding gain of SCCC for lower decoder complexity. As such, the first half of this dissertation investigates low complexity methods of improving the performance of traditional serially concatenated TCM schemes.

In Chapter 2 methods of improving the BER of TCM schemes are examined. In [19] it was observed that the use of a Gray scrambler can improve the BER of a TCM scheme. It is shown here that this is equivalent to using a Gray mapped signal set, as proposed in [20] and [21]. This Gray scrambling is the same as concatenating the inner TCM code with a trivial rate 1, constraint length 1 convolutional outer code. The effect of increasing the constraint length of this outer "code", i.e., using scramblers with memory, was also investigated. However, it was found that this increase in memory did not offer any further improvement. The next obvious step was to decrease the rate of the outer code, i.e., to add parity to the outer code. This is of course the same as concatenating an outer convolutional code with an inner TCM code.

In Chapter 3 a low complexity decoding algorithm for concatenated convolutional and TCM codes is developed which makes use of knowledge of the memory of the inner decoder. In traditional decoding algorithms for concatenated codes an interleaver is used to break up the memory of the inner decoder, such that the input to the outer decoder can be considered memoryless. However, memory increases capacity [22], and a coding scheme for a channel with memory should use knowledge of this memory [23].
It has been observed that the output of a Viterbi decoder can be modeled as a two state Markov chain [24–26]. A new test for this hypothesis is developed here, and it is shown that the model also holds for TCM schemes. Other authors [27, 28] have proposed decoding algorithms for the Gilbert–Elliott channel, a time-varying channel which can be modeled with a two state Markov chain, which preserve the capacity of the channel with memory. The combination of these two concepts leads to the proposal of a new, low complexity decoding algorithm for concatenated codes, based upon the modeling of the inner trellis decoder as a Gilbert–Elliott channel. Bounds calculations for this algorithm suggested that improvements in performance of up to 1 dB were possible. However, due to limitations of the decision feedback decoder used as the outer decoder, and the extreme nature of the Markov chain model of the inner decoder (one state has an error rate of 0.5), none of this performance gain could be achieved in practice. It appeared that the only way to realise the promised gain was to use a soft-output algorithm for the inner


decoder. However, in this case the algorithm becomes the same as a single iteration of the SCCC decoding algorithm and there is no reduction in complexity. Having examined reduced complexity decoding algorithms for concatenated codes and been led back to SCCC, the second half of this dissertation concentrates on ways of improving the performance of SCCC and iteratively decoded serially concatenated trellis coded modulation (referred to here as SCTCM). In Chapter 4 some background information is presented for both iteratively decoded PCCC and SCCC, including the iterated MAP decoding algorithms. Then the performance bounds for SCCC of [15] are revisited. In [15] design criteria for SCCC are developed based on these bounds. These criteria are used as the basis for code searches to find the best constituent codes for SCCC. The authors of [15] conclude that the outer code in a SCCC should be a traditional convolutional code with as large a free distance as possible, and that the inner code should be a recursive convolutional code with as large an "effective free distance" as possible. Effective free distance is the minimum output Hamming weight of all codewords with minimum input Hamming weight (maximum effective free distance is also the criterion used to choose constituent codes of PCCCs). However, by examining the distance spectra of SCCC it is found that it is not the effective free distance terms which dominate the SCCC bound. This leads to the modification of the search criteria of [15] and the discovery of new SCCCs with better performance than those previously found. This modification is also used in the next chapter as a basis for a search for new SCTCM. A recent area of research activity is to merge the concepts of trellis coded modulation with either PCCC or SCCC to improve their bandwidth efficiencies. These are referred to as parallel concatenated TCM (PCTCM) [29–31] and SCTCM [31–34].
However, much of this recent work on SCTCM has persisted in using the effective free distance as the design criterion for the inner code. In Chapter 5 performance bounds for SCTCM are derived and used to develop design criteria similar to those used in Chapter 4 for SCCC. These new design criteria are applied to a search for both rate 1/3 8PSK and rate 1/2 16QAM SCTCM with a range of constraint lengths in both the inner and outer codes. The performance of these codes is simulated, with the interesting result that, at low signal to noise ratios, it is the shorter constraint length codes which give the best performance. The best rate 1/2 16QAM SCTCM schemes are compared to existing equivalent PCTCM and SCTCM schemes, under the same conditions. The performance of these new codes is found to be comparable to that of the existing schemes, and their complexity is found to be significantly less than that of the existing schemes.


Finally, in Chapter 6 the conclusions which can be drawn from this dissertation are summarised. Also, promising areas for further research are discussed.

1.1 Summary of Contributions

This dissertation examines serially concatenated trellis coded modulation schemes with the aim of producing low complexity codes which are both powerful and bandwidth efficient. The original contributions made during the course of this investigation are summarised below.

Chapter 2

It is shown that Gray scrambled TCM and Gray mapped TCM are equivalent. Scramblers with memory are investigated and it is found that memoryless scramblers are best.

Chapter 3

A new test is developed for the hypothesis that a Viterbi decoder can be modelled as a Markov chain. A decision feedback decoder for the Gilbert–Elliott channel is incorporated into a low complexity decoder for a convolutional code concatenated with a TCM scheme.

Chapter 4

It is shown that the performance bound for SCCC is not always dominated by the effective free distance terms of the inner code. New design criteria for SCCC are developed. New SCCC codes are found, some of which have superior performance to those previously published.

Chapter 5

The performance bounds for SCCC are modified to produce performance bounds for SCTCM.


New design criteria for SCTCM are developed. New SCTCM codes are found which have large coding gains, high bandwidth efficiencies, and reduced decoding complexity. Some of these codes have superior performance, and some lower decoder complexity, than those previously published.


Chapter 2 Concatenation of Trellis Codes and Scramblers

2.1 Introduction

Prior to Ungerboeck's landmark trellis coded modulation (TCM) paper [35], coding for multilevel signal sets usually consisted of a binary code with large free Hamming distance combined with a Gray mapped signal set. The Gray mapping ensured that any two points with a small Euclidean distance also have a small Hamming distance. This can be seen in Figure 2.1, where Gray mapped octal phase shift keying (8PSK) and sixteen quadrature amplitude modulation (16QAM) are illustrated. This approach works perfectly well with Gray mapped quaternary phase shift keying (QPSK), where Hamming distance and Euclidean distance are isomorphic, but there are problems with multilevel signal sets, where the Gray mapping does not monotonically translate larger Hamming distances into larger Euclidean distances. In [35] Ungerboeck presented a method for constructing TCM codes which minimised the event error probability Pe. This method is based upon finding signal set mappings which can be successively partitioned into subsets with increasing minimum Euclidean distance.



Figure 2.1: Gray mapped signal set examples

Examples of such mappings are shown in Figure 2.2. It is only this property of the signal set which was considered significant, and as such most multilevel signal sets used in conjunction with TCM are naturally mapped (natural mapping meets Ungerboeck's signal set partitioning criteria). However, in most applications it is the bit error probability, BER or Pb, which is of most interest. To address this problem a number of recent authors have suggested ways in which the BER of TCM can be reduced. In [19] the scrambling of the information bits with a Gray coder prior to encoding is discussed, while in [20] and [21] the use of a Gray coded signal set mapper is examined. In this chapter the Gray scrambling technique of [19] and the Gray mapping technique of [20] and [21] are discussed. In Section 2.5 it is shown that these two techniques are equivalent, and that they are algebraically related through a pair of linear transformations. In Section 2.6 the results of a search for better scramblers are presented. This search was not limited to combinatoric scramblers; scramblers with memory were also examined. However, as discussed in Section 2.7, no further gain can be obtained by using memory in the scrambler. The results in this chapter were presented in [36].



Figure 2.2: Naturally mapped signal set examples

2.2 Error Probability Bounds for Trellis Codes

Before discussing the two bit error rate (BER) reduction methods, an important tool for examining the performance of trellis codes must be introduced. This is the union bound on the event error and bit error probabilities of a trellis code. These bounds can be used to estimate the performance of a trellis code, and can also be used to examine the structure of the code. As with convolutional codes, the bound is calculated by first generating a distance spectrum of the trellis code. An important difference between the bound calculation of convolutional codes and trellis codes arises when non-regular signal sets are used. These concepts are introduced below. For more detail the reader is referred to [37].

2.2.1 Distance Spectra

The block diagram of a rate k/n trellis code is shown in Figure 2.3. At every time t the convolutional encoder accepts k information bits and outputs n code bits. Information symbols x_t are the integer representation of the k information bits, and code symbols z_t are the integer representation of the n code bits. The signal set mapper uses the n code bits to choose one of 2^n points in the signal set, producing the complex point a_t (in multidimensional TCM this may be a sequence of complex points). A codeword is then a path through the trellis labeled with sequences of information symbols, code symbols, and signal set points. The code consists of all such codewords.



Figure 2.3: TCM block diagram

When a decoder makes an error it outputs an erroneous path, which has information labels and code labels a certain Hamming distance from the correct path, and signal set labels a certain Euclidean distance from the correct path. The number of bit errors produced during this error event is equal to the Hamming distance of the information labels, and the probability of the error event depends on the Euclidean distance of the signal set labels. If the convolutional encoder is linear and the signal set is regular then, without loss of generality, the all-zero path can be considered to be the correct path and all other paths the erroneous paths. Thus the weights of the labels of all the paths through the trellis also give the distances of labels for any correct path. This set of paths is known as the distance spectrum of the code. In practice only paths with significant probability of occurrence are included in the spectrum. The distance spectrum of a code is found by enumerating all paths which diverge from the all-zero path at some time t and remerge with the all-zero path at a later time, and recording the information labels, output labels, and signal set labels for each path. The distance spectrum can then be used to calculate bounds for the event error probability and the bit error probability.
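The diverge-and-remerge enumeration described above can be sketched for the simpler case of a binary convolutional code, where paths carry Hamming rather than Euclidean weight. The following illustration is not from this dissertation: it finds the minimum-weight first error event (the free distance) of the standard rate 1/2 (5,7) convolutional code by a shortest-path search over the trellis; the function names are mine.

```python
from heapq import heappush, heappop

G = (0b101, 0b111)  # generator taps of the rate 1/2 (5,7) convolutional code

def step(state, bit):
    # shift the new input bit into the 3-bit encoder register; the branch
    # weight is the number of 1s across both generator outputs
    reg = (bit << 2) | state
    weight = sum(bin(reg & g).count("1") & 1 for g in G)
    return reg >> 1, weight

def free_distance():
    # Dijkstra search: the cheapest path that diverges from the all-zero
    # state and remerges with it is the minimum-weight error event
    state, w = step(0, 1)                 # diverge with an input 1
    heap, best = [(w, state)], {}
    while heap:
        w, s = heappop(heap)
        if s == 0:
            return w                      # remerged: minimal weight found
        if best.get(s, 1 << 30) <= w:
            continue
        best[s] = w
        for bit in (0, 1):
            ns, dw = step(s, bit)
            heappush(heap, (w + dw, ns))
```

For the (5,7) code this search returns the well-known free distance of 5; a full distance-spectrum generator would record the input and output labels of each enumerated path as well, rather than only the cheapest one.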

2.2.2 Event Error and Bit Error Probability Bounds

The event error probability can be upper bounded by

    Pe < Σ_{e∈Ξ} Q( √( d_e² k E_b / (2 n N_0) ) ),        (2.1)

where Ξ is the distance spectrum, d_e is the Euclidean distance of error path e, E_b/N_0 is the ratio of the energy per information bit to the noise spectral density, and the Q function is defined by

    Q(y) = (1/√(2π)) ∫_y^∞ e^{−u²/2} du = 0.5 erfc(y/√2).        (2.2)


In practice the full distance spectrum is not calculated, but only a subset consisting of those spectral lines with a significant contribution to (2.1). Similarly, the bit error probability can be upper bounded by

    Pb < (1/k) Σ_{e∈Ξ} w_e Q( √( d_e² k E_b / (2 n N_0) ) ),        (2.3)

where w_e is the Hamming weight of the information label of error path e.
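Once a truncated spectrum is available, bounds of the form (2.1) and (2.3) are straightforward to evaluate numerically. The sketch below is my own illustration using the notation above; any spectrum passed in is assumed to be a list of illustrative (distance, multiplicity-or-weight) pairs, not a real code's spectrum.

```python
import math

def qfunc(y):
    # Q(y) = 0.5 * erfc(y / sqrt(2)), as in (2.2)
    return 0.5 * math.erfc(y / math.sqrt(2))

def event_error_bound(spectrum, k, n, ebno_db):
    # union bound (2.1); spectrum entries are (d_e, multiplicity) pairs
    ebno = 10.0 ** (ebno_db / 10.0)
    return sum(a * qfunc(math.sqrt(d * d * k * ebno / (2.0 * n)))
               for d, a in spectrum)

def bit_error_bound(spectrum, k, n, ebno_db):
    # union bound (2.3); spectrum entries are (d_e, w_e) pairs
    ebno = 10.0 ** (ebno_db / 10.0)
    return sum(w * qfunc(math.sqrt(d * d * k * ebno / (2.0 * n)))
               for d, w in spectrum) / k
```

Both bounds decrease monotonically with E_b/N_0, and truncating the spectrum to the dominant lines only tightens the computed value.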

2.2.3 Quasi-regular Signal Sets

Recall from above that if the convolutional code is linear and the signal set is regular then the all-zero path can be considered as the correct path. Gray mapped QPSK is a regular signal set; however, 8PSK and QAM signal sets, regardless of the mapping, are not. A signal set is regular if the Euclidean distance between any two points in the signal set depends only on the difference between the output labels for those points. For example, consider the Gray mapped QPSK and naturally mapped 8PSK signal sets shown in Figure 2.4. Note that with Gray mapped QPSK a binary difference of 01 or 10 in the labels will always produce one distance, while a binary difference of 11 will always produce another. On the other hand, with naturally mapped 8PSK the points labeled 000 and 111 have a binary difference of 111, as do the points labeled 001 and 110, yet these pairs have different Euclidean distances. Likewise the pairs 000 and 011, and 001 and 010, have the same binary difference but different Euclidean distances. The implication of this is that for trellis codes using non-regular signal sets the Euclidean distance, and hence probability, of error paths is data dependent. This means that the all-zero path cannot be considered as the correct path, and the distance spectrum must enumerate all possible correct paths. However, if the set of Euclidean distances for any two paths depends only on the differences between the output labels and not the states of the paths, then the signal set is called quasi-regular [37] and the distance spectrum calculation is only slightly more complex than that of a regular code. Note that all PSK and QAM signal sets are quasi-regular.
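The regularity test just described is mechanical to check. The sketch below is my own illustration (the function names are hypothetical): it declares a PSK mapping regular when the Euclidean distance between two points is a function of the XOR difference of their labels alone, and confirms that Gray mapped QPSK passes while 8PSK, here with natural mapping, fails.

```python
import cmath
from itertools import product

def psk_points(labels):
    # place the i-th label at angle 2*pi*i/M on the unit circle
    M = len(labels)
    return {lab: cmath.exp(2j * cmath.pi * i / M) for i, lab in enumerate(labels)}

def is_regular(points):
    # regular: |a - b| must depend only on the label difference (XOR)
    seen = {}
    for (la, pa), (lb, pb) in product(points.items(), repeat=2):
        if la == lb:
            continue
        d = round(abs(pa - pb), 9)
        if seen.setdefault(la ^ lb, d) != d:
            return False
    return True

gray_qpsk = [0b00, 0b01, 0b11, 0b10]   # Gray mapped QPSK
natural_8psk = list(range(8))          # naturally mapped 8PSK
```

The failing witness for natural 8PSK is exactly the one quoted in the text: labels 000/111 and 001/110 share the XOR difference 111 but sit at different chord lengths.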

2.3 Gray Scrambled TCM

In [19] Zhang, Schlegel, and Alexander present results which show that the BER of 8PSK TCM systems can be reduced by scrambling the information bits with a Gray coder prior



Figure 2.4: Gray mapped QPSK and naturally mapped 8PSK


Figure 2.5: Gray scrambled 8PSK TCM block diagram

to encoding. A block diagram of the Gray scrambled TCM system is shown in Figure 2.5 (in [19] only systematic rate 2/3 TCM codes with naturally mapped 8PSK signal sets are considered). The Gray scrambler is a simple combinatoric circuit with an input/output relationship as shown in Table 2.1. The reduction in BER achieved with a Gray scrambler is only small (approximately one third), but the complexity of the scrambler is trivial. In [19] all rate 2/3 8PSK trellis codes with between 4 and 256 states from [38] are examined. It is found that for all of these codes the BER performance with a Gray scrambler is approximately 66% of that without the scrambler. Explanations for this behaviour are given in [19] using both error transition matrices and distance spectra. Table 2.2, reproduced from [19, Table 1], shows the truncated distance spectrum of the rate 2/3, 8 state 8PSK TCM code from [38]. Paths of Euclidean distance d are grouped together and the table lists the average number of paths A in each group. Also shown is the average number of information bit errors both with and without Gray scrambling (Bu and Bx, respectively); each of these is further divided into the contribution from bit 1

Input                               Output
u_t^2 u_t^1   u_t = 2u_t^2 + u_t^1  x_t^2 x_t^1   x_t = 2x_t^2 + x_t^1
0 0           0                     0 0           0
0 1           1                     0 1           1
1 0           2                     1 1           3
1 1           3                     1 0           2

Table 2.1: Gray scrambler input/output relationship

d         A     Bx    Bx1   Bx2   Bu    Bu1   Bu2
4.58579   2     7     2     5     5     2     3
5.17157   4     16.5  5.5   11    11    5.5   5.5
5.75736   8     42    14    28    28    14    14
6.00000   1     2     1     1     3     1     2
6.34315   16    96    32    64    64    32    32
6.58579   4.5   16    6     10    15    6     9
6.92894   32    216   72    144   144   72    72
7.17157   13    60    19.5  40.5  48.5  19.5  29
7.41422   2     9     4     5     7     4     3
7.51472   64    480   160   320   320   160   160
7.75736   34    192   63    129   145   63    82
8.00000   9     45    19    26    38    19    19

Table 2.2: Truncated distance spectrum of the 8 state 8PSK TCM code

and bit 2. From Table 2.2 it is clear that the effect of the Gray scrambler is to reduce the average number of bit errors in the least significant bit of the information symbol by approximately half, leading to a total reduction of approximately one third. The mechanism for this improvement can be seen most clearly by expanding the distance spectrum into individual error events. Table 2.3 shows the 40 most probable error paths in the distance spectrum for the 8 state 8PSK TCM code from [38] at an E_b/N_0 of 6 dB (calculated from Table 2.2 and (2.1)). For each error path this table shows the sequence of information label errors (e_t, where x̂_t^i = x_t^i ⊕ e_t^i) at the output of a maximum likelihood decoder, the sequence of information label errors (f_t, where û_t^i = u_t^i ⊕ f_t^i) at the output of the descrambler, and the probability of occurrence of the error path.

Note that without the descrambler the most common information labels in the error paths of Table 2.3 are e_t = 0, e_t = 1, and e_t = 3, which contain 0, 1, and 2 bit errors, respectively. With the descrambler the most common information label errors are now f_t = 0, f_t = 1, and f_t = 2, which contain only 0 or 1 bit errors. The descrambler has left the information error symbols 0 and 1 untouched and transposed information error


Decoder error labels       Descrambled error labels   Path probability
(e_1 e_2 e_3 ...)          (f_1 f_2 f_3 ...)
1001                       1001                       9.65 × 10^-6
333                        222                        4.83 × 10^-6
3101                       2101                       4.83 × 10^-6
3003                       2002                       2.85 × 10^-6
1000001                    1000001                    2.85 × 10^-6
3100001                    2100001                    1.42 × 10^-6
1103                       1102                       1.42 × 10^-6
1000000001                 1000000001                 8.44 × 10^-7
33033                      22022                      7.12 × 10^-7
330101                     220101                     7.12 × 10^-7
13301                      12201                      7.12 × 10^-7
1313                       1212                       7.12 × 10^-7
21                         31                         5.11 × 10^-7
330003                     220002                     4.22 × 10^-7
3100000001                 2100000001                 4.22 × 10^-7
30303                      20202                      4.22 × 10^-7
300033                     200022                     4.22 × 10^-7
3000101                    2000101                    4.22 × 10^-7
1000103                    1000102                    4.22 × 10^-7
3000003                    2000002                    2.51 × 10^-7
1000000000001              1000000000001              2.51 × 10^-7
330100001                  220100001                  2.11 × 10^-7
3100103                    2100102                    2.11 × 10^-7
301301                     201201                     2.11 × 10^-7
30113                      20112                      2.11 × 10^-7
13300001                   12200001                   2.11 × 10^-7
130301                     120201                     2.11 × 10^-7
13013                      12012                      2.11 × 10^-7
11303                      11202                      2.11 × 10^-7
110033                     110022                     2.11 × 10^-7
1100101                    1100101                    2.11 × 10^-7
10003301                   10002201                   2.11 × 10^-7
1000313                    1000212                    2.11 × 10^-7
102                        103                        1.52 × 10^-7
10031                      10021                      1.52 × 10^-7
20001                      30001                      1.52 × 10^-7
3100000000001              2100000000001              1.26 × 10^-7
3000100001                 2000100001                 1.26 × 10^-7
1100003                    1100002                    1.26 × 10^-7
1000000103                 1000000102                 1.26 × 10^-7

Table 2.3: Information labelings of 8 state 8PSK TCM error paths at E_b/N_0 = 6 dB


symbols 2 and 3. This has the effect of reducing the number of bit errors in the more probable error paths, at the expense of increasing the number of bit errors in the less probable paths. The total effect of this permuting of information error symbols is found by calculating the Pb bound using all the terms in the distance spectrum. However, as noted in [19], Table 2.2 lists all the distance spectrum terms with a significant contribution to the Pb bound. This means that those error paths which experience an increase in the number of bit errors also have an event error probability which more than compensates for this increase. Both the Pb bound and the simulated Pb are shown in Figure 2.6 for the 8 state 8PSK TCM scheme, both with and without Gray scrambling [19, Figure 3]. The close correspondence between the bound and the simulation shows that sufficient terms have been included in the bound calculation. The above discussion is true of all codes from [38], which were designed for optimum Pe rather than Pb. It is clear from Table 2.3 that the improvement in BER is due to the Gray descrambler at the output of the decoder, rather than the Gray scrambler at the input to the encoder. Indeed, as the code labels z_t are unchanged by the scrambling process, the event error probability Pe is the same with and without scrambling. This means that trellis codes can be designed for minimum Pe using the techniques described in [35], and then a scrambler can be designed which will minimise Pb for that code.
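The symbol-level action of the memoryless Gray descrambler is easy to reproduce. The following sketch is my own (the function names are illustrative): it applies the binary-to-Gray map of Table 2.1 to each information error symbol, reproducing the fixed symbols 0 and 1 and the transposition of 2 and 3 visible in Table 2.3.

```python
def gray_map(symbol):
    # binary-to-Gray conversion of a 2-bit symbol: g = b XOR (b >> 1);
    # per Table 2.1 this fixes 0 and 1 and transposes 2 and 3
    return symbol ^ (symbol >> 1)

def descramble(error_labels):
    # apply the memoryless map symbol by symbol to a decoder error sequence
    return "".join(str(gray_map(int(ch))) for ch in error_labels)
```

For example, descramble("333") gives "222" and descramble("21") gives "31", matching the corresponding rows of Table 2.3; on 2-bit symbols the map is also its own inverse, so the same circuit serves as scrambler and descrambler.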

2.4 Gray Mapped TCM

In [20] and [21] Du and Kasahara present 8PSK and 16QAM trellis codes which also have a lower BER than the codes of [38]. In these papers code searches are performed using Gray mapped signal sets, as opposed to the naturally mapped signal sets (called Set Partitioning mapping in [20, 21]) usually used with trellis codes. A block diagram of the Gray mapped 8PSK system is shown in Figure 2.7. Systematic rate 2/3 Gray mapped 8PSK and rate 3/4 Gray mapped 16QAM TCM codes with between 8 and 256 states are considered, all of which have a lower BER than the equivalent codes from [38]. In [20, 21] Gray mapping and natural mapping of the signal sets used in TCM schemes are compared by considering the average number of information bit errors in all decoded symbols with the same Euclidean distance from the correct symbol. The observation is made that when the Euclidean distance is small the average number of information bit errors is less with Gray mapping than with natural mapping, and that when the Euclidean distance is large the opposite is true. This is illustrated in Tables 2.4 and


Euclidean distance δ_i    Average information bit errors per symbol
                          Natural mapping    Gray mapping
δ_1 = 0.765               3/4                1/2
δ_2 = √2                  3/2                1
δ_3 = 1.848               5/4                3/2
δ_4 = 2                   1                  2

Table 2.4: Average number of information bit errors per symbol for 8PSK

Euclidean distance δ_i    Average information bit errors per symbol
                          Natural mapping    Gray mapping
δ_1 = √0.4                65/48              31/48
δ_2 = √0.8                13/8               5/4
δ_3 = √1.6                3/2                3/2
δ_4 = √2.0                33/16              103/48
δ_5 = √3.2                1                  3

Table 2.5: Average number of information bit errors per symbol for 16QAM

2.5 for the naturally mapped and Gray mapped 8PSK and 16QAM signal sets shown in Figures 2.1 and 2.2. The Euclidean distances in these two tables are for signal sets with an average power of 1.0 W. This means that if a trellis code is designed with a minimum event error probability, then the most probable error paths with a Gray mapped signal set will have a smaller number of information bit errors than those with a naturally mapped signal set. Thus if codes designed using the two signal sets have similar event error probabilities, the code designed with the Gray mapped signal set will have a lower bit error probability. Indeed, the Gray mapped codes found in [20, 21] have identical Pe to the naturally mapped codes, but with lower Pb. Despite the fact that the BER reduction arises from the use of a Gray mapped signal set rather than the Gray scrambler used in the previous section, there are remarkable similarities between the two methods. Both methods achieve the same Pe, which is the same as that for the original Ungerboeck code. Indeed, both methods achieve the same Pb. However, the reduction in BER is now achieved by changing the signal set and signal set mapping. Consider again the error paths in Table 2.3; the Gray mapped TCM scheme produces error paths with the information error labels of column two directly from the maximum likelihood decoder. In the next section an algebraic description of the two methods is introduced and it is


shown that they are equivalent.
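The per-symbol averages of Table 2.4 can be reproduced by brute force. The sketch below is my own illustration and assumes the information symbol is carried in the two most significant label bits (the parity bit in the LSB); it groups point pairs by angular separation, i.e. by Euclidean distance class, and averages the information-bit Hamming distance within each group.

```python
from itertools import combinations

def avg_info_bit_errors(labels):
    # labels[i]: 3-bit label of the 8PSK point at angle 2*pi*i/8;
    # separation class 1..4 corresponds to distances delta_1..delta_4
    sums, counts = {}, {}
    M = len(labels)
    for i, j in combinations(range(M), 2):
        sep = min((i - j) % M, (j - i) % M)
        # Hamming distance over the two most significant (information) bits
        h = bin((labels[i] >> 1) ^ (labels[j] >> 1)).count("1")
        sums[sep] = sums.get(sep, 0) + h
        counts[sep] = counts.get(sep, 0) + 1
    return {sep: sums[sep] / counts[sep] for sep in sums}

natural = [0b000, 0b001, 0b010, 0b011, 0b100, 0b101, 0b110, 0b111]
gray    = [0b000, 0b001, 0b011, 0b010, 0b110, 0b111, 0b101, 0b100]
```

With these label orderings the natural mapping yields the averages 3/4, 3/2, 5/4, 1 and the Gray mapping 1/2, 1, 3/2, 2, in agreement with Table 2.4: Gray mapping is better at the small distances and worse at the large ones.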

2.5 Gray Scrambling and Gray Mapping Are Equivalent

The Gray coded signal set mapper used by Du and Kasahara in [20] is simply a relabeling of the naturally mapped signal set points. As mentioned in [20], such a linear transformation has no effect on the performance of the scheme. Again looking at 8PSK TCM, the Gray mapped 8PSK signal set used in [20] can be represented as a naturally mapped 8PSK signal set preceded by the n × n matrix transformation

    C = [ 1 1 1 ]
        [ 0 1 1 ]
        [ 0 0 1 ].        (2.4)

The Gray scrambler considered by Zhang et al. in [19] precedes the generator matrix and is

    S = [ 1 1 ]
        [ 0 1 ].        (2.5)

Now looking at the ν = 3 codes, the naturally mapped systematic 8PSK TCM code from [38] and [19] has generator matrix

    G1 = [ 1 0 D²/(D³+1) ]
         [ 0 1 D/(D³+1)  ].        (2.6)

In [20] an equivalent systematic code with identical minimum Euclidean distance and number of nearest neighbours which, when used in conjunction with a Gray mapped signal set, gives better BER performance, has generator matrix

    G2 = [ 1 0 (D³+D²+D+1)/(D³+1) ]
         [ 0 1 (D³+D+1)/(D³+1)   ].        (2.7)

It is easy to verify that S G1 = G2 C, so the naturally mapped code with a Gray scrambler is identical to the code given in [20]. In general, the algebraic relation between a systematic trellis code with a Gray scrambler and a naturally mapped signal set and the equivalent systematic trellis code based on a Gray mapped signal set is

    S_k G_natural = G_Gray C_n,        (2.8)

where S_k is a k × k Gray code matrix transformation, and C_n is an n × n Gray code matrix transformation. This is illustrated in Figure 2.8.
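The identity S G1 = G2 C can be checked mechanically with carry-less polynomial arithmetic over GF(2). The sketch below is my own: each polynomial in D is held as a bit mask (bit i is the coefficient of D^i), and both sides are compared after clearing the common denominator D³ + 1.

```python
def pmul(a, b):
    # multiply two GF(2)[D] polynomials held as bit masks
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def matmul(A, B):
    # matrix product over GF(2)[D]; addition of entries is XOR
    out = []
    for row in A:
        new = []
        for j in range(len(B[0])):
            acc = 0
            for t, a in enumerate(row):
                acc ^= pmul(a, B[t][j])
            new.append(acc)
        out.append(new)
    return out

# numerator matrices of (2.6) and (2.7) over the common denominator D^3+1
G1 = [[0b1001, 0, 0b100], [0, 0b1001, 0b10]]
G2 = [[0b1001, 0, 0b1111], [0, 0b1001, 0b1011]]
S = [[1, 1], [0, 1]]                       # (2.5)
C = [[1, 1, 1], [0, 1, 1], [0, 0, 1]]      # (2.4)
```

Here matmul(S, G1) and matmul(G2, C) come out identical, confirming (up to the common denominator) that the Gray scrambled natural-mapping code and the Gray mapped code of [20] are the same code.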


The relationship (2.8) does not hold for all naturally mapped codes from [38] and Gray mapped codes from [20]. This is because, for some constraint lengths, Du and Kasahara have found codes with a lower number of nearest neighbours (and hence better Pe) than the original Ungerboeck codes. This is a minor difference resulting from improvements in the code search algorithm; if the same code search algorithm is used to search for naturally mapped trellis codes and Gray mapped trellis codes, then (2.8) will hold for the resulting codes. Indeed, (2.8) can be used to convert the Gray mapped trellis codes of [20] to naturally mapped trellis codes with better Pe than those of [38]. Most published TCM codes use naturally mapped signal sets. A simple way of improving the BER performance of these codes is to precede the encoder with a Gray scrambler. However, as seen in Section 2.3, this only has the effect of permuting the information labels which make up the error paths. If memory is added to the scrambler then it may be possible to improve the BER of a TCM code further. In the next section a method for searching for scramblers with memory is presented. It should be noted at this point that the Gray scrambling and Gray mapping techniques of improving the BER of trellis codes both require all bits to be checked by the convolutional code. Neither technique results in a partitioning of the signal set into subsets of increasing minimum distance, as does natural mapping. This means that the complexity reduction achieved with traditional trellis codes by leaving some bits unchecked by the convolutional encoder cannot be obtained with these techniques.

2.6 Search for Better Scramblers

The union bound on the averaged first event error probability is

    Pe ≤ Σ_{c_k} Pr(c_k) Σ_{e_i} Pr(e_i | c_k),        (2.9)

where Pr(c_k) is the probability of a correct path c_k and Pr(e_i | c_k) is the probability of error path e_i given that correct path c_k was transmitted. In general, to calculate this bound the summation over all error paths must be performed for all correct paths, which is very complex. However, if quasi-regular codes (or indeed, regular codes) [37] are used, then the all-zeroes path can be taken as the correct path and the bound becomes

    Pe ≤ Σ_{e_i} Pr(e_i).        (2.10)


Also, a bound on the probability of bit error can be calculated as follows:

    Pb ≤ Σ_{e_i} W(e_i) Pr(e_i),        (2.11)

where W(·) is the Hamming weight of the error path. The aim of the scrambler is to reduce Pb by permuting error paths of large Hamming weight and high probability with those of small Hamming weight and low probability. However, the Hamming weight of all error paths cannot be reduced. Therefore the approach is to use the bound on Pb as a cost function to choose the best scrambler, so that the effect of the scrambler on an error path is weighted by its probability.

Now consider the effect of some scrambler S(·) on a sequence of correct data c; the input to the encoder will be S(c), and if an error e occurs the output of the decoder will be S(c) + e, and the output of the descrambler will be S⁻¹(S(c) + e). If the scrambler is linear then

    S⁻¹(S(c) + e) = S⁻¹(S(c)) + S⁻¹(e) = c + S⁻¹(e),        (2.12)

so the scrambling does not affect the correct path. Thus the aim is to find a scrambler S which minimises

    P̂b = Σ_{ε_i∈Φ} W(S⁻¹(ε_i)) Pr(ε_i),        (2.13)

where Φ is a subset of the set Ξ of all error paths, consisting of only the error paths which have a significant effect on the cost function P̂b. This set depends on the specific channel and on the signal to noise ratio (SNR), and can be approximated as the same set necessary to estimate Pe accurately. To perform a search, first determine the set Φ, and for each path record both its probability and associated information bit sequence. This calculation is very similar to the distance spectrum calculation for the code. Then systematically search through candidate descramblers and choose the one which provides the lowest P̂b value. As mentioned above only linear scramblers are being considered, so the scramblers and descramblers can be represented as k × k matrices of binary rational functions. Naturally each descrambler should be chosen to have a realisable scrambler. This can be achieved by ensuring that the descrambler matrix is invertible. The scrambler can also be viewed as a rate (k, k) convolutional code, and the selection of a catastrophic scrambler should be avoided. This can be done by having no feedback in the descrambler [39], which is equivalent to selecting only polynomial matrices for the descrambler. Two generator matrices G1 and G2 are equivalent (i.e. they generate the same code) if and only if there is a k × k nonsingular binary rational matrix T such that G2 = T G1 [39].


Also, for any code G there exists an equivalent systematic encoder matrix G_sys such that G_sys = T G. Now G_sys has a trivial inverse of degree 0 whereas G generally does not. This means that the error paths produced by G_sys⁻¹ will have lower degree than those produced by G⁻¹; hence, while scramblers S1 and S2 give identical performance with generator matrices G_sys and G, respectively, scrambler S1 will have lower degree than S2. Thus the search for the best scrambler for the code generated by G should involve first finding the error path set for the equivalent systematic code G_sys. The best scrambler S1 for the systematic code can then be found, and the best scrambler for G will then be S1 T.

In summary, the search procedure for the scrambler S which minimises the BER of the code with generator matrix G is:

1. Find the equivalent systematic encoder G_sys = T G.

2. Generate the distance spectrum of G_sys, recording both the Euclidean distance and the information error sequence polynomial for each path.

3. Generate all non-singular, polynomial descrambler matrices S⁻¹ up to some memory order.

4. Find the descrambler S⁻¹ which minimises (2.13).

5. The scrambler which minimises the BER of G is S T.
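Step 4 can be illustrated for the memoryless case, where each candidate descrambler is simply an invertible linear map on 2-bit symbols (there are six of them). The sketch below is my own toy version of the cost evaluation in (2.13), using a few of the most probable error paths of Table 2.3 as the truncated path set; the names are illustrative.

```python
# the six invertible linear maps on 2-bit symbols, as (image of 0, 1, 2, 3);
# (0, 1, 3, 2) is the Gray map of Table 2.1
CANDIDATES = [(0, 1, 2, 3), (0, 1, 3, 2), (0, 2, 1, 3),
              (0, 2, 3, 1), (0, 3, 1, 2), (0, 3, 2, 1)]

# a few of the most probable error paths of Table 2.3: (label sequence, prob)
PATHS = [("1001", 9.65e-6), ("333", 4.83e-6),
         ("3101", 4.83e-6), ("3003", 2.85e-6)]

def cost(symbol_map, paths):
    # truncated cost function (2.13): probability-weighted bit errors
    # after the candidate descrambler is applied symbol by symbol
    return sum(p * sum(bin(symbol_map[int(ch)]).count("1") for ch in seq)
               for seq, p in paths)

def best_descrambler(paths, candidates):
    return min(candidates, key=lambda m: cost(m, paths))
```

On this truncated set the winning maps are exactly those that send the 2-bit error symbol 3 to a 1-bit symbol, as the Gray map does, and the identity map costs strictly more; a full search would evaluate the same cost over polynomial descrambler matrices with memory.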

2.7 Memoryless Scramblers Are Best

A search was performed for the best scrambler for the ν = 3 systematic Ungerboeck codes with k varying from 2 to 5. The scramblers considered had a maximum memory order of 6. In all cases a memoryless scrambler was found to give the best performance. Also, all scramblers found were equivalent to, and gave identical performance to, the Gray scramblers used in [19]. The reason why adding memory to the Gray scramblers does not further improve the performance can be seen by examining a list of error paths ordered according to probability, such as Table 2.3. This table shows that a memoryless scrambler produces an almost ideal list, i.e., error paths with high probability have low Hamming weight and vice versa. To get further improvement a small number of error paths must be permuted, leaving most fixed. Now the error paths of a rate k/n trellis code form a k-dimensional vector space, and a k-dimensional vector space has at most k invariant sub-spaces, so it is clear that


a small number of vectors cannot be changed. A nonlinear scrambler could do this, but then (2.12) would no longer hold.
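To make the memoryless Gray scrambler concrete: for 8PSK it is simply the linear map from natural binary labels to Gray labels. The following sketch (illustrative code, not from the thesis; function names are invented) shows the map, its inverse descrambler, and the property that makes the error-path list nearly ideal: adjacent 8PSK points differ in exactly one bit.

```python
def natural_to_gray(b: int) -> int:
    """Memoryless Gray 'scrambler': XOR each bit with the bit above it."""
    return b ^ (b >> 1)

def gray_to_natural(g: int) -> int:
    """Inverse descrambler: a prefix-XOR recovers the natural label."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# Gray mapping an 8PSK signal set: adjacent points differ in one bit,
# so the most likely (small Euclidean distance) errors cause one bit error.
labels = [natural_to_gray(i) for i in range(8)]
assert sorted(labels) == list(range(8))   # a permutation (non-singular map)
for i in range(8):
    d = labels[i] ^ labels[(i + 1) % 8]
    assert bin(d).count("1") == 1         # neighbours differ in exactly 1 bit
```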

2.8 Summary

It has been previously shown that the BER of TCM schemes can be improved by preceding the encoder with a linear scrambler. It has also been shown that the BER can be improved by using a Gray mapped signal set. In this chapter it has been shown that these two methods are identical, and result from the same transformation of the error events of the TCM scheme. Adding memory to the scrambler was also investigated in this chapter. However, no further improvement was found in the performance compared to that achieved with the memoryless scramblers. This was due to the near ideal profile of the error events: error events with small Euclidean distances are generally associated with a small number of bit errors, and error events with large Euclidean distances are generally associated with a large number of bit errors. The best scrambler in all cases was found to reduce the BER by approximately 1/3, as illustrated for the 8PSK case in Figure 2.6. This gain is only significant in applications where the gradient of the BER curve is small, such as low Eb/N0 operating points or on fading channels. For example, the Eb/N0 required to achieve a BER of 10^-2 with the ν = 3 8PSK Ungerboeck trellis code is reduced by 0.25 dB when a scrambler is used. However, even at high Eb/N0 the additional complexity of the scrambler is insignificant, and as such it should be used.

An alternative way of viewing a scrambler with memory is as a rate 1 (k, k) convolutional code. If adding memory to the scrambler gains no further improvement, then perhaps adding redundancy can. This of course is equivalent to concatenating the trellis code with an outer convolutional code. This is examined in the following chapters.


Figure 2.6: BER bounds and simulations for 8 state 8PSK TCM with and without Gray scrambling (bound and simulation curves for natural and Gray mapping; Bit Error Rate from 1 down to 10^-6 versus Eb/N0 from 0 to 8 dB)


Figure 2.7: Gray mapped 8PSK TCM block diagram (inputs u^2_t and u^1_t enter a convolutional encoder whose outputs z^3_t, z^2_t, z^1_t drive an 8PSK Gray mapper producing the channel symbol a_t)

Figure 2.8: Equivalence of Gray scrambling and Gray mapping (top: a Gray scrambler Sk precedes the naturally mapped convolutional encoder Gnatural; bottom: the encoder GGray is followed by a Gray scrambler Cn and the natural mapper, which together form a Gray mapper; both produce the same channel symbol a_t from inputs u^k_t, ..., u^1_t)


Chapter 3 Markov Chain Decoding of Concatenated Codes

3.1 Introduction

In the previous chapter trellis codes concatenated with outer scramblers were considered as a means of improving BER. The use of scramblers with memory was found to give no improvement over memoryless scramblers. The next logical enhancement is to add redundancy to the scrambler. However, a scrambler with memory and redundancy is of course a convolutional encoder. In this chapter the concatenation of an inner trellis code and an outer convolutional code will be considered. The classic work on the concatenation of two error correcting codes is Forney's Concatenated Codes [2]. The aim of concatenation is to efficiently realise a low BER by concatenating two codes which are decoded separately. In [2] this was restricted to the concatenation of an inner convolutional code with an outer Reed-Solomon code, and hard decisions were passed from the inner decoder to the outer decoder. This technique has proven to be a very practical method of achieving low BERs; for example, in [4] a bandwidth-efficient multidimensional trellis code concatenated with a Reed-Solomon


code achieves a BER of 10^-10 at 155 Mbit/s.

However, the use of hard decisions at the output of the inner decoder results in a performance degradation (as do hard decisions at the output of the channel). In [5] it is shown that the capacity of a concatenated coding system is increased if some form of reliability information for the decoded bits, sometimes called side information, is passed along with the decoded bits to the outer decoder. The existence of efficient algorithms for convolutional codes which both input and output soft decisions, and the lack of good algorithms for block codes which accept soft decisions, has led recent authors to suggest concatenating two convolutional codes [8]. The most common soft-input, soft-output algorithms for convolutional codes are the Maximum A-Posteriori (MAP) algorithm [6] and the Soft-Output Viterbi Algorithm (SOVA) [7]. However, the complexity of the MAP algorithm is approximately 4 times that of the Viterbi algorithm, and the complexity of the SOVA algorithm is approximately 1.5 times that of the Viterbi algorithm. It would be advantageous to develop an algorithm for concatenated decoding which passes reliability information to the outer decoder and yet has lower complexity than the MAP or SOVA algorithms. One such possibility is to model the inner Viterbi Decoder as a Markov Chain and to treat it as a Gilbert-Elliott channel. In Section 3.2 it is shown that a Viterbi decoder for convolutional codes can be modelled as a two-state Markov Chain. A new test for this hypothesis is given, and it is shown that this hypothesis also holds when decoding TCM schemes. In Section 3.3 performance bounds for the Gilbert-Elliott channel are presented, and a decision-feedback decoder for the Gilbert-Elliott channel, which makes use of channel state information, is examined.

If the decoder for a TCM scheme can be modelled as a two-state Markov Chain, and performance can be improved by making use of channel state information when decoding transmissions made over a Gilbert-Elliott channel (a two-state Markov Chain channel model), then it makes sense to model the inner decoder of a concatenated coding scheme as a Gilbert-Elliott channel. This is investigated in Section 3.4, where a low complexity decision-feedback decoder is proposed for concatenated codes.

3.2 MC Model of a Viterbi Decoder

A number of authors [24-26] have shown that the errors generated by a Viterbi Decoder can be modelled by a two-state Markov Chain (MC). This model is represented in Figure 3.1. Here the decoder is in the good state G when it is in a wait and in the bad state B when it is in a burst. During a wait the decoder is decoding correctly and does not produce any bit errors. However, during a burst the decoder is in an error event and its estimate of the most likely path through the trellis differs from the correct path. The lengths of the bursts and waits are distributed geometrically, with the probability of being in the good state or the bad state depending only on the previous state.

Figure 3.1: Two-state Markov Chain model (states G and B with transition probabilities PGG, PGB, PBG, PBB)

Figure 3.2: Viterbi decoder error burst (a burst of e bits beginning with an error and concluded by k(ν − 1) correct bits)

For any non-systematic, rate k/n, constraint length ν convolutional code every error event is concluded by k(ν − 1) correct bits (i.e., from any point in the trellis, ν − 1 trellis stages will return the decoder to the correct path). Thus a burst is defined as beginning with an error and ending with k(ν − 1) correct bits. This is illustrated in Figure 3.2 for a burst of e bits in length, where x is either a 0 or a 1. During this burst there will be no sequences of correct bits k(ν − 1) long (except the concluding sequence). In [24] and [25] the k(ν − 1) correct bits are defined as belonging to a wait; however, as these bits are part of the error event, it is perhaps more accurate to count them as belonging to the burst. All that is required to determine the transition probabilities of the MC model are the average burst length e and the average wait length c. These can be determined from simulation. The BER during a wait is 0.0 and the BER during a burst is 0.5. The transition probabilities are

PBG = 1/e,   (3.1)
PBB = 1 − PBG,   (3.2)
PGB = 1/c,   (3.3)
PGG = 1 − PGB.   (3.4)
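A direct implementation of (3.1)-(3.4) can be sketched as follows (illustrative code; the function name is invented and the example values are the measured burst and wait lengths at Eb/N0 = 6.0 dB from Table 3.1):

```python
def mc_transition_probs(e: float, c: float) -> dict:
    """Two-state MC transition probabilities from the average burst
    length e and average wait length c, per (3.1)-(3.4)."""
    p_bg = 1.0 / e            # leave the burst (bad -> good), (3.1)
    p_gb = 1.0 / c            # start a burst (good -> bad), (3.3)
    return {"PBG": p_bg, "PBB": 1.0 - p_bg,
            "PGB": p_gb, "PGG": 1.0 - p_gb}

# Example with the simulated values at Eb/N0 = 6.0 dB from Table 3.1:
p = mc_transition_probs(e=34.21, c=182.65)
```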


In [24-26] there is no formal test for the validity of the MC model. Whittle [40] gives a test statistic which can be used as the criterion for a set of hypothetical transition probabilities of a MC model. The statistic used by Whittle is

φ = 2 log Π_{j,k} (P̂jk / Pjk)^njk,   j, k ∈ {G, B},   (3.5)

which is χ² distributed with 2 degrees of freedom. Here Pjk is the hypothesis being tested (i.e., the transition probabilities given in (3.1)-(3.4)) and njk is the number of observed direct transitions from j to k. We also have

P̂jk = njk / nj,   (3.6)

where

nG = nGG + nGB,   (3.7)
nB = nBG + nBB.   (3.8)

The hypothesis (3.1)-(3.4) was tested for 16 state 8PSK, 16QAM, and 32CROSS trellis codes. In each case a simulation was performed to determine e and c for 100 error events. An independent simulation was then performed and the number of state transitions njk was counted. The hypothesis was accepted for all codes.
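The test can be sketched as follows. This is illustrative code: the transition counts are hypothetical, the function name is invented, and 5.991 is the 5% critical value of the χ² distribution with 2 degrees of freedom.

```python
from math import log

def whittle_statistic(n, P):
    """Whittle's test statistic (3.5): n[j][k] are observed transition
    counts, P[j][k] the hypothesised transition probabilities."""
    phi = 0.0
    for j in ("G", "B"):
        n_j = sum(n[j].values())          # transitions out of j, (3.7)-(3.8)
        for k in ("G", "B"):
            if n[j][k] == 0:
                continue
            p_hat = n[j][k] / n_j         # empirical estimate, (3.6)
            phi += 2.0 * n[j][k] * log(p_hat / P[j][k])
    return phi

# Hypothetical counts consistent with e = 20, c = 100
# (so PGB = 0.01 and PBG = 0.05):
P = {"G": {"G": 0.99, "B": 0.01}, "B": {"G": 0.05, "B": 0.95}}
n = {"G": {"G": 9901, "B": 99}, "B": {"G": 100, "B": 1900}}
phi = whittle_statistic(n, P)
accept = phi < 5.991   # chi-squared critical value, 2 dof, 5% level
```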

3.3 Coding for the Gilbert{Elliott Channel

The Gilbert-Elliott channel [41, 42] is a time varying binary symmetric channel with crossover probabilities determined by a two-state Markov Chain model, as shown in Figure 3.3. In the good state G one binary symmetric channel BSCG is used (which has probability of error pG), while in the bad state B another binary symmetric channel BSCB is used (which has probability of error pB). The transition probabilities of the MC are as in the previous section. The Gilbert-Elliott channel is used to model many channels with memory. Usually interleaving is used to break up the memory of such channels. With a sufficiently large interleaver the channel can be considered memoryless, and coding techniques for memoryless channels can then be used. However, memory increases capacity [22], and the drawback of interleaving is that the capacity of the memoryless interleaved channel is less than that of the original channel. Therefore a coding scheme for a channel with


Figure 3.3: Gilbert-Elliott channel (a two-state Markov Chain with transition probabilities PGB, PBG, PGG, PBB selects between BSCG, with crossover probability pG, and BSCB, with crossover probability pB)

memory should use knowledge of this memory [23]. Mushkin and Bar-David [27], and Goldsmith and Varaiya [28], both calculate the capacity of the Gilbert-Elliott channel ([28] extends the work of [27] to include finite state Markov chains where the states are not necessarily binary symmetric channels). The capacity of the Gilbert-Elliott channel varies between two extremes, depending on how much information is known about the channel state. When no channel state information is available the capacity is the same as that of the interleaved memoryless channel. When perfect state information is available the capacity of the channel increases to the statistical average of the capacities of the individual channels associated with each Markov chain state.

3.3.1 Bounds for Convolutional Codes on Gilbert-Elliott Channels

Hagenauer [43] presents bound calculations for the performance of convolutional codes on Gilbert-Elliott channels. He does this for a number of cases, namely:

1. no channel state estimation,
2. exact channel state estimation,
3. erroneous channel state estimation.


Figure 3.4: Coding system for the Gilbert-Elliott channel (convolutional encoder, interleaver, modulator, Gilbert-Elliott channel, demodulator, deinterleaver, Viterbi decoder)

These calculations are reproduced below for convenience. Figure 3.4 shows the system considered in [43]. A rate k/n convolutional encoder with input u = (u1, u2, ..., uj, ...) and output x = (x1, x2, ..., xj, ...) is connected to a row interleaver. The uj and xj are equi-probable binary variables with values ±1. The interleaver matrix is filled with the sequence x row by row and the contents of the matrix are read column by column and transmitted over the Gilbert-Elliott channel. At the receiver soft decisions and channel state estimates are determined and deinterleaved to produce y = (y1, y2, ..., yj, ...) and â = (â1, â2, ..., âj, ...), respectively. The Viterbi decoder selects a path through the trellis which represents the code sequence x^(m) for which the probabilities

Pr(y, â | x^(m)) > Pr(y, â | x^(m′)),   for all m′ ≠ m.   (3.9)

The interleaver is assumed to be sufficiently large to ensure that the values yj and âj are statistically independent with respect to j. This means that (3.9) is equivalent to finding max_m

Σ_j log Pr(yj, âj | x^(m)_j),   (3.10)

where

Pr(y, â | x) = Σ_a P(a) P(â | a) P(y | a, x).   (3.11)

Now, for the Gilbert-Elliott channel the channel state a can have two possible values, B or G, as illustrated in Figure 3.3. The probability P(a) is the steady-state probability of being in either state:

P(a = G) = PBG / (PGB + PBG),   (3.12)
P(a = B) = PGB / (PGB + PBG).   (3.13)

The probability P(â | a) is the probability of the state estimator making an error, which is taken to be ε1 if a = B and ε2 if a = G. Finally, the probability P(y | a, x) is given by the transition probabilities of BSCB when a = B and BSCG when a = G (see Figure 3.3). The Viterbi
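The steady-state probabilities (3.12)-(3.13) can be sketched directly (illustrative code; the function name is invented):

```python
def steady_state(p_gb: float, p_bg: float):
    """Steady-state state probabilities of the two-state chain,
    per (3.12)-(3.13)."""
    p_good = p_bg / (p_gb + p_bg)   # P(a = G)
    p_bad = p_gb / (p_gb + p_bg)    # P(a = B)
    return p_good, p_bad

# E.g. bursts started rarely and left quickly: mostly in the good state.
pg, pb = steady_state(p_gb=0.01, p_bg=0.05)
```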


algorithm for the Gilbert-Elliott channel therefore consists of finding the path x^(m) which satisfies

max_m Σ_j x^(m)_j Lj   (3.14)

with the metric increment

Lj = yj log((1 − p̂G)/p̂G)   if âj = G,
Lj = yj log((1 − p̂B)/p̂B)   if âj = B,   (3.15)

where

p̂G = (PGB ε1 pB + PBG (1 − ε2) pG) / (PGB ε1 + PBG (1 − ε2))   (3.16)

is the average BER when the state estimator indicates â = G, and

p̂B = (PGB (1 − ε1) pB + PBG ε2 pG) / (PGB (1 − ε1) + PBG ε2)   (3.17)

is the average BER when the state estimator indicates â = B.
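The average BERs (3.16)-(3.17) and the metric increment (3.15) can be sketched as follows. This is illustrative code: the function names are invented, and the symbols ε1 and ε2 (the estimator error probabilities in states B and G) are reconstructed from context.

```python
from math import log

def average_bers(p_g, p_b, P_gb, P_bg, eps1, eps2):
    """Average BERs seen when the estimator reports G or B,
    per (3.16)-(3.17)."""
    num_g = P_gb * eps1 * p_b + P_bg * (1 - eps2) * p_g
    den_g = P_gb * eps1 + P_bg * (1 - eps2)
    num_b = P_gb * (1 - eps1) * p_b + P_bg * eps2 * p_g
    den_b = P_gb * (1 - eps1) + P_bg * eps2
    return num_g / den_g, num_b / den_b

def metric_increment(y, state_est, p_g_hat, p_b_hat):
    """Viterbi metric increment Lj of (3.15) for soft decision y."""
    p = p_g_hat if state_est == "G" else p_b_hat
    return y * log((1 - p) / p)

# With perfect state estimation (eps1 = eps2 = 0) the averages collapse
# to the per-state BERs:
pg_hat, pb_hat = average_bers(p_g=0.01, p_b=0.3, P_gb=0.01, P_bg=0.05,
                              eps1=0.0, eps2=0.0)
```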

Bounds for the event error and bit error probabilities can also be calculated. Let Pe be the probability that a wrong path x^(m′) is chosen instead of the correct path x^(m), which happens when

Σ_j x^(m′)_j Lj > Σ_j x^(m)_j Lj.   (3.18)

If both paths differ in only d positions then

Pd = Pr( Σ_{r=1}^{d} Lr < 0 ),   (3.19)

and the union bound on the event error probability is given by

Pe < Σ_{d=dfree}^{∞} ad Pd,   (3.20)

where dfree is the free distance of the code and ad is the multiplicity of paths with weight d. The bit error probability can then be bounded by

Pb < (1/k) Σ_{d=dfree}^{∞} cd Pd,   (3.21)

where cd = ad bd and bd is the average information weight of paths with weight d.


Looking first at the case of no channel state information,

Pd = Σ_{e=(d+1)/2}^{d} C(d, e) Po^e (1 − Po)^(d−e)   for d odd,
Pd = Pd−1   for d even,   (3.22)

where C(d, e) denotes the binomial coefficient and Po is the average channel error probability, which for the Gilbert-Elliott channel will be

Po = (PGB pB + PBG pG) / (PGB + PBG).   (3.23)

Now, if the channel state is known exactly then ε1 = ε2 = 0, p̂G = pG, and p̂B = pB. Amongst the d bits of any path there will be dB bits in a burst and dG = d − dB in a wait. Amongst the bits in the burst there will be eB erroneous bits and amongst the bits in the wait there will be eG erroneous bits. Therefore from (3.15)

Σ_{r=1}^{d} Lr = (dG − 2eG) log((1 − pG)/pG) + (dB − 2eB) log((1 − pB)/pB),   (3.24)

and from (3.19)

Pd = Pr((dG + m dB) < 2(eG + m eB)),   (3.25)

where m is the metric ratio

m = log((1 − pB)/pB) / log((1 − pG)/pG).   (3.26)

This leads to

Pd = Σ_{dB=0}^{d} C(d, dB) Pburst^dB (1 − Pburst)^(d−dB) [ Σ_{eB} C(dB, eB) pB^eB (1 − pB)^(dB−eB) ] [ Σ_{eG} C(dG, eG) pG^eG (1 − pG)^(dG−eG) ],   (3.27)

where Pburst = P(a = B) and the summation over eB and eG is restricted to those values of eB and eG which satisfy the inequality of (3.25). Finally, Pd can be determined for the case of a state estimator where ε1, ε2 > 0. This is done by replacing pB by p̂B, pG by p̂G, and Pburst by

P(â = B) = (PGB (1 − ε1) + PBG ε2) / (PGB + PBG)   (3.28)

in (3.26) and (3.27).
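The no-CSI pairwise error probability (3.22) and the union bounds (3.20)-(3.21) can be sketched as follows. This is illustrative code: the function names are invented and the truncated distance spectrum is made up purely for illustration, not taken from any code in the thesis.

```python
from math import comb

def pairwise_no_csi(d, p_o):
    """Pd of (3.22) for hard-decision decoding with average BER p_o."""
    if d % 2 == 0:
        return pairwise_no_csi(d - 1, p_o)   # even d: use the odd d - 1 value
    return sum(comb(d, e) * p_o**e * (1 - p_o)**(d - e)
               for e in range((d + 1) // 2, d + 1))

def union_bounds(spectrum, p_o, k=1):
    """Truncated event and bit error bounds (3.20)-(3.21) from a
    distance spectrum given as {d: (a_d, c_d)}."""
    p_e = sum(a * pairwise_no_csi(d, p_o) for d, (a, c) in spectrum.items())
    p_b = sum(c * pairwise_no_csi(d, p_o) for d, (a, c) in spectrum.items()) / k
    return p_e, p_b

# Truncated, made-up spectrum for a hypothetical d_free = 5 code:
spectrum = {5: (1, 1), 6: (2, 4), 7: (4, 12)}
p_e, p_b = union_bounds(spectrum, p_o=0.01)
```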


Figure 3.5: Decision-feedback decoder system (encoder, interleaver, Gilbert-Elliott channel, deinterleaver, and decision-feedback decoder producing x̂_{j,n} and û_{j,n}; channel index i corresponds to interleaver position {j, n})

3.3.2 Decision Feedback Decoding on Gilbert-Elliott Channels

The next problem of course is how to make these channel estimates in practice. In both [27] and [28], similar decision-feedback decoders are presented which perform this channel estimation. In [27] the binary Gilbert-Elliott channel is considered, so the state estimation performed is actually Pr(zi | z^(i−1)), where zi is the error on the channel at time i. In [28] the more general non-binary finite state Markov channel is considered, so this statistic is no longer valid. Rather, the state estimation performed is Pr(si | z^(i−1)), where si is the channel state at time i. For the Gilbert-Elliott channel these two statistics are equivalent.

A block diagram for a system using the decision-feedback decoder is shown in Figure 3.5. Note that a rectangular interleaver is still used for the channel; however, the interleaver is an invertible operation, so by the data processing theorem [44] it does not reduce the capacity. This means that it is not the interleaving which reduces the capacity of the channel, but the use of decoding algorithms which do not take advantage of knowledge of the memory of the channel. Bits are written into the J × N interleaver row by row and transmitted over the channel column by column. The reverse operation is performed in the deinterleaver. There are two interchangeable time indices in use in Figure 3.5: the i index is the time index of the channel, and the {j, n} pair refer to row j and column n of the interleaver and deinterleaver.

A block diagram of the decision-feedback decoder is shown in Figure 3.6. The decoder finds the maximum likelihood sequence of states on the channel. By operating on deinterleaved data the soft decision decoder which estimates the sequence of states can use the Viterbi algorithm. This is because the deinterleaver causes the symbols within any row of the deinterleaver to become independent as the number of rows becomes large. However, the symbols within any column are dependent, and this fact is used by the state estimator. The state estimator uses a recursive relationship within each column to estimate the state statistics. By using N bit delays the state estimator can maintain independent recursions for all columns within the deinterleaver (the delay of the maximum likelihood estimator is assumed to be N bits).
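The row-write/column-read interleaver of Figure 3.5 and its inverse can be sketched as follows (illustrative code; function names are invented):

```python
def interleave(bits, J, N):
    """Write a length J*N sequence into a J x N matrix row by row and
    read it out column by column, as in Figure 3.5."""
    assert len(bits) == J * N
    return [bits[j * N + n] for n in range(N) for j in range(J)]

def deinterleave(bits, J, N):
    """Inverse operation: channel index i maps back to row j, column n."""
    out = [None] * (J * N)
    for i, b in enumerate(bits):
        j, n = i % J, i // J
        out[j * N + n] = b
    return out

# Round trip: deinterleaving the interleaved sequence recovers it.
seq = list(range(12))
assert deinterleave(interleave(seq, J=3, N=4), J=3, N=4) == seq
```

Consecutive channel symbols (a burst) land in different rows after deinterleaving, which is why the outer decoder sees at most one bit of a burst amongst many wait bits.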


Figure 3.6: Decision-feedback decoder (the state estimator uses N bit delays and the recursion f(x̂, y, α) to supply metrics m(y, α) to the maximum-likelihood soft decision decoder)

For the Gilbert-Elliott channel the following recursive formulae are used [28]:

αi+1(G) = f(x̂i, yi, αi)
  = Pr(si+1 = G | x̂i, yi)
  = Pr(si = G | x̂i, yi) PGG + Pr(si = B | x̂i, yi) PBG
  = (pG αi(G) PGG + pB αi(B) PBG) / (pG αi(G) + pB αi(B))   if yi ≠ x̂i,
  = ((1 − pG) αi(G) PGG + (1 − pB) αi(B) PBG) / ((1 − pG) αi(G) + (1 − pB) αi(B))   if yi = x̂i,   (3.29)

and

αi+1(B) = f(x̂i, yi, αi)
  = Pr(si+1 = B | x̂i, yi)
  = Pr(si = G | x̂i, yi) PGB + Pr(si = B | x̂i, yi) PBB
  = (pG αi(G) PGB + pB αi(B) PBB) / (pG αi(G) + pB αi(B))   if yi ≠ x̂i,
  = ((1 − pG) αi(G) PGB + (1 − pB) αi(B) PBB) / ((1 − pG) αi(G) + (1 − pB) αi(B))   if yi = x̂i,   (3.30)

where αi(G) and αi(B) denote the estimated state probabilities at time i. The soft decision decoder shown in Figure 3.6 has branch metrics mj,n and finds the sequence x̂ which maximises Pr(y, α | x̂). The branch metric calculation is

m(yi, αi) = log Pr(yi, αi | xi),   (3.31)

where

Pr(yi, αi | xi) = (pG αi(G) + pB αi(B)) Pr(αi)   if yi ≠ xi,
Pr(yi, αi | xi) = ((1 − pG) αi(G) + (1 − pB) αi(B)) Pr(αi)   if yi = xi.   (3.32)
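One step of the state-probability recursion (3.29)-(3.30) can be sketched as follows. This is illustrative code: the function name is invented, and the symbol α for the pair of state probabilities is a reconstruction choice.

```python
def update_state_probs(alpha, x_hat, y, p_g, p_b, P):
    """One step of the recursion (3.29)-(3.30).
    alpha = (Pr(s_i = G | past), Pr(s_i = B | past))."""
    aG, aB = alpha
    if y != x_hat:                      # observation disagrees with decision
        wG, wB = p_g * aG, p_b * aB
    else:                               # observation agrees with decision
        wG, wB = (1 - p_g) * aG, (1 - p_b) * aB
    s = wG + wB
    postG, postB = wG / s, wB / s       # Pr(s_i = G or B | x_hat_i, y_i)
    return (postG * P["GG"] + postB * P["BG"],
            postG * P["GB"] + postB * P["BB"])

# Example: p_G = 0.0, p_B = 0.5 (the inner-decoder pseudo-channel), so a
# single disagreement forces all posterior probability onto the bad state.
P = {"GG": 0.99, "GB": 0.01, "BG": 0.05, "BB": 0.95}
alpha = (0.9, 0.1)
alpha = update_state_probs(alpha, x_hat=0, y=1, p_g=0.0, p_b=0.5, P=P)
```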


Figure 3.7: Serially concatenated convolutional code (outer encoder, interleaver, inner encoder, AWGN channel, inner Viterbi decoder, deinterleaver, outer Viterbi decoder; the chain from inner encoder to inner Viterbi decoder is modelled as a Gilbert-Elliott channel)

Note that Pr(αi) is a constant for each time i and can therefore be omitted from the metric calculation. In both [27] and [28] it is shown that, when no errors are made in x̂i, the decision-feedback decoder preserves the capacity of the Gilbert-Elliott channel. Error propagation is, however, very important, as will be shown in the following section.

3.4 Markov Chain Decoding

If a Viterbi Decoder can be modelled as a Gilbert-Elliott channel, and the performance of another Viterbi Decoder operating over a Gilbert-Elliott channel can be improved, then the next step is to combine the two concepts and apply them to a serially concatenated convolutional coding scheme. Here the inner Viterbi decoder is modelled as a two-state Markov Chain and the outer code is treated as if it is using a Gilbert-Elliott channel. This is illustrated in Figure 3.7. To illustrate this, consider the example of an inner rate 2/3 8PSK trellis code concatenated with an outer rate 1/2 convolutional code. Both codes have 16 states; the inner code is taken from [38, Table II] and the outer code is taken from [1, Table 11.1]. A 100 × 100 interleaver was used. Firstly a simulation of the inner code was performed to determine the parameters of the Gilbert-Elliott channel model. For each Eb/N0 value both the average burst length e and the average wait length c were measured (all Eb/N0 figures used in this example are for the concatenated rate 1/3 code). The Markov chain transition probabilities PBG, PBB, PGB, and PGG are then calculated from (3.1)-(3.4). These results are summarised in Table 3.1. The channel transition probabilities are pB = 0.5 and pG = 0.0 always.

Using these parameters, bounds on the BER of the concatenated rate 1/3 code can be calculated as described in Section 3.3. Figure 3.8 shows these bounds for the case of no


Eb/N0 (dB)   e (bits)   c (bits)
4.0          69.42      37.28
4.5          56.34      48.73
5.0          46.78      68.17
5.5          40.09      107.02
6.0          34.21      182.65
6.5          28.57      342.66
7.0          23.21      708.03
7.5          19.55      1589.45
8.0          16.43      4035.39

Table 3.1: 16 state rate 2/3 8PSK Gilbert-Elliott channel model parameters

channel state information, exact channel state information, and erroneous channel state information. The bounds with erroneous channel state information assume ε1 = ε2 = ε and are calculated for ε = 0.1, ε = 0.01, and ε = 0.001.

The results of these bound calculations for this example are very promising indeed. At a BER of 10^-6 there is over 1 dB of coding gain to be achieved simply by providing the outer decoder with an accurate estimate of the state of the inner decoder when it is modelled as a Gilbert-Elliott channel. Even when this state estimate contains 10% errors the improvement is still approximately 0.5 dB, and when the state estimate error drops to 1% nearly all of the gain is realised. The decision-feedback decoder uses a simple recursive algorithm, so this gain is achieved with only a small increase in complexity; the outer Viterbi decoder is still a hard-input algorithm.

To confirm these bound calculations, simulations of the concatenated system with no state estimation and exact state estimation were performed. In the no state estimation case decoding is performed as normal, i.e., Hamming distances are used as the branch metrics. For exact state estimation a genie-aided decoder is used. The genie-aided decoder determines the state of the Gilbert-Elliott channel by looking at both the input and the output of the channel. The branch metrics used in the genie-aided decoder are as in (3.15) with p̂G = pG and p̂B = pB. Now for the inner code pG = 0.0 and pB = 0.5, so the metric increments are ±∞ (±100 used in practice) whilst in the good state, and 0 whilst in the bad state. The results of these two simulations are shown in Figure 3.9, along with the corresponding bounds. This figure shows that the bound calculations are accurate.

Next the decision-feedback decoding algorithm was applied to the rate 1/3 example scheme. Note that when the decision-feedback decoder is used the outer decoder is a hard-in/hard-out decoder. When simulated it was found that this algorithm performed no better than the case where no state information is passed to the outer decoder. This was


Figure 3.8: BER bounds for rate 1/3 concatenated example (no CSI, erroneous CSI with ε = 0.1, 0.01, and 0.001, and exact CSI; Bit Error Rate from 10^-2 down to 10^-10 versus Eb/N0 from 5 to 8 dB)


Figure 3.9: BER simulations for rate 1/3 concatenated example (no CSI and exact CSI, simulations and bounds; Bit Error Rate from 1 down to 10^-6 versus Eb/N0 from 4 to 8 dB)


not because the state estimator was failing to estimate the state which the inner decoder was in (it was in fact making good estimates), but because the two-state maximum likelihood state sequence estimator (the second half of Figure 3.6) was always outputting x̂ = y. This is because of the extreme nature of the model used, i.e., if the pseudo-channel is in the good state then pG = 0.0 and x̂ = y is the best decision, and if the pseudo-channel is in the bad state then pB = 0.5 and x̂ = y is as good a decision as any. Thus the algorithm performs no better than concatenated decoding with no state estimation, where y is input directly to the outer decoder. As the state estimator section of the decision-feedback decoder operated correctly, the next approach was to use the state probabilities directly in the outer decoder (now a soft-in/hard-out decoder). Here the branch metrics of the outer decoder become

L = α(G) + 0.5 α(B)   if y = x,
L = 0.5 α(B)          if y ≠ x.   (3.33)
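A sketch of the branch metric (3.33); the numbers illustrate why a bit confidently marked as being in a burst behaves like an erasure (illustrative code; the function name is invented):

```python
def outer_branch_metric(y, x, alpha):
    """Branch metric (3.33) for the outer decoder; alpha = (a_G, a_B)
    are the estimated state probabilities for this deinterleaved bit."""
    aG, aB = alpha
    return aG + 0.5 * aB if y == x else 0.5 * aB

# A bit marked as almost certainly in a burst acts like an erasure:
# agreeing and disagreeing branches get nearly the same metric.
agree = outer_branch_metric(1, 1, (0.02, 0.98))
differ = outer_branch_metric(1, 0, (0.02, 0.98))
```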

This seems a promising approach, as the cumulative state metrics in the outer decoder should become dominated by the terms arising during waits (recall that the outer decoder is still operating on deinterleaved data, so it will see one bit from a burst in the middle of many bits from waits). However, there was again no improvement over straightforward concatenated decoding. This surprising result is because the Viterbi algorithm is very sensitive to missing the start of a burst, i.e., if the first bit in a burst is erroneously marked as having a high probability of being in a wait then the concatenated scheme with state estimation will work no better than a concatenated scheme with no state estimation. This of course is unavoidable with a decision-feedback approach. To check this, the genie-aided simulation used in the example above was modified to always make an error in the state estimation of the first bit in every burst. The result of this simulation was that, with an error only in the state estimation of the first bit of every burst, the concatenated scheme with state estimation performs no better than the scheme with no state estimation. The reason for this sensitivity is that the decision-feedback algorithm always misses the start of a burst, and a burst always commences with an error. Thus the first bit in a burst will be incorrectly marked with a high α(G) and subsequent bits in the burst, which are separated by the deinterleaver, will be correctly marked with a high α(B) value. The high α(B) value in the branch metrics (3.33) of the outer decoder causes the latter bits in a burst to be treated somewhat like an erasure, leading to the observed coding gain when all bits in the burst are correctly marked. However, when the first bit in a burst is incorrectly marked as being in a wait, i.e., the branch metric indicates that there is a


Figure 3.10: ACS choices when start of burst is missed (partial trellis diagram from time i−2 to i+2)

very high probability that y = x when in fact y ≠ x, the outer Viterbi decoder is forced to take the wrong branch. This is illustrated graphically in the partial trellis diagram of Figure 3.10. The correct path is assumed to be the all-zeroes path and the first bit in the (deinterleaved) burst occurs at time i. Solid lines indicate the branches with high probability which are chosen by the add-compare-select (ACS) algorithm, and dashed lines indicate the branches with low probability which are rejected by the ACS algorithm. Clearly when the traceback algorithm is performed the wrong path will be taken at time i, resulting in an error with Hamming weight at least equal to dfree of the outer code. This problem does not occur with the other bits in the burst because the branches are correctly labelled with the probabilities. A method must therefore be found to make channel state estimates without the delay incurred by decision feedback. The strongest candidate for this task is to modify the inner Viterbi algorithm to produce channel state estimates along with hard decisions. This could be achieved in a similar fashion to the SOVA algorithm [7]. However, this would require similar complexity to the SOVA algorithm, and as such the SOVA algorithm may as well be used to supply soft inputs directly to the outer decoder.

3.5 Summary

In this chapter the similarities between the Viterbi decoder for the inner code of a concatenated coding scheme and the Gilbert-Elliott channel were noted. A new test was developed which showed that the Viterbi decoder for a TCM scheme can be modelled as a two-state Markov Chain. This model was then used to modify a decision-feedback decoder, previously proposed for the Gilbert-Elliott channel, to produce a low complexity decoder for coding schemes with an inner TCM code and an outer convolutional code. Unfortunately, the performance improvement of the decision-feedback decoder promised by bound calculations and genie-aided simulations has not been realisable. This does


not preclude serially concatenated convolutional codes from further investigation; however, they cannot be decoded with the reduced complexity algorithm investigated above. Instead some form of soft-output decoding algorithm must be used to allow delay-free detection of the start of each inner decoder error event. Algorithms such as MAP and SOVA can be used in this case, but then the algorithm will have the same complexity as a single iteration of the decoding algorithm for SCCC (also known as "serial Turbo codes"). Recent extensions of work on parallel concatenated convolutional codes ("turbo" codes) to the serial case have shown great promise. Here the inner and outer convolutional codes are separated by a random interleaver (very similar to the scramblers with memory investigated in Chapter 2) and decoded with iterated MAP or SOVA decoders. These serial "turbo" codes achieve a significant performance improvement over the techniques investigated so far and will be examined in the remaining chapters.


Chapter 4 Iterative Decoding of SCCC

The performance of serially concatenated convolutional codes can be improved by using a soft-output decoding algorithm, such as MAP or SOVA, to decode the inner code. This is illustrated in Figure 4.1. The interleaver shown in this figure is usually viewed as a device to break up the error bursts produced by the inner decoder and to transform the outer channel into a memoryless channel [8]; it is usually a rectangular interleaver. However, significant further improvements in performance can be made if the outer decoder also uses a soft-output decoding algorithm and the entire decoding process is iterated [45]. This is illustrated in Figure 4.2. Better performance is achieved if the interleaver in this figure is a random interleaver, which is now viewed as a component of the code structure. These iteratively decoded serially concatenated convolutional codes have received attention recently as the serial analogues of "Turbo Codes", which are iteratively decoded parallel concatenated convolutional codes [15-18]. The random interleaver is an essential

[Figure: Outer Encoder → Interleaver → Inner Encoder → Channel → Soft Output Decoder → Deinterleaver → Hard Output Decoder]

Figure 4.1: Soft-output decoded serially concatenated convolutional codes


[Figure: Outer Encoder → Interleaver → Inner Encoder → Channel → Soft Output Decoder → Deinterleaver → Soft Output Decoder → Hard Decision, with an interleaver in the feedback path of the iterated decoder]

Figure 4.2: Iteratively decoded serially concatenated convolutional codes

feature of both the serial and parallel forms of these codes. The encoder for these codes should not be viewed as the concatenation of two independent constituent codes separated by a memory-destroying interleaver, but as a single encoder. However, the decoding of these codes is performed by iteratively decoding the constituent codes separately, and, while sub-optimal, this decoding technique achieves near-optimal performance. This chapter commences with a brief introduction to both parallel and serial concatenated convolutional codes. This is followed in Section 4.2 with an outline of the bounds calculations developed in [15, 18]. In Section 4.3 these bounds are used to develop improved search criteria for binary codes, and finally, in Section 4.4, these new criteria are applied to a code search. In the remaining chapters the notation used by the authors of [15, 18] will be adopted: iteratively decoded serially concatenated convolutional codes will be referred to as SCCC; and iteratively decoded parallel concatenated convolutional codes ("Turbo Codes") will be referred to as PCCC.

4.1 SCCC and PCCC Background

4.1.1 PCCC Encoder Structure

The encoder for a parallel concatenated convolutional code is depicted in Figure 4.3. Both encoders are systematic convolutional codes; encoder 1 has rate k/n_1 and encoder 2 has rate k/n_2. The same information is encoded by both encoders, but the information bits are interleaved by the bit-wise random interleaver prior to the second encoder. This interleaver applies an N bit random permutation to the sequence of information bits coming into the second encoder. There is a corresponding deinterleaver in the decoder, and


[Figure: k information bits feed Encoder 1 (producing n_1 − k parity bits) directly, and Encoder 2 (producing n_2 − k parity bits) via the bit-wise interleaver]

Figure 4.3: PCCC encoder block diagram

both the interleaver and deinterleaver must be synchronised. This means that some form of framing information must also be transmitted. The most straight-forward implementation of a PCCC is with frames of information bits equal in size to the interleaver, and the PCCC can be analysed as a block code with a code word of size N bits [12]. However, it is also possible to analyse [12] and decode [13] the convolutional code in a continuous fashion. As both encoders are systematic it is not necessary to transmit the information bits twice. Therefore only the n_2 − k parity bits of the second encoder are transmitted. This results in an overall PCCC code rate of k/(n_1 + n_2 − k).

The interleaver is an essential feature of the PCCC, and the performance of the PCCC improves with increasing N. However, as the interleaver is random, the transmission of the codeword cannot commence until the interleaver has been filled, and decoding cannot be completed until the entire codeword is received. This results in an overall latency of 2N bits (plus propagation delay and decoding delay). This latency means that in delay sensitive applications, such as voice transmissions, a trade-off must be made between delay and performance. If the PCCC is used as a block code then the possibility exists of terminating the trellises of constituent encoders to improve performance [46]. However, due to the presence of the interleaver, the constituent codes must be terminated with different sequences. A method of constraining the interleaver is given in [46] that ensures that the two constituent codes terminate in the same state, but this constraint may reduce the effectiveness of the interleaver, possibly resulting in worse performance than the unterminated case. Unterminated trellises will be used throughout this dissertation. The interleaver must be random, but for simplicity of implementation the same interleaver


will be used for each block. A pseudo-random process must be used to generate the N bit permutation, but this may result in a poor interleaver. An interleaver must be both random and separate adjacent input bits by as much as possible [47]. In [47] an algorithm is presented which generates random interleavers which separate bits that are adjacent on their inputs by at least S bits on their outputs (S is chosen to be as large as possible given the interleaver size). These are referred to as S-type random interleavers and will be used throughout this dissertation. This algorithm is not optimal and may not always, particularly for small interleavers, produce interleavers with the best performance. As such it is advisable to generate a number of interleavers using this method and choose the one which gives the best performance. There are many possible variations to the basic PCCC code structure. One possible variation is to have more than two constituent codes in a PCCC [48]. An example of this is shown in Figure 4.4, where three constituent codes are used. Note that there are now two interleavers (the input to encoder 1 could also be interleaved, but it is the relative permutation between the constituent encoder inputs which is important). Also, the systematic outputs of all but one of the constituent encoders are punctured, giving an overall rate of k/(n_1 + n_2 + n_3 − 2k). The first PCCCs proposed were very power efficient but had poor bandwidth efficiency, and other variations have concentrated on improving this. The encoder structure of PCCC lends itself readily to puncturing, and bandwidth efficiency can be improved by selectively puncturing parity bits [49]. As another means of improving bandwidth efficiency there have been many trellis coded schemes proposed for PCCC [29]. Approaches taken have ranged from pragmatic trellis coding, through multi-level coding, to classic Ungerboeck TCM approaches.
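The greedy S-random construction described above can be sketched as follows. This is a hypothetical helper, not the exact algorithm of [47]; the parameter choices in the usage line are illustrative.

```python
import random

def s_random_interleaver(n, s, max_attempts=100):
    """Greedy search for an S-random permutation of length n.

    The input indices mapped to any s consecutive output positions are
    forced to differ by more than s, so bits that are adjacent at the
    input are spread apart at the output.  A usable spread is roughly
    s <= sqrt(n/2).  Returns None if no permutation is found.
    """
    for _ in range(max_attempts):
        candidates = list(range(n))
        random.shuffle(candidates)
        perm = []
        while candidates:
            for i, c in enumerate(candidates):
                # compare against the s most recently accepted indices
                if all(abs(c - p) > s for p in perm[-s:]):
                    perm.append(candidates.pop(i))
                    break
            else:
                break  # dead end -- restart from a fresh shuffle
        if len(perm) == n:
            return perm
    return None

random.seed(1)
perm = s_random_interleaver(64, 3)
```

As the text suggests, in practice several interleavers would be generated this way and the one giving the best simulated performance kept.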

As far as the constituent codes of the PCCC are concerned it has been found that the best performance is achieved if these codes are systematic recursive convolutional codes (SRCC). This is because the performance of the PCCC is improved if the "effective free distance" is maximised, and SRCCs have a higher effective free distance than the more commonly used non-recursive convolutional codes [12]. The effective free distance of a convolutional code is the minimum Hamming weight of all codewords with minimum information Hamming weight. Most SRCCs have a minimum information Hamming weight of 2, whereas non-recursive codes have a minimum information Hamming weight of 1.
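The recursive/non-recursive distinction can be seen by simulating the impulse responses of two small rate-1/2 encoders. The generators below, (1, (1+D^2)/(1+D+D^2)) and 1+D+D^2, are illustrative choices for this sketch, not codes taken from this dissertation.

```python
def rsc_parity(u):
    """Parity stream of a 4-state systematic recursive encoder with
    generators (1, (1+D^2)/(1+D+D^2)) -- an illustrative SRCC."""
    s1 = s2 = 0
    out = []
    for bit in u:
        a = bit ^ s1 ^ s2       # feedback taps 1 + D + D^2
        out.append(a ^ s2)      # feedforward taps 1 + D^2
        s2, s1 = s1, a
    return out

def ff_parity(u):
    """Parity stream of a non-recursive encoder with generator 1 + D + D^2."""
    s1 = s2 = 0
    out = []
    for bit in u:
        out.append(bit ^ s1 ^ s2)
        s2, s1 = s1, bit
    return out

# A weight-1 input gives a finite-weight response for the feedforward
# encoder, but the recursive encoder's response never dies out; a
# weight-2 input (here 1 + D^3) is needed to drive it back to state zero.
impulse = [1] + [0] * 29
print(sum(ff_parity(impulse)), sum(rsc_parity(impulse)))   # 3 21
print(sum(rsc_parity([1, 0, 0, 1] + [0] * 26)))            # 4
```

This is why the minimum information Hamming weight of an SRCC is 2: any weight-1 input produces an unbounded-weight (non-terminating) codeword.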

4.1.2 PCCC Decoder Structure

The maximum likelihood decoding of a PCCC is prohibitively complex; for example the PCCC of [11] is equivalent to a time-varying convolutional code with 2^1030 states [12].


[Figure: k information bits feed Encoder 1 (n_1 − k parity bits) directly, Encoder 2 (n_2 − k parity bits) via interleaver 1, and Encoder 3 (n_3 − k parity bits) via interleaver 2]

Figure 4.4: PCCC encoder with three constituent codes

The key to the success of PCCC is the existence of a sub-optimal decoding algorithm which yields near-optimal results. In this algorithm the constituent codes are decoded separately with soft-output decoding algorithms. As the encoders share input sequences, albeit interleaved, the soft output from one decoder can be connected to the soft input of another decoder via an interleaver. Maximum performance is then achieved by iterating the process. This is illustrated in Figure 4.5 for a PCCC with two constituent encoders. Note that all of the signals shown in this figure are soft decisions, and that the interleavers and deinterleavers must now permute soft decisions. The parallel concatenated code structure can be used for any constituent codes, block or convolutional, provided that there exists a soft decoding algorithm for the constituent code. A number of parallel concatenated block codes (PCBC) have been proposed [12], but the existence of efficient soft output decoding algorithms for convolutional codes means that PCCCs dominate. The two most common decoding algorithms are the maximum a posteriori (MAP) algorithm [6] and the soft output Viterbi algorithm (SOVA) [7]. The MAP algorithm gives the best performance, but the SOVA algorithm provides reduced complexity. There are two forms of the MAP algorithm in common use for decoding PCCC: the Bahl, Cocke, Jelinek, and Raviv (BCJR) algorithm [50], so named after the originators of the algorithm, and the soft-input, soft-output (SISO) algorithm [13, 14]. The BCJR algorithm first appeared in 1972 in [50] and was used in the original PCCC paper [9], and in many PCCC schemes since. The SISO algorithm is essentially the same as the BCJR algorithm but has a more elegant form and greater flexibility (e.g., it


[Figure: the demodulator output feeds Decoder 1 and, via the interleaver, Decoder 2; the decoders exchange soft decisions through the interleaver and deinterleaver, and a hard decision on the final soft output gives the decoded bit]

Figure 4.5: PCCC decoder block diagram

can decode codes with parallel branches); as such it is the algorithm used throughout this dissertation. Both the BCJR and the SISO algorithms can be implemented in both multiplicative and additive forms. The multiplicative forms are more suited to programmable DSP architectures and the additive forms are more suited to hardware implementations, where multiplications require greater complexity. The description of the SISO algorithm given below will be for the multiplicative form.

4.1.3 SCCC Encoder Structure

The encoder for a serially concatenated convolutional code is depicted in Figure 4.6. Both encoders are convolutional codes; the outer encoder has rate k_o/n_o and the inner encoder has rate k_i/n_i. The information bits are encoded by the outer encoder and the resulting code bits are interleaved by the bit-wise random interleaver and become the information bits of the inner encoder. This interleaver applies an N bit random permutation to the sequence of code bits coming from the outer encoder. There is a corresponding deinterleaver in the decoder, and both the interleaver and deinterleaver must be synchronised. This means that, as in the case of PCCC, some form of framing information must also be transmitted. The most straight-forward implementation of a SCCC is with frames of information bits which, after encoding by the outer encoder, are equal in size to the interleaver. As for PCCCs the SCCC can be analysed as a block code, with a code word with Nk_o/n_o information bits and Nn_i/k_i code bits [18], and again it


[Figure: k_o information bits → Outer Encoder → n_o code bits → interleaver → k_i bits → Inner Encoder → n_i code bits]

Figure 4.6: SCCC encoder block diagram

is also possible to analyse [18] and decode [13] the convolutional code in a continuous fashion. The overall SCCC code rate is (k_i k_o)/(n_i n_o). It is usual, although not essential, to have n_o = k_i, resulting in an overall code rate of k_o/n_i. The interleaver is an essential feature of the SCCC, and the performance of the SCCC improves with increasing N. However, as the interleaver is random, the transmission of the codeword cannot commence until the interleaver has been filled, and decoding cannot be completed until the entire codeword is received. The PCCC encoding latency was N information bits, but the SCCC encoding latency is Nk_o/n_o information bits, due to the action of the outer code. This results in an overall latency of 2Nk_o/n_o information bits (plus propagation delay and decoding delay). This difference in latency between PCCC and SCCC means that the two schemes should be compared not with identical interleaver sizes, but with identical latency. As with PCCC this latency means that in delay sensitive applications, such as voice transmissions, a trade-off must be made between delay and performance. If the SCCC is used as a block code then the possibility again exists of terminating the trellises of the constituent encoders to improve performance. Now the outer code trellis must be terminated, and the resulting code bits encoded by the inner code before it in turn is terminated. This results in an overhead which, for large interleavers, does not result in a measurable improvement in performance. Only large interleavers (with a latency of 10000 information bits) are considered in this dissertation; as such unterminated trellises are used throughout. As with PCCC, S-type random interleavers give best performance with SCCC (see Section 4.1.1). As such S-type random interleavers will be used throughout this dissertation. The SCCC encoder structure is perhaps more easily adapted for TCM operation.
In a parallel concatenated trellis coded modulation (PCTCM) scheme the outputs of two or more constituent encoders must be input to the signal set mapper [30], complicating the design and analysis of the resulting TCM scheme. However, in the case of serially concatenated trellis coded modulation (SCTCM) only the output of the inner encoder is input to the signal set mapper. Thus the traditional methods of design and analysis can be


[Figure: k_p bits enter the systematic Parallel Encoder (n_p − k_p parity bits) via interleaver p, while k_o bits enter the Outer Encoder (n_o code bits), which feeds the Inner Encoder (n_i code bits) via interleaver s]

Figure 4.7: HCCC encoder block diagram

readily adapted to SCTCM schemes. A recent variation on the basic PCCC and SCCC is the hybridisation of the PCCC and SCCC encoder structures, referred to as hybrid concatenated convolutional codes (HCCC) [14, 51]. In these schemes a SCCC (or PCCC) scheme is parallel (or serially) concatenated with another scheme. An example of this is shown in Figure 4.7, where two constituent encoders are serially concatenated, and the result is parallel concatenated with a third constituent encoder. All encoders are convolutional codes; the outer encoder has rate k_o/n_o, the inner encoder has rate k_i/n_i, and the parallel encoder has rate k_p/n_p. The inner encoder and the parallel encoder are systematic, and the systematic output bits of the parallel encoder are punctured. There are two interleavers, s and p, of size N_s and N_p bits, where N_s = N_p n_o/k_o. If n_o = k_i and k_p = k_o then the overall rate of the HCCC is k_o/(n_i + n_p − k_o). In [15, 18] it is concluded that the inner constituent code of a SCCC scheme should be a SRCC and that the outer constituent code should be a traditional convolutional code with maximum free distance. It is also concluded that the effective free distance of the inner code should be maximised, and as such the constituent codes found for PCCC can be used as the inner codes of a SCCC scheme. However, as will be shown in Section 4.3, tighter design criteria can be used, resulting in the new codes of Section 4.4.

4.1.4 SCCC Decoder Structure

The near-optimal iterative decoding structure for PCCC schemes can be readily adapted for the decoding of SCCC schemes. The decoder structure for the SCCC encoder of Figure 4.6 is shown in Figure 4.8. Here the soft outputs of the inner code bits from the


[Figure: demodulator output → Inner Decoder → deinterleaver → Outer Decoder → hard decision → decoded bit, with the outer decoder's soft outputs returned to the inner decoder through the interleaver]

Figure 4.8: SCCC decoder block diagram

demodulator are decoded to produce soft decisions of the inner information bits. These are, after deinterleaving, also the soft decisions of the outer code bits. These are in turn decoded to produce improved soft decisions of the outer code bits, which, when interleaved, are improved soft decisions of the inner information bits. This process is iterated a number of times, before finally the outer decoder produces a hard decision of the outer information bits. Block codes can also be used as the constituent encoders, resulting in serially concatenated block codes (SCBC) [15, 18]. However, the sparsity of good soft input, soft output decoding algorithms for block codes means that SCCC dominates. Either MAP or SOVA algorithms can be used as the soft output decoding algorithms for SCCC; SOVA algorithms should be used where reduced complexity is desired. The SISO algorithm [13, 14, 17] is a MAP algorithm developed to allow the decoding of SCCC. (The SISO algorithm has soft information bit and code bit outputs, whereas the BCJR algorithm only has soft information bit outputs.) The SISO algorithm will be used throughout this dissertation. The SISO algorithm can be implemented in both multiplicative and additive forms. The multiplicative form is more suited to programmable DSP architectures and the additive form is more suited to hardware implementations, where multiplications are more difficult. The description of the SISO algorithm given below will be for the multiplicative form.
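The decoding flow just described can be sketched as a loop around two soft-in/soft-out modules. The `inner_siso`/`outer_siso` interfaces below are hypothetical placeholders for the SISO modules described in the next section; the indexing convention for the interleaver is an assumption of this sketch.

```python
import numpy as np

def sccc_decode(chan_probs, inner_siso, outer_siso, perm, n_iters=8):
    """Iteration skeleton for the SCCC decoder of Figure 4.8.

    Hypothetical interfaces: inner_siso(prior_u, chan_probs) returns
    updated inner information-bit probabilities (shape (N, 2)), and
    outer_siso(code_probs) returns (updated code-bit probabilities,
    information-bit probabilities).  perm[t] is the outer code-bit
    index carried at inner position t.
    """
    inv = np.argsort(perm)                       # deinterleaver
    prior = np.full((len(perm), 2), 0.5)         # iteration 1: nothing known
    for _ in range(n_iters):
        post = inner_siso(prior, chan_probs)     # soft inner info bits
        c_out, u_out = outer_siso(post[inv])     # deinterleave, decode outer
        prior = c_out[perm]                      # interleave back as new prior
    return (u_out[:, 1] > u_out[:, 0]).astype(int)  # final hard decision
```

With trivial pass-through modules this reduces to a hard decision on the deinterleaved channel probabilities, which is a convenient structural sanity check.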

4.1.5 SISO Module

The SISO algorithm is a flexible form of the MAP algorithm which can be used with a general trellis code. It is suitable for both PCCC and SCCC, and in particular it can cope with parallel branches in the trellis. The SISO algorithm is presented in [13, 14, 17]


and is reproduced here in its multiplicative form. The basic unit of the algorithm is the SISO module, shown in Figure 4.9. This is a four port device, with two inputs and two outputs, which implements the SISO algorithm. The module's inputs are the probabilities of the information symbols P(u; I) and code symbols P(c; I) labelling the edges of the code trellis, and the outputs are updates of these probabilities based upon knowledge of the trellis, P(u; O) and P(c; O), respectively. Figure 4.10 shows the use of the SISO module in a PCCC decoder. The inputs to the first SISO module are the probabilities of the code symbols and information symbols for the first code. The code symbol probabilities are similar to the branch metrics used in a Viterbi decoder and are determined from the demodulated signal amplitude and an estimate of the noise variance. The information symbol probabilities for the first code are the deinterleaved updated information symbol probabilities of the second code from the previous iteration (during the first iteration nothing is known about these probabilities, so the a priori distribution should be used). Only the updated probabilities of the information symbols of the first code are used; the updated code symbol probabilities are ignored. The updated probabilities of the first code's information symbols are interleaved and become the input to the second SISO module, along with the second code's code symbol probabilities, again determined from the output of the demodulator and an estimate of the noise variance (note that only the parity bits from the second encoder are transmitted, so the probabilities of the punctured systematic bits are all 0.5). Again the code symbol output is not used, and the probabilities of the information symbols from the second code are deinterleaved to become the input to the first SISO module during the next iteration.
After all iterations are complete the probabilities of the information symbols from both codes can be combined by multiplying the probabilities. A final decision can then be made by choosing the information symbols with the highest probabilities. Figure 4.11 shows the use of the SISO module in a SCCC decoder. As only the inner encoder is connected to the channel, the output of the demodulator along with an estimate of the noise variance is used solely to calculate the probabilities of the code symbols for the inner code. These are input to the inner SISO module, along with the interleaved code symbol probabilities of the outer code from the previous iteration (because outer code symbols are inner information symbols). Only the updated inner code information symbol probabilities are used, which are deinterleaved to become the outer code's code symbol probabilities. As the outer code information symbols are not transmitted nothing is known about them and they are assumed to be equi-probable, i.e., all P(u; I) of the outer SISO module can be set to 1.0. The outer code's code symbol probabilities are interleaved and input to the inner code SISO module during the next iteration, and, if it is the final iteration, the


[Figure: P(c; I) and P(u; I) enter the SISO Module; P(c; O) and P(u; O) leave it]

Figure 4.9: The SISO module

[Figure: the demodulator outputs feed SISO 1 and SISO 2; the P(u; O) output of each module passes through the interleaver (deinterleaver) to the P(u; I) input of the other, the P(c; O) outputs are not used, and a hard decision gives the decoded bit]

Figure 4.10: PCCC decoder block diagram showing SISO modules

updated outer code information symbol probabilities are used to make the final decision. Now the operation of the SISO module will be described. Firstly the notation used, including that of the encoder and the trellis, will be defined. Then the symbol-wise SISO algorithm will be presented. Finally, the more useful bit-wise form of the algorithm will be discussed.

SISO Algorithm Notation

The trellis encoder, as shown in Figure 4.12, has input U and output C. U = (U_t)_{t∈T} is the sequence of input, or uncoded, symbols, defined over the time index set T (a sequence s is denoted (s)). For the case of a framed PCCC or SCCC, T will be finite and defined by the block size. U_t is a k bit symbol, so each U_t will take on one of 2^k values from

    U = {u_1, ..., u_{2^k}}.                                      (4.1)

Associated with the sequence of input symbols is a sequence of a priori probabilities:

    P(u; I) = (P_t(u_t; I))_{t∈T},                                (4.2)

where I indicates the input to the module, and

    P_t(u_t; I) = Pr[U_t = u_t].                                  (4.3)


[Figure: the demodulator output feeds the Inner SISO; its P(u; O) output is deinterleaved into the Outer SISO, whose P(c; O) output is interleaved back to the Inner SISO; the outer P(u; I) inputs are fixed at 1.0 and a hard decision gives the decoded bit]

Figure 4.11: SCCC decoder block diagram showing SISO modules

Likewise, C = (C_t)_{t∈T} is the sequence of output, or coded, symbols, defined for the same time index set T. C_t is an n bit symbol, so each C_t will take on one of 2^n values from

    C = {c_1, ..., c_{2^n}}.                                      (4.4)

Also associated with the sequence of output symbols is a sequence of a priori probabilities:

    P(c; I) = (P_t(c_t; I))_{t∈T},                                (4.5)

where

    P_t(c_t; I) = Pr[C_t = c_t].                                  (4.6)

To simplify the notation P_t(u_t; I) will be denoted as P_t(u; I), and P_t(c_t; I) as P_t(c; I). The SISO algorithm can work with a time-varying trellis, although the description given below will be for the time-invariant case. A time-invariant convolutional code is completely specified by a single trellis section, say between time t and t + 1. The trellis section describes the transitions between the states at these two time instants. The trellis of a convolutional code with constraint length ν has 2^ν states. The state of the trellis at time t will be S_t = s, with s ∈ {s_1, ..., s_{2^ν}}. Each of these states will have 2^k branches, and the branch taken at time t will depend on the input symbol u_t. Associated with this branch will also be the output symbol c_t, which will depend on s_t and u_t. There will therefore be 2^ν 2^k edges in the trellis section, which represent all possible transitions between trellis sections. Each edge e therefore has the following functions associated with it:

the starting state s^S(e);
the ending state s^E(e);
the input symbol u(e); and
the output symbol c(e).

This is depicted in Figure 4.13.

[Figure: u (k bits) → Trellis Encoder → c (n bits)]

Figure 4.12: The trellis encoder

[Figure: edge e from starting state s^S(e) to ending state s^E(e), labelled with u(e), c(e)]

Figure 4.13: An edge of the trellis section

Symbol-wise SISO Algorithm

The SISO module, as shown in Figure 4.9, is a four port device. The module's inputs are the sequences of probabilities P(c; I) and P(u; I). Based on these inputs, and knowledge of the trellis, the module computes the output probabilities, P(c; O) and P(u; O). In the finite time case considered here all the c and u sequences of probabilities will have T elements. At each time t each u will consist of 2^k probabilities, one for each possible symbol, and each c will consist of 2^n probabilities, also one for each possible symbol. For the binary case the input probabilities can be written as

    P_t(u; I) = ∏_{j=1}^{k} P_t(u^j; I)                           (4.7)

and

    P_t(c; I) = ∏_{j=1}^{n} P_t(c^j; I),                          (4.8)

where u^j and c^j are bit j of symbols u and c, respectively. If the output of the encoder is connected to a nonbinary-input channel then (4.8) should not be used; rather P_t(c; I) should be calculated directly (see Section 5.5). At time t the output probabilities can be computed from the recursive form of the MAP algorithm:

    P_t(c; O) = H_t ∑_{e: c(e)=c} A_{t−1}[s^S(e)] P_t[u(e); I] P_t[c(e); I] B_t[s^E(e)]        (4.9)

and

    P_t(u; O) = J_t ∑_{e: u(e)=u} A_{t−1}[s^S(e)] P_t[u(e); I] P_t[c(e); I] B_t[s^E(e)],       (4.10)

where H_t and J_t are normalisation constants chosen at each time t to ensure that

    ∑_c P_t(c; O) = 1                                             (4.11)

and

    ∑_u P_t(u; O) = 1,                                            (4.12)

respectively. The quantities A_t(s) and B_t(s) are state metrics and can be obtained through the forward and backward recursions, respectively, using

    A_t(s) = L_t ∑_{e: s^E(e)=s} A_{t−1}[s^S(e)] P_t[u(e); I] P_t[c(e); I]   for t = 1, ..., T   (4.13)

and

    B_{t−1}(s) = M_{t−1} ∑_{e: s^S(e)=s} B_t[s^E(e)] P_t[u(e); I] P_t[c(e); I]   for t = T, ..., 1.   (4.14)


The trellis always starts in the all-zero state, so the forward recursion has the initial values

    A_0(s) = { 1 if s = 0,
             { 0 otherwise.                                       (4.15)

If the trellis is terminated in a known state s_T at t = T then the backward recursion will have the initial values

    B_T(s) = { 1 if s = s_T,
             { 0 otherwise;                                       (4.16)

else the backward recursion can be initialised with the final results of the forward recursion, and

    B_T(s) = A_T(s), for all s.                                   (4.17)

For each t there will be 2^ν A_t(s) and B_t(s) values, one for each state. As the backward recursion cannot commence until the entire block is received, all the A_t(s) values must be stored. Also, in practice, due to the limited precision used in performing these calculations, the A_t(s) and B_t(s) values will underflow very quickly. As such the forward and backward recursions should be renormalised on a regular basis. Note that as long as all A_t(s) or B_t(s) are scaled by the same value at any time t then this scale factor simply appears as a constant outside the summation in (4.9) and (4.10), and will not affect the result. As such it is sufficient to use the normalisation constants

    L_t = 1 / max_s A_{t−1}(s)                                    (4.18)

and

    M_{t−1} = 1 / max_s B_t(s),                                   (4.19)

or even more simply to use the exponent of the maximum value as the scaling value, i.e.,

    L_t = exp(−⌊log max_s A_{t−1}(s)⌋)                            (4.20)

and

    M_{t−1} = exp(−⌊log max_s B_t(s)⌋),                           (4.21)

reducing the complexity of the algorithm. As the final decision on u_t will be made by selecting the u_t with the highest probability, it is not necessary to normalise (4.9) and (4.10) (if (4.13) and (4.14) are scaled then (4.9) and (4.10) will not underflow). This reduces the complexity of the algorithm further.


[Figure: A_t(s) is formed from A_{t−1}[s^S(e_1)], ..., A_{t−1}[s^S(e_{2^k})], each weighted by P_t[u(e); I] P_t[c(e); I], over the 2^k edges entering state s]

Figure 4.14: The calculation of A_t(s)

Also, note that P_t[c(e); I] in (4.9) and P_t[u(e); I] in (4.10) can be brought out of the summations, reducing (4.9) and (4.10) to

    P_t(c; O) = ∑_{e: c(e)=c} A_{t−1}[s^S(e)] P_t[u(e); I] B_t[s^E(e)]         (4.22)

and

    P_t(u; O) = ∑_{e: u(e)=u} A_{t−1}[s^S(e)] P_t[c(e); I] B_t[s^E(e)],        (4.23)

respectively. Notice that these tth probabilities are obtained using the probabilities of all symbols of the sequence except the tth ones. These calculations are depicted graphically in Figures 4.14 to 4.17. The complexity of the algorithm can be reduced still further by noting that not all the output ports of the SISO modules shown in Figures 4.10 and 4.11 are used, and that some of the input ports have constant inputs. Thus the implementation of the SISO module can be tailored to suit the specific requirements.
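The recursions (4.13)-(4.14), the initialisations (4.15) and (4.17), the renormalisation (4.18)-(4.19), and the reduced outputs (4.22)-(4.23) can be collected into a small sketch. The 4-state code used to build the trellis is an illustrative choice, not one of the codes searched for later in the chapter.

```python
import numpy as np

def make_trellis():
    """Edges of an illustrative 4-state SRCC (1, (1+D^2)/(1+D+D^2)).
    Each edge is (start_state, input_bit, code_symbol, end_state), with
    the code symbol indexing the pair (systematic bit, parity bit)."""
    edges = []
    for s in range(4):
        s1, s2 = (s >> 1) & 1, s & 1
        for u in (0, 1):
            a = u ^ s1 ^ s2             # feedback bit
            c = (u << 1) | (a ^ s2)     # (systematic, parity) symbol
            edges.append((s, u, c, (a << 1) | s1))
    return edges

def siso(Pu, Pc, edges, n_states=4):
    """Multiplicative symbol-wise SISO module.
    Pu: (T, 2) input-symbol probabilities P_t(u; I);
    Pc: (T, 4) code-symbol probabilities P_t(c; I).
    Returns the normalised outputs P_t(u; O) and P_t(c; O)."""
    T = len(Pu)
    A = np.zeros((T + 1, n_states)); A[0, 0] = 1.0    # (4.15): start in state 0
    B = np.zeros((T + 1, n_states))
    for t in range(T):                                # forward recursion (4.13)
        for s, u, c, s_next in edges:
            A[t + 1, s_next] += A[t, s] * Pu[t, u] * Pc[t, c]
        A[t + 1] /= A[t + 1].max()                    # renormalise, cf. (4.18)
    B[T] = A[T]                                       # unterminated trellis (4.17)
    for t in range(T - 1, -1, -1):                    # backward recursion (4.14)
        for s, u, c, s_next in edges:
            B[t, s] += B[t + 1, s_next] * Pu[t, u] * Pc[t, c]
        B[t] /= B[t].max()                            # renormalise, cf. (4.19)
    Pu_out = np.zeros_like(Pu); Pc_out = np.zeros_like(Pc)
    for t in range(T):
        for s, u, c, s_next in edges:
            w = A[t, s] * B[t + 1, s_next]
            Pu_out[t, u] += w * Pc[t, c]              # (4.23): prior on u excluded
            Pc_out[t, c] += w * Pu[t, u]              # (4.22): prior on c excluded
        Pu_out[t] /= Pu_out[t].sum()                  # (4.12)
        Pc_out[t] /= Pc_out[t].sum()                  # (4.11)
    return Pu_out, Pc_out
```

With uniform information-symbol priors and strong code-symbol probabilities from a noiseless channel, the hard decisions on P_t(u; O) recover the encoded bits, which is a useful check of the edge bookkeeping.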

Bit-wise SISO Algorithm

The symbol-wise SISO algorithm described above allows the calculation of the sequences of symbol probabilities P(c; O) and P(u; O). However, in a PCCC or SCCC encoder bits are interleaved, so in the PCCC and SCCC decoders of Figures 4.5 and 4.8 bit-wise soft


[Figure: B_{t−1}(s) is formed from B_t[s^E(e_1)], ..., B_t[s^E(e_{2^k})], each weighted by P_t[u(e); I] P_t[c(e); I], over the 2^k edges leaving state s]

Figure 4.15: The calculation of B_{t−1}(s)

decisions must be interleaved and deinterleaved. Therefore the SISO algorithm must be modified to calculate the sequences of bit probabilities P(c^j; O) and P(u^j; O) for the jth bit within each symbol. The calculation of the forward and backward recursions remains the same as shown in (4.13) and (4.14). The input symbol probabilities can be calculated from (4.7) and (4.8), except in the case where c is connected to a nonbinary-input channel, in which case P_t(c; I) can be calculated directly. The comments made above regarding normalisation to prevent underflow are still valid for the bit-wise form of the algorithm. As was the case with the symbol-wise form of the algorithm, the output probabilities P_t(c^j; O) and P_t(u^j; O) for the jth bit within each symbol at time t can be calculated by considering all edges containing that bit:

    P_t(c^j; O) = ∑_{e: c^j(e)=c^j} A_{t−1}[s^S(e)] P_t[u(e); I] P_t[c(e); I] B_t[s^E(e)]      (4.24)

and

    P_t(u^j; O) = ∑_{e: u^j(e)=u^j} A_{t−1}[s^S(e)] P_t[u(e); I] P_t[c(e); I] B_t[s^E(e)].     (4.25)

Note that if the input probabilities are calculated from (4.7) and (4.8) then the input probabilities for the current bit will be a constant in these expressions and can be absorbed


[Figure: P_t(c; O) is formed by summing A_{t−1}[s^S(e)] P_t[u(e); I] B_t[s^E(e)] over the edges e_1, e_2, ... labelled with the code symbol c]

Figure 4.16: The calculation of P_t(c; O)

by the normalisation process. This reduces the calculations to

    P_t(c^j; O) = ∑_{e: c^j(e)=c^j} A_{t−1}[s^S(e)] P_t[u(e); I] (∏_{i=1, i≠j}^{n} P_t[c^i(e); I]) B_t[s^E(e)]      (4.26)

and

    P_t(u^j; O) = ∑_{e: u^j(e)=u^j} A_{t−1}[s^S(e)] (∏_{i=1, i≠j}^{k} P_t[u^i(e); I]) P_t[c(e); I] B_t[s^E(e)].     (4.27)

When the c of an encoder is connected to a nonbinary-input channel, it does not matter that P_t(c; I) cannot be split as shown in (4.26), because in that case the P(c; O) output of the SISO module is not used and (4.26) need not be calculated.
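The bit/symbol conversions implied by (4.7)-(4.8), and the marginalisation needed before bit-wise (de)interleaving, can be sketched as follows. The convention that bit 0 is the most significant bit of the symbol index is an assumption of this sketch.

```python
import numpy as np

def bits_to_symbol_probs(p_one):
    """(4.7)/(4.8): combine independent per-bit probabilities into the
    2^k symbol probabilities.  p_one[j] = P(bit j = 1); bit 0 is taken
    as the most significant bit of the symbol index."""
    sym = np.ones(1)
    for p in p_one:
        sym = np.outer(sym, [1.0 - p, p]).ravel()  # append next bit
    return sym

def symbol_to_bit_probs(sym):
    """Marginalise symbol probabilities back to per-bit P(bit j = 1),
    as needed before the bit-wise (de)interleavers of Figures 4.5/4.8."""
    k = int(np.log2(len(sym)))
    cube = sym.reshape([2] * k)
    return [cube.sum(axis=tuple(a for a in range(k) if a != j))[1]
            for j in range(k)]

sym = bits_to_symbol_probs([0.9, 0.5])   # -> [0.05, 0.05, 0.45, 0.45]
```

Marginalising and then recombining is lossless only when the bits really are independent, which is exactly the assumption behind (4.7) and (4.8).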


[Figure: P_t(u; O) is formed by summing A_{t−1}[s^S(e)] P_t[c(e); I] B_t[s^E(e)] over the edges e_1, e_2, ... labelled with the information symbol u]

Figure 4.17: The calculation of P_t(u; O)

4.2 SCCC Bounds

Consider the concatenation of two linear convolutional constituent codes Co and Ci, joined by a bit-wise random interleaver de ned by the permutation . The outer code Co has rate Ro = k=p and the inner code Ci has rate Ri = p=n, resulting in an overall SCCC CS with rate RS = k=n. The interleaver is of length N , which is an integer multiple of p (actually, the inner and outer codes can have rates Ri = ki=ni and Ro = ko=no, respectively, constraining the interleaver length to N = lcm(no ; ki). However, this complicates the bound expressions and is not considered here). The block size of the SCCC CS is NRo bits. There are N ! interleavers of length N , many of which will yield di erent performance when used in the SCCC. Rather than exhaustively enumerating the performance for all possible interleavers | N is typically thousands of bits | an abstract interleaver called the uniform interleaver is used 12]. Any interleaver will permute an input word with Hamming weight ? l into an output word which also has Hamming weight l, and there are N of such l

60

4.2. SCCC Bounds

permutations. The uniform interleaver will permute an input word with Hamming weight ? l randomly to every possible permutation of it, with uniform probability 1= N . The l uniform interleaver permits the calculation of the SCCC performance bound averaged over the ensemble of all interleavers, and 12, Corollary 1] guarantees the existence of at least one real interleaver which meets the bound calculated with the uniform interleaver. It is common practice with both PCCCs and SCCCs to reset the state of the constituent encoders at the beginning of each interleaver block and transmit these blocks, along with framing information, as frames. If the trellis is terminated at both the beginning and the end of a frame then the weight enumerating function of the constituent codes can be determined from the equivalent block code. In the case of continuous transmission and decoding the analysis is much more complex. It involves the use of a hyper-trellis having as hyper-states pairs of states of the outer and inner codes. Each hyper-branch represents all the paths taken through both the inner and outer trellises for one interleaver block, i.e., N=p trellis steps, starting and nishing at known states | represented by the two hyperstates. Thus the weight enumerating function for each hyper-branch can be determined by treating the constituent codes as equivalent block codes, starting and nishing at these known states. The upper bound to the bit error probability can then be found by applying the standard transfer function bound approach 52] to the hyper-trellis. However, for codes with a reasonable number of states and for large interleavers this technique becomes prohibitively complex. In 12, Section IV-B] an approximation is introduced which is accurate when the interleaver length is larger than the constituent code's memory (ten times the memory is suggested). 
This approximation involves replacing the complete transfer function of the hyper-trellis by the weight enumerating function which labels the hyper-branch joining the zero states of the hyper-trellis. This is of course the weight enumerating function found from the equivalent block codes of the constituent codes when they both start and finish in the all-zero state. This means that for the case of continuous transmission and decoding an accurate approximation to the bit error probability bound is that of the case of a terminated trellis. The observation has also been made that in practical schemes it is not necessary to terminate the trellis at the end of the frame [49, 53], and the above argument supports this observation. Define the weight enumerating function of any code $C$ as

$$A^C(W, H) = \sum_{w,h} A^C_{w,h}\, W^w H^h, \qquad(4.28)$$

a polynomial in the dummy variables $W$ and $H$, where $A^C_{w,h}$ is the number of codewords of $C$ with input Hamming weight $w$ and output Hamming weight $h$. Weight enumerating functions can also be expressed for all codewords with input Hamming weight $w$, and for


all codewords with output Hamming weight $h$, i.e.,

$$A^C(w, H) = \frac{1}{w!} \left. \frac{\partial^w A^C(W, H)}{\partial W^w} \right|_{W=0}, \qquad(4.29)$$

and

$$A^C(W, h) = \frac{1}{h!} \left. \frac{\partial^h A^C(W, H)}{\partial H^h} \right|_{H=0}. \qquad(4.30)$$

The weight enumerating functions of the outer and inner constituent convolutional codes, $A^{C_o}(W, H)$ and $A^{C_i}(W, H)$, can be found from knowledge of their respective trellises [37]. Any specific codeword of $C_o$ with output Hamming weight $l$ will, through the action of the uniform interleaver, generate a codeword of $C_i$ of input Hamming weight $l$ with probability $1/\binom{N}{l}$. There are a total of $A^{C_o}_{w,l}$ of these codewords with input Hamming weight $w$ in $C_o$, and a total of $A^{C_i}_{l,h}$ of these codewords with output Hamming weight $h$ in $C_i$. Thus the total number of codewords of the SCCC $C_S$ with input Hamming weight $w$ and output Hamming weight $h$ is

$$A^{C_S}_{w,h} = \sum_{l=0}^{N} \frac{A^{C_o}_{w,l}\, A^{C_i}_{l,h}}{\binom{N}{l}}, \qquad(4.31)$$

and the weight enumerating function of the SCCC is

$$A^{C_S}(W, H) = \sum_{l=0}^{N} \frac{A^{C_o}(W, l)\, A^{C_i}(l, H)}{\binom{N}{l}}. \qquad(4.32)$$

The upper bound to the bit error probability of the SCCC is [12, Equation 6]

$$P_b \le \sum_{w=1}^{NR_o} \sum_{h=1}^{N/R_i} \frac{w}{2NR_o}\, A^{C_S}_{w,h}\, \mathrm{erfc}\!\left(\sqrt{h R_o R_i E_b/N_0}\right). \qquad(4.33)$$

For large interleavers this expression will contain a very large number of terms; however, it can be evaluated more quickly by including only those terms which make a significant contribution to the bound, i.e., the $h$ summation may be truncated.
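A numerical evaluation of the bound (4.33), including the truncation of the $h$ summation just described, might look as follows. This is an illustrative sketch; the coefficient-dictionary input format is an assumption, not part of the original text.

```python
from math import erfc, sqrt

def sccc_ber_bound(A_sccc, N, Ro, Ri, ebno_db, h_max=None):
    """Union bound (4.33) on the bit error probability of the SCCC.

    A_sccc maps (w, h) -> A^{CS}_{w,h}. If h_max is given, the h summation
    is truncated there, since large-h terms contribute negligibly.
    """
    ebno = 10.0 ** (ebno_db / 10.0)
    pb = 0.0
    for (w, h), a in A_sccc.items():
        if h_max is not None and h > h_max:
            continue
        pb += (w / (2.0 * N * Ro)) * a * erfc(sqrt(h * Ro * Ri * ebno))
    return pb
```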

4.2.1 The Weight Enumerating Function of the Equivalent Block Code

The coefficients $A^{C_o}_{w,l}$ and $A^{C_i}_{l,h}$ of the equivalent block codes (with $M$ steps) of the constituent convolutional codes can be found by concatenating the error events of the convolutional codes. The error events of the convolutional codes can be found by modifying


the techniques of [37] to allow remergings with the all-zero state. Let $T^C(W, H, Z, J)$ be the transfer function of the convolutional code which enumerates all paths in the trellis remerging with the zero state at or before step $M$, with possible remergings with the zero state for just one step before this:

$$T^C(W, H, Z, J) = \sum_{w,h,z,j} T^C_{w,h,z,j}\, W^w H^h Z^z J^j, \qquad(4.34)$$

where $T^C_{w,h,z,j}$ is the number of paths in the trellis with an input Hamming weight of $w$, an output Hamming weight of $h$, length $z$, and $j$ remergings with the zero state (i.e., the concatenation of $j$ error events). The codewords of the equivalent block code can be determined by observing that each error path of length $z$ with $j$ remergings gives rise to $K[z, j]$ codewords, where [12, Appendix]

$$K[z, j] = \binom{M - z + j}{j}. \qquad(4.35)$$

So, the weight enumerating coefficients of the equivalent block code are given by

$$A^C_{w,h} = \sum_{z,j} K[z, j]\, T^C_{w,h,z,j}. \qquad(4.36)$$

Now, for large $M$,

$$K[z, j] \approx \binom{M}{j}, \qquad(4.37)$$

and defining

$$T^C_{w,h,j} = \sum_{z} T^C_{w,h,z,j}, \qquad(4.38)$$

the coefficient $A^{C_o}_{w,l}$ of the outer code can be approximated by
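The multiplicity $K[z, j]$ of (4.35), together with its large-$M$ approximation (4.37), is easy to check numerically. A small sketch (illustrative only; the helper name is chosen here):

```python
from math import comb

def K(M, z, j):
    """K[z, j] = C(M - z + j, j): the number of codewords of the M-step
    equivalent block code produced by a path of total length z consisting
    of j concatenated error events, per (4.35)."""
    return comb(M - z + j, j)
```

A single error event ($j = 1$) of length $z$ can start at any of $M - z + 1$ trellis steps, and for large $M$ the value $K[z, j]$ approaches $\binom{M}{j}$, as (4.37) states.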

$$A^{C_o}_{w,l} \approx \sum_{j} \binom{N/p}{j}\, T^{C_o}_{w,l,j}, \qquad(4.39)$$

and the coefficient $A^{C_i}_{l,h}$ of the inner code can be approximated by

$$A^{C_i}_{l,h} \approx \sum_{j} \binom{N/p}{j}\, T^{C_i}_{l,h,j}. \qquad(4.40)$$

Using (4.31) the weight enumerating coefficient of the SCCC is then

$$A^{C_S}_{w,h} \approx \sum_{l=d_H^o}^{N} \sum_{j^o} \sum_{j^i} \frac{\binom{N/p}{j^o} \binom{N/p}{j^i}}{\binom{N}{l}}\, T^{C_o}_{w,l,j^o}\, T^{C_i}_{l,h,j^i}, \qquad(4.41)$$


where $d_H^o$ is the free Hamming distance of the outer code. This expression can be simplified further by using the asymptotic approximation to the binomial coefficient for large $M$,

$$\binom{M}{m} \approx \frac{M^m}{m!}. \qquad(4.42)$$

Substituting (4.42) into (4.41) yields

$$A^{C_S}_{w,h} \approx \sum_{l=d_H^o}^{N} \sum_{j^o} \sum_{j^i} N^{j^o + j^i - l - 1}\, \frac{l!}{p^{j^o + j^i}\, j^o!\, j^i!}\, T^{C_o}_{w,l,j^o}\, T^{C_i}_{l,h,j^i}, \qquad(4.43)$$

and finally, substituting (4.43) into (4.33) yields

$$P_b \le \sum_{w=1}^{NR_o} \sum_{h=1}^{N/R_i} \sum_{l=d_H^o}^{N} \sum_{j^o} \sum_{j^i} \frac{w}{2NR_o}\, N^{j^o + j^i - l - 1}\, \frac{l!}{p^{j^o + j^i}\, j^o!\, j^i!}\, T^{C_o}_{w,l,j^o}\, T^{C_i}_{l,h,j^i}\, \mathrm{erfc}\!\left(\sqrt{h R_o R_i E_b/N_0}\right). \qquad(4.44)$$

This can be expressed as a sum of $\mathrm{erfc}(\cdot)$ functions of $h$ with coefficients $D_h$,

$$P_b \le \sum_{h} D_h\, \mathrm{erfc}\!\left(\sqrt{h R_o R_i E_b/N_0}\right), \qquad(4.45)$$

where

$$D_h = \sum_{w=1}^{NR_o} \sum_{l=d_H^o}^{N} \sum_{j^o} \sum_{j^i} \frac{w}{2NR_o}\, N^{j^o + j^i - l - 1}\, \frac{l!}{p^{j^o + j^i}\, j^o!\, j^i!}\, T^{C_o}_{w,l,j^o}\, T^{C_i}_{l,h,j^i}. \qquad(4.46)$$

Now, for large $N$ the dominant coefficient $D_h$ will be the one for which the exponent of $N$ is maximum; e.g., for $N = 10^4$ the term with the next smallest exponent of $N$ will be $10^4$ times smaller. Define this maximum exponent for each $h$ as

$$\alpha(h) = \max_{w,l}\, \left\{ j^o + j^i - l - 1 \right\}. \qquad(4.47)$$

In [15] design criteria for SCCCs are developed by examining both the overall maximum of the exponent given in (4.47), and the value of the exponent corresponding to the minimum output Hamming weight $h$. The maximum exponent allows the dominant contribution to $P_b$ to be found as $N \to \infty$. The exponent for the minimum weight allows $P_b$ to be evaluated as $E_b/N_0 \to \infty$.


4.3 Design Criteria for Binary SCCCs

4.3.1 Maximum Exponent of N

Looking first at the maximum exponent of $N$, define

$$\alpha_M = \max_h \alpha(h) = \max_{w,l,h}\, \left\{ j^o + j^i - l - 1 \right\}. \qquad(4.48)$$

If a non-recursive convolutional inner encoder is used then the minimum input Hamming weight of the inner code is 1. Thus the maximum number of concatenated codewords $j^i$ corresponding to an input Hamming weight of $l$ in the inner encoder is $l$. The uniform interleaver ensures that this event will occur. Therefore the maximum exponent of $N$ for a non-recursive convolutional inner code is

$$\alpha_{NR} = \max_{w,l,h}\, \left\{ j^o - 1 \right\}, \qquad(4.49)$$

which is never negative. This means that if a non-recursive inner code is used then there exists some $h$ whose coefficient does not decrease with $N$; there is no interleaver gain, and the bound does not improve as $N$ increases, resulting in poor performance. If a recursive convolutional inner encoder is used then the minimum input Hamming weight of the inner code is 2 [54]. In this case a codeword of input Hamming weight $l$ corresponds to a concatenation of at most $\lfloor l/2 \rfloor$ error events. Therefore the maximum exponent of $N$ for a recursive convolutional inner code is

$$\alpha_R = \max_{w,l,h} \left\{ j^o + \left\lfloor \frac{l}{2} \right\rfloor - l - 1 \right\} = \max_{w,l,h} \left\{ j^o - \left\lfloor \frac{l+1}{2} \right\rfloor - 1 \right\}. \qquad(4.50)$$

Now, if the free Hamming distance of the outer code is $d_H^o$ then the following inequality applies to the number of concatenated error events in the outer code corresponding to an output Hamming weight of $l$:

$$j^o \le \left\lfloor \frac{l}{d_H^o} \right\rfloor, \qquad(4.51)$$

which is not dependent on $h$, so

$$\alpha_R \le \max_l \left\{ \left\lfloor \frac{l}{d_H^o} \right\rfloor - \left\lfloor \frac{l+1}{2} \right\rfloor - 1 \right\}. \qquad(4.52)$$

In [15] it is shown that this is maximised for $l = d_H^o$, and that the inequality becomes an equality if $w \in W_H$, where $W_H$ is the set of input Hamming weights $w$ which generate


codewords of the outer code with Hamming weight $l = d_H^o$. This means that the maximum exponent of $N$ for a recursive convolutional inner code is

$$\alpha_R = -\left\lfloor \frac{d_H^o + 1}{2} \right\rfloor, \qquad(4.53)$$

which is always negative. Thus a recursive convolutional inner code should be used in an SCCC.

This exponent will be reduced further if $d_H^o$ is increased. Thus the outer code should have as large a free distance as possible. This will be achieved with a non-recursive convolutional code. Equation (4.53) gives the maximum exponent of $N$ in the BER bound. It is also the exponent of $N$ for codewords which have an output Hamming weight from the outer code of $d_H^o$ and input Hamming weight to the inner code of $d_H^o$. This is an intuitively pleasing result, as it implies that the $P_b$ bound is dominated by error events for which $l = d_H^o$, i.e., error events from the inner decoder which generate errors in the outer code with Hamming weight equal to the free distance of the outer code.
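The interleaver-gain exponent of (4.53) is simple to compute; a sketch follows (the helper name is chosen here, not taken from the dissertation):

```python
def alpha_recursive(do_H):
    """Maximum exponent of N in the BER bound when the inner code is a
    recursive convolutional code: alpha_R = -floor((do_H + 1) / 2), per
    (4.53). do_H is the free Hamming distance of the outer code."""
    return -((do_H + 1) // 2)
```

For the rate 1/2 outer codes of Table 4.2 ($d_H^o = 5, \ldots, 8$) this gives exponents $-3, -3, -4, -4$, so the dominant bound term shrinks by a factor of $N^3$ or $N^4$ as the interleaver grows.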

4.3.2 The Exponent of N for the SCCC Free Distance

As $E_b/N_0 \to \infty$ the performance of the SCCC will be dominated by the minimum value of $h$ in the bound, which is the free distance of the SCCC, $d_H^S$. For any particular $l$ and $h$ the maximum numbers of concatenated error events will be bounded by

$$j^i \le \left\lfloor \frac{h}{d_H^i} \right\rfloor \qquad(4.54)$$

and

$$j^o \le \left\lfloor \frac{l}{d_H^o} \right\rfloor, \qquad(4.55)$$

respectively. Therefore

$$\alpha(d_H^S) \le \max_l \left\{ \left\lfloor \frac{d_H^S}{d_H^i} \right\rfloor + \left\lfloor \frac{l}{d_H^o} \right\rfloor - l - 1 \right\}, \qquad(4.56)$$

which, as the $-l$ term will dominate, will be maximised for the minimum value of $l$ yielding codewords of output Hamming weight $d_H^S$. Now, the minimum weight $l$ will result from codewords of the outer code with output Hamming weight $d_H^o$, so $\min\{l\} = d_H^o$ and (4.56) becomes

$$\alpha(d_H^S) \le \left\lfloor \frac{d_H^S}{d_H^i} \right\rfloor - d_H^o. \qquad(4.57)$$


Most SCCCs will have a free distance greater than the free distance of their inner code but, as will be seen in Section 4.3.3, generally $d_H^S < 2 d_H^i$. Equation (4.57) therefore becomes

$$\alpha(d_H^S) \le 1 - d_H^o, \qquad(4.58)$$

which is always negative. This bound implies that at high $E_b/N_0$ increasing the interleaver length will always result in a reduction of the BER. Also, this interleaver gain will be increased by choosing an outer code with large $d_H^o$.

4.3.3 Design Criteria

Design criteria for the constituent codes of an SCCC can now be developed by examining the output Hamming weight associated with $\alpha_R$, denoted by $h(\alpha_R)$. The recursive convolutional inner code has minimum input Hamming weight 2; define $d_{H,2}^i$ to be the minimum output Hamming weight of these input Hamming weight 2 codewords, sometimes called the effective free Hamming distance. In [15] the authors conclude that it is these codewords which contribute to $h(\alpha_R)$. Their argument is that error events with $l = d_H^o$ consist of a concatenation of error events from the inner code with output Hamming weight $d_{H,2}^i$. For even values of $d_H^o$, $d_H^o/2$ of these inner code error events are concatenated together to produce an SCCC error event with $l = d_H^o$, hence

$$h(\alpha_R) = \frac{d_H^o\, d_{H,2}^i}{2}. \qquad(4.59)$$

For example, if $d_H^o = 6$ and $d_{H,2}^i = 5$ then a concatenation of 3 inner code error events, each with input Hamming weight 2, will result in an SCCC error event with $l = 6$. Each of these inner code error events will have an output Hamming weight of 5, so the SCCC error event will have $h(\alpha_R) = 15$. For odd values of $d_H^o$, SCCC error events are the concatenation of several inner code error events with input Hamming weight 2 and one with input Hamming weight 3, hence

$$h(\alpha_R) = \frac{(d_H^o - 3)\, d_{H,2}^i}{2} + d_{H,3}^i, \qquad(4.60)$$

where $d_{H,3}^i$ is the minimum output Hamming weight of codewords of the inner code with input Hamming weight 3. For example, if $d_H^o = 7$, $d_{H,2}^i = 8$, and $d_{H,3}^i = 5$, then a concatenation of 2 inner code error events with input Hamming weight 2 and 1 inner code error event with input Hamming weight 3 will result in an SCCC error event with $l = 7$. The input Hamming weight 2 error events have output Hamming weight 8 and the input Hamming weight 3 error event has output Hamming weight 5, so the SCCC error event


Outer Code                           Inner Code
States  Generators           d_H^o   States  Generators                   d_{H,2}^i  d_{H,3}^i
4       g^(1) = 5  g^(2) = 7    5    4       h_0 = 7   h_1 = 3   h_2 = 5      4          3
8       g^(1) = 64 g^(2) = 74   6    8       h_0 = 13  h_1 = 15  h_2 = 17     5          4
16      g^(1) = 46 g^(2) = 72   7    16      h_0 = 23  h_1 = 35  h_2 = 27     8          5

Table 4.1: Constituent codes of rate 1/3 SCCCs used in Figures B.1, B.2, and B.3

will have $h(\alpha_R) = 21$. Thus the authors of [15] conclude that the inner code in an SCCC should be chosen to be recursive, and to have as large a $d_{H,2}^i$ as possible. These are the same criteria used for PCCCs, and the codes found in [54] can be used.
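Equations (4.59) and (4.60), together with the two worked examples above, can be captured in a short helper (an illustrative sketch with names chosen here, not taken from the text):

```python
def h_alpha(do_H, di_H2, di_H3=None):
    """Output Hamming weight h(alpha_R) that [15] associates with the
    dominant error events, per (4.59) (even do_H) and (4.60) (odd do_H)."""
    if do_H % 2 == 0:
        # do_H / 2 inner error events, each of output weight di_H2
        return (do_H // 2) * di_H2
    # (do_H - 3) / 2 weight-2-input events plus one weight-3-input event
    return ((do_H - 3) // 2) * di_H2 + di_H3
```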

However, the concatenation of $d_H^o/2$ inner code error events, each of output Hamming weight $d_{H,2}^i$, is not guaranteed to be the most likely SCCC error event with $l = d_H^o$; there can be more likely inner code error events with input Hamming weight $d_H^o$. This can be seen by examining the contribution of each spectral line to the overall bound, as was done in [55] for PCCCs. The BER bound decomposed by spectral line is shown in Figures B.1, B.2, and B.3 of Appendix B for the three rate 1/3 SCCCs of Table 4.1. Each of these figures plots

$$P_b = D_h\, \mathrm{erfc}\!\left(\sqrt{h R_o R_i E_b/N_0}\right) \qquad(4.61)$$

for a number of values of $h$, where $D_h$ is given in (4.46). The outer codes are rate 1/2 nonsystematic convolutional codes taken from [1, Table 11.1(c)], and the inner codes are rate 2/3 systematic recursive convolutional codes taken from [54, Table 1]. See Appendix A for definitions of the generator polynomials. A 20000 bit S-type random interleaver, giving an encoding delay of 10000 bits, is used in each case. A maximum of $h = 17$ has been used in calculating each of the distance spectra.

Looking first at the SCCC constructed from 4 state constituent codes, Figure B.1 of Appendix B shows that this BER bound is dominated by $h = 7$ terms at high $E_b/N_0$, and by $h = 9$ and $h = 11$ terms as the $E_b/N_0$ is reduced. Table B.1 shows the distance spectrum for the SCCC, listing the input Hamming weight $w$, the output Hamming weight $h$, and the multiplicity $A^{C_S}_{w,h}$ for each line. From this it can be seen that $w = 1$ dominates the $h = 7$, $h = 9$, and $h = 11$ terms (these terms are highlighted in the table). Table B.2 shows the distance spectrum of the outer code, where the outer convolutional code is treated as a block code and concatenated error events have already been included. Looking at this table there is only one term with $w = 1$, that with $l = 5$, which is the free Hamming distance of the outer code $d_H^o$. This clearly shows that the SCCC bound is dominated by terms with $l = d_H^o$. Table B.3 shows the distance spectrum of the inner


code, also for the equivalent block code. Note that there is a $w = 5$, $h = 5$ term, but it has a small multiplicity which causes it to be insignificant, despite the smaller $h$. Meanwhile the $w = 5$, $h = 7$ and $w = 5$, $h = 11$ terms have large multiplicities, counteracting their large output Hamming weights. Looking at (4.60) and Table 4.1, the authors of [15] also conclude that the bound is dominated by $h = 7$ for this code. Figure B.2 of Appendix B shows that the SCCC constructed from 8 state codes is dominated by $h = 8$ terms at high $E_b/N_0$, and to a lesser extent by $h = 9$, $h = 10$, and $h = 11$ terms at lower $E_b/N_0$. Note that from (4.59) and Table 4.1 the authors of [15] conclude that this SCCC should be dominated by $h = 15$. Table B.4 shows that these terms are in turn dominated by their $w = 2$ term. The $h = 15$ term has a higher multiplicity than the $h = 8$ term, but this is not large enough to compensate for the larger Hamming weight. Table B.5 shows the distance spectrum of the outer code, and for $w = 2$ there are $l = 6$, $l = 8$, $l = 10$, and $l = 14$ terms; the $l = 6$, $l = 8$, and $l = 10$ terms have similar multiplicities, and the $l = 14$ term has a much larger multiplicity. Now, looking at Table B.6, the $l = 10$ and $l = 14$ terms are only associated with terms with large $h$ and hence low probability, while the $l = 8$ term is associated with terms with lower multiplicity than the $l = 6$ term. This is reinforced by the effect of the uniform interleaver. Recall from (4.31) that

$$A^{C_S}_{w,h} = \sum_{l=0}^{N} \frac{A^{C_o}_{w,l}\, A^{C_i}_{l,h}}{\binom{N}{l}}, \qquad(4.62)$$

and note that

$$\binom{N}{l+1} = \binom{N}{l} \frac{N - l}{l + 1} \approx \binom{N}{l} \frac{N}{l + 1} \quad \text{if } N \gg l. \qquad(4.63)$$

Thus (4.62) is dominated by the $l = d_H^o$ term, which is a similar argument to that which produced (4.53). Now, the authors of [15] conclude that the most likely error event with $l = 6$ will be a concatenation of 3 events from the inner code, each with $l = 2$ and $h = 5$, resulting in $h = 15$. However, the above results clearly show that there are error events with $l = 6$ which have a smaller $h$ and hence are more likely. Consider one more example to verify this: an SCCC constructed from codes with 16 states. Figure B.3 of Appendix B shows that the BER bound is dominated by terms with $h = 11$, while Table B.7 shows that these terms are in turn dominated by $w = 1$ and $w = 3$. As expected, Table B.8 shows that there are two $d_H^o = 7$ terms, one with $w = 1$ and the other with $w = 3$. Now, looking at Table B.9 it can be seen, allowing for the reduction in probability as $h$ increases, that the $l = 7$ terms are dominated by $h = 11$.
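The rate at which the uniform-interleaver factor $1/\binom{N}{l}$ decays with $l$, which drives the dominance of the $l = d_H^o$ term in (4.62), can be checked numerically (an illustrative sketch; the helper name is chosen here):

```python
from math import comb

def uniform_interleaver_penalty(N, l1, l2):
    """Ratio C(N, l2) / C(N, l1): how much more strongly a weight-l2 outer
    codeword is suppressed by the uniform interleaver than a weight-l1 one
    (cf. (4.63))."""
    return comb(N, l2) / comb(N, l1)
```

Per (4.63), each extra unit of output weight $l$ costs roughly a factor $N/(l+1)$, so for $N = 20000$ the $l = 7$ terms are suppressed by about three orders of magnitude relative to the $l = 6$ terms.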


From (4.60) and Table 4.1, the authors of [15] conclude that $h = 21$ should dominate; while these bounds do not include an $h = 21$ term (as it would take too long to compute the bound), it is unlikely that a term with such a large distance would dominate.

4.3.4 SCCC Design Criteria Summary

The authors of [15] correctly conclude that the performance of an SCCC is dominated by error events with $l = d_H^o$. They also conclude that these error events will be dominated by the concatenation of inner code error events with output Hamming weight equal to the effective free distance, and hence that the inner code should be a recursive convolutional code with maximum effective free distance (this is also the design criterion for the constituent codes of a PCCC [12]). However, with a recursive convolutional inner code there are error events with $l = d_H^o$ which are more likely than this. Thus the inner convolutional code in an SCCC should be designed to minimise the contribution of error events with $l = d_H^o$. In summary, the design criteria for an SCCC are:

- The inner code should be a recursive convolutional code;
- The contribution of error events from the inner code with $l = d_H^o$ should be minimised;
- The outer code should have maximum free distance;
- A random bit-wise interleaver should be used;
- The interleaver should meet the S-type criteria; and
- The interleaver should be as large as possible.

Using these design criteria a search can now be conducted for suitable recursive convolutional codes to be used as the inner code in an SCCC. The approach taken will be to estimate the contribution of the inner code to the SCCC BER bound, and minimise this.


4.4 New Binary SCCCs

4.4.1 Search Criteria

The conclusion of the previous section was that the bound is dominated by error events with $l = d_H^o$. Therefore the bound contribution from error events of the inner code with $l = d_H^o$ should be considered and minimised. One approach to this minimisation would be to consider the minimum output Hamming weight of inner codewords with input Hamming weight $d_H^o$ and maximise this. However, as $d_H^o$ increases, the multiplicity of these codewords begins to have an important impact. A straightforward method of accounting for both the output Hamming weight and the multiplicity of codewords with this weight is to calculate the contribution to the BER bound of all codewords from the inner code with input Hamming weight $d_H^o$, and find the code which minimises this. Using Equations (4.31) and (4.33) with $l = d_H^o$, the bound can be approximated by

$$P_b \lesssim \sum_{w=1}^{NR_o} \sum_{h=1}^{N/R_i} \frac{A^{C_o}_{w,d_H^o}\, A^{C_i}_{d_H^o,h}}{\binom{N}{d_H^o}}\, \frac{w}{2NR_o}\, \mathrm{erfc}\!\left(\sqrt{h R_o R_i E_b/N_0}\right). \qquad(4.64)$$

As has already been observed, the outer code should have as large a $d_H^o$ as possible. This equation also implies that the outer code should have the minimum number of nearest neighbours, $\sum_{w=1}^{NR_o} A^{C_o}_{w,d_H^o}$. These are the usual criteria for a good convolutional code, and the codes of, say, [1] can be used.

The outer code is therefore fixed, and

$$P_b \lesssim D \sum_{h=1}^{N/R_i} A^{C_i}_{d_H^o,h}\, \mathrm{erfc}\!\left(\sqrt{h R_o R_i E_b/N_0}\right), \qquad(4.65)$$

where

$$D = \sum_{w=1}^{NR_o} \frac{w}{2NR_o}\, \frac{A^{C_o}_{w,d_H^o}}{\binom{N}{d_H^o}}. \qquad(4.66)$$

From (4.40) this can be expressed as

$$P_b \lesssim D \sum_{h=1}^{N/R_i} \sum_{j^i} \binom{N/p}{j^i}\, T^{C_i}_{d_H^o,h,j^i}\, \mathrm{erfc}\!\left(\sqrt{h R_o R_i E_b/N_0}\right). \qquad(4.67)$$

Therefore the search criterion for the inner code is

$$\min_{C_i} \left\{ \sum_{h=1}^{h'} \sum_{j^i} \binom{N/p}{j^i}\, T^{C_i}_{d_H^o,h,j^i}\, \mathrm{erfc}\!\left(\sqrt{h R_o R_i E_b/N_0}\right) \right\}, \qquad(4.68)$$


where $h'$ is the maximum value of $h$ which has a significant effect on the result. Note that (4.68) is dependent on $d_H^o$, $N$, $R_o$, $R_i$, and $E_b/N_0$. Typically rate $b/(b+1)$ constituent codes will be used for both inner and outer codes; recall that $R_o = k/p$ and $R_i = p/n$, so both $R_o$ and $R_i$ will be fixed according to the design of the SCCC rate. Also, the free distance $d_H^o$ of the outer code will be fixed by the desired complexity of the outer code, leaving $N$ and $E_b/N_0$ as the only unknowns in the search criterion. $N$ should be set according to the desired codec delay, but for large $N$ the search criterion will not be sensitive to this parameter. The desired operating $E_b/N_0$ point should also be used in the code search; however, the curves of Section 4.3.3 show that (4.68) is also relatively insensitive to this parameter.
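The search criterion (4.68) can be evaluated directly from the error-event enumeration $T^{C_i}$ of a candidate inner code. The sketch below assumes the enumeration is supplied as a dictionary; the function name and input format are illustrative choices, not from the dissertation.

```python
from math import comb, erfc, sqrt

def inner_code_metric(T_inner, do_H, N, p, Ro, Ri, ebno_db, h_max):
    """Bound contribution (4.68) of all inner-code codewords whose input
    Hamming weight equals the outer free distance do_H; the code search
    selects the inner code that minimises this value.

    T_inner maps (l, h, j) -> T^{Ci}_{l,h,j}.
    """
    ebno = 10.0 ** (ebno_db / 10.0)
    metric = 0.0
    for (l, h, j), t in T_inner.items():
        if l != do_H or h > h_max:
            continue  # only weight-do_H inputs and significant h contribute
        metric += comb(N // p, j) * t * erfc(sqrt(h * Ro * Ri * ebno))
    return metric
```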

4.4.2 Search Results

According to the authors of [15] the inner recursive convolutional code of an SCCC should be the same as a constituent code of a PCCC, and as such the codes given in [54] can be used. A search has been performed, using the criteria of the previous section, for new rate $p/(p+1)$ SCCC inner constituent codes with the same parameters as those given in [54]. The search was conducted for rate 2/3, 3/4, and 4/5 inner codes, with a number of different states. In each case a rate $(p-1)/p$ outer code with the same number of states as the inner code was used, giving an overall SCCC rate of $k/(k+2)$. The outer codes were taken from [1, Table 11.1]: Table 4.2 shows the rate 1/2 codes, Table 4.3 the rate 2/3 codes, and Table 4.4 the rate 3/4 codes.¹ Also shown in these tables is the free distance $d_H^o$ of the outer codes, which was used in the search. For the search an interleaver size which gives an encoding delay of 10000 information bits was used, and an operating point of $E_b/N_0 = 4.0$ dB was assumed. Tables 4.5, 4.6, and 4.7 show the new inner codes found for rates 2/3, 3/4, and 4/5, respectively. Also shown in these tables are the equivalent codes from [54]. See Appendix A for a description of the generator polynomials.

Notice that some of the new codes found are identical to the old codes. This is especially true as the rate of the constituent code is increased because, for the low number of states considered here, the number of potentially good codes reduces as the number of branches increases. Indeed, the "new" code listed for the 16 state rate 3/4 case was found to be one of the codes listed in [54]. In [54] a number of equally good codes, for their design

¹There is no 16 state rate 3/4 code given in [1]; instead a 32 state outer code will be used with the 16 state inner code.


States  Generators             d_H^o
4       g^(1) = 5   g^(2) = 7     5
8       g^(1) = 64  g^(2) = 74    6
16      g^(1) = 46  g^(2) = 72    7
32      g^(1) = 65  g^(2) = 57    8

Table 4.2: Rate 1/2 outer constituent codes [1, Table 11.1]

States 4, 8, and 16, with free distances $d_H^o$ = 3, 4, and 5 respectively (generator polynomials $g_k^{(j)}$ as given in [1, Table 11.1]).

Table 4.3: Rate 2/3 outer constituent codes [1, Table 11.1]

criteria, are listed for each case. The "new" and "old" 16 state codes in Table 4.6 have been included to compare the best code found with the new design criteria to one of the equally good codes found with the old design criteria.

4.4.3 Binary SCCC Performance

Binary SCCCs can be now constructed using the codes of the previous section, and performance bounds calculated. The bounds will be used to compare the performance of the inner constituent codes found with the new design criteria with that of the codes found using the criteria of 15]. Generators do H g =4g =4g =4g =4 8 g =0g =6g =2g =4 4 g =0g =2g =5g =5 g =6g =2g =2g =6 32 g = 1 g = 6 g = 0 g = 7 5 g =0g =2g =5g =5 Table 4.4: Rate 3=4 outer constituent codes 1, Table 11.1] States

(1) 1 (1) 2 (1) 3 (1) 1 (1) 2 (1) 3 (2) 1 (2) 2 (2) 3 (2) 1 (2) 2 (2) 3 (3) 1 (3) 2 (3) 3 (3) 1 (3) 2 (3) 3 (4) 1 (4) 2 (4) 3 (4) 1 (4) 2 (4) 3


States  New code                          Old code [54]
4       h_0 = 7   h_1 = 3   h_2 = 5       (same)
8       h_0 = 13  h_1 = 15  h_2 = 17      (same)
16      h_0 = 23  h_1 = 35  h_2 = 25      h_0 = 23  h_1 = 35  h_2 = 27
32      h_0 = 45  h_1 = 73  h_2 = 67      h_0 = 45  h_1 = 43  h_2 = 61

Table 4.5: Rate 2/3 inner constituent codes for binary SCCC

States  New code                                     Old code [54]
4       h_0 = 7   h_1 = 5   h_2 = 3   h_3 = 1        (same)
8       h_0 = 13  h_1 = 15  h_2 = 17  h_3 = 11       (same)
16      h_0 = 23  h_1 = 35  h_2 = 33  h_3 = 25       h_0 = 23  h_1 = 35  h_2 = 27  h_3 = 31

Table 4.6: Rate 3/4 inner constituent codes for binary SCCC

Figure 4.18 shows a rate 1/3 SCCC constructed from a rate 1/2 outer constituent code and a rate 2/3 inner constituent code. The 20000 bit random interleaver results in an encoding delay of 10000 information bits. Figures C.1–C.4 of Appendix C show the BER performance bounds calculated using (4.33) for constituent code complexities of 4–32 states. The rate 1/2 outer codes were taken from Table 4.2, and the rate 2/3 inner codes were taken from Table 4.5. For complexities of 4 and 8 states the new search yielded the same codes as those recommended in [15], so Figures C.1 and C.2 only have one curve shown. However, for a complexity of 16 states the new search yielded a different code to that recommended in [15], and Figure C.3 shows a slight improvement in the performance bound, particularly at medium $E_b/N_0$. As the complexity is increased to 32 states the advantage of the new codes becomes more marked, as shown in Figure C.4.

Figure 4.19 shows a rate 1/2 SCCC constructed from a rate 2/3 outer constituent code and a rate 3/4 inner constituent code. The 15000 bit random interleaver results in an encoding delay of 10000 information bits. Figures C.5–C.7 of Appendix C show the BER performance bounds calculated using (4.33) for constituent code complexities of 4–16 states. The rate 2/3 outer codes were taken from Table 4.3, and the rate 3/4 inner codes were taken from Table 4.6. For this rate none of the searches yielded different codes to those

States  New code                                                Old code [54]
8       h_0 = 13  h_1 = 15  h_2 = 17  h_3 = 11  h_4 = 7         (same)
16      h_0 = 23  h_1 = 35  h_2 = 33  h_3 = 37  h_4 = 31        h_0 = 23  h_1 = 35  h_2 = 33  h_3 = 31  h_4 = 25

Table 4.7: Rate 4/5 inner constituent codes for binary SCCC


[Rate 1/2 Convolutional Encoder] -> [20000 bit Random Interleaver] -> [Rate 2/3 Recursive Convolutional Encoder]

Figure 4.18: Rate 1/3 binary SCCC encoder

[Rate 2/3 Convolutional Encoder] -> [15000 bit Random Interleaver] -> [Rate 3/4 Recursive Convolutional Encoder]

Figure 4.19: Rate 1/2 binary SCCC encoder

recommended in [15]. No new code was found for a complexity of 16 states, but in [15] a number of codes are given for this complexity. As such Figure C.7 shows a comparison of the code found with the new criteria against one of the codes from [15] which are considered equally good. Figure C.7 shows that these two codes have virtually the same performance. Figure 4.20 shows a rate 3/5 SCCC constructed from a rate 3/4 outer constituent code and a rate 4/5 inner constituent code. The 13332 bit random interleaver results in an encoding delay of 9999 information bits. Figures C.8 and C.9 of Appendix C show the BER performance bounds calculated using (4.33) for constituent code complexities of 8 and 16 states. The rate 3/4 outer codes were taken from Table 4.4, and the rate 4/5 inner codes were taken from Table 4.7. For a complexity of 8 states the new search yielded the same codes as those recommended in [15], so Figure C.8 only has one curve shown. However, for a complexity of 16 states the new search yielded a different code to that recommended in [15], and Figure C.9 shows a very slight improvement in the performance bound, particularly at $E_b/N_0 = 4.0$ dB, where the search was conducted.

[Rate 3/4 Convolutional Encoder] -> [13332 bit Random Interleaver] -> [Rate 4/5 Recursive Convolutional Encoder]

Figure 4.20: Rate 3/5 binary SCCC encoder


4.5 Summary

In this chapter iteratively decoded serially concatenated convolutional codes have been introduced. The chapter commenced with some background material on SCCCs, along with PCCCs and hybrids of the two. The SISO algorithm, which is used to decode these codes, was also presented. The performance bounds derived in [15] were reproduced here and used to develop improved design criteria for SCCCs. A code search was conducted using these improved design criteria, resulting in new inner constituent codes for use in SCCCs. Finally, the performance bounds were calculated, showing that these new SCCCs have better performance bounds than those previously published. There are a number of important notes which must be made about these performance bounds. The bounds are upper limits on the performance of the codes, derived from the use of the union bound. As such they are only valid for the case of maximum-likelihood decoding at medium to high values of $E_b/N_0$, and they will diverge significantly from the true performance at low $E_b/N_0$. Also, in practice a sub-optimal decoding algorithm is used which is not maximum-likelihood, and furthermore, the bound is based upon the uniform interleaver rather than a real random interleaver. However, as is pointed out in [18], there is much heuristic evidence to suggest that, despite this apparent inconsistency, these bounds do make good design criteria for SCCCs.

It is also worth noting that these new SCCC design criteria offer only a small improvement in bound performance (the best improvement was for the rate 1/3, 32 state SCCC, with an improvement of 0.2 dB). However, the search technique appears to be more accurate than that used in [15], resulting in some better codes. As such these new design criteria will be modified in the following chapter to design serially concatenated trellis coded modulation (SCTCM) systems. Figures 4.21, 4.22, and 4.23 summarise the performance bounds for the best codes found in this chapter. These figures show that there is a significant improvement in the performance bound as the number of states in the constituent codes is increased. However, as will be seen in the following chapter, this may not be the case for implementations of these codes at practical operating points.


Figure 4.21: BER bounds for the best rate 1/3 binary SCCCs (bit error rate versus $E_b/N_0$ in dB, for 4, 8, 16, and 32 state constituent codes)


Figure 4.22: BER bounds for the best rate 1/2 binary SCCCs (bit error rate versus $E_b/N_0$ in dB, for 4, 8, and 16 state constituent codes)


Figure 4.23: BER bounds for the best rate 3/5 binary SCCCs (bit error rate versus $E_b/N_0$ in dB, for 8 and 16 state constituent codes)


Chapter 5

Serially Concatenated Trellis Coded Modulation

5.1 Introduction

In the previous chapter performance bounds for SCCCs were investigated and used to generate design criteria, which in turn were used to search for new constituent codes. It was assumed, however, that these codes would be used on binary signalling channels. These codes can be used on any channel where Hamming distance and Euclidean distance are isomorphic, such as BPSK and Gray mapped QPSK. However, these modulation schemes can only offer bandwidth efficiencies of less than two information bits per symbol. Modern communication systems often require greater bandwidth efficiencies than this. For example, in [4] a six dimensional 8PSK TCM scheme is used which achieves a bandwidth efficiency of 2.5 information bits per symbol, and in [49, 53] a 16QAM PCCC scheme is used which achieves a bandwidth efficiency of greater than 2 information bits per symbol on a nonlinear, fading, mobile, satellite channel. The problem with the multilevel modulation schemes used to obtain these high bandwidth efficiencies is that Hamming distance and Euclidean distance are no longer isomorphic, and the bounds of the previous chapter will not be valid for these channels. As such the codes found in the previous

80

5.2. SCTCM Bounds

k

o

Outer Encoder

n

o

k

i

Inner Encoder

n

i

Signal Set Mapper

2

Figure 5.1: SCTCM encoder block diagram chapter will not give optimal performance if used on a bandwidth e cient multilevel channel. In this chapter serially concatenated trellis coded modulation (SCTCM) will be considered as a means of extending SCCC techniques to multilevel channels. SCTCM is depicted in Figure 5.1, where a SCCC encoder has been modi ed to include a signal set mapper which is connected to the output of the inner encoder. The role of the signal set mapper is to map the ni coded bits of the inner code to a 2ni level signal set. There have been a number of results published recently on the subject of SCTCM 31{ 34, 51]. However, as was done for SCCC in 18], these papers use the e ective free distance of the inner code as the key design criteria of SCTCM. In this chapter the dominance of error events from the inner decoder of a SCTCM scheme with information Hamming weight equal to the free Hamming distance of the outer code will again be demonstrated. This fact will then be used to design new SCTCM codes with superior performance to existing codes. In Section 5.2 of this chapter SCCC bounds will be modi ed to produce SCTCM bounds. These bounds are derived in the same way as for SCCC, the main di erence being that trellis paths now have Euclidean distances rather than Hamming distances associated with them. In Section 5.3 these bounds will be used to generate design criteria for SCTCM. Finally in Section 5.4 these design criteria will be used to search for new codes for 8PSK and 16QAM signal sets. These codes will be simulated and their performance compared to that of the best known existing schemes.

5.2 SCTCM Bounds

The derivation of the performance bounds for SCTCM is very similar to that of SCCC. The SCCC bound was based upon the weight enumerating function of the code, which was formed by combining the weight enumerating functions of the two constituent codes with a uniform interleaver. Now the inner code is connected to a non-binary channel, so Euclidean distances rather than Hamming weights must be enumerated, which can be done with a distance spectrum. The distance spectrum of a code records the input Hamming weight, the squared Euclidean distance from the all zeroes codeword, and the average number of codewords which have this Hamming weight and Euclidean distance. The outer code is still a binary code, so weight enumerating functions can still be used for it. The distance spectrum for the SCTCM can be determined by combining the weight enumerating function of the outer code and the distance spectrum of the inner code with a uniform interleaver. Again consider the concatenation of two linear convolutional constituent codes Co and Ci, joined by a bit-wise random interleaver defined by a permutation. The outer code Co has rate R_o = k/p and the inner code Ci has rate R_i = p/n, resulting in an overall SCTCM CS with rate R_S = k/n. The interleaver is of length N, which is an integer multiple of p. The n bits output from the inner encoder define one of 2^n complex points (the bounds given below are for two dimensional signal sets, but they can easily be modified for multi-dimensional signal sets). The block size of the SCTCM CS is NR_o bits, and the bandwidth efficiency is k information bits per symbol. A uniform interleaver is again used as the key to the bound calculation. The uniform interleaver permutes an input word with Hamming weight l randomly to every possible permutation of it, with uniform probability 1/\binom{N}{l}. The uniform interleaver permits the calculation of the SCTCM performance bound averaged over the ensemble of all interleavers, and [12, Corollary 1] still guarantees the existence of at least one real interleaver which meets the bound calculated with the uniform interleaver. Also, as was the case for SCCC, termination of the trellis will be assumed and the equivalent block codes of the constituent codes will be analysed.
The outer code Co of a SCTCM is a binary code and its weight enumerating function will be defined in the same way as for the binary codes in the previous chapter, i.e.,

A^C(W, H) = \sum_{w,h} A^C_{w,h} W^w H^h.  (5.1)

This is a polynomial in the dummy variables W and H, and A^C_{w,h} is the number of codewords of C with input Hamming weight w and output Hamming weight h. The weight enumerating function for all codewords with output Hamming weight h is given by

A^C(W, h) = \frac{1}{h!} \left. \frac{\partial^h A^C(W, H)}{\partial H^h} \right|_{H=0}.  (5.2)

The inner code Ci of a SCTCM and the SCTCM code CS itself have distance spectra


given by

B^C(W, D) = \sum_{w,d} B^C_{w,d} W^w D^d,  (5.3)

which is a polynomial in the dummy variables W and D, where B^C_{w,d} is the multiplicity of codewords of C with input Hamming weight w and squared Euclidean distance d from the all zero codeword. The summation in d will be performed for all values of d in D, the set of all squared Euclidean distances for the code. The multiplicity of each codeword is now more than just the number of codewords; it includes factors for the probability of the codeword occurring. The determination of B^C_{w,d} is discussed in the next section. The spectral lines for all codewords with input Hamming weight w are given by

B^C(w, D) = \frac{1}{w!} \left. \frac{\partial^w B^C(W, D)}{\partial W^w} \right|_{W=0}.  (5.4)

Any specific codeword of Co with output Hamming weight l will, through the action of the uniform interleaver, generate a codeword of Ci with input Hamming weight l with probability 1/\binom{N}{l}. There are a total of A^{Co}_{w,l} of these codewords with input Hamming weight w in Co, and on average a total of B^{Ci}_{l,d} of these codewords with squared Euclidean distance d in Ci. Thus the multiplicity of codewords of the SCTCM CS with input Hamming weight w and squared Euclidean distance d is

B^{CS}_{w,d} = \sum_{l=0}^{N} \frac{A^{Co}_{w,l} B^{Ci}_{l,d}}{\binom{N}{l}},  (5.5)

and the distance spectrum of the SCTCM is

B^{CS}(W, D) = \sum_{l=0}^{N} \frac{A^{Co}(W, l) B^{Ci}(l, D)}{\binom{N}{l}}.  (5.6)

The upper bound to the bit error probability of the SCTCM can then be found by combining (2.3) of Section 2.2.2 and (5.5) to give

P_b \le \sum_{w=1}^{NR_o} \sum_{d \in D} \frac{w}{NR_o} B^{CS}_{w,d} Q\left(\sqrt{d R_o R_i E_b/N_0}\right).  (5.7)

For large interleavers this expression will contain a very large number of terms; however, it can be evaluated more quickly by including only those terms which make a significant contribution to the bound, i.e., the d summation may be truncated.
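As a concrete sketch of how the truncated bound (5.7) is evaluated in practice, the fragment below sums the spectrum terms against the Q function. The spectrum values, rates, and interleaver length are illustrative assumptions, not codes from this chapter:

```python
import math

def q_func(x: float) -> float:
    """Gaussian tail function Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def sctcm_union_bound(spectrum, N, Ro, Ri, ebno_db, d_max=None):
    """Truncated evaluation of the union bound (5.7).

    spectrum maps (w, d) -> B_{w,d}: input Hamming weight w and squared
    Euclidean distance d to the average codeword multiplicity.  Setting
    d_max discards the high-distance terms, truncating the d summation.
    """
    ebno = 10.0 ** (ebno_db / 10.0)
    k = N * Ro                       # information block size N * Ro
    pb = 0.0
    for (w, d), b in spectrum.items():
        if d_max is not None and d > d_max:
            continue
        pb += (w / k) * b * q_func(math.sqrt(d * Ro * Ri * ebno))
    return pb

# Hypothetical two-line spectrum, purely for illustration.
toy_spectrum = {(3, 4.0): 1.0, (4, 6.0): 2.5}
pb = sctcm_union_bound(toy_spectrum, N=1000, Ro=0.5, Ri=0.75, ebno_db=4.0)
```

Truncating at, say, d_max = 4.0 keeps only the low-distance line, which is the shortcut suggested above for large interleavers.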


e     Set of squared Euclidean distances
000   D_000 = {0.0}
001   D_001 = {0.585786}
010   D_010 = {0.585786, 3.414214}
011   D_011 = {2.0}
100   D_100 = {0.585786, 3.414214}
101   D_101 = {2.0}
110   D_110 = {4.0}
111   D_111 = {3.414214}

Table 5.1: Sets of distances for the Gray mapped 8PSK signal set of Figure 2.1
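Table 5.1 can be reproduced with a few lines of code. The sketch below assumes a unit-energy 8PSK constellation with the standard reflected Gray labelling g(k) = k XOR (k >> 1) around the circle; if Figure 2.1 uses a different Gray labelling the individual e values may permute, but the same distance sets appear:

```python
import math

def gray_8psk_distance_sets():
    """Squared Euclidean distance set D_e for each binary label sum e of a
    Gray mapped, unit-energy 8PSK signal set (cf. Table 5.1)."""
    gray = [k ^ (k >> 1) for k in range(8)]          # label of the k-th point
    point = {gray[k]: complex(math.cos(2 * math.pi * k / 8),
                              math.sin(2 * math.pi * k / 8))
             for k in range(8)}
    sets = {e: set() for e in range(8)}
    for a in range(8):
        for b in range(8):
            # round to 6 decimals to match the table's precision
            sets[a ^ b].add(round(abs(point[a] - point[b]) ** 2, 6))
    return sets

dist_sets = gray_8psk_distance_sets()
```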

5.2.1 The Distance Spectrum of the Equivalent Block Code

The coefficients A^{Co}_{w,l} of the equivalent block code of the outer constituent convolutional code are given by (4.39). To determine the coefficients B^{Ci}_{l,d} of the inner constituent trellis code it is not sufficient to simply enumerate the input Hamming weights and distance labellings of the trellis, as was done for the outer code coefficients. This is because the signal sets now being used are non-regular, and a full enumeration of the codewords would require consideration of all pairs of paths (see Section 2.2.3 and [37]). However, if the signal sets are quasi-regular (e.g., 8PSK and 16QAM), then the all zero codeword can be considered to be the correct path and the distance spectrum of the inner code can be determined with a simple modification, and small increase in complexity, of the technique used to determine the coefficients of the outer code. Quasi-regular signal sets have a set of squared Euclidean distances D_i associated with each pair of signal set labels. Some pairs may have only one distance in this set, and others may have more, with a total of r sets. However, any pair of signal set labels with the same binary sum e will have the same set D_e associated with them. This is the key to the analysis of quasi-regular trellis codes: if the quasi-regular signal set is used with a linear code then the sets of distances between two paths depend only on the differences between the signal set labels, and the all zero path can be considered to be the correct path. For example, Table 5.1 contains the binary sum e and sets of distances for all pairs of points in the Gray mapped 8PSK signal set of Figure 2.1. The smallest distance in each set is called the worst case distance. If there are g distances in D_i and the occurrence of signal set points is uniformly distributed, as will be the case with random data, then each distance in D_i will occur with probability 1/g when the signal set selector from the code is i.
The error events of the inner trellis code can now be found by combining the techniques used above and in [37], allowing remergings with the all-zero state. Let S^C(\cdot) be the function which enumerates all paths in the trellis which remerge with the zero state at or before step M, with possible remergings with the zero state for just one step before this.

S^C(W, D, Z, J, Q_1, \ldots, Q_r) = \sum_{w,d,z,j,q_1,\ldots,q_r} S_{w,d,z,j,q_1,\ldots,q_r} W^w D^d Z^z J^j Q_1^{q_1} Q_2^{q_2} \cdots Q_r^{q_r},  (5.8)

where S_{w,d,z,j,q_1,\ldots,q_r} is the number of paths in the trellis with an input Hamming weight of w, a worst case squared Euclidean distance of d, length z, j remergings with the zero state (i.e., the concatenation of j error events), and q_e occurrences of each of the r signal set labels e. This function is independent of the correct path and can be found by considering the all zeroes path to be the correct path.

Each of these spectral lines can be expanded by enumerating all possible combinations of distances from the sets D_e. Each spectral line therefore gives rise to

G = g_1^{q_1} g_2^{q_2} \cdots g_r^{q_r}  (5.9)

spectral lines, each of which occurs with probability 1/G. Note that not all signal set labels require a separate term in (5.8). For example, in the Gray mapped 8PSK case the signal set labels with only one distance in their distance sets do not require any Q terms, and both the e = 010 and e = 100 signal set labels have identical distance sets. Thus (5.8) for Gray mapped 8PSK requires only one Q term, and G = 2^q. In practice the expanded spectral lines contribute less than the line with the worst case distance, due to their significantly higher distance. This means that the distance spectrum of the code can be approximated by the worst case distance spectrum obtained by combining (5.8) and (5.9) for just the terms with worst case distance:

S^C(W, D, Z, J) = \sum_{w,d,z,j} S_{w,d,z,j} W^w D^d Z^z J^j,  (5.10)

where S_{w,d,z,j} is the number of paths in the trellis with an input Hamming weight of w, a worst case squared Euclidean distance of d, length z, and j remergings with the zero state, and

S_{w,d,z,j} = \sum_{q_1,\ldots,q_r} \frac{S_{w,d,z,j,q_1,\ldots,q_r}}{g_1^{q_1} g_2^{q_2} \cdots g_r^{q_r}}.  (5.11)

The codewords of the equivalent block inner code can be determined by observing, as was done for the outer code, that each error path of length z with j remergings gives rise to K[z, j] codewords, where [12, Appendix]

K[z, j] = \binom{M - z + j}{j}.  (5.12)


So, the distance spectrum coefficients of the equivalent block inner code are given by

B^C_{w,d} = \sum_{z,j} K[z, j] S^C_{w,d,z,j}.  (5.13)

Again, for large M,

K[z, j] \approx \binom{M}{j},  (5.14)

and defining

S^C_{w,d,j} = \sum_{z} S^C_{w,d,z,j},  (5.15)

the coefficients B^{Ci}_{l,d} of the inner code can be approximated by

B^{Ci}_{l,d} \approx \sum_{j_i} \binom{N/p}{j_i} S^{Ci}_{l,d,j_i}.  (5.16)

Using Equations (5.6), (4.39), and (5.16) the distance spectrum coefficients of the SCTCM can now be determined:

B^{CS}_{w,d} \approx \sum_{l=d_H^o}^{N} \sum_{j_o} \sum_{j_i} \frac{\binom{N/p}{j_o} \binom{N/p}{j_i}}{\binom{N}{l}} T^{Co}_{w,l,j_o} S^{Ci}_{l,d,j_i},  (5.17)

where d_H^o is the free Hamming distance of the outer code. This expression can be simplified further by using the asymptotic approximation to the binomial coefficient for large M,

\binom{M}{m} \approx \frac{M^m}{m!}.  (5.18)

Substituting (5.18) into (5.17) yields

B^{CS}_{w,d} \approx \sum_{l=d_H^o}^{N} \sum_{j_o} \sum_{j_i} N^{j_o + j_i - l} \frac{l!}{p^{j_o + j_i} j_o! j_i!} T^{Co}_{w,l,j_o} S^{Ci}_{l,d,j_i},  (5.19)

and finally, substituting (5.19) into (5.7) yields

P_b \le \sum_{w=1}^{NR_o} \sum_{d \in D} \sum_{l=d_H^o}^{N} \sum_{j_o} \sum_{j_i} \frac{w}{NR_o} N^{j_o + j_i - l} \frac{l!}{p^{j_o + j_i} j_o! j_i!} T^{Co}_{w,l,j_o} S^{Ci}_{l,d,j_i} Q\left(\sqrt{d R_o R_i E_b/N_0}\right),  (5.20)

which is very similar to (4.44), the performance bound for SCCC. The differences between the expressions are small: distance spectra terms rather than weight enumerating functions are used for the inner code, the Q function rather than the complementary error function is used to determine the probability of each error event, and the inner code summation is performed over the set of Euclidean distances rather than Hamming distances. This expression can be written as Q(\cdot) functions of d with coefficients

P_b \le \sum_{d \in D} D_d Q\left(\sqrt{d R_o R_i E_b/N_0}\right),  (5.21)

where

D_d \triangleq \sum_{w=1}^{NR_o} \sum_{l=d_H^o}^{N} \sum_{j_o} \sum_{j_i} \frac{w}{NR_o} N^{j_o + j_i - l} \frac{l!}{p^{j_o + j_i} j_o! j_i!} T^{Co}_{w,l,j_o} S^{Ci}_{l,d,j_i}.  (5.22)
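The large-M binomial approximation (5.18) underpinning these expressions is easy to check numerically; a small sketch (the values of M and m are arbitrary):

```python
import math

def binom_approx_rel_error(M: int, m: int) -> float:
    """Relative error of the large-M approximation binom(M, m) ~ M**m / m!
    used in (5.18)."""
    exact = math.comb(M, m)
    approx = M ** m / math.factorial(m)
    return abs(approx - exact) / exact

# The approximation tightens as the interleaver-related length M grows.
errors = [binom_approx_rel_error(M, 3) for M in (100, 1000, 10000)]
```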

Now, as was the case for SCCC, for large N the dominant coefficient D_d will be the one for which the exponent of N is maximum; e.g., for N = 10^4 the term with the next smallest exponent of N will be a factor of 10^4 smaller. Noting that the factor w/(NR_o) in (5.22) contributes a further N^{-1}, define this maximum exponent for each d as

\alpha(d) = \max_{w,l} \{ j_o + j_i - l - 1 \}.  (5.23)

In the preceding chapter design criteria for SCCC were developed by examining both the overall maximum of the exponent given in (4.47), and the value of the exponent corresponding to the minimum output Hamming weight h. Design criteria for SCTCM can now be developed in the same way. The overall maximum of the exponent given in (5.23) and the value of the exponent corresponding to the minimum squared Euclidean distance d will be examined. The maximum exponent allows the dominant contribution to P_b to be found as N → ∞. The exponent for the minimum distance allows P_b to be evaluated as E_b/N_0 → ∞.

5.3 Design Criteria For SCTCM

5.3.1 Maximum Exponent of N

Looking first at the maximum exponent of N, define

\alpha = \max_{d} \alpha(d) = \max_{w,l,d} \{ j_o + j_i - l - 1 \}.  (5.24)

This is identical to (4.48), except that the maximisation is performed over d rather than h. The SCTCM differs from the SCCC in the use of a trellis encoder as the inner code, rather than a convolutional encoder. However, from an input point of view these two encoders appear the same, so the techniques of Section 4.3.1 can easily be modified for SCTCM.


If a non-recursive inner trellis encoder is used then the minimum input Hamming weight of the inner code is still 1, so (4.49) still holds, and non-recursive inner codes are still precluded. If a recursive inner trellis encoder is used then the minimum input Hamming weight of the inner code is still 2, and (4.52) is still valid for SCTCM. In [15] it is shown that for SCCC this is maximised for l = d_H^o, and that the inequality becomes an equality if w ∈ W_H, where W_H is the set of input Hamming weights w which generate codewords of the outer code with Hamming weight l = d_H^o. As SCCC and SCTCM both have a convolutional code as the outer code this will still be true for SCTCM. This means that the maximum exponent of N for a recursive inner trellis code is the same as given for SCCC in (4.53), which is always negative. Thus a recursive inner trellis code should also be used in SCTCM, and an outer code with as large a free distance as possible should again be used. The similarity of the maximum exponents of N for both SCCC and SCTCM means that the P_b bound is also dominated by error events for which l = d_H^o, i.e., error events from the inner decoder which generate errors in the outer code with Hamming weight equal to the free distance of the outer code.

5.3.2 The Exponent of N for the SCTCM Free Distance

As E_b/N_0 → ∞ the performance of the SCTCM will be dominated by the minimum value of d in the bound, which is the free Euclidean distance of the SCTCM, d_E^S. The maximum number of concatenated error events of the outer code will be bounded by (4.55), and the maximum number of concatenated error events of the inner code will be bounded by

j_i \le \frac{d_E^S}{d_E^i}.  (5.25)

Therefore a similar expression to (4.56) can be produced for SCTCM:

\alpha(d_E^S) \le \max_{l} \left\{ \frac{d_E^S}{d_E^i} + \frac{l}{d_H^o} - l - 1 \right\},  (5.26)

which, as the -l term will dominate, will be maximised for the minimum value of l yielding codewords of output Euclidean distance d_E^S. Now, the minimum weight l will result from codewords of the outer code with output Hamming weight d_H^o, so min{l} = d_H^o and (5.26) becomes

\alpha(d_E^S) \le \frac{d_E^S}{d_E^i} - d_H^o.  (5.27)


Most SCTCM will have a free distance greater than the free distance of their inner codes, but generally d_E^S < 2 d_E^i, so that the ratio d_E^S/d_E^i contributes at most one inner error event and (5.27) therefore becomes

\alpha(d_E^S) \le 1 - d_H^o,  (5.28)

which is similar to (4.58) for SCCC and is always negative. This bound implies that, as was the case for SCCC, at high E_b/N_0 increasing the interleaver length will always result in a reduction of the BER. Also, this interleaver gain will be increased by choosing an outer code with large d_H^o.

5.3.3 SCTCM Design Criteria

From (4.53) the BER bound for a SCTCM is dominated by error events for which l = d_H^o, as was the case for SCCC. Therefore the same approach to the inner code design can be taken: that of minimising the contribution to the bound of terms with l = d_H^o. However, short constraint length inner codes proved to be an exception to this. The free Hamming distance of the four state outer constituent code was 3 (see Table 4.3), indicating that the search for the inner code should minimise the contribution of the codewords with information weight 3 to the bound. When this was done for a 4 state rate 3/4 16QAM inner code it was found that the resulting SCTCM was dominated by terms with l = 4. Closer examination of the distance spectra of the inner constituent code revealed that the terms with l = 3 had been minimised to the point that they no longer dominated the SCTCM distance spectrum. This was because of the simple nature of the inner constituent code, which has only four states but eight branches per state (resulting in parallel branches). The design criteria therefore include the secondary criterion of minimising the contribution of l = d_H^o + 1 terms for simple codes. (It is not sufficient to simply change the primary search criterion to that of terms with l = d_H^o + 1, as this will result in codes with a greater contribution from l = d_H^o terms.)

Again, both the maximum exponent of N (4.53) and the exponent of N for the SCTCM free distance (5.28) are minimised by maximising d_H^o. Therefore the outer code should have as large a free distance as is practical. As was the case for both SCCC and PCCC, S-type random interleavers [47] should be used between the inner and outer codes.

In Chapter 2 it was concluded that Gray mapped signal sets give the best performance with TCM schemes. This is because Gray mapping minimises both the squared Euclidean distance and the input Hamming weight of error events. This is also a desirable quality in the SCTCM bound, so Gray mapped signal sets should be used. An example demonstrating the superiority of Gray mapped signal sets in SCTCM applications is shown in Figure 5.2. Here the simulated performance of a SCTCM scheme using a Gray mapped signal set and a SCTCM scheme using a naturally mapped signal set are compared. In both cases the codes are rate 1/3 8PSK schemes, with a 4 state inner code, a 4 state outer code (taken from Table 4.2), a 10000 bit encoding delay, and 8 decoder iterations. The Gray mapped inner code has generator polynomials h0 = 7, h1 = 5, and h2 = 5, and the naturally mapped inner code has generator polynomials h0 = 7, h1 = 2, and h2 = 6 (both found using the design criteria given below).
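The advantage of Gray mapping claimed here is easy to illustrate: for 8PSK, the most likely symbol errors are to the two nearest neighbours, and under a Gray labelling such an error corrupts exactly one bit, while under the natural labelling it can corrupt up to three. A minimal sketch (the two labellings are the standard ones, assumed to match Figure 2.1):

```python
def max_adjacent_bit_flips(labels):
    """Worst-case Hamming weight of the label difference between adjacent
    8PSK points, i.e. the bit errors caused by a nearest-neighbour symbol
    error."""
    n = len(labels)
    return max(bin(labels[k] ^ labels[(k + 1) % n]).count("1")
               for k in range(n))

natural = list(range(8))                 # 000, 001, ..., 111 around the circle
gray = [k ^ (k >> 1) for k in range(8)]  # reflected Gray labelling

worst_gray = max_adjacent_bit_flips(gray)        # one bit per nearest-neighbour error
worst_natural = max_adjacent_bit_flips(natural)  # up to three bits (e.g. 011 -> 100)
```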

The design criteria for SCTCM are quite similar to those for SCCC given in Section 4.3.4, and are summarised below:

- The inner code should be a recursive trellis code;
- The contribution of error events from the inner code with l = d_H^o should be minimised;
- For simple codes the contribution of error events from the inner code with l = d_H^o + 1 should also be minimised;
- The outer code should have maximum free distance;
- A random bit-wise interleaver should be used;
- The interleaver should meet the S-type criteria;
- The interleaver should be as large as possible; and
- Gray mapped signal sets should be used.

Using these design criteria a search can now be conducted for suitable recursive convolutional codes to be used as the inner code in an SCTCM. The approach taken will be to estimate the contribution of the inner code to the SCTCM BER bound, and minimise this.

[Figure: simulated bit error rate (10^-1 to 10^-6) versus E_b/N_0 from 0.9 to 1.6 dB; curves for the Gray mapped and naturally mapped schemes]

Figure 5.2: Comparison of rate 1/3 8PSK SCTCM with Gray mapped and naturally mapped signal sets


5.4 Search For New SCTCM

5.4.1 SCTCM Search Criteria

The conclusion of the previous section was that the bound is dominated by error events with l = d_H^o. Therefore the bound contribution from error events of the inner code with l = d_H^o should be considered and minimised. Again the approach will be to calculate the contribution to the BER bound of all codewords from the inner code with input Hamming weight d_H^o, and find the code which minimises this. Using Equations (5.5) and (5.7) with l = d_H^o, the bound can be approximated by

P_b \lesssim \sum_{w=1}^{NR_o} \frac{w A^{Co}_{w,d_H^o}}{NR_o \binom{N}{d_H^o}} \sum_{d \in D} B^{Ci}_{d_H^o,d} Q\left(\sqrt{d R_o R_i E_b/N_0}\right).  (5.29)

As has already been observed, the outer code should have as large a d_H^o as possible. This equation also implies that the outer code should have the minimum number of nearest neighbours, \sum_{w=1}^{NR_o} A^{Co}_{w,d_H^o}. This is the usual criterion for a good convolutional code, and the codes of, say, [1] can be used.

The outer code is therefore fixed, and

P_b \lesssim D \sum_{d \in D} B^{Ci}_{d_H^o,d} Q\left(\sqrt{d R_o R_i E_b/N_0}\right),  (5.30)

where

D = \sum_{w=1}^{NR_o} \frac{w A^{Co}_{w,d_H^o}}{NR_o \binom{N}{d_H^o}}.  (5.31)

From (5.16) this can be expressed as

P_b \lesssim D \sum_{d \in D} \sum_{j_i} \binom{N/p}{j_i} S^{Ci}_{d_H^o,d,j_i} Q\left(\sqrt{d R_o R_i E_b/N_0}\right).  (5.32)

Therefore the search criterion for the inner code is

\min_{C_i} \left\{ \sum_{d \in D'} \sum_{j_i} \binom{N/p}{j_i} S^{Ci}_{d_H^o,d,j_i} Q\left(\sqrt{d R_o R_i E_b/N_0}\right) \right\},  (5.33)

where D' is the set of squared Euclidean distances which have a significant effect on the result. Note that (5.33) is dependent on d_H^o, N, R_o, R_i, and E_b/N_0.

Typically rate b/(b+1) constituent codes will be used for both inner and outer codes, and recall that R_o = k/p and R_i = p/n, so that both R_o and R_i will be fixed according to the design of the SCTCM rate. The number of levels in the signal set will be 2^n, and the bandwidth efficiency will be k information bits per symbol. The free distance d_H^o of the outer code will be fixed by the desired complexity of the outer code, leaving N and E_b/N_0 as the only unknowns in the search criterion. N should be set according to the desired codec delay, but for large N the search criterion will not be sensitive to this parameter. The desired operating E_b/N_0 point should be used in the code search.
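A sketch of how the search criterion (5.33) might be evaluated for competing inner codes is given below. The candidate codes are represented only by hypothetical truncated worst-case spectra S^{Ci}_{d_H^o,d,j_i}; a real search would generate these spectra from each code's trellis:

```python
import math

def q_func(x: float) -> float:
    """Gaussian tail function Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def search_metric(spectrum, N, p, Ro, Ri, ebno_db):
    """Inner-code search criterion of (5.33).

    spectrum maps (d, j_i) -> S: squared Euclidean distance d and number of
    concatenated error events j_i to the event multiplicity, already
    truncated to the significant distance set D'.
    """
    ebno = 10.0 ** (ebno_db / 10.0)
    return sum(math.comb(N // p, ji) * s * q_func(math.sqrt(d * Ro * Ri * ebno))
               for (d, ji), s in spectrum.items())

# Two hypothetical candidates; only their toy spectra differ.
candidates = {
    "code A": {(2.0, 1): 4.0, (4.0, 2): 1.0},
    "code B": {(0.585786, 1): 2.0, (4.0, 2): 1.0},
}
best = min(candidates,
           key=lambda c: search_metric(candidates[c], N=10000, p=3,
                                       Ro=0.5, Ri=0.75, ebno_db=4.0))
```

The metric is a relative figure of merit: only the ordering of the candidates matters, so the common normalisation constant D of (5.30) and (5.31) can be dropped.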

5.4.2 Search Results

The search criteria of the previous section were used to perform a search for new SCTCM inner constituent codes. Both 8PSK and 16QAM signal sets were considered (the techniques can also be used to search for codes suitable for higher order modulation schemes). To ensure bandwidth efficiency only rate p/(p+1) codes were considered. The search was conducted for rate 2/3 8PSK and rate 3/4 16QAM inner codes, each with 4, 8, and 16 states. In each case a rate (p-1)/p outer code with the same number of states as the inner code was used, giving an overall SCTCM with rate k/(k+2) and a bandwidth efficiency of k information bits per symbol. The outer codes were taken from [1, Table 11.1]: Table 4.2 shows the rate 1/2 codes used with the rate 2/3 8PSK inner codes, and Table 4.3 shows the rate 2/3 codes used with the rate 3/4 16QAM inner codes. For the search an interleaver size which gives an encoding delay of 10000 information bits was used, and an operating point of E_b/N_0 = 4.0 dB was assumed. Tables 5.2 and 5.3 show the new inner codes found for rates 2/3 8PSK and 3/4 16QAM, respectively. See Appendix A for a description of the generator polynomials. Also shown in these tables is the information Hamming weight (nominally the free distance d_H^o of the outer code) which was used as the search criterion. The weights in parentheses are the secondary search criteria used, if applicable. Figures 5.3 and 5.4 show the encoder structures for the resultant rate 1/3 8PSK SCTCM and rate 1/2 16QAM SCTCM, respectively. In both cases the signal sets of Figure 2.1 were used.

Inner States  Outer States  Search Criteria  Generator Polynomial
4             4             l = 5            h0 = 7,  h1 = 5,  h2 = 5
8             8             l = 6            h0 = 13, h1 = 17, h2 = 15
16            16            l = 7            h0 = 23, h1 = 35, h2 = 25

Table 5.2: Rate 2/3 8PSK inner constituent codes for SCTCM with the same number of states in inner and outer codes

Inner States  Outer States  Search Criteria  Generator Polynomial
4             4             l = 3 (4)        h0 = 7,  h1 = 5,  h2 = 5,  h3 = 5
8             8             l = 4            h0 = 13, h1 = 11, h2 = 17, h3 = 15
16            16            l = 5            h0 = 23, h1 = 21, h2 = 37, h3 = 37

Table 5.3: Rate 3/4 16QAM inner constituent codes for SCTCM with the same number of states in inner and outer codes

The effect of reducing the number of states in the inner code and of using a different number of states in the inner and outer codes was also investigated. As rate 1/2 16QAM is of more practical significance, code searches were conducted for 2 state rate 3/4 constituent trellis codes. There are no 2 state rate 2/3 constituent convolutional codes listed in [1, Table 11.1], so the 4 state code from Table 4.3 was chosen as the outer code. Code searches were conducted for inner trellis codes with 2 states to be used in conjunction with outer codes with 4, 8, and 16 states. To examine the effect of using a different number of states in the inner and outer codes but with more states in each, a code search was also conducted for an inner trellis code with 4 states to be used in conjunction with an outer code with 8 states.

These searches further demonstrated the need for the secondary search criteria, as there are a number of 2 state rate 3/4 codes with no l = 3 terms at all (or indeed any terms with odd information Hamming weight). In these cases the codes were selected using the secondary criterion of l = d_H^o + 1. These codes are shown in Table 5.4.

Figure 5.5 shows the performance bounds for the rate 1/3 8PSK SCTCM constructed using the inner codes of Table 5.2. Also shown in this figure are the performance bounds for the rate 1/3 SCCC of Table 4.5 when used with a Gray mapped 8PSK signal set. With 4 state inner and outer codes there is an improvement of approximately 0.8 dB when SCTCM is used, with 8 state inner and outer codes there is no difference in performance, and with 16 state inner and outer codes both searches resulted in the same code. A similar comparison is made in Figure 5.6 for the performance bounds of the rate 1/2 16QAM SCTCM of Table 5.3 and the rate 1/2 16QAM SCCC of Table 4.6 when used with a Gray mapped 16QAM signal set. This time there is a performance improvement in all cases: with 4 state inner and outer codes there is an improvement of approximately 0.3 dB, with 8 state inner and outer codes there is an improvement of up to 0.4 dB, and with 16 state inner and outer codes there is an improvement of 0.2 dB. Finally, in Figure 5.7 the performance bounds of all the rate 1/2 16QAM SCTCM given in Tables 5.3 and 5.4 are summarised.

Inner States  Outer States  Search Criteria  Generator Polynomial
2             4             l = 3 (4)        h0 = 3, h1 = 1, h2 = 1, h3 = 1
2             8             l = 4            h0 = 3, h1 = 3, h2 = 1, h3 = 1
2             16            l = 5 (6)        h0 = 3, h1 = 1, h2 = 1, h3 = 1
4             8             l = 4            h0 = 7, h1 = 5, h2 = 5, h3 = 5

Table 5.4: Rate 3/4 16QAM inner constituent codes for SCTCM with different numbers of states in inner and outer codes

[Figure: Rate 1/2 Convolutional Encoder -> 20000 bit Random Interleaver -> Rate 2/3 Recursive Convolutional Encoder -> Gray Mapped 8PSK]

Figure 5.3: Rate 1/3 8PSK SCTCM encoder

5.5 SCTCM Performance

The performance of the SCTCM codes of the previous section was determined by simulating their operation on an additive white Gaussian noise (AWGN) channel. The codes were decoded using iterated MAP algorithms. The SCTCM decoder is identical to the SCCC decoder depicted in Figure 4.8, and the MAP decoders used the bit-wise SISO algorithm described on page 57. As discussed in Section 4.1.5 the code symbol probability P_t(c; I) for the inner decoder (used in equations (4.13), (4.14), and (4.27)) must be determined from the output of the channel and an estimate of the noise variance. At time t the inner encoder will produce symbol C_t, which will be mapped by the signal set mapper to the complex symbol with in-phase and quadrature components T_t^i and T_t^q, respectively. This symbol is transmitted over an AWGN channel, resulting in received values R_t^i = T_t^i + N_t^i and R_t^q = T_t^q + N_t^q, where N^i and N^q are independent Gaussian distributed random variables with zero mean and variances \sigma_i^2 and \sigma_q^2. Given the received symbol R_t, the probability P_t(c; I) for all 2^n

[Figure: Rate 2/3 Convolutional Encoder -> 15000 bit Random Interleaver -> Rate 3/4 Recursive Convolutional Encoder -> Gray Mapped 16QAM]

Figure 5.4: Rate 1/2 16QAM SCTCM encoder
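The symbol probability computation described above can be sketched as follows, assuming equal a priori symbol probabilities so that P_t(c; I) is proportional to the Gaussian likelihood of the received sample (the QPSK constellation here is a stand-in for the 8PSK and 16QAM sets of Figure 2.1):

```python
import math

def symbol_probs(r_i, r_q, constellation, var_i, var_q):
    """Normalised probabilities P_t(c; I) of each transmitted point given the
    received in-phase/quadrature pair (r_i, r_q) on an AWGN channel with
    per-component noise variances var_i and var_q.

    constellation maps symbol label c -> (t_i, t_q).
    """
    likelihood = {}
    for c, (t_i, t_q) in constellation.items():
        exponent = -((r_i - t_i) ** 2 / (2.0 * var_i)
                     + (r_q - t_q) ** 2 / (2.0 * var_q))
        likelihood[c] = math.exp(exponent)
    total = sum(likelihood.values())
    return {c: v / total for c, v in likelihood.items()}

# Received sample near the first QPSK point.
qpsk = {0: (1.0, 0.0), 1: (0.0, 1.0), 2: (-1.0, 0.0), 3: (0.0, -1.0)}
probs = symbol_probs(0.9, 0.1, qpsk, var_i=0.5, var_q=0.5)
```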

[Figure: BER bound curves for the 4 state SCTCM, 4 state SCCC, 8 state SCTCM, 8 state SCCC, and the 16 state SCCC and SCTCM]