Channel Coding
From LNTwww
===Brief summary===
{{BlueBox|TEXT=»'''Channel Coding'''« $($or »Error-Control Coding«$)$ includes both »Error Detection« and »Forward Error Correction«,
*which makes digital signal transmission possible in the first place when the channel is poor $($small SNR$)$,
*and which leads to very small error rates when the channel is good enough $($large SNR$)$.

Here are some keywords from the book content:
# Binary linear block codes: Generator matrix, parity-check matrix and decoding. Examples: Single parity-check codes, repetition codes, Hamming codes $($a short code sketch follows this list$)$.
# Error probability bounds: Minimum distance, Union bound, Shannon bound. Channel coding theorem and channel capacity: error rate vs. code rate.
# Reed-Solomon codes: Algebra fundamentals, extension fields, code parameters, encoding and decoding principles, Singleton bound, applications.
# Convolutional codes: Algebraic and polynomial description, state and trellis diagrams, decoding using the Viterbi and BCJR algorithms.
# Iterative decoding methods: Soft-in soft-out decoders, product codes, turbo codes and low-density parity-check $($LDPC$)$ codes.
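To illustrate the first keyword, here is a minimal Python sketch $($not part of the book; the matrices are one common systematic form of the $(7, 4)$ Hamming code, and all names are chosen for this sketch only$)$ showing encoding with a generator matrix and single-error correction via syndrome decoding with the parity-check matrix:
<syntaxhighlight lang="python">
import numpy as np

# Minimal sketch: systematic (7, 4) Hamming code.
# G consists of the 4x4 identity followed by the parity part P;
# H consists of P transposed followed by the 3x3 identity.
# All arithmetic is carried out modulo 2.
G = np.array([[1,0,0,0, 1,1,0],
              [0,1,0,0, 1,0,1],
              [0,0,1,0, 0,1,1],
              [0,0,0,1, 1,1,1]])
H = np.array([[1,1,0,1, 1,0,0],
              [1,0,1,1, 0,1,0],
              [0,1,1,1, 0,0,1]])

def encode(u):
    """Map the 4-bit information word u to the 7-bit code word x = u G (mod 2)."""
    return (u @ G) % 2

def decode(r):
    """Syndrome decoding: corrects at most one bit error in the received word r."""
    s = (H @ r) % 2                                        # syndrome s = H r^T
    if s.any():                                            # nonzero syndrome => single error assumed
        err_pos = int(np.argmax((H.T == s).all(axis=1)))   # error position = matching column of H
        r = r.copy()
        r[err_pos] ^= 1                                    # flip the corrupted bit back
    return r[:4]                                           # systematic code: first 4 bits carry the information

u = np.array([1, 0, 1, 1])            # example information word (arbitrary choice)
x = encode(u)                         # -> [1 0 1 1 0 1 0]
r = x.copy(); r[2] ^= 1               # channel flips one bit
assert (decode(r) == u).all()         # the single error has been corrected
</syntaxhighlight>
The final assertion checks that a single flipped bit is corrected; pure error detection would correspond to merely testing whether the syndrome is nonzero.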
<u>Notes:</u>

*Knowledge of »[[Theory_of_Stochastic_Signals]]« and »[[Information theory]]« is helpful for understanding channel coding, but not essential.

*The mathematics of channel coding differs fundamentally from that of other disciplines. However, analogies can often be recognized, e.g., to [[Signal_Representation/The_Convolution_Theorem_and_Operation|»conventional convolution«]].

*A type of coding with a different goal is [[Information_Theory/General_Description|»Source coding«]] $($"data compression"$)$: here, redundancy is not added but reduced.

*Another type of coding is [[Digital_Signal_Transmission/Basics_of_Coded_Transmission#Source_coding_.E2.80.93_Channel_coding_.E2.80.93_Line_coding|»Line Coding«]], whose aim is to adapt the transmitted signal spectrally to the transmission channel as well as possible.


⇒ First, here is a »'''contents overview'''« based on the »'''four main chapters'''« with a total of »'''22 individual chapters'''« and »'''175 sections'''«.}}
===Contents===
{{Collapsible-Kopf}}
{{Collapse1| header=Binary Block Codes for Channel Coding | submenu=
*[[/Objective of Channel Coding/]]
*[[/Channel Models and Decision Structures/]]
*[[/Examples of Binary Block Codes/]]
*[[/General Description of Linear Block Codes/]]
*[[/Decoding of Linear Block Codes/]]
*[[/Limits for Block Error Probability/]]
*[[/Information Theoretical Limits of Channel Coding/]]
}}
{{Collapse2 | header=Reed–Solomon–Codes and Their Decoding
|submenu=
*[[/Some Basics of Algebra/]]
*[[/Extension Field/]]
*[[/Definition and Properties of Reed-Solomon Codes/]]
*[[/Reed-Solomon Decoding for the Erasure Channel/]]
*[[/Error Correction According to Reed-Solomon Coding/]]
*[[/Error Probability and Application Areas/]]
}}
{{Collapse3 | header=Convolutional Codes and Their Decoding
|submenu=
*[[/Basics of Convolutional Coding/]]
*[[/Algebraic and Polynomial Description/]]
*[[/Code Description with State and Trellis Diagram/]]
*[[/Decoding of Convolutional Codes/]]
*[[/Distance Characteristics and Error Probability Bounds/]]
}}
{{Collapse4 | header=Iterative Decoding Methods
|submenu=
*[[/Soft-in Soft-Out Decoder/]]
*[[/The Basics of Product Codes/]]
*[[/The Basics of Turbo Codes/]]
*[[/The Basics of Low-Density Parity Check Codes/]]
}}
{{Collapsible-Fuß}}
===Exercises and multimedia===
{{BlaueBox|TEXT=
In addition to these theory pages, we also offer exercises and multimedia modules on this topic, which can help to clarify the teaching material:

$(1)$   [https://en.lntwww.de/Category:Channel_Coding:_Exercises $\text{Exercises}$]

$(2)$   [[LNTwww:Learning_Videos_to_"Channel_Coding"|$\text{Learning videos}$]]

$(3)$   [[LNTwww:Applets_to_"Channel_Coding"|$\text{Applets}$]] }}
===Further links===
{{BlaueBox|TEXT=
$(4)$   [[LNTwww:Bibliography_to_"Channel_Coding"|$\text{Bibliography}$]]

$(5)$   [[LNTwww:Imprint_for_the_book_"Channel_Coding"|$\text{Imprint}$]]}}
<br><br>
{{Display}}