Exercise 4.7: Several Parallel Gaussian Channels


Signal space points in digital modulation

The channel capacity of the AWGN channel, characterized by  $Y = X + N$,  was given in the  theory section  as follows
(with the additional unit  "bit"):

$$C_{\rm AWGN}(P_X,\ P_N) = {1}/{2} \cdot {\rm log}_2\hspace{0.05cm}\left ( 1 + {P_X}/{P_N} \right )\hspace{0.05cm}.$$

The quantities used have the following meaning:

  • $P_X$  is the transmission power   ⇒   variance of the random variable  $X$,
  • $P_N$  is the noise power   ⇒   variance of the random variable  $N$.


If  $K$  identical Gaussian channels are used in parallel, the total capacity is:

$$C_K(P_X,\ P_N) = K \cdot C_{\rm AWGN}(P_X/K, \ P_N) \hspace{0.05cm}.$$

Here it is taken into account that

  • the same noise power  $P_N$  is present in each channel,
  • each channel is allocated the same transmit power  $P_X/K$,
  • the total power is thus equal to  $P_X$,  exactly as in the case  $K = 1$.
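
As a quick numerical cross-check of these two formulas, here is a minimal Python sketch; the function names c_awgn and c_k are chosen freely for illustration and do not appear in the exercise itself.

 import math

 def c_awgn(p_x, p_n):
     # Capacity of a single AWGN channel in "bit":  1/2 · log2(1 + P_X/P_N)
     return 0.5 * math.log2(1 + p_x / p_n)

 def c_k(p_x, p_n, k):
     # K parallel channels: the total power P_X is split equally, P_X/K per channel
     return k * c_awgn(p_x / k, p_n)

 print(c_awgn(15, 1))   # 2.0 bit      (single channel, P_X/P_N = 15)
 print(c_k(15, 1, 2))   # ≈ 3.087 bit  (two channels, same total power)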


In the adjacent graph, the signal space points for some digital modulation schemes are given:


At the beginning of this exercise, check which  $K$–parameter is valid for each method.





Hints:



Questions

(1)  Which parameters  $K$  are valid for the following modulation methods?

  $K \ = \ $  $\text{ (ASK)}$
  $K \ = \ $  $\text{ (BPSK)}$
  $K \ = \ $  $\text{ (4-QAM)}$
  $K \ = \ $  $\text{ (8-PSK)}$
  $K \ = \ $  $\text{ (16-ASK/PSK)}$

(2)  What is the channel capacity  $C_K$  for  $K$  equal channels,  each with the noise power  $P_N$  and the transmission power  $P_X/K$?

  $C_K = K/2 \cdot \log_2 \ \big[1 + P_X/P_N \big]$.
  $C_K = K/2 \cdot \log_2 \ \big[1 + P_X/(K \cdot P_N) \big]$.
  $C_K = 1/2 \cdot \log_2 \ \big[1 + P_X/P_N \big]$.

(3)  What are the capacities for  $P_X/P_N = 15$?

  $K = 1\text{:} \ \ C_K \ = \ $  $\ \rm bit$
  $K = 2\text{:} \ \ C_K \ = \ $  $\ \rm bit$
  $K = 4\text{:} \ \ C_K \ = \ $  $\ \rm bit$

(4)  Is there a  (theoretical)  optimum with respect to the number  $K$  of channels?

  Yes:   The largest channel capacity results for  $K = 2$.
  Yes:   The largest channel capacity results for  $K = 4$.
  No:   The larger  $K$,  the larger the channel capacity.
  The limit value for  $K \to \infty$  is  $C_K = \dfrac{P_X/P_N}{2 \cdot \ln (2)}$  in  "bit".


Solution

(1)  The parameter  $K$  is equal to the dimension of the signal space representation:

  • For  ASK and BPSK,  $\underline{K=1}$.
  • For  constellations 3 to 5,  however,  $\underline{K=2}$  (orthogonal modulation with cosine and sine).


(2)  Correct is the proposed solution 2:

  • For each of the channels  $(1 ≤ k ≤ K)$,  the channel capacity is   $C_k = 1/2 \cdot \log_2 \ \big[1 + (P_X/K) /P_N \big]$. 
  • The total capacity is then larger by a factor of  $K$:
$$C_K(P_X) = \sum_{k= 1}^K \hspace{0.1cm}C_k = \frac{K}{2} \cdot {\rm log}_2\hspace{0.05cm}\left ( 1 + \frac{P_X}{K \cdot P_N} \right )\hspace{0.05cm}.$$
  • The proposed solution 1 is too optimistic.  It would only apply if the total power were limited to  $K \cdot P_X$.
  • Proposition 3 would imply that no capacity increase is achieved by using multiple independent channels,  which is obviously not true.
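
A short Python sketch (values and variable names chosen only for illustration) confirms that summing the  $K$  identical per-channel capacities gives exactly the closed form of proposed solution 2, whereas proposed solution 1 yields a larger value because it implicitly allows the total power  $K \cdot P_X$:

 import math

 xi, K = 15.0, 4                              # xi = P_X/P_N, example values
 per_channel = 0.5 * math.log2(1 + xi / K)    # capacity of one sub-channel
 print(K * per_channel)                       # sum over K channels:         ≈ 4.496 bit
 print(K / 2 * math.log2(1 + xi / K))         # closed form (solution 2):    ≈ 4.496 bit
 print(K / 2 * math.log2(1 + xi))             # solution 1 (too optimistic):   8.0 bit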


Channel capacity  $C_K$  of  $K$  parallel Gaussian channels for different  $\xi = P_X/P_N$

(3)  The table shows the results for  $K = 1$,  $K = 2$  and  $K = 4$,  and various signal–to–noise power ratios  $\xi = P_X/P_N$.
For  $\xi = P_X/P_N = 15$  (highlighted column),  the result is:

  • $K=1$:   $C_K = 1/2 · \log_2 \ (16)\hspace{0.05cm}\underline{ = 2.000}$ bit,
  • $K=2$:   $C_K = 2/2 · \log_2 \ (8.5)\hspace{0.05cm}\underline{ = 3.087}$ bit,
  • $K=4$:   $C_K = 4/2 · \log_2 \ (4.75)\hspace{0.05cm}\underline{ = 4.496}$ bit.
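
The three underlined table entries can be reproduced with a few lines of Python (a sketch based on the capacity formula from part (2)):

 import math

 xi = 15                                    # P_X/P_N of the highlighted column
 for K in (1, 2, 4):
     C = K / 2 * math.log2(1 + xi / K)
     print(f"K = {K}:  C_K = {C:.3f} bit")  # 2.000, 3.087 and 4.496 bit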


(4)  Propositions 3 and 4 are correct, as the following calculations show:

  • It is already obvious from the above table that the first proposed solution must be wrong.
  • We now write the channel capacity using the natural logarithm and the abbreviation  $\xi = P_X/P_N$:
$$C_{\rm nat}(\xi, K) ={K}/{2} \cdot {\rm ln}\hspace{0.05cm}\left ( 1 + {\xi}/{K} \right )\hspace{0.05cm}.$$
  • Then,  for large values of  $K$,  i.e. for small values of the quotient  $\varepsilon =\xi/K$,  the following holds:
$${\rm ln}\hspace{0.05cm}\left ( 1 + \varepsilon \right )= \varepsilon - \frac{\varepsilon^2}{2} + \frac{\varepsilon^3}{3} - ... \hspace{0.3cm}\Rightarrow \hspace{0.3cm} C_{\rm nat}(\xi, K) = \frac{K}{2} \cdot \left [ \frac{\xi}{K} - \frac{\xi^2}{2K^2} + \frac{\xi^3}{3K^3} - \text{...} \right ]$$
$$\hspace{0.3cm}\Rightarrow \hspace{0.3cm} C_{\rm bit}(\xi, K) = \frac{\xi}{2 \cdot {\rm ln}\hspace{0.1cm}(2)} \cdot \left [ 1 - \frac{\xi}{2K} + \frac{\xi^2}{3K^2} -\frac{\xi^3}{4K^3} + \frac{\xi^4}{5K^4} - \text{...} \right ] \hspace{0.05cm}.$$
  • For  $K → ∞$,  the limit value given in the last proposition results:
$$C_{\rm bit}(\xi, K \rightarrow\infty) = \frac{\xi}{2 \cdot {\rm ln}\hspace{0.1cm}(2)} = \frac{P_X/P_N}{2 \cdot {\rm ln}\hspace{0.1cm}(2)} \hspace{0.05cm}.$$
  • For smaller values of  $K$,  the result is always a smaller  $C$–value, since
$$\frac{\xi}{2K} > \frac{\xi^2}{3K^2}\hspace{0.05cm}, \hspace{0.5cm} \frac{\xi^3}{4K^3} > \frac{\xi^4}{5K^4} \hspace{0.05cm}, \hspace{0.5cm} {\rm etc.}$$

The last row of the table shows:   for large  $\xi$–values, one is still far away from the theoretical maximum  $($for $K → ∞)$  even with  $K = 4$.
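
The slow convergence towards this limit can also be checked numerically. The following Python sketch (illustrative only) evaluates  $C_K$  for increasing  $K$  and compares it with the limit value  $\xi/(2 \cdot \ln 2)$  for  $\xi = 15$:

 import math

 xi = 15                                        # P_X/P_N
 for K in (1, 2, 4, 16, 256):
     C = K / 2 * math.log2(1 + xi / K)
     print(f"K = {K:4d}:  C_K = {C:.3f} bit")   # 2.000, 3.087, 4.496, 7.634, 10.515 bit
 print(f"K -> oo:   C_K = {xi / (2 * math.log(2)):.3f} bit")   # limit ≈ 10.820 bit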