I. INTRODUCTION
We consider a state-dependent orthogonal relay channel, in
which the channels connecting the source to the relay, and the
source and the relay to the destination are orthogonal, and are
governed by a state sequence, which is assumed to be known
only at the destination. We call this model the state-dependent
orthogonal relay channel with state information available at
the destination, and refer to it as the ORC-D model. See Figure
1 for an illustration of the ORC-D channel model.
Many practical communication scenarios can be modelled by the ORC-D model. For example, consider a cognitive network with a relay, in which the transmit signal of the secondary user interferes simultaneously with the received primary user signals at both the relay and the destination. After decoding the secondary user's message, the destination obtains information about the interference affecting the source-relay channel, which can be exploited to decode the primary transmitter's message. Note that the relay may be oblivious to the presence of the secondary user, and hence, may not have access to this state information.
This paper was presented in part at the IEEE Information Theory Workshop (ITW), Lausanne, Switzerland, Sept. 2012 [1]. I. Estella Aguerri is with the Mathematical and Algorithmic Sciences Laboratory, France Research Center, Huawei Technologies Co., Ltd., Boulogne-Billancourt 92100, France. E-mail: inaki.estella@huawei.com. Deniz Gunduz is with the Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2BT, U.K. E-mail: d.gunduz@imperial.ac.uk. The research presented here was done during the Ph.D. studies of Iñaki Estella Aguerri at Imperial College London.
We derive an upper bound on the capacity of the ORC-D model, and show that it is achievable by the pDCF scheme.
Fig. 1. Orthogonal state-dependent relay channel with channel state information available at the destination, called the ORC-D model.
The relay encoding functions are denoted by $\{f_{r,i}\}_{i=1}^{n}$, for $i = 1, \ldots, n$. The average probability of error is
$$P_e^{(n)} = \frac{1}{M}\sum_{w=1}^{M} \Pr\{g(Y_1^n, Y_2^n, Z^n) \neq w \mid W = w\}. \qquad (4)$$
where the supremum is taken over all joint pmfs of the form
$$p(v)\,p(u|v)\,p(x|u)\,p(x_R|v)\,p(y, y_R \mid x, x_R)\,p(\hat{y}_R \mid x_R, y_R, u). \qquad (11)$$

$$Y_{R1} = X_{11} \oplus N_1 \oplus Z \qquad (12)$$
and
$$Y_{R2} = X_{12} \oplus N_2, \qquad (13)$$
where $N_1, N_2 \sim \mathrm{Ber}(\delta)$ and $Z \sim \mathrm{Ber}(p_z)$.
$$R_{DF} = \min\{R_1,\; 2 - h_2(\delta \star p_z) - h_2(\delta)\}, \qquad (16)$$
where $a \star b \triangleq a(1-b) + (1-a)b$.
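As a numeric sanity check, the DF rate in (16) can be evaluated directly. The snippet below is an illustrative sketch (not code from the paper), using delta for the binary noise parameter and pz for the state parameter:

```python
import math

def h2(p: float) -> float:
    """Binary entropy function in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def star(a: float, b: float) -> float:
    """Binary convolution: a * b = a(1-b) + (1-a)b."""
    return a * (1 - b) + (1 - a) * b

def r_df(r1: float, delta: float, pz: float) -> float:
    """DF rate from (16): min{R1, 2 - h2(delta * pz) - h2(delta)}."""
    return min(r1, 2 - h2(star(delta, pz)) - h2(delta))

# With delta = 0 the source-relay links are noiseless and DF is limited only by R1.
print(r_df(1.2, 0.0, 0.15))  # -> 1.2
```

Note that at $\delta = 1/2$ the relay observation is pure noise and the DF rate collapses to zero, matching the $2 - h_2(\cdot) - h_2(\cdot)$ term.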
Following (14), the rate achievable by the CF scheme in the parallel binary symmetric MRC-D is given by
$$R_{CF} = \max I(X_{11} X_{12};\, \hat{Y}_R \mid Z), \quad \text{s.t. } R_1 \ge I(Y_{R1} Y_{R2};\, \hat{Y}_R \mid Z), \qquad (17)$$
where the maximization is over $p(z)\,p(x_{11}, x_{12})\,p(y_{R1} \mid z, x_{11})\,p(y_{R2} \mid x_{12})\,p(\hat{y}_R \mid y_{R1}, y_{R2})$.
Let us define $h_2^{-1}(q)$ as the inverse of the binary entropy function $h_2(p)$, $p \in [0, 1/2]$, for $q \ge 0$. For $q < 0$, we define $h_2^{-1}(q) = 0$.
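Since $h_2^{-1}$ has no closed form, it can be evaluated numerically by bisection on $[0, 1/2]$, where $h_2$ is strictly increasing. A small sketch (my own, with the $q < 0$ convention from the text):

```python
import math

def h2(p: float) -> float:
    """Binary entropy function in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def h2_inv(q: float, tol: float = 1e-12) -> float:
    """Inverse of h2 restricted to [0, 1/2]; returns 0 for q < 0 by convention."""
    if q <= 0.0:
        return 0.0
    if q >= 1.0:
        return 0.5
    lo, hi = 0.0, 0.5  # h2 is strictly increasing on this interval
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if h2(mid) < q:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```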
As we show in the next lemma, the achievable CF rate in (17) is maximized by transmitting independent channel inputs over the two parallel links to the relay, by setting $X_{11} \sim \mathrm{Ber}(1/2)$ and $X_{12} \sim \mathrm{Ber}(1/2)$, and by independently compressing each of the channel outputs $Y_{R1}$ and $Y_{R2}$ as $\hat{Y}_{R1} = Y_{R1} \oplus Q_1$ and $\hat{Y}_{R2} = Y_{R2} \oplus Q_2$, respectively, where $Q_1 \sim \mathrm{Ber}(h_2^{-1}(1 - R_1/2))$ and $Q_2 \sim \mathrm{Ber}(h_2^{-1}(1 - R_1/2))$. Note that for $R_1 \ge 2$, the channel outputs can be compressed without error. The maximum achievable CF rate is given in the following lemma.
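The choice $Q_i \sim \mathrm{Ber}(h_2^{-1}(1 - R_1/2))$ makes the description rate meet the relay-destination constraint with equality: each $Y_{Ri}$ is uniform, so $I(Y_{Ri}; Y_{Ri} \oplus Q_i) = 1 - h_2(q_i)$, and the two links together need exactly $R_1$ bits. A quick check (illustrative, not from the paper):

```python
import math

def h2(p):
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def h2_inv(q, tol=1e-12):
    if q <= 0.0:
        return 0.0
    if q >= 1.0:
        return 0.5
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if h2(mid) < q else (lo, mid)
    return (lo + hi) / 2

def description_rate(r1: float) -> float:
    """Total I(Y_R; Yhat_R) for two independent Ber(h2^{-1}(1 - R1/2)) test channels;
    each uniform Y_Ri contributes 1 - h2(q_i) bits."""
    q = h2_inv(1 - r1 / 2)
    return 2 * (1 - h2(q))
```

For $R_1 = 2$ the compression noise vanishes ($q = h_2^{-1}(0) = 0$) and the outputs are conveyed without error, as noted above.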
Fig. 5. Achievable rates and the cut-set upper bound for the parallel binary symmetric MRC-D with respect to the binary noise parameter $\delta$, for $R_1 = 1.2$ and $p_z = 0.15$.
$$R_{CF} = 1 - h_2\big(\delta \star h_2^{-1}(1 - R_1)\big). \qquad (22)$$

Comparing the cut-set bound expression in (15) with $R_{DF}$ in (16) and $R_{CF}$ in (18), we observe that DF achieves the cut-set bound if $R_1 \ge 2 - h_2(\delta \star p_z) - h_2(\delta)$, while $R_{CF}$ coincides with the cut-set bound if $R_1 \ge 2$. On the other hand, the proposed suboptimal pDCF scheme achieves the cut-set bound if $R_1 \ge 2h_2(\delta)$, i.e., for $\delta \le h_2^{-1}(R_1/2)$. Hence, the capacity of the parallel binary symmetric MRC-D in this regime is achieved by pDCF, while both DF and CF are suboptimal, as stated in the next lemma.

Lemma 7. For the binary symmetric MRC-D, pDCF with binary $(U, X_1, \hat{Y}_R)$ achieves the following rate:
$$R_{pDCF} = \max\{R_{DF}, R_{CF}\} = \begin{cases} \min\{R_1,\; 1 - h_2(\delta \star p_z)\} & \text{if } p_z < h_2^{-1}(1 - R_1), \\ 1 - h_2\big(\delta \star h_2^{-1}(1 - R_1)\big) & \text{if } p_z \ge h_2^{-1}(1 - R_1). \end{cases} \qquad (23)$$

For $R_1 \le 2 - h_2(p_z) - h_2(\delta)$, or equivalently $\delta \ge h_2^{-1}(2 - h_2(p_z) - R_1)$, the proposed pDCF is outperformed by DF. In this regime, pDCF can achieve the same performance by decoding both channel inputs, reducing to DF.

This result justifies the pDCF scheme proposed in Section IV-A for the parallel binary symmetric MRC-D. Since the channel $p(y_{R2} \mid x_{12})$ is independent of the channel state $Z$, the largest rate is achieved if the relay decodes $X_{12}$ from $Y_{R2}$. However, for the channel $p(y_{R1} \mid x_{11}, z)$, which depends on $Z$, the relay either decodes $X_{11}$ or compresses $Y_{R1}$, depending on $p_z$.
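The two cases in (23) can be checked numerically against the direct maximum $\max\{R_{DF}, R_{CF}\}$. The following sketch is my own check (not from the paper), assuming $R_{DF} = \min\{R_1, 1 - h_2(\delta \star p_z)\}$ and $R_{CF}$ as in (22):

```python
import math

def h2(p):
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def h2_inv(q, tol=1e-12):
    if q <= 0.0:
        return 0.0
    if q >= 1.0:
        return 0.5
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if h2(mid) < q else (lo, mid)
    return (lo + hi) / 2

def star(a, b):
    return a * (1 - b) + (1 - a) * b

def r_pdcf_direct(r1, delta, pz):
    """max{R_DF, R_CF}, computed directly."""
    r_df = min(r1, 1 - h2(star(delta, pz)))
    r_cf = 1 - h2(star(delta, h2_inv(1 - r1)))
    return max(r_df, r_cf)

def r_pdcf_cases(r1, delta, pz):
    """Piecewise expression (23)."""
    if pz < h2_inv(1 - r1):
        return min(r1, 1 - h2(star(delta, pz)))
    return 1 - h2(star(delta, h2_inv(1 - r1)))
```

Over a grid of $(R_1, \delta, p_z)$ with $\delta, p_z \in [0, 1/2]$ the two functions agree, consistent with the case split at $p_z = h_2^{-1}(1 - R_1)$.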
Fig. 6. The multihop Gaussian relay channel with source-relay channel state
information available at the destination.
Fig. 7. Achievable rates and the cut-set upper bound for the multihop AWGN
relay channel with source-relay channel state information at the destination
for R1 = 1 and P = 0.3.
In Figure 7 the achievable rates are compared with the cut-set bound. DF achieves the best rate when the correlation coefficient $\rho$ is low, i.e., when the destination has low-quality channel state information, while CF achieves higher rates for higher values of $\rho$. It is seen that pDCF achieves the best of the two transmission schemes. Note also that for $\rho = 0$ DF meets the cut-set bound, while for $\rho = 1$ CF meets the cut-set bound.
Although this example proves the suboptimality of the DF scheme for the channel model under consideration, it does not necessarily establish the suboptimality of the CF scheme, as we have constrained the auxiliary random variables to be Gaussian.
V. COMPARISON WITH THE CUT-SET BOUND
In the examples considered in Section IV, we have seen that under certain conditions the choice of certain random variables allows us to show that the cut-set bound and the capacity coincide. For example, we have seen that for the parallel binary symmetric MRC-D the proposed pDCF scheme achieves the cut-set bound for $\delta \le h_2^{-1}(R_1/2)$, and that Gaussian random variables meet the cut-set bound for $\rho = 0$ or $\rho = 1$ in the Gaussian MRC-D. An interesting question is whether the capacity expression in Theorem 1 always coincides with the cut-set bound or not; that is, whether the cut-set bound is tight for the relay channel model under consideration.
To address this question, we consider the multihop binary channel in (19) for $Z \sim \mathrm{Ber}(1/2)$. The capacity $C$ of this channel is given in the following lemma.

Lemma 9. The capacity of the binary symmetric MRC-D with $Y_R = X_1 \oplus N \oplus Z$, where $N \sim \mathrm{Ber}(\delta)$ and $Z \sim \mathrm{Ber}(1/2)$, is achieved by CF and pDCF, and is given by
$$C = 1 - h_2\big(\delta \star h_2^{-1}(1 - R_1)\big). \qquad (29)$$
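Lemma 9's expression can be evaluated numerically, together with a brute-force check that $Z \sim \mathrm{Ber}(1/2)$ renders $Y_R$ independent of $X_1$, so that DF is useless here. This is an illustrative sketch, not code from the paper:

```python
import math, itertools

def h2(p):
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def h2_inv(q, tol=1e-12):
    if q <= 0.0:
        return 0.0
    if q >= 1.0:
        return 0.5
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if h2(mid) < q else (lo, mid)
    return (lo + hi) / 2

def star(a, b):
    return a * (1 - b) + (1 - a) * b

def capacity(r1, delta):
    """C = 1 - h2(delta * h2^{-1}(1 - R1)) from (29)."""
    return 1 - h2(star(delta, h2_inv(1 - r1)))

def mi_x1_yr(delta):
    """Brute-force I(X1; YR) for YR = X1 ^ N ^ Z with X1, Z ~ Ber(1/2), N ~ Ber(delta)."""
    joint = {}
    for x1, n, z in itertools.product((0, 1), repeat=3):
        p = 0.5 * (delta if n else 1 - delta) * 0.5
        key = (x1, x1 ^ n ^ z)
        joint[key] = joint.get(key, 0.0) + p
    py = {}
    for (x, y), p in joint.items():
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (0.5 * py[y])) for (x, y), p in joint.items() if p > 0)
```

For $\delta = 0$ the expression reduces to $C = R_1$ when $R_1 \le 1$, since the relay observation can be described losslessly.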
Fig. 8. Achievable rates, capacity and cut-set upper bound for the multihop binary relay channel with respect to $\delta$, for $R_1 = 0.25$ and $p_z = 0.5$.

Note that for this setup, $R_{DF} = 0$ and pDCF reduces to CF, i.e., $R_{pDCF} = R_{CF}$. See Figure 8 for a comparison of the capacity with the cut-set bound for varying $\delta$ values. CF suffices to achieve the capacity of the binary symmetric MRC-D for $Z \sim \mathrm{Ber}(1/2)$. While in general pDCF outperforms DF and CF, in certain cases these two schemes are sufficient to achieve the cut-set bound, and hence, the capacity.
For the ORC-D model introduced in Section II, the cut-set bound is given by
$$R_{CS} = R_2 + \min\Big\{R_1,\; \max_{p(x_1)} I(X_1; Y_R \mid Z)\Big\}. \qquad (30)$$

Case 1) If $Y_R$ is independent of $Z$, the capacity is achievable by DF.
Case 2) If $Y_R$ is a deterministic function of $X_1$ and $Z$, the capacity is achievable by CF.
Case 3) If $\max_{p(x_1)} I(X_1; Y_R) \ge R_1$, the capacity, given by $C = R_2 + R_1$, is achievable by pDF.
Case 4) Let $\bar{p}(x_1) = \arg\max_{p(x_1)} I(X_1; Y_R \mid Z)$. If $R_1 \ge H(Y_R \mid Z)$ for the $Y_R$ induced by $\bar{p}(x_1)$, the capacity, given by $C = R_2 + I(\bar{X}_1; Y_R \mid Z)$, is achievable by CF.
Proof. See Appendix H.
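Case 4 can be verified on the multihop binary example by brute force: with $Y_R = X_1 \oplus N \oplus Z$ one has $I(X_1; Y_R \mid Z) = h_2(t \star \delta) - h_2(\delta)$ for $X_1 \sim \mathrm{Ber}(t)$, maximized at $t = 1/2$. The following sketch (my own illustration) locates the maximizing input distribution by grid search:

```python
import math

def h2(p):
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def star(a, b):
    return a * (1 - b) + (1 - a) * b

def cond_mi(t: float, delta: float) -> float:
    """I(X1; YR | Z) = h2(t * delta) - h2(delta) for X1 ~ Ber(t), N ~ Ber(delta)."""
    return h2(star(t, delta)) - h2(delta)

# Grid search for the maximizing input distribution p(x1).
best_t = max((i / 1000 for i in range(1001)), key=lambda t: cond_mi(t, 0.1))
```

The maximizer is the uniform input, for which $H(Y_R \mid Z) = 1$; hence Case 4 applies whenever $R_1 \ge 1$.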
These cases can be observed in the examples from Section IV. For example, in the Gaussian MRC-D with $\rho = 0$, $Y_R$ is independent of $Z$, and thus DF meets the cut-set bound, as stated in Case 1. Similarly, for $\rho = 1$ CF meets the cut-set bound, since $Y_R$ is a deterministic function of $X_1$ and $Z$, which corresponds to Case 2.
For the parallel binary symmetric MRC-D in Section IV-A, pDCF achieves the cut-set bound if $\delta \le h_2^{-1}(R_1/2)$, due to the following reasoning. Since $Y_{R1}$ is independent of $X_{11}$,
$$\begin{aligned}
&= I(X_1 X_2; Y_2 Y_R \mid X_R U Z) \\
&\overset{(b)}{=} I(X_2 X_R; Y_1 Y_2 \mid Z) \\
&\overset{(e)}{=} I(\hat{Y}_R; Y_R \mid X_R X_1 X_2 U Y_2 Z) \\
&\overset{(g)}{=} I(\hat{Y}_R; Y_R \mid U Z),
\end{aligned}$$

$$I(X_2^n; Y_2^n \mid Z^n) \overset{(a)}{=} \sum_{i=1}^{n} \big[ H(Y_{2i} \mid Z^n Y_2^{i-1}) - H(Y_{2i} \mid Z^n Y_2^{i-1} X_2^n) \big] \le \cdots \overset{(d)}{\le} nR_2, \qquad (39)$$
$$I(X_R^n; Y_1^n \mid Z^n) \le nR_1, \qquad (40)$$

and, using the chain rule for the mutual information, the rate achievable by pDCF is lower bounded by
$$R \ge \sup_{P}\; R_2 + I(U; Y_R) + I(X_1; \hat{Y}_R \mid U Z) \qquad (33)$$
$$= I(U; Y_R) + I(\hat{Y}_R; Y_R \mid U Z) \ge I(\hat{Y}_R; Y_R \mid U Z),$$
APPENDIX B
PROOF OF LEMMA 2
such that $\epsilon_n \to 0$ as $n \to \infty$.
First, we derive the following set of inequalities related to the capacity of the source-destination channel:
$$\begin{aligned}
nR &= H(W) \\
&\le nR_2 + \sum_{i=1}^{n} \Big[ I(Y_{R,1}^{i-1} Z^{n\setminus i} X_{1,i+1}^{n};\, Y_{Ri}) + H(X_{1i} \mid Y_{R,1}^{i-1} Z^{n\setminus i} X_{1,i+1}^{n}) \Big] - H(X_1^n \mid Y_1^n Z^n) + n\epsilon_n \\
&\overset{(j)}{\le} nR_2 + \sum_{i=1}^{n} \big[ I(U_i; Y_{Ri}) + H(X_{1i} \mid U_i) \big] + n\epsilon_n \\
&\overset{(f)}{=} nR_2 + \sum_{i=1}^{n} \big[ I(U_i; Y_{Ri}) + I(X_{1i}; \hat{Y}_{Ri} \mid U_i Z_i) \big] + n\epsilon_n.
\end{aligned}$$

Similarly,
$$nR \le nR + \sum_{i=1}^{n} I(Y_{1,i+1}^{n}; Y_{Ri} \mid X_1^n Y_{R,1}^{i-1} Z^n) - n\epsilon_n \overset{(g)}{\le} nR + \sum_{i=1}^{n} I(Y_{1,i+1}^{n}; Y_{Ri} \mid X_1^{i} Y_{R,1}^{i-1} Z^n) - n\epsilon_n,$$
where (a) follows from (39) and (40); (b) is due to the fact that conditioning reduces entropy; (c) is due to the Markov chains $Y_2^n - (X_2^n Z^n) - (X_1^n Y_R^n)$ and $Y_1^n - (X_R^n Z^n) - (X_1^n X_2^n Y_R^n Y_2^n)$; (d) follows since conditioning reduces entropy; (e) is due to the expression in (38); (f) is due to the Markov chain $(Y_1^n Y_2^n) - (X_2^n Z^n) - (X_R^n Y_1^n)$; and (g) is due to the Markov chain $Y_{1,i+1}^{n} - (X_1^{i} Y_{R,1}^{i-1} Z^n) - X_{1i}$.

In addition,
$$\sum_{i=1}^{n} I(X_{1i}; Y_{R,1}^{i-1} Z^{n\setminus i} \mid X_{1,i+1}^{n}) \overset{(l)}{=} \sum_{i=1}^{n} I(X_{1i}; Y_{R,1}^{i-1} \mid X_{1,i+1}^{n} Z^{n\setminus i}) \overset{(m)}{=} \sum_{i=1}^{n} I(X_{1,i+1}^{n}; Y_{Ri} \mid Y_{R,1}^{i-1} Z^{n\setminus i}),$$
where (l) is due to the independence of $Z^n$ and $X_1^n$, and (m) is the conditional version of Csiszár's sum identity [24].
Inequality (j) is due to the following bound:
$$H(X_1^n \mid Y_1^n Z^n) = \sum_{i=1}^{n} H(X_{1i} \mid X_{1,i+1}^{n} Z^n Y_1^n) \ge \sum_{i=1}^{n} H(X_{1i} \mid Y_{R,1}^{i-1} X_{1,i+1}^{n} Z^n Y_1^n) \overset{(n)}{=} \sum_{i=1}^{n} H(X_{1i} \mid Y_{R,1}^{i-1} X_{1,i+1}^{n} Z^n Y_{1,i+1}^{n}). \qquad (41)$$

A single-letter expression can be obtained by using the usual time-sharing random variable arguments. Let $Q$ be a time-sharing random variable uniformly distributed over $\{1, \ldots, n\}$, independent of all the other random variables, and define the set of random variables $(X_{1Q}, Y_{RQ}, U_Q, \hat{Y}_{RQ}, Z_Q)$, for $i = 1, \ldots, n$, satisfying
$$\begin{aligned}
p(u, x_1, y_R, z, \hat{y}_R) &= p(q, u_Q, x_{1Q}, y_{RQ}, z_Q, \hat{y}_{RQ}) \\
&= p(q, u_Q, x_{1Q})\, p(z_Q, y_{RQ}, \hat{y}_{RQ} \mid q, u_Q, x_{1Q}) \\
&= p(q, u_Q, x_{1Q})\, p(z_Q \mid q, u_Q, x_{1Q})\, p(y_{RQ}, \hat{y}_{RQ} \mid q, u_Q, x_{1Q}, z_Q) \\
&= p(q, u_Q, x_{1Q})\, p(z_Q \mid q, u_Q, x_{1Q})\, p(y_{RQ} \mid q, u_Q, x_{1Q}, z_Q)\, p(\hat{y}_{RQ} \mid q, u_Q, x_{1Q}, z_Q, y_{RQ}).
\end{aligned}$$

We also have
$$nR_1 + nR_2 \overset{(a)}{\ge} I(X_R^n; Y_1^n \mid Z^n) + I(X_2^n; Y_2^n \mid Z^n).$$
$$R \le R_2 + \frac{1}{n}\sum_{i=1}^{n}\big[I(U_i; Y_{Ri}) + I(X_{1i}; \hat{Y}_{Ri} \mid U_i Z_i)\big] + \epsilon_n = R_2 + I(U; Y_R) + I(X_1; \hat{Y}_R \mid U Z) + \epsilon_n,$$
and
$$R_1 + R_2 \ge R + \frac{1}{n}\sum_{i=1}^{n} I(\hat{Y}_{Ri}; Y_{Ri} \mid X_{1i} U_i Z_i) - \epsilon_n.$$
APPENDIX C
PROOF OF LEMMA 3
$$= I(U; Y_R) + I(\hat{Y}_R; Y_R \mid U Z),$$
and
$$(U'', X_1'', \hat{Y}_R'') = \begin{cases} (X_1, X_1, \emptyset) & \text{if } B = 1, \\ (\emptyset, \emptyset, \emptyset) & \text{if } B = 0. \end{cases} \qquad (50)$$
APPENDIX D
PROOF OF LEMMA 4

Before deriving the maximum rate achievable by CF in Lemma 4, we provide some definitions that will be used in the proof.
Let $X$ and $Y$ be a pair of discrete random variables with alphabets $\mathcal{X} = \{1, 2, \ldots, n\}$ and $\mathcal{Y} = \{1, 2, \ldots, m\}$, for $n, m < \infty$. Let $p_Y \in \Delta_m$ denote the distribution of $Y$, where $\Delta_k$ denotes the $(k-1)$-dimensional simplex of probability $k$-vectors. We define $T_{XY}$ as the $n \times m$ stochastic matrix with entries $T_{XY}(j, i) = \Pr\{X = j \mid Y = i\}$. Note that the joint distribution $p(x, y)$ is characterized by $T_{XY}$ and $p_Y$.
Next, we define the conditional entropy bound from [25], which lower bounds the conditional entropy between two variables. Note the relabeling of the variables in [25] to fit our model.
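The pair $(T_{XY}, p_Y)$ can be extracted from any joint pmf; a small bookkeeping sketch (my own, with hypothetical helper names):

```python
def channel_and_marginal(joint):
    """Given a joint pmf {(x, y): p}, return T_XY with entries
    T[j][i] = Pr{X = x_j | Y = y_i} (columns sum to 1), plus the marginal p_Y."""
    xs = sorted({x for x, _ in joint})
    ys = sorted({y for _, y in joint})
    p_y = {y: sum(joint.get((x, y), 0.0) for x in xs) for y in ys}
    t = [[joint.get((x, y), 0.0) / p_y[y] for y in ys] for x in xs]
    return t, [p_y[y] for y in ys]

# Example: X = Y xor N, with Y ~ Ber(0.4) and N ~ Ber(0.1).
joint = {(0, 0): 0.6 * 0.9, (1, 0): 0.6 * 0.1,
         (1, 1): 0.4 * 0.9, (0, 1): 0.4 * 0.1}
T, pY = channel_and_marginal(joint)
```

Here each column of `T` is the conditional distribution of $X$ given one value of $Y$, matching the definition of $T_{XY}(j, i)$ above.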
Definition 1 (Conditional Entropy Bound). Let $p_Y \in \Delta_m$ be the distribution of $Y$ and $T_{XY}$ denote the channel matrix relating $X$ and $Y$. Then, for $\mathbf{q} \in \Delta_m$ and $0 \le s \le H(Y)$, define the function
$$F_{T_{XY}}(\mathbf{q}, s) \triangleq \inf_{\substack{p(w|y):\; X - Y - W, \\ H(Y|W) = s,\; p_Y = \mathbf{q}}} H(X \mid W). \qquad (57)$$
For $X = (X_1, \ldots, X_N)$ and $Y = (Y_1, \ldots, Y_N)$, we have
$$H(X \mid W) = \sum_{k=1}^{N} H(X_k \mid X_1^{k-1}, W) \ge \sum_{k=1}^{N} H(X_k \mid Y_1^{k-1}, X_1^{k-1}, W) = \sum_{k=1}^{N} H(X_k \mid Y_1^{k-1}, W), \qquad (59)$$
and similarly $H(Y \mid W) = \sum_{k=1}^{N} H(Y_k \mid Y_1^{k-1}, W)$.

$$T_{XY} = \begin{bmatrix} 1 - \delta & \delta \\ \delta & 1 - \delta \end{bmatrix}. \qquad (65)$$
When $Y$ and $X$ are related through the channel $T_{XY}$ in (65), $F_{T_{XY}}(\mathbf{q}, s)$ is characterized as follows [25].

Lemma 11. Let $Y \sim \mathrm{Ber}(q)$, i.e., $\mathbf{q} = [q, 1-q]$, and let $T_{XY}$ be given as in (65). Then the conditional entropy bound is
$$F_{T_{XY}}(\mathbf{q}, s) = h_2\big(\delta \star h_2^{-1}(s)\big), \qquad \text{for } 0 \le s \le h_2(q).$$
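Two endpoint checks of Lemma 11 are immediate: at $s = 0$ (take $W = Y$) the bound is $H(X \mid Y) = h_2(\delta)$, and at $s = h_2(q)$ ($W$ independent of $Y$) it is $H(X) = h_2(\delta \star q)$. A quick numeric confirmation (illustrative sketch; the closed form does not depend on $q$ except through the domain of $s$):

```python
import math

def h2(p):
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def h2_inv(q, tol=1e-12):
    if q <= 0.0:
        return 0.0
    if q >= 1.0:
        return 0.5
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if h2(mid) < q else (lo, mid)
    return (lo + hi) / 2

def star(a, b):
    return a * (1 - b) + (1 - a) * b

def F(delta: float, s: float) -> float:
    """Conditional entropy bound for the BSC(delta): F(q, s) = h2(delta * h2^{-1}(s))."""
    return h2(star(delta, h2_inv(s)))
```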
Let us define $\bar{Y}_R^1 \triangleq X_{11} \oplus N_1$, $\bar{Y}_R \triangleq (\bar{Y}_R^1, \bar{Y}_R^2)$, and the channel input $X \triangleq (X_{11}, X_{12})$. Note that the distribution of $\bar{Y}_R$, given by $\mathbf{q}^{(2)}$, determines the distribution of $X$ via $T_{XY}^{(2)}$, the Kronecker product of $T_{XY}$ in (65) with itself. Then, we can rewrite the achievable rate for CF in (17) as
$$R_{CF} = \max_{p(x)p(z)p(\bar{y}_R|x)p(\hat{y}_R|\bar{y}_R, z)} I(X; \hat{Y}_R \mid Z) \quad \text{s.t. } R_1 \ge I(\bar{Y}_R; \hat{Y}_R \mid Z). \qquad (66)$$
Writing the mutual information in terms of entropies, the CF rate in (66) becomes
$$R_{CF} = \max_{p(x)p(z)p(\bar{y}_R|x)p(\hat{y}_R|\bar{y}_R,z)} H(X) - H(X \mid Z \hat{Y}_R) \quad \text{s.t. } H(\bar{Y}_R \mid Z \hat{Y}_R) \ge H(\bar{Y}_R) - R_1. \qquad (68)$$

Then, for $R_1 \le 2$, we have
$$\begin{aligned}
R_{CF} &\overset{(a)}{\le} \max_{p(x)p(\bar{y}_R|x)p(w|\bar{y}_R)} H(X) - H(X \mid W) \quad \text{s.t. } H(\bar{Y}_R \mid W) \ge H(\bar{Y}_R) - R_1 \\
&\overset{(b)}{=} \max_{p(x)p(\bar{y}_R|x),\, 0 \le s \le H(\bar{Y}_R)} \big[ H(X) - F_{T^{(2)}_{XY}}(\mathbf{q}^{(2)}, H(\bar{Y}_R) - R_1) \big] \quad \text{s.t. } s \ge H(\bar{Y}_R) - R_1 \qquad (69) \\
&\overset{(c)}{\le} \max_{p(x)p(\bar{y}_R|x)} \big[ H(X) - 2 F_{T_{XY}}(\mathbf{q}, (H(\bar{Y}_R) - R_1)/2) \big] \qquad (70) \\
&\overset{(d)}{\le} \max_{p(x)p(\bar{y}_R|x)} \Big[ H(X) - 2 h_2\big(\delta \star h_2^{-1}((\max\{H(X), R_1\} - R_1)/2)\big) \Big] \quad \text{s.t. } \max\{H(X), R_1\} \le 2 \\
&\overset{(e)}{=} \max_{0 \le \theta \le 1} \Big[ 2\theta - 2 h_2\big(\delta \star h_2^{-1}((\max\{2\theta, R_1\} - R_1)/2)\big) \Big] \quad \text{s.t. } \max\{2\theta, R_1\} \le 2, \qquad (71)
\end{aligned}$$
where (a) follows from (68) and from $F_{T_{XY}}(\mathbf{q}, s)$ being non-decreasing in $s$; (b) follows from the definition of $F_{T_{XY}}(\mathbf{q}, s)$ for the binary symmetric channel; (c) follows since $h_2(q) \le 1$ and we are enlarging the optimization domain; (d) follows since there is no loss of generality in reducing the optimization set, because $\max\{H(X), R_1\} \ge R_1$ and, from (68), any $(X, \bar{Y}_R)$ following $p(x, \bar{y}_R)$ satisfies $\max\{H(X), R_1\} \le 2$; and (e) follows from defining $H(X) \triangleq 2\theta$, for $0 \le \theta \le 1$.

We define $g(\theta) \triangleq \theta - h_2(\delta \star h_2^{-1}(\theta - R_1/2))$, such that $R_{CF} \le \max_{R_1/2 < \theta \le 1} 2g(\theta)$. We have that $g(\theta)$ is concave in $\theta$, since it is the sum of the linear term $\theta$ and the composition of the concave function $f(u) \triangleq -h_2(\delta \star h_2^{-1}(u))$ [26] with the affine map $\theta - R_1/2$:
$$\frac{d^2 g(\theta)}{d\theta^2} = f''(\theta - R_1/2) \le 0. \qquad (72)$$

Thus, from (70) and (73), for $R_1 \le H(\bar{Y}_R)$ we have
$$R_{CF} \le 2\big(1 - h_2(\delta \star h_2^{-1}(1 - R_1/2))\big). \qquad (74)$$
This bound is achieved by compressing each relay output as
$$\hat{Y}_{Ri} = \bar{Y}_{Ri} \oplus Q_i, \qquad Q_i \sim \mathrm{Ber}\big(h_2^{-1}(1 - R_1/2)\big), \quad i = 1, 2,$$
since
$$H(X_{11} \oplus N_1 \oplus Q_1,\; X_{12} \oplus N_2 \oplus Q_2) - H(Q_1, Q_2) \overset{(a)}{=} 2 - 2h_2\big(h_2^{-1}(1 - R_1/2)\big) = R_1, \qquad (75)$$
where (a) follows since each $X_{1i} \oplus N_i \oplus Q_i$ is uniform.

For the multihop Gaussian relay channel, the compression noise variance satisfies
$$q^2 = \frac{(P+1)(\sigma^2 P + 1 - \rho^2)}{2^{2R_1}(\sigma^2 P + 1) - (P+1)}. \qquad (76)$$
$$G(\beta) \triangleq 1 + \frac{P}{\sigma^2(P+1)}\big[(1 - \rho^2) + f(\beta)\big]. \qquad (79)$$

We take the derivative of $G(\beta)$ with respect to $\beta$ and note that if $\sigma^2 \ge 2^{2R_1}(P+1)$, then $G'(\beta) < 0$; hence $G(\beta)$ is monotonically decreasing, and the achievable rate $R$ is maximized by setting $\beta = 0$. When $\sigma^2 < 2^{2R_1}(P+1)$, we have

$$I(U; Y_R) \le I(X_1; Y_R) = 0,$$
APPENDIX F
PROOF OF LEMMA 8
$$R = \frac{1}{2}\log\left(1 + \frac{P}{1 + \frac{\sigma^2 P}{P+1}(1 - \rho^2) + q^2}\right), \quad \text{s.t. } R_1 \ge \frac{1}{2}\log\left(1 + \frac{P + \frac{\sigma^2 P}{P+1}(1 - \rho^2)}{q^2}\right). \qquad (81)$$
and
$$I(\hat{Y}_R; Y_R \mid U Z) \overset{(a)}{=} I(Y; \hat{Y}_R \mid U Z) \overset{(b)}{=} H(Y \mid U) - \sum_{u} p(u)\, H(Y \mid \hat{Y}_R Z, U = u). \qquad (84)$$

Then,
$$\begin{aligned}
C &\le \max_{p(u, x_1)} \Big[ H(X_1 \mid U) - \sum_{u} p(u)\, F_{T_{XY}}(\mathbf{q}_u, s_u) \Big] \quad \text{s.t. } R_1 \ge H(Y \mid U) - \sum_{u} p(u) s_u,\;\; 0 \le s_u \le H(Y_u) \\
&= \max_{p(u, x_1)} \Big[ H(X_1 \mid U) - \sum_{u} p(u)\, h_2\big(\delta \star h_2^{-1}(s_u)\big) \Big] \quad \text{s.t. } R_1 \ge H(Y \mid U) - \sum_{u} p(u) s_u,\;\; 0 \le s_u \le H(Y_u) \\
&\overset{(c)}{\le} \max_{p(u, x_1)} \Big[ H(X_1 \mid U) - h_2\Big(\delta \star h_2^{-1}\Big(\sum_{u} p(u) s_u\Big)\Big) \Big] \quad \text{s.t. } \sum_{u} p(u) s_u \ge \max\{H(X_1 \mid U), R_1\} - R_1 \\
&\le \max_{0 \le \theta \le 1} \Big[ \theta - h_2\big(\delta \star h_2^{-1}(\max\{\theta, R_1\} - R_1)\big) \Big]. \qquad (85)
\end{aligned}$$

For $R_1 \le 1$, we have
$$C \le \max_{R_1 \le \theta \le 1} \Big[ \theta - h_2\big(\delta \star h_2^{-1}(\theta - R_1)\big) \Big]. \qquad (87)$$

Finally, assume $R_1 \ge H(Y_R \mid Z)$. CF achieves the capacity by letting $X_1$ be distributed with $\bar{p}(x_1)$ and choosing $\hat{Y}_R = Y_R$. This choice is always possible, as the CF constraint
$$R_1 \ge I(Y_R; \hat{Y}_R \mid Z) = H(Y_R \mid Z) - H(Y_R \mid Z, \hat{Y}_R) = H(Y_R \mid Z)$$
always holds. Then, the achievable rate for CF is $R_{CF} = R_2 + I(\bar{X}_1; Y_R \mid Z)$, which is the capacity.
REFERENCES
[1] I. Estella-Aguerri and D. Gunduz, "Capacity of a class of relay channels with state," in Proc. 2012 IEEE Information Theory Workshop (ITW), Sep. 2012, pp. 277-281.
[2] T. M. Cover and A. El Gamal, "Capacity theorems for the relay channel," IEEE Trans. on Information Theory, vol. 25, no. 5, pp. 572-584, Sep. 1979.
[3] A. El Gamal and S. Zahedi, "Capacity of a class of relay channels with orthogonal components," IEEE Trans. on Information Theory, vol. 51, no. 5, pp. 1815-1817, May 2005.
[4] A. El Gamal and M. Aref, "The capacity of the semideterministic relay channel (corresp.)," IEEE Trans. on Information Theory, vol. 28, no. 3, p. 536, May 1982.
[5] T. M. Cover and Y.-H. Kim, "Capacity of a class of deterministic relay channels," in Proc. 2007 IEEE International Symposium on Information Theory (ISIT), Jun. 2007, pp. 591-595.
[6] T. M. Cover and J. A. Thomas, Elements of Information Theory. Wiley-Interscience, 1991.
[7] M. Aleksic, P. Razaghi, and W. Yu, "Capacity of a class of modulo-sum relay channels," IEEE Trans. on Information Theory, vol. 55, no. 3, pp. 921-930, Mar. 2009.
[8] W. Kang and S. Ulukus, "Capacity of a class of diamond channels," IEEE Trans. on Information Theory, vol. 57, no. 8, pp. 4955-4960, Aug. 2011.
[9] M. Khormuji and M. Skoglund, "On cooperative downlink transmission with frequency reuse," in Proc. 2009 IEEE International Symposium on Information Theory (ISIT), Jun. 2009, pp. 849-853.
[10] A. Zaidi, S. Shamai, P. Piantanida, and L. Vandendorpe, "Bounds on the capacity of the relay channel with noncausal state information at source," in Proc. 2010 IEEE International Symposium on Information Theory (ISIT), Jun. 2010, pp. 639-643.
[11] B. Akhbari, M. Mirmohseni, and M.-R. Aref, "Achievable rate regions for relay channel with private messages with states known at the source," in Proc. 2010 IEEE International Conference on Information Theory and Information Security (ICITIS), Dec. 2010, pp. 1080-1087.
[12] A. Zaidi, S. Shamai, P. Piantanida, and L. Vandendorpe, "Bounds on the capacity of the relay channel with noncausal state at the source," IEEE Trans. on Information Theory, vol. 59, no. 5, pp. 2639-2672, May 2013.
[13] B. Akhbari, M. Mirmohseni, and M. Aref, "Compress-and-forward strategy for relay channel with causal and non-causal channel state information," IET Communications, vol. 4, no. 10, pp. 1174-1186, Jul. 2010.
[14] M. Mirmohseni, B. Akhbari, and M.-R. Aref, "Compress-and-forward strategy for the relay channel with causal state information," in Proc. 2009 IEEE Information Theory Workshop (ITW), Oct. 2009, pp. 426-430.
[15] M. Khormuji, A. El Gamal, and M. Skoglund, "State-dependent relay channel: Achievable rate and capacity of a semideterministic class," IEEE Trans. on Information Theory, vol. 59, no. 5, pp. 2629-2638, May 2013.
[16] R. Kolte, A. Özgür, and H. H. Permuter, "Cooperative binning for semideterministic channels," CoRR, vol. abs/1508.05149, 2015. [Online]. Available: http://arxiv.org/abs/1508.05149
[17] O. Simeone, D. Gunduz, and S. Shamai, "Compound relay channel with informed relay and destination," in Proc. 47th Annual Allerton Conference on Communication, Control, and Computing, Oct. 2009, pp. 692-699.
[18] A. Behboodi and P. Piantanida, "On the simultaneous relay channel with informed receivers," in Proc. 2009 IEEE International Symposium on Information Theory (ISIT), Jul. 2009, pp. 1179-1183.
[19] K. Bakanoglu, E. Erkip, O. Simeone, and S. Shamai Shitz, "Relay channel with orthogonal components and structured interference known at the source," IEEE Transactions on Communications, vol. 61, no. 4, pp. 1277-1289, Apr. 2013.
[20] K. Bakanoglu, E. Erkip, and O. Simeone, "Half-duplex Gaussian diamond relay channel with interference known at one relay," in Proc. 2011 Conference Record of the Forty Fifth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), Nov. 2011, pp. 1353-1357.
[21] M. Li, O. Simeone, and A. Yener, "Message and state cooperation in a relay channel when only the relay knows the state," CoRR, vol. abs/1102.0768, 2011. [Online]. Available: http://arxiv.org/abs/1102.0768
[22] A. Zaidi, S. Kotagiri, J. Laneman, and L. Vandendorpe, "Cooperative relaying with state available noncausally at the relay," IEEE Trans. on Information Theory, vol. 56, no. 5, pp. 2272-2298, May 2010.
[23] Y.-H. Kim, "Coding techniques for primitive relay channels," in Proc. Allerton Conf. Commun., Control, Comput., Monticello, IL, Sep. 2007.
[24] A. El Gamal and Y.-H. Kim, Network Information Theory. Cambridge University Press, 2011.
[25] H. S. Witsenhausen and A. D. Wyner, "A conditional entropy bound for a pair of discrete random variables," IEEE Trans. on Information Theory, vol. 21, no. 5, pp. 493-501, Sep. 1975.
[26] A. Wyner and J. Ziv, "A theorem on the entropy of certain binary sequences and applications-I," IEEE Trans. on Information Theory, vol. 19, no. 6, pp. 769-772, Nov. 1973.
[27] A. El Gamal and S. Zahedi, "Capacity of a class of relay channels with orthogonal components," IEEE Trans. on Information Theory, vol. 51, no. 5, pp. 1815-1817, May 2005.
Iñaki Estella Aguerri received his B.Sc. and M.Sc. degrees in Telecommunication Engineering from Universitat Politècnica de Catalunya (UPC), Barcelona, Spain, in 2008 and 2011, respectively. He received his Ph.D. degree from Imperial College London, London, UK, in January 2015. In November 2014, he joined the Mathematical and Algorithmic Sciences Lab, France Research Center, Huawei Technologies Co., Ltd., Boulogne-Billancourt, France. From 2008 to 2013, he was a research assistant at the Centre Tecnològic de Telecomunicacions de Catalunya (CTTC) in Barcelona. His primary research interests are in the areas of information theory and communication theory, with special emphasis on joint source-channel coding and cooperative communications for wireless networks.
Deniz Gunduz [S'03-M'08-SM'13] received the B.S. degree in electrical and electronics engineering from METU, Turkey, in 2002, and the M.S. and Ph.D. degrees in electrical engineering from NYU Polytechnic School of Engineering in 2004 and 2007, respectively. After his Ph.D. he served as a postdoctoral research associate at the Department of Electrical Engineering, Princeton University, and as a consulting assistant professor at the Department of Electrical Engineering, Stanford University. He also served as a research associate at CTTC in Barcelona, Spain. Since September 2012 he has been a Lecturer in the Electrical and Electronic Engineering Department of Imperial College London, UK. He also held a visiting researcher position at Princeton University from November 2009 until November 2011.
Dr. Gunduz is an Associate Editor of the IEEE Transactions on Communications, and an Editor of the IEEE Journal on Selected Areas in Communications (JSAC) Series on Green Communications and Networking. He is the recipient of a Starting Grant of the European Research Council, the 2014 IEEE Communications Society Best Young Researcher Award for the Europe, Middle East, and Africa Region, and the Best Student Paper Award at the 2007 IEEE International Symposium on Information Theory (ISIT). He is serving as a co-chair of the IEEE Information Theory Society Student Committee, and the co-director of the Imperial College Probability Center. He is the General Co-chair of the 2016 IEEE Information Theory Workshop. He served as a Technical Program Chair of the Network Theory Symposium at the 2013 and 2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP), and General Co-chair of the 2012 IEEE European School of Information Theory (ESIT). His research interests lie in the areas of communication theory and information theory, with special emphasis on joint source-channel coding, multi-user networks, energy efficient communications and privacy in cyber-physical systems.