where $g_{m,k}$ is the small-scale fading and $\alpha$ is the path-loss exponent. For simplicity, throughout this paper and in the simulation results we assume that $R_1 = R_2 = \cdots = R_M$.
The IEEE 802.22 WRAN standard recommends two schemes for PU protection: listen-before-talk (spectrum sensing) and geo-location/database schemes [7]. In the listen-before-talk scheme, the SU senses the presence of primary network signals in order to select the channels that are not in use. In the geo-location/database scheme, the locations of primary and secondary users are stored in a central database. The central controller/spectrum manager (also called the base station) of the SUs has access to the location database. In this paper, we assume that the secondary network's base station (BS) gets the location information of each PU from the central database. We also assume that the BS can estimate the active PUs' channel gains, perhaps via pilot power detection, on a regular basis. Our goal is to maximize the energy efficiency of the secondary users' transmissions while meeting the interference constraints due to the primary users. The energy efficiency (EE) metric we use in this paper is information bits per Joule [8]. For our system model, we can express EE as
$$\eta(\mathbf{p}) = \frac{\sum_{k=1}^{K} C_k}{p_c + \sum_{k=1}^{K} p_k} \qquad (1)$$
where $C_k = \log_2\!\left(1 + \frac{p_k h_k}{N_0}\right)$ is the maximum theoretical spectral efficiency (SE) (bits/s/Hz) due to Shannon on the link to the $k$th SU. Mathematically, we can write the EE maximization
[Figure: EE (bits/joule/Hz, 0 to 1000) and SE (bits/s/Hz, 0 to 10) plotted against transmit power (watts) on a logarithmic axis from $10^{-4}$ to $10^{0}$, with curves for $h = 0.5$ and $h = 1$.]
Fig. 2. Plots for EE and SE as a function of transmit power.
problem for cognitive radio as:
$$\begin{aligned}
\max_{\mathbf{p}} \quad & \eta(\mathbf{p}) \\
\text{subject to} \quad & \text{C1}: \sum_{k=1}^{K} p_k\, g_{m,k} \le I_m, \;\; \forall m \\
& \text{C2}: p_k \ge 0, \;\; \forall k
\end{aligned} \qquad (2)$$
In (2), the constraint C1 ensures that the interference to the primary users is less than a specified threshold. The EE maximization problem in (2) is not a convex optimization problem. Fig. 2 shows two scenarios of the typical variation of EE and SE with transmit power for the single-user case. In the first scenario, the channel gain between the SU and the BS is set to 0.5, and in the second scenario it is set to 1. From Fig. 2, we can observe that EE achieves a maximum while SE continues to increase with transmit power. In other words, the optimum SE solution does not imply the optimum EE solution.
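This single-user behavior can be reproduced with a short numerical sketch (a hypothetical script; the values of $h$, $N_0$ and $p_c$ are illustrative, not taken from the paper):

```python
import numpy as np

def se(p, h, n0=1.0):
    """Shannon spectral efficiency (bits/s/Hz) at transmit power p."""
    return np.log2(1.0 + p * h / n0)

def ee(p, h, pc=1e-6, n0=1.0):
    """Energy efficiency (bits/Joule/Hz): SE divided by total consumed power."""
    return se(p, h, n0) / (pc + p)

powers = np.logspace(-4, 0, 400)        # transmit power sweep (W)
se_vals = se(powers, h=0.5)
ee_vals = ee(powers, h=0.5)

# SE is monotonically increasing in p ...
assert np.all(np.diff(se_vals) > 0)
# ... while EE rises to an interior maximum and then falls.
peak = int(np.argmax(ee_vals))
assert 0 < peak < len(powers) - 1
```

The assertions mirror the observation above: the SE-optimal operating point (maximum power) is not the EE-optimal one.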
In the next section, we will provide optimal power allocation
by transforming the non-convex EE maximization problem
into a concave maximization problem.
III. OPTIMAL POWER ALLOCATION
In this section, we first introduce the concave fractional program (CFP) and then show that the optimization problem (2) is a CFP.
Definition 1 (Fractional programming): An optimization problem is a fractional program (FP) if the objective function is the ratio of two functions. An FP problem can be expressed as:
$$\begin{aligned}
\max_{x \in X} \quad & \frac{f(x)}{g(x)} \\
\text{subject to} \quad & h_i(x) \le 0, \;\; i = 1, 2, \ldots, N
\end{aligned} \qquad (3)$$
where $f$, $g$, $h_i$ $(i = 1, 2, \ldots, N)$ denote real-valued functions defined on the set $X \subseteq \mathbb{R}^n$ [9].
Definition 2 (Concave fractional programming): The FP in (3) is a concave fractional program (CFP) if it satisfies the following two conditions:
1) $f$ is concave and $g$ is convex on $X$;
2) $f$ is positive on $S$ if $g$ is not affine;
where $S = \{x \in X : h_i(x) \le 0, \; i = 1, 2, \ldots, N\}$ [9].
In a CFP, any local maximum is a global maximum and, in a differentiable CFP, a solution of the Karush-Kuhn-Tucker (KKT) conditions provides the maximum [10]. We note that in (2), the function in the numerator is concave, the denominator is affine, and all the constraints are affine. We can also observe that the optimization problem (2) is differentiable and satisfies the conditions of a CFP. This means that in the optimization problem (2) any local maximum is a global maximum and the KKT conditions give the optimal solution. A CFP with an affine denominator can be reduced to a concave program with the Charnes-Cooper Transformation (CCT) [10], [11].¹ Lemma 1 presents the CCT of (3) and also shows that the transformed problem is a concave optimization problem.
Lemma 1. A CFP with an affine denominator can be reduced to the following concave program:
$$\begin{aligned}
\max_{\frac{y}{t} \in X,\; t > 0} \quad & t f\!\left(\frac{y}{t}\right) \\
\text{subject to} \quad & t h_i\!\left(\frac{y}{t}\right) \le 0, \;\; i = 1, 2, \ldots, N \\
& t g\!\left(\frac{y}{t}\right) = 1
\end{aligned} \qquad (4)$$
where $y = tx$ and $t = \frac{1}{g(x)}$.
Proof: The proof is given in Appendix A.
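As a quick sanity check of the transformation (a toy example with made-up $f$ and $g$, not from the paper), the identity $f(x)/g(x) = t f(y/t)$ with $y = tx$, $t = 1/g(x)$ can be verified numerically:

```python
import math

# Toy concave fractional objective: f concave, g affine and positive.
f = lambda x: math.log(1.0 + x)   # concave numerator
g = lambda x: 2.0 + x             # affine, positive denominator

for x in [0.1, 1.0, 5.0]:
    t = 1.0 / g(x)                # CCT variables: t = 1/g(x), y = t*x
    y = t * x
    # Transformed objective t*f(y/t) equals the original ratio f(x)/g(x).
    assert math.isclose(t * f(y / t), f(x) / g(x))
    # The equality constraint t*g(y/t) = 1 holds by construction.
    assert math.isclose(t * g(y / t), 1.0)
```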
Now we focus on our optimization problem (2). With the Charnes-Cooper Transformation (CCT), we can write the equivalent concave program for (2) as:
$$\begin{aligned}
\max_{\mathbf{y},\, t > 0} \quad & t \sum_{k=1}^{K} \log_2\!\left(1 + \frac{y_k h_k}{t N_0}\right) \\
\text{subject to} \quad & \text{C1}: \sum_{k=1}^{K} y_k\, g_{m,k} - t I_m \le 0, \;\; \forall m \\
& \text{C2}: t p_c + \sum_{k=1}^{K} y_k = 1 \\
& \text{C3}: y_k \ge 0, \;\; \forall k
\end{aligned} \qquad (5)$$
Theorem 1. The power profile for which the total energy efficiency is maximized for (5) is
$$p_k^* = \left(\frac{y_k^*}{t^*}\right) = \left[\frac{1}{\lambda_k} - \frac{1}{\gamma_k}\right]^+, \;\; \forall k \qquad (6)$$
where $\lambda_k = \ln 2 \left(\beta + \sum_{m=1}^{M} \mu_m g_{m,k}\right)$, $\mu_m$ and $\beta$ are Lagrange multipliers that are yet to be determined, $\gamma_k = \frac{h_k}{N_0}$, $y_k^* = t^* \left[\frac{1}{\lambda_k} - \frac{1}{\gamma_k}\right]^+$, and $t^* = \frac{1}{p_c + \sum_{k=1}^{K} \left[\frac{1}{\lambda_k} - \frac{1}{\gamma_k}\right]^+}$.
¹Exercise problem 4.7 in [?].
Proof : The proof is given in the Appendix B.
We can see that the solution of the above problem is similar to the water-filling algorithm. However, (6) requires determining $(M+1)$ Lagrange multipliers, which is much more complex than the simple water-filling algorithm. We call this energy-efficient water-filling in cognitive radio networks. Since (5) is a concave optimization problem, we can determine the Lagrange multipliers with dual optimization.
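Given the multipliers, evaluating (6) is a simple per-user water-filling assignment. A minimal sketch (the multiplier values $\beta$, $\mu_m$ below are illustrative placeholders, not the outcome of the dual optimization):

```python
import numpy as np

def ee_waterfilling(h, n0, beta, mu, g):
    """Evaluate (6): p_k = [1/lambda_k - 1/gamma_k]^+ with
    lambda_k = ln(2) * (beta + sum_m mu[m] * g[m, k]) and gamma_k = h[k] / n0."""
    gamma = h / n0
    lam = np.log(2.0) * (beta + mu @ g)              # one lambda_k per SU
    return np.maximum(1.0 / lam - 1.0 / gamma, 0.0)  # [.]^+ projection

# Illustrative numbers: 3 SUs, 2 PUs.
h = np.array([1.0, 0.5, 0.1])
g = np.array([[0.2, 0.1, 0.3],        # g[m, k]: gain from SU k to PU m
              [0.1, 0.4, 0.2]])
p = ee_waterfilling(h, n0=1.0, beta=0.5, mu=np.array([0.1, 0.2]), g=g)
assert np.all(p >= 0)                 # C2 of (2) holds by construction
```

Note how the weakest link ($h_3 = 0.1$) receives zero power, the usual water-filling cutoff behavior.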
We can write the Lagrangian dual objective function as:
$$d(\boldsymbol{\mu}, \beta) = \max_{\mathbf{y},\, t} L(\mathbf{y}, t, \boldsymbol{\mu}, \beta) \qquad (7)$$
and the dual optimization problem is
$$\min_{\boldsymbol{\mu} \ge 0,\, \beta} d(\boldsymbol{\mu}, \beta) \qquad (8)$$
The dual function needs to be minimized over $\boldsymbol{\mu}$ and $\beta$ to obtain the optimal dual solutions $\boldsymbol{\mu}^*$ and $\beta^*$. The problem (8) can be solved with any gradient algorithm [10]. In the next section, we present $\epsilon$-optimal algorithms for the above optimization problem.
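One common gradient scheme for (8) is a projected subgradient update on $\boldsymbol{\mu}$ and $\beta$. A sketch of a single update step, under the assumption that the inner maximizer $(\mathbf{y}, t)$ of (7) has already been computed (the step size and the numeric inputs are placeholders):

```python
import numpy as np

def dual_subgradient_step(mu, beta, y, t, g, I, pc, step=0.01):
    """One projected-subgradient step for the dual problem (8).

    A subgradient of the dual function at (mu, beta) is given by the
    constraint residuals of (5) evaluated at the inner maximizer (y, t);
    minimizing the dual moves mu and beta along those residuals."""
    slack_c1 = g @ y - t * I            # C1 residual per PU: sum_k y_k g_mk - t I_m
    slack_c2 = t * pc + y.sum() - 1.0   # C2 residual: t p_c + sum_k y_k - 1
    mu_new = np.maximum(mu + step * slack_c1, 0.0)   # project onto mu_m >= 0
    beta_new = beta + step * slack_c2                # beta is unconstrained
    return mu_new, beta_new

# Illustrative step with placeholder inner solutions y, t.
mu, beta = dual_subgradient_step(
    mu=np.array([0.1, 0.2]), beta=0.5,
    y=np.array([0.3, 0.2, 0.1]), t=0.4,
    g=np.array([[0.2, 0.1, 0.3], [0.1, 0.4, 0.2]]),
    I=np.array([1.0, 1.0]), pc=1e-6)
assert np.all(mu >= 0)
```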
IV. ITERATIVE ALGORITHM
In this section, we present an iterative algorithm to solve the optimization problem (2). We show that the proposed algorithm is an $\epsilon$-optimal algorithm². The algorithm is based on the Dinkelbach method for fractional programming problems [12]. In this approach, the fractional objective is transformed into a parametric optimization problem. Consider the following optimization problem, where $x \in \mathbb{R}^n$ and $\lambda \in \mathbb{R}$.
$$\begin{aligned}
\max_{x} \quad & \frac{\phi_n(x)}{\phi_d(x)} \\
\text{subject to} \quad & x \in S
\end{aligned} \qquad (9)$$
The parametric problem associated with (9) can be written as
$$\begin{aligned}
\max_{x} \quad & \phi_n(x) - \lambda\, \phi_d(x) \\
\text{subject to} \quad & x \in S
\end{aligned} \qquad (10)$$
The following theorem shows the relation between (9) and (10).
Theorem 2. $\lambda^* = \phi_n(x^*)/\phi_d(x^*) = \max\{\phi_n(x)/\phi_d(x) \mid x \in S\}$ if and only if $\max\{\phi_n(x) - \lambda^* \phi_d(x) \mid x \in S\} = \phi_n(x^*) - \lambda^* \phi_d(x^*) = 0$.
Proof: The proof is given in Appendix C.
²For any $\epsilon > 0$, an $\epsilon$-optimal algorithm guarantees a solution within $\epsilon$ of the optimal.
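Theorem 2 can be checked numerically on a toy scalar fractional program (the functions below are made up for illustration):

```python
import math

# Toy fractional program on S = [0, 10]: phi_n concave, phi_d affine positive.
phi_n = lambda x: math.log(1.0 + x)
phi_d = lambda x: 0.5 + x

xs = [10.0 * i / 10000.0 for i in range(10001)]     # fine grid over S
lam_star, x_star = max((phi_n(x) / phi_d(x), x) for x in xs)

# Theorem 2: max_x {phi_n(x) - lam* * phi_d(x)} should be zero,
# attained at the ratio maximizer x*.
F = max(phi_n(x) - lam_star * phi_d(x) for x in xs)
assert abs(F) < 1e-4
assert math.isclose(phi_n(x_star) - lam_star * phi_d(x_star), F, abs_tol=1e-9)
```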
We convert our optimization problem given in (2) to one resembling (10) and then propose an iterative $\epsilon$-optimal algorithm to obtain the power allocations. Algorithm 1 presents the pseudo-code of the proposed iterative scheme. In Appendix D, we prove the convergence of the algorithm.
Now, we describe the iterative algorithm. At the start, the algorithm initializes $\lambda$, $\epsilon$ and the iteration counter $i$, where $\lambda$ is the ratio between the total throughput and the total power. We define the fractional objective function of (2) as $\Phi(\mathbf{p}, \lambda) \triangleq \sum_{k=1}^{K} C_k - \lambda \left(p_c + \sum_{k=1}^{K} p_k\right)$. The algorithm iteratively solves the convex optimization problem $\arg\max_{\mathbf{p}} \{\Phi(\mathbf{p}, \lambda) \mid \text{C1 and C2 of (2)}\}$. With successive iterations of the algorithm, the value of $\Phi(\mathbf{p}, \lambda)$ decreases. For every $\lambda$, the power vector $\mathbf{p}$ that maximizes $\sum_{k=1}^{K} C_k - \lambda \left(p_c + \sum_{k=1}^{K} p_k\right)$ is found. The algorithm terminates when $\Phi(\mathbf{p}, \lambda)$ is zero or less than the given value $\epsilon$. According to Theorem 2, at this terminal point of the algorithm, the value of $\mathbf{p}$ is either optimal or $\epsilon$-optimal. If the value of $\Phi(\mathbf{p}, \lambda)$ is zero, then $\mathbf{p}$ is the optimal power allocation; otherwise, $\mathbf{p}$ is an $\epsilon$-optimal power allocation.
Algorithm 1: Iterative $\epsilon$-optimal algorithm.
1: Initialization:
2: $\lambda \leftarrow 0$
3: $\epsilon \leftarrow 10^{-6}$
4: $i \leftarrow 0$
5: Convergence $\leftarrow$ false
6: Define:
7: $\Phi(\mathbf{p}, \lambda) \triangleq \sum_{k=1}^{K} C_k - \lambda \left(p_c + \sum_{k=1}^{K} p_k\right)$
8: Algorithm Execution:
9: while (Convergence = false) and ($i \le$ MaxIter) do
10: $\mathbf{p} \leftarrow \arg\max_{\mathbf{p}} \{\Phi(\mathbf{p}, \lambda) \mid \text{C1 and C2 of (2)}\}$
11: if $\Phi(\mathbf{p}, \lambda) = 0$ then
12: $\mathbf{p}^{o} \leftarrow \mathbf{p}$
13: Convergence $\leftarrow$ true
14: else if $\Phi(\mathbf{p}, \lambda) \le \epsilon$ then
15: $\mathbf{p}^{\epsilon} \leftarrow \mathbf{p}$
16: Convergence $\leftarrow$ true
17: else
18: $\lambda \leftarrow \frac{\sum_{k=1}^{K} C_k}{p_c + \sum_{k=1}^{K} p_k}$
19: $i \leftarrow i + 1$
20: end if
21: end while
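For the special case $K = M = 1$ the inner maximization in line 10 has a closed form (clipping the stationary point of $\log_2(1 + p h/N_0) - \lambda(p_c + p)$ to the feasible interval $[0, I/g]$), so Algorithm 1 can be sketched end-to-end. The parameter values and the closed-form inner step are our own illustrative derivation for this special case, not taken from the paper:

```python
import math

def inner_argmax(lam, h, n0, pc, I, g):
    """argmax_p log2(1 + p*h/n0) - lam*(pc + p)  s.t.  0 <= p <= I/g.
    Closed form for K = M = 1: clip the unconstrained stationary point."""
    p_cap = I / g
    if lam <= 0:
        return p_cap                          # objective increasing in p
    p = 1.0 / (lam * math.log(2)) - n0 / h    # stationary point
    return min(max(p, 0.0), p_cap)

def dinkelbach_ee(h=1.0, n0=1.0, pc=1e-6, I=0.1, g=1.0,
                  eps=1e-6, max_iter=50):
    """Iterative eps-optimal scheme of Algorithm 1 (single-user sketch)."""
    lam, p = 0.0, 0.0
    for _ in range(max_iter):
        p = inner_argmax(lam, h, n0, pc, I, g)
        phi = math.log2(1.0 + p * h / n0) - lam * (pc + p)
        if phi <= eps:                        # eps-optimal stopping rule
            break
        lam = math.log2(1.0 + p * h / n0) / (pc + p)   # Dinkelbach update
    return p, lam

p_star, lam_star = dinkelbach_ee()
assert 0.0 <= p_star <= 0.1                   # feasible power
assert lam_star > 1.0                         # converged EE (bits/Joule/Hz)
```

With these illustrative numbers the loop terminates well inside the iteration budget, consistent with the convergence behavior reported in Section V.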
V. SIMULATION RESULTS
In this section, we present simulation results to demonstrate the performance and convergence of the proposed iterative schemes. The impact of network parameters (e.g., the number of SUs, the number of PUs, the interference threshold) is also investigated. Concave fractional programming is used to obtain the optimal energy-efficient power allocation, and the iterative algorithm is implemented for the $\epsilon$-optimal solution.
[Figure: EE (bits/Joule/Hz), on a logarithmic axis from $10^4$ to $10^6$, versus the number of iterations (2 to 14), with optimal and iterative curves for $K = 5, 35, 75$, $M = 1$, $I_m = 100$W.]
Fig. 3. Performance of optimal and iterative power allocation algorithms.
[Figure: EE (bits/Joule/Hz), on a logarithmic axis from $10^2$ to $10^6$, versus the number of iterations (2 to 14), with optimal and iterative curves for $K = 5$, $M \in \{1, 41\}$, $I_m = 100$mW.]
Fig. 4. Performance of optimal and iterative power allocation algorithms.
In all simulations, the channel gain $h$ is modeled as $h = K_o \left(\frac{d_o}{d}\right)^{\alpha} \psi$ [13], where $K_o$ is a constant that depends on the antenna characteristics and average channel attenuation, $d_o$ is the reference distance for the antenna far field, $d$ is the distance between transmitter and receiver, $\alpha$ is the path-loss exponent and $\psi$ is a Rayleigh random variable. Since this formula is not valid in the near field, in all the simulation results we assume that $d$ is greater than $d_o$. In all the results, $d_o = 20$m, $K_o = 50$, $\alpha = 3$ and $N_0 = 1$W/Hz. The PUs' protected distance $R_m$ is set to 10m. The total circuit power $p_c$ is set to $10^{-6}$W.
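The channel model above can be sampled as follows (a sketch; the Rayleigh scale parameter is our assumption, since the paper does not specify it):

```python
import random

def channel_gain(d, d_o=20.0, K_o=50.0, alpha=3.0, sigma=1.0):
    """h = K_o * (d_o / d)^alpha * psi, with psi a Rayleigh random
    variable (scale sigma is an assumption). Valid only for d > d_o."""
    if d <= d_o:
        raise ValueError("far-field model requires d > d_o")
    # A Rayleigh(sigma) draw is a Weibull draw with shape 2, scale sigma*sqrt(2).
    psi = random.weibullvariate(sigma * (2.0 ** 0.5), 2)
    return K_o * (d_o / d) ** alpha * psi

random.seed(0)
h = channel_gain(d=100.0)
assert h > 0
```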
Figs. 3 and 4 present the energy efficiency plots versus the number of iterations for different numbers of SUs, PUs and interference thresholds. The parameters for Figs. 3 and 4 are $(K, M, I_m, \epsilon) = (\{5, 35, 75\}, 1, 100\text{W}, 10^{-6})$ and $(5, \{1, 41\}, 100\text{mW}, 10^{-6})$, respectively. From the simulation results, we can see that the iterative algorithm converges to the optimal solution within 10 iterations in all the different scenarios (different numbers of SUs, PUs, etc.). We can also observe that the energy efficiency increases with the number of secondary users. This is due to the fact that with more secondary users, there is more freedom in power allocation. We also observe that the energy efficiency decreases with an increase in the number of primary users, because the optimization problem has more constraints to satisfy.
VI. CONCLUSION
In this paper, we considered the optimization problem of finding the power allocation that maximizes the energy efficiency, in bits per Joule per Hz, of a cognitive radio network. We showed that this problem belongs to a special class of problems called concave fractional programs. Using the Charnes-Cooper transformation, we showed that the problem can be transformed into a concave optimization problem that can be solved numerically using standard convex optimization algorithms/tools. For non-cooperative communication, we derived a water-filling-like solution that depends on the Lagrange multipliers. We also presented $\epsilon$-optimal solutions using an iterative method, where the CFP is transformed into parametric form. The performance of the $\epsilon$-optimal iterative method was compared with the optimal solution for different system parameters, such as the number of primary users, the number of secondary users and the interference threshold.
REFERENCES
[1] A. Fehske, G. Fettweis, J. Malmodin and G. Biczók, "The Global Footprint of Mobile Communications: The Ecological and Economic Perspective," IEEE Communications Magazine, vol. 49, no. 8, pp. 55-62, August 2011.
[2] L. Herault, E. C. Strinati, O. Blume, D. Zeller, M. A. Imran, R. Tafazolli, Y. Jading, J. Lunds and M. Meyer, "Green Communications: a Global Environmental Challenge," in Proc. 12th IEEE WPMC, 2009.
[3] G. Gür and F. Alagöz, "Green Wireless Communications via Cognitive Dimension: An Overview," IEEE Network, vol. 25, no. 2, pp. 50-56, March/April 2011.
[4] T. Chen, H. Zhang, Z. Zhao and X. Chen, "Towards Green Wireless Access Networks," in Proc. IEEE CHINACOM, 2010.
[5] C. Xiong, G. Y. Li, S. Zhang, Y. Chen and S. Xu, "Energy- and Spectral-Efficiency Tradeoff in Downlink OFDMA Networks," IEEE Transactions on Wireless Communications, vol. 10, no. 11, pp. 3874-3886, Nov. 2011.
[6] G. Miao, N. Himayat, G. Y. Li and S. Talwar, "Low-Complexity Energy-Efficient Scheduling for Uplink OFDMA," IEEE Transactions on Communications, vol. 60, no. 1, pp. 112-120, Jan. 2012.
[7] G. Ko, A. A. Franklin, S.-J. You, J.-S. Pak, M.-S. Song and C.-J. Kim, "Channel Management in IEEE 802.22 WRAN Systems," IEEE Communications Magazine, vol. 48, no. 9, pp. 88-94, Sep. 2010.
[8] G. Y. Li, Z. Xu, C. Xiong, C. Yang, S. Zhang, Y. Chen and S. Xu, "Energy-Efficient Wireless Communications: Tutorial, Survey, and Open Issues," IEEE Wireless Communications Magazine, vol. 18, no. 6, pp. 28-35, Dec. 2011.
[9] S. Schaible, "Fractional programming," Zeitschrift für Operations Research, vol. 27, pp. 39-54, 1983.
[10] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
[11] A. Charnes and W. W. Cooper, "Programming with Linear Fractional Functionals," Naval Research Logistics Quarterly, 1962.
[12] W. Dinkelbach, "On Nonlinear Fractional Programming," Management Science, vol. 13, pp. 492-498, 1967.
[13] A. Goldsmith, Wireless Communications, Cambridge University Press, 2005.
APPENDIX A
PROOF OF LEMMA 1
From (3), we can define $S = \{x \in X : h_i(x) \le 0, \; \forall i\}$. The transformation $y = tx$ and $t = \frac{1}{g(x)}$ is a one-to-one mapping of $S$ onto the feasible region of (4). Also, $\frac{f(x)}{g(x)} = t f\!\left(\frac{y}{t}\right)$ for corresponding points $x$ and $(y, t)$. Hence (3) has an optimal solution if and only if (4) has an optimal solution. The two solutions are related to each other through $y = tx$ and $t = \frac{1}{g(x)}$. Using the definitions of convex sets and convex functions, we can show that the set $X' = \{(y, t) \in \mathbb{R}^{n+1} : \frac{y}{t} \in X, \; t > 0\}$ is convex and $t h_i\!\left(\frac{y}{t}\right)$ is convex on $X'$.

APPENDIX B
PROOF OF THEOREM 1
The objective function of (5) can be rewritten as
$$t \sum_{k=1}^{K} \log_2\!\left(1 + \frac{y_k h_k}{t N_0}\right) = \left[t \sum_{k=1}^{K} \log_2\left(t + y_k \gamma_k\right)\right] - K t \log_2 t \qquad (11)$$
The Lagrangian function of (5) is given by
$$\begin{aligned}
L(\mathbf{y}, t, \boldsymbol{\mu}, \beta, \boldsymbol{\nu}, \delta) ={}& \left[t \sum_{k=1}^{K} \log_2\left(t + y_k \gamma_k\right)\right] - K t \log_2 t \\
& - \sum_{m=1}^{M} \mu_m \left(\sum_{k=1}^{K} y_k\, g_{m,k} - t I_m\right) \\
& - \beta \left(t p_c - 1 + \sum_{k=1}^{K} y_k\right) + \sum_{k=1}^{K} \nu_k y_k + \delta t,
\end{aligned} \qquad (12)$$
where $\boldsymbol{\mu}, \beta, \boldsymbol{\nu}, \delta$ are the Lagrange multipliers. Let $y_k^*$ and $t^*$ be the optimal solutions. The KKT conditions can be written as
$$\sum_{m=1}^{M} \mu_m \left(\sum_{k=1}^{K} y_k^*\, g_{m,k} - t^* I_m\right) = 0 \qquad (13)$$
$$\beta \left(t^* p_c - 1 + \sum_{k=1}^{K} y_k^*\right) = 0 \qquad (14)$$
$$\nu_k y_k^* = 0, \;\; \forall k \qquad (15)$$
$$\delta t^* = 0 \qquad (16)$$
$$\nu_k \ge 0 \;\; \forall k, \quad \mu_m \ge 0 \;\; \forall m, \quad \delta \ge 0 \qquad (17)$$
$$\frac{1}{\ln 2} \cdot \frac{t^* \gamma_k}{t^* + y_k^* \gamma_k} - \beta + \nu_k - \sum_{m=1}^{M} \mu_m g_{m,k} = 0, \;\; \forall k \qquad (18)$$
$$\frac{\partial L}{\partial t} = 0 \qquad (19)$$
We can easily eliminate $\nu_k$ and $\delta$ from the above KKT conditions. The simplified KKT equations are
$$\sum_{m=1}^{M} \mu_m \left(\sum_{k=1}^{K} y_k^*\, g_{m,k} - t^* I_m\right) = 0 \qquad (20)$$
$$\beta \left(t^* p_c - 1 + \sum_{k=1}^{K} y_k^*\right) = 0 \qquad (21)$$
$$\mu_m \ge 0, \;\; \forall m \qquad (22)$$
$$\frac{1}{\ln 2} \cdot \frac{t^* \gamma_k}{t^* + y_k^* \gamma_k} - \beta - \sum_{m=1}^{M} \mu_m g_{m,k} \le 0, \;\; \forall k, \;\text{with equality if } y_k^* > 0 \qquad (23)$$
From (23) and the constraints $y_k^* \ge 0$, $t^* \ge 0$, we get
$$\left(\frac{y_k^*}{t^*}\right) = \left[\frac{1}{\ln 2 \left(\beta + \sum_{m=1}^{M} \mu_m g_{m,k}\right)} - \frac{1}{\gamma_k}\right]^+, \;\; \forall k. \qquad (24)$$
From the CCT, we have $y_k^* = p_k^* t^*$; hence
$$p_k^* = \left(\frac{y_k^*}{t^*}\right) = \left[\frac{1}{\ln 2 \left(\beta + \sum_{m=1}^{M} \mu_m g_{m,k}\right)} - \frac{1}{\gamma_k}\right]^+, \;\; \forall k. \qquad (25)$$
APPENDIX C
PROOF OF THEOREM 2
We adapt the proof in [12], but modify it to make it relevant to our problem here. Suppose $\lambda^* = \phi_n(x^*)/\phi_d(x^*) = \max\{\phi_n(x)/\phi_d(x) \mid x \in S\}$. This implies that $\phi_n(x^*) - \lambda^* \phi_d(x^*) = 0$ and $\lambda^* \ge \phi_n(x)/\phi_d(x)$ for all $x \in S$. The last inequality implies $\phi_n(x) - \lambda^* \phi_d(x) \le 0$ for $x \in S$, or, $\max\{\phi_n(x) - \lambda^* \phi_d(x) \mid x \in S\} = 0$. The equality $\phi_n(x^*) - \lambda^* \phi_d(x^*) = 0$ shows that one of the locations where this maximum occurs is $x^*$. This proves the forward direction. The other direction is similar and straightforward.
APPENDIX D
PROOF OF CONVERGENCE
We simplify what is in [12], but make it relevant to our algorithm. Let $\phi_n(\mathbf{p}) = \sum_{k=1}^{K} C_k$, $\phi_d(\mathbf{p}) = p_c + \sum_{k} p_k$ and $F(\lambda) = \max\{\Phi(\mathbf{p}, \lambda) \mid \text{C1, C2 of (2)}\} = \max\{\phi_n(\mathbf{p}) - \lambda \phi_d(\mathbf{p}) \mid \text{C1, C2 of (2)}\}$. In order to prove convergence, we need the following two lemmas.

Lemma 2. $\exists \lambda$ such that $F(\lambda) = 0$.
Proof: Using the definition of continuity, we can prove that $F(\lambda)$ is continuous in $\lambda$. Furthermore, $\lim_{\lambda \to +\infty} F(\lambda) = -\infty$ and $\lim_{\lambda \to -\infty} F(\lambda) = +\infty$. By the intermediate value theorem (IVT), $\exists \lambda$ such that $F(\lambda) = 0$.

Lemma 3. $F(\lambda)$ is decreasing in $\lambda$.
Proof: Take $\lambda_1 < \lambda_2$ and let $\mathbf{p}^*$ maximize $\phi_n(\mathbf{p}) - \lambda_2 \phi_d(\mathbf{p})$ subject to C1, C2 of (2). Then $F(\lambda_2) = \max\{\phi_n(\mathbf{p}) - \lambda_2 \phi_d(\mathbf{p})\} = \phi_n(\mathbf{p}^*) - \lambda_2 \phi_d(\mathbf{p}^*) < \phi_n(\mathbf{p}^*) - \lambda_1 \phi_d(\mathbf{p}^*) \le \max\{\phi_n(\mathbf{p}) - \lambda_1 \phi_d(\mathbf{p})\} = F(\lambda_1)$.

We shall now prove the theorem. Note that it is sufficient to show that $\Phi(\mathbf{p}, \lambda)$ becomes smaller than $\epsilon$ with the number of iterations. Since $F(\lambda) = \max\{\Phi(\mathbf{p}, \lambda)\}$, we only need to show that $F(\lambda)$ becomes smaller than $\epsilon$. We first show that $\lambda$ is non-decreasing in successive iterations of the algorithm. If we use the subscript $n$ to denote the values of the variables on the $n$th iteration, we have
$$0 = \phi_n(\mathbf{p}_{n-1}) - \lambda_n \phi_d(\mathbf{p}_{n-1}) \le \max\{\phi_n(\mathbf{p}) - \lambda_n \phi_d(\mathbf{p})\} = F(\lambda_n) = \phi_n(\mathbf{p}_n) - \lambda_n \phi_d(\mathbf{p}_n) = \lambda_{n+1} \phi_d(\mathbf{p}_n) - \lambda_n \phi_d(\mathbf{p}_n) = (\lambda_{n+1} - \lambda_n)\, \phi_d(\mathbf{p}_n).$$
Now, it follows that $\lambda_{n+1} \ge \lambda_n$, because $\phi_d(\mathbf{p}_n) > 0$. By Lemma 3, $F(\lambda)$ is decreasing in $\lambda$, and we just proved that $\lambda$ is non-decreasing in successive iterations of the algorithm. Therefore, $F(\lambda)$ is non-increasing in successive iterations of the algorithm. By Lemma 2, $F(\lambda)$ does become zero, from which it follows that $F(\lambda)$ does become smaller than $\epsilon$.