1. Introduction
Let $\{X_n, n\ge 1\}$ be a sequence of random variables defined on a fixed probability space $(\Omega, \mathcal{F}, P)$. For positive integers $n$ and $m$, write $\mathcal{F}_n^m = \sigma(X_i,\, n\le i\le m)$. Given $\sigma$-algebras $\mathcal{B}$, $\mathcal{R}$ in $\mathcal{F}$, let

(1) $\varphi(\mathcal{B}, \mathcal{R}) = \sup_{A\in\mathcal{B},\, B\in\mathcal{R},\, P(A)>0} \left|P(B\mid A) - P(B)\right|.$

Define the $\varphi$-mixing coefficients by

(2) $\varphi(n) = \sup_{k\ge 1} \varphi\left(\mathcal{F}_1^k, \mathcal{F}_{k+n}^\infty\right), \quad n\ge 0.$

A sequence of random variables $\{X_n, n\ge 1\}$ is said to be $\varphi$-mixing if $\varphi(n)\downarrow 0$ as $n\to\infty$; $\varphi(n)$ is called the mixing coefficient. A triangular array of random variables $\{X_{nk}, k\ge 1, n\ge 1\}$ is said to be an array of rowwise $\varphi$-mixing random variables if, for every $n\ge 1$, $\{X_{nk}, k\ge 1\}$ is a $\varphi$-mixing sequence of random variables. The notion of $\varphi$-mixing random variables was introduced by Dobrushin [1] and has found many applications. See, for example, Utev [2] for the central limit theorem, Gan and Chen [3] for limit theorems, Peligrad [4] for the weak invariance principle, Shao [5] for almost sure invariance principles, Chen and Wang [6], Shen et al. [7, 8], Wu [9], and Wang et al. [10] for complete convergence, Hu and Wang [11] for large deviations, and so forth. Compared with the corresponding results for sequences of independent random variables, much still remains to be done.
Definition 1.
A sequence of random variables $\{U_n, n\ge 1\}$ is said to converge completely to a constant $a$ if, for any $\varepsilon>0$,

(3) $\sum_{n=1}^\infty P\left(|U_n - a| > \varepsilon\right) < \infty.$

In this case, one writes $U_n \to a$ completely. This notion was first given by Hsu and Robbins [12].
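Definition 1 asks for summability of the tail probabilities, not merely their convergence to zero. As a quick illustration (not from the paper; the tail rates $n^{-2}$ and $n^{-1}$ are hypothetical sample values), the following sketch contrasts a summable tail sequence with a non-summable one:

```python
# Illustrative sketch: complete convergence requires the tail probabilities
# P(|U_n - a| > eps) to be summable. Hypothetical tail rates: 1/n^2 is
# summable (complete convergence holds), while 1/n is not (it fails),
# even though both rates tend to 0.
N = 100_000
tails_summable = sum(1.0 / n**2 for n in range(1, N + 1))   # partial sum, bounded by pi^2/6
tails_not_summable = sum(1.0 / n for n in range(1, N + 1))  # partial sum, grows like log N

assert tails_summable < 1.65        # stays below pi^2/6 ~ 1.6449 for every N
assert tails_not_summable > 10.0    # already large at N = 100000 and unbounded in N
```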
Definition 2.
Let $\{Z_n, n\ge 1\}$ be a sequence of random variables and let $a_n>0$, $b_n>0$, and $q>0$. If

(4) $\sum_{n=1}^\infty a_n E\left\{b_n^{-1}|Z_n| - \varepsilon\right\}_+^q < \infty \quad \forall \varepsilon>0,$

then the above result was called complete moment convergence by Chow [13].
Let $\{X_{nk}, k\ge 1, n\ge 1\}$ be an array of rowwise $\varphi$-mixing random variables with mixing coefficients $\{\varphi(n), n\ge 1\}$ in each row, let $\{a_n, n\ge 1\}$ be a sequence of positive real numbers such that $a_n \uparrow \infty$, and let $\{\Psi_k(t), k\ge 1\}$ be a sequence of positive even functions such that

(5) $\frac{\Psi_k(|t|)}{|t|^q} \uparrow, \quad \frac{\Psi_k(|t|)}{|t|^p} \downarrow \quad \text{as } |t| \uparrow,$

for some $1\le q<p$ and each $k\ge 1$. In order to prove our results, we mention the following conditions:
(6) $EX_{nk}=0, \quad k\ge 1,\ n\ge 1,$

(7) $\sum_{n=1}^\infty \sum_{k=1}^n \frac{E\Psi_k(X_{nk})}{\Psi_k(a_n)} < \infty,$

(8) $\sum_{n=1}^\infty \left(\sum_{k=1}^n E\left(\frac{X_{nk}}{a_n}\right)^2\right)^{v/2} < \infty,$

where $v\ge p$ is a positive integer.
The following are examples of functions $\Psi_k(t)$ satisfying assumption (5): $\Psi_k(t)=|t|^\beta$ for some $q<\beta<p$, or $\Psi_k(t)=|t|^q \log\left(1+|t|^{p-q}\right)$ for $t\in(-\infty,+\infty)$. Note that these functions are nonmonotone on $t\in(-\infty,+\infty)$, while it is simple to show that, under condition (5), the function $\Psi_k(t)$ is increasing for $t>0$. In fact, $\Psi_k(t) = \left(\Psi_k(t)/|t|^q\right)\cdot |t|^q$ for $t>0$, and $|t|^q \uparrow$ as $|t| \uparrow$; then we have $\Psi_k(t) \uparrow$.
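The monotonicity requirements in (5) can also be checked numerically for a concrete instance. The sketch below (illustrative only; the sample values $q=1$, $p=2$ are assumptions) verifies them on a grid for $\Psi(t)=|t|^q\log(1+|t|^{p-q})$:

```python
import math

# Sanity check of condition (5) for Psi(t) = |t|^q * log(1 + |t|^(p-q)),
# with the assumed sample values q = 1, p = 2:
# Psi(t)/t^q should be nondecreasing and Psi(t)/t^p nonincreasing for t > 0.
q, p = 1, 2

def psi(t):
    return abs(t) ** q * math.log(1.0 + abs(t) ** (p - q))

ts = [0.1 * i for i in range(1, 200)]            # grid of t values in (0, 20)
ratio_q = [psi(t) / t ** q for t in ts]          # equals log(1 + t): increasing
ratio_p = [psi(t) / t ** p for t in ts]          # equals log(1 + t)/t: decreasing

assert all(a <= b for a, b in zip(ratio_q, ratio_q[1:]))
assert all(a >= b for a, b in zip(ratio_p, ratio_p[1:]))
```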
Recently, Gan et al. [14] obtained the following complete convergence result for $\varphi$-mixing random variables.
Theorem A.
Let $\{X_n, n\ge 1\}$ be a sequence of $\varphi$-mixing mean zero random variables with $\sum_{n=1}^\infty \varphi^{1/2}(n) < \infty$, let $\{a_n, n\ge 1\}$ be a sequence of positive real numbers with $a_n \uparrow \infty$, and let $\{\Psi_n(t), n\ge 1\}$ be a sequence of nonnegative even functions such that $\Psi_n(t)>0$ for $t>0$, and $\Psi_n(|t|)/|t| \uparrow$ and $\Psi_n(|t|)/|t|^p \downarrow$ as $|t| \uparrow \infty$, where $p\ge 2$. If the following conditions are satisfied:

(9) $\sum_{n=1}^\infty \sum_{k=1}^n \frac{E\Psi_k(X_k)}{\Psi_k(a_n)} < \infty,$

(10) $\sum_{n=1}^\infty \left[\sum_{k=1}^n \frac{E|X_k|^r}{a_n^r}\right]^s < \infty,$

where $0<r\le 2$ and $s>0$, then

(11) $\frac{1}{a_n}\max_{1\le j\le n}\left|\sum_{k=1}^j X_k\right| \longrightarrow 0 \quad \text{completely}.$
For more details about this type of complete convergence, one can refer to Gan and Chen [3], Wu et al. [15], Wu [16], Huang et al. [17], Shen [18], Shen et al. [19, 20], and so on. The purpose of this paper is to extend Theorem A to complete moment convergence, which is a more general version of complete convergence, and to weaken its conditions. Throughout this paper, the symbol $C$ stands for a generic positive constant, which may vary from one place to another.
3. Main Results and Their Proofs
Let $\{X_{nk}, k\ge 1, n\ge 1\}$ be an array of rowwise $\varphi$-mixing random variables and let $\varphi_n(\cdot)$ be the mixing coefficient of $\{X_{nk}, k\ge 1\}$ for each $n\ge 1$. Our main results are as follows.
Theorem 4.
Let $\{X_{nk}, k\ge 1, n\ge 1\}$ be an array of rowwise $\varphi$-mixing random variables satisfying $\sup_{n\ge 1}\sum_{k=1}^\infty \varphi_n^{1/2}(k) < \infty$ and let $\{a_n, n\ge 1\}$ be a sequence of positive real numbers such that $a_n \uparrow \infty$. Also, let $\{\Psi_k(t), k\ge 1\}$ be a sequence of positive even functions satisfying (5) for $1\le q<p\le 2$. Then under conditions (6) and (7), one has

(14) $\sum_{n=1}^\infty a_n^{-q} E\left\{\max_{1\le j\le n}\left|\sum_{k=1}^j X_{nk}\right| - \varepsilon a_n\right\}_+^q < \infty, \quad \forall \varepsilon>0.$
Proof.
Firstly, let us prove the following two statements, which follow from conditions (5) and (7).
(i) For $r\ge 1$ and $0<u\le q$,

(15) $\sum_{n=1}^\infty \left(\sum_{k=1}^n \frac{E|X_{nk}|^u I(|X_{nk}|>a_n)}{a_n^u}\right)^r \le \sum_{n=1}^\infty \left(\sum_{k=1}^n \frac{E|X_{nk}|^q I(|X_{nk}|>a_n)}{a_n^q}\right)^r \le \sum_{n=1}^\infty \left(\sum_{k=1}^n \frac{E\Psi_k(X_{nk})}{\Psi_k(a_n)}\right)^r \le \left(\sum_{n=1}^\infty \sum_{k=1}^n \frac{E\Psi_k(X_{nk})}{\Psi_k(a_n)}\right)^r < \infty.$
(ii) For $v\ge p$,

(16) $\sum_{n=1}^\infty \sum_{k=1}^n \frac{E|X_{nk}|^v I(|X_{nk}|\le a_n)}{a_n^v} \le \sum_{n=1}^\infty \sum_{k=1}^n \frac{E|X_{nk}|^p I(|X_{nk}|\le a_n)}{a_n^p} \le \sum_{n=1}^\infty \sum_{k=1}^n \frac{E\Psi_k(X_{nk})}{\Psi_k(a_n)} < \infty.$
For $n\ge 1$, denote $M_n(X) = \max_{1\le j\le n}\left|\sum_{k=1}^j X_{nk}\right|$. It is easy to check that
(17) $\sum_{n=1}^\infty a_n^{-q} E\left\{M_n(X) - \varepsilon a_n\right\}_+^q = \sum_{n=1}^\infty a_n^{-q} \int_0^\infty P\left\{M_n(X) - \varepsilon a_n > t^{1/q}\right\} dt = \sum_{n=1}^\infty a_n^{-q} \left(\int_0^{a_n^q} P\left\{M_n(X) > \varepsilon a_n + t^{1/q}\right\} dt + \int_{a_n^q}^\infty P\left\{M_n(X) > \varepsilon a_n + t^{1/q}\right\} dt\right) \le \sum_{n=1}^\infty P\left\{M_n(X) > \varepsilon a_n\right\} + \sum_{n=1}^\infty a_n^{-q} \int_{a_n^q}^\infty P\left\{M_n(X) > t^{1/q}\right\} dt \doteq I_1 + I_2.$
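The first equality in (17) is the standard moment–tail identity $EW_+^q = \int_0^\infty P\{W_+ > t^{1/q}\}\,dt$ applied to $W = M_n(X) - \varepsilon a_n$. A numerical sketch of this identity (illustrative only; the choice $W \sim \mathrm{Exp}(1)$, $c=0.5$, $q=2$ is an assumption, for which both sides equal $2e^{-c}$):

```python
import math

# Check of the moment-tail identity E{(W - c)_+}^q = \int_0^inf P{W - c > t^(1/q)} dt
# for the assumed example W ~ Exp(1), c = 0.5, q = 2; both sides equal 2*exp(-c).
c, q = 0.5, 2.0

def tail(t):
    # P{W - c > t^(1/q)} = exp(-(c + sqrt(t))) for W ~ Exp(1), q = 2
    return math.exp(-(c + t ** (1.0 / q)))

h = 1e-3                                   # midpoint rule; integrand decays like exp(-sqrt(t))
rhs = sum(tail((i + 0.5) * h) * h for i in range(int(200 / h)))
lhs = 2.0 * math.exp(-c)                   # closed form of E{(W - c)_+}^2

assert abs(rhs - lhs) < 1e-3
```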
To prove (14), it suffices to prove that $I_1 < \infty$ and $I_2 < \infty$. Now let us prove them step by step. Firstly, we prove that $I_1 < \infty$.
For all $n\ge 1$, define

(18) $X_k^{(n)} = X_{nk} I\left(|X_{nk}|\le a_n\right), \quad T_j^{(n)} = \frac{1}{a_n}\sum_{k=1}^j \left(X_k^{(n)} - EX_k^{(n)}\right);$

then for all $\varepsilon>0$, it is easy to have

(19) $P\left(\max_{1\le j\le n}\left|\frac{1}{a_n}\sum_{k=1}^j X_{nk}\right| > \varepsilon\right) \le P\left(\max_{1\le k\le n}|X_{nk}| > a_n\right) + P\left(\max_{1\le j\le n}\left|T_j^{(n)}\right| > \varepsilon - \max_{1\le j\le n}\left|\frac{1}{a_n}\sum_{k=1}^j EX_k^{(n)}\right|\right).$
By (5), (6), (7), and (15), we have

(20) $\max_{1\le j\le n}\left|\frac{1}{a_n}\sum_{k=1}^j EX_k^{(n)}\right| = \max_{1\le j\le n}\left|\frac{1}{a_n}\sum_{k=1}^j EX_{nk} I\left(|X_{nk}|\le a_n\right)\right| = \max_{1\le j\le n}\left|\frac{1}{a_n}\sum_{k=1}^j EX_{nk} I\left(|X_{nk}|> a_n\right)\right| \le \sum_{k=1}^n \frac{E|X_{nk}| I(|X_{nk}|>a_n)}{a_n} \longrightarrow 0 \quad \text{as } n\longrightarrow\infty.$
From (19) and (20), it follows that, for $n$ large enough,

(21) $P\left(\max_{1\le j\le n}\left|\frac{1}{a_n}\sum_{k=1}^j X_{nk}\right| > \varepsilon\right) \le \sum_{k=1}^n P\left(|X_{nk}|>a_n\right) + P\left(\max_{1\le j\le n}\left|T_j^{(n)}\right| > \frac{\varepsilon}{2}\right).$
Hence we only need to prove that

(22) $I \doteq \sum_{n=1}^\infty \sum_{k=1}^n P\left(|X_{nk}|>a_n\right) < \infty, \quad II \doteq \sum_{n=1}^\infty P\left(\max_{1\le j\le n}\left|T_j^{(n)}\right| > \frac{\varepsilon}{2}\right) < \infty.$
For $I$, it follows by (15) that

(23) $I = \sum_{n=1}^\infty \sum_{k=1}^n E I\left(|X_{nk}|>a_n\right) \le \sum_{n=1}^\infty \sum_{k=1}^n \frac{E|X_{nk}|^q I(|X_{nk}|>a_n)}{a_n^q} < \infty.$
For $II$, take $r\ge 2$. Since $p\le 2$, we have $r\ge p$, and hence, by the Markov inequality, Lemma 3, the $C_r$ inequality, and (16),

(24) $II \le \sum_{n=1}^\infty \left(\frac{\varepsilon}{2}\right)^{-r} E\max_{1\le j\le n}\left|T_j^{(n)}\right|^r \le C\sum_{n=1}^\infty \left(\frac{\varepsilon}{2}\right)^{-r} \frac{1}{a_n^r}\left[\sum_{k=1}^n E\left|X_k^{(n)}\right|^r + \left(\sum_{k=1}^n E\left|X_k^{(n)}\right|^2\right)^{r/2}\right] \le C\sum_{n=1}^\infty \sum_{k=1}^n \frac{E\left|X_k^{(n)}\right|^r}{a_n^r} + C\sum_{n=1}^\infty \left(\sum_{k=1}^n \frac{E\left|X_k^{(n)}\right|^2}{a_n^2}\right)^{r/2} \le C\sum_{n=1}^\infty \sum_{k=1}^n \frac{E|X_{nk}|^p I(|X_{nk}|\le a_n)}{a_n^p} + C\sum_{n=1}^\infty \left(\sum_{k=1}^n \frac{E|X_{nk}|^p I(|X_{nk}|\le a_n)}{a_n^p}\right)^{r/2} \le C\sum_{n=1}^\infty \sum_{k=1}^n \frac{E|X_{nk}|^p I(|X_{nk}|\le a_n)}{a_n^p} + C\left(\sum_{n=1}^\infty \sum_{k=1}^n \frac{E|X_{nk}|^p I(|X_{nk}|\le a_n)}{a_n^p}\right)^{r/2} < \infty.$
Next we prove that $I_2 < \infty$. Denote $Y_{nk} = X_{nk} I\left(|X_{nk}|\le t^{1/q}\right)$, $Z_{nk} = X_{nk} - Y_{nk}$, and $M_n(Y) = \max_{1\le j\le n}\left|\sum_{k=1}^j Y_{nk}\right|$. Obviously,
(25) $P\left\{M_n(X)>t^{1/q}\right\} \le \sum_{k=1}^n P\left\{|X_{nk}|>t^{1/q}\right\} + P\left\{M_n(Y)>t^{1/q}\right\}.$
Hence,

(26) $I_2 \le \sum_{n=1}^\infty \sum_{k=1}^n a_n^{-q}\int_{a_n^q}^\infty P\left\{|X_{nk}|>t^{1/q}\right\} dt + \sum_{n=1}^\infty a_n^{-q}\int_{a_n^q}^\infty P\left\{M_n(Y)>t^{1/q}\right\} dt \doteq I_3 + I_4.$
For $I_3$, by (15), we have

(27) $I_3 = \sum_{n=1}^\infty \sum_{k=1}^n a_n^{-q}\int_{a_n^q}^\infty P\left\{|X_{nk}| I\left(|X_{nk}|>a_n\right) > t^{1/q}\right\} dt \le \sum_{n=1}^\infty \sum_{k=1}^n a_n^{-q}\int_0^\infty P\left\{|X_{nk}| I\left(|X_{nk}|>a_n\right) > t^{1/q}\right\} dt = \sum_{n=1}^\infty \sum_{k=1}^n \frac{E|X_{nk}|^q I(|X_{nk}|>a_n)}{a_n^q} < \infty.$
Now let us prove that $I_4 < \infty$. Firstly, it follows by (6) and (15) that

(28) $\max_{t\ge a_n^q} \max_{1\le j\le n} t^{-1/q}\left|\sum_{k=1}^j EY_{nk}\right| = \max_{t\ge a_n^q} \max_{1\le j\le n} t^{-1/q}\left|\sum_{k=1}^j EZ_{nk}\right| \le \max_{t\ge a_n^q} t^{-1/q}\sum_{k=1}^n E|X_{nk}| I\left(|X_{nk}|>t^{1/q}\right) \le \sum_{k=1}^n a_n^{-1} E|X_{nk}| I\left(|X_{nk}|>a_n\right) \le \sum_{k=1}^n \frac{E|X_{nk}|^q I(|X_{nk}|>a_n)}{a_n^q} \longrightarrow 0 \quad \text{as } n\longrightarrow\infty.$
Therefore, for $n$ sufficiently large,

(29) $\max_{1\le j\le n}\left|\sum_{k=1}^j EY_{nk}\right| \le \frac{t^{1/q}}{2}, \quad t\ge a_n^q.$
Then for $n$ sufficiently large,

(30) $P\left\{M_n(Y)>t^{1/q}\right\} \le P\left\{\max_{1\le j\le n}\left|\sum_{k=1}^j \left(Y_{nk}-EY_{nk}\right)\right| > \frac{t^{1/q}}{2}\right\}, \quad t\ge a_n^q.$
Let $d_n = [a_n]+1$. By (30), Lemma 3, and the $C_r$ inequality, we can see that
(31) $I_4 \le C\sum_{n=1}^\infty a_n^{-q}\int_{a_n^q}^\infty t^{-2/q} E\left(\max_{1\le j\le n}\left|\sum_{k=1}^j \left(Y_{nk}-EY_{nk}\right)\right|\right)^2 dt \le C\sum_{n=1}^\infty a_n^{-q}\int_{a_n^q}^\infty t^{-2/q} \sum_{k=1}^n E\left(Y_{nk}-EY_{nk}\right)^2 dt \le C\sum_{n=1}^\infty \sum_{k=1}^n a_n^{-q}\int_{a_n^q}^\infty t^{-2/q} EY_{nk}^2\, dt \le C\sum_{n=1}^\infty \sum_{k=1}^n a_n^{-q}\int_{a_n^q}^\infty t^{-2/q} EX_{nk}^2 I\left(|X_{nk}|\le d_n\right) dt + C\sum_{n=1}^\infty \sum_{k=1}^n a_n^{-q}\int_{d_n^q}^\infty t^{-2/q} EX_{nk}^2 I\left(d_n<|X_{nk}|\le t^{1/q}\right) dt \doteq I_{41} + I_{42}.$
For $I_{41}$, since $q<2$, we have

(32) $I_{41} = C\sum_{n=1}^\infty \sum_{k=1}^n a_n^{-q} EX_{nk}^2 I\left(|X_{nk}|\le d_n\right)\int_{a_n^q}^\infty t^{-2/q}\, dt \le C\sum_{n=1}^\infty \sum_{k=1}^n \frac{EX_{nk}^2 I(|X_{nk}|\le d_n)}{a_n^2} = C\sum_{n=1}^\infty \sum_{k=1}^n \frac{EX_{nk}^2 I(|X_{nk}|\le a_n)}{a_n^2} + C\sum_{n=1}^\infty \sum_{k=1}^n \frac{EX_{nk}^2 I(a_n<|X_{nk}|\le d_n)}{a_n^2} \doteq I_{41}' + I_{41}''.$
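The elementary integral behind the first inequality in (32), namely $\int_{a^q}^\infty t^{-2/q}\,dt = \frac{q}{2-q}\,a^{q-2}$ for $q<2$, can be sanity-checked numerically. A sketch (illustrative only; $q=1$ and $a=4$ are assumed sample values):

```python
# Numeric sanity check of \int_{a^q}^\infty t^(-2/q) dt = (q/(2-q)) * a^(q-2),
# the integral used in (32); assumed sample values q = 1, a = 4 (integrand t^-2).
q, a = 1.0, 4.0
h = 1e-2
upper = 1_000.0  # truncation point; the remaining tail is about 1/upper = 1e-3
numeric = sum(((a**q + (i + 0.5) * h) ** (-2.0 / q)) * h
              for i in range(int((upper - a**q) / h)))
closed = (q / (2.0 - q)) * a ** (q - 2.0)   # = 0.25 for these sample values

assert abs(numeric - closed) < 5e-3
```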
Since $p\le 2$, (16) implies $I_{41}' < \infty$. Now we prove that $I_{41}'' < \infty$. Since $q<2$ and $(a_n+1)/a_n \to 1$ as $n\to\infty$, by (15) we have

(33) $I_{41}'' \le C\sum_{n=1}^\infty \sum_{k=1}^n \frac{d_n^{2-q}}{a_n^2}\, E|X_{nk}|^q I\left(a_n<|X_{nk}|\le d_n\right) \le C\sum_{n=1}^\infty \sum_{k=1}^n \left(\frac{a_n+1}{a_n}\right)^{2-q} \frac{E|X_{nk}|^q I(|X_{nk}|>a_n)}{a_n^q} \le C\sum_{n=1}^\infty \sum_{k=1}^n \frac{E|X_{nk}|^q I(|X_{nk}|>a_n)}{a_n^q} < \infty.$
Let $t=u^q$ in $I_{42}$. Note that, for $q<2$,

(34) $\int_{d_n}^\infty u^{q-3} EX_{nk}^2 I\left(d_n<|X_{nk}|\le u\right) du = \int_{d_n}^\infty u^{q-3} EX_{nk}^2 I\left(|X_{nk}|>d_n\right)\cdot I\left(|X_{nk}|\le u\right) du = E\left[X_{nk}^2 I\left(|X_{nk}|>d_n\right)\int_{|X_{nk}|}^\infty u^{q-3} I\left(|X_{nk}|\le u\right) du\right] = E\left[X_{nk}^2 I\left(|X_{nk}|>d_n\right)\int_{|X_{nk}|}^\infty u^{q-3}\, du\right] \le C E|X_{nk}|^q I\left(|X_{nk}|>d_n\right).$
Then by (15) and $d_n > a_n$, we have

(35) $I_{42} = C\sum_{n=1}^\infty \sum_{k=1}^n a_n^{-q}\int_{d_n}^\infty u^{q-3} EX_{nk}^2 I\left(d_n<|X_{nk}|\le u\right) du \le C\sum_{n=1}^\infty \sum_{k=1}^n a_n^{-q} E|X_{nk}|^q I\left(|X_{nk}|>a_n\right) < \infty.$
This completes the proof of Theorem 4.
Theorem 5.
Let $\{X_{nk}, k\ge 1, n\ge 1\}$ be an array of rowwise $\varphi$-mixing random variables satisfying $\sup_{n\ge 1}\sum_{k=1}^\infty \varphi_n^{1/2}(k) < \infty$ and let $\{a_n, n\ge 1\}$ be a sequence of positive real numbers such that $a_n\uparrow\infty$. Also, let $\{\Psi_k(t), k\ge 1\}$ be a sequence of positive even functions satisfying (5) for $1\le q<p$ and $p>2$. Then conditions (6)–(8) imply (14).
Proof.
Following the same notation and using a similar argument as in the proof of Theorem 4, we can easily prove that $I_1 < \infty$ and $I_3 < \infty$ and that (19) and (20) hold. To complete the proof, we only need to prove that $I_4 < \infty$.
Let $\delta = v$, where $v\ge p$ is the integer in condition (8) (so that $\delta \ge p > 2$), and let $d_n = [a_n]+1$. By (30), the Markov inequality, Lemma 3, and the $C_r$ inequality, we can get
(36) $I_4 \le C\sum_{n=1}^\infty a_n^{-q}\int_{a_n^q}^\infty t^{-\delta/q} E\max_{1\le j\le n}\left|\sum_{k=1}^j \left(Y_{nk}-EY_{nk}\right)\right|^\delta dt \le C\sum_{n=1}^\infty a_n^{-q}\int_{a_n^q}^\infty t^{-\delta/q}\left[\sum_{k=1}^n E|Y_{nk}|^\delta + \left(\sum_{k=1}^n EY_{nk}^2\right)^{\delta/2}\right] dt = C\sum_{n=1}^\infty \sum_{k=1}^n a_n^{-q}\int_{a_n^q}^\infty t^{-\delta/q} E|Y_{nk}|^\delta\, dt + C\sum_{n=1}^\infty a_n^{-q}\int_{a_n^q}^\infty t^{-\delta/q}\left(\sum_{k=1}^n EY_{nk}^2\right)^{\delta/2} dt \doteq I_{43} + I_{44}.$
For $I_{43}$, we have

(37) $I_{43} \le C\sum_{n=1}^\infty \sum_{k=1}^n a_n^{-q}\int_{a_n^q}^\infty t^{-\delta/q} E|X_{nk}|^\delta I\left(|X_{nk}|\le d_n\right) dt + C\sum_{n=1}^\infty \sum_{k=1}^n a_n^{-q}\int_{d_n^q}^\infty t^{-\delta/q} E|X_{nk}|^\delta I\left(d_n<|X_{nk}|\le t^{1/q}\right) dt \doteq I_{43}' + I_{43}''.$
By a similar argument as in the proofs of $I_{41} < \infty$ and $I_{42} < \infty$ (replacing the exponent $2$ by $\delta$), we can get $I_{43}' < \infty$ and $I_{43}'' < \infty$.
For $I_{44}$, since $\delta>2$, we can see that

(38) $I_{44} = C\sum_{n=1}^\infty a_n^{-q}\int_{a_n^q}^\infty t^{-\delta/q}\left(\sum_{k=1}^n EX_{nk}^2 I\left(|X_{nk}|\le a_n\right) + \sum_{k=1}^n EX_{nk}^2 I\left(a_n<|X_{nk}|\le t^{1/q}\right)\right)^{\delta/2} dt \le C\sum_{n=1}^\infty a_n^{-q}\int_{a_n^q}^\infty t^{-\delta/q}\left(\sum_{k=1}^n EX_{nk}^2 I\left(|X_{nk}|\le a_n\right)\right)^{\delta/2} dt + C\sum_{n=1}^\infty a_n^{-q}\int_{a_n^q}^\infty \left(t^{-2/q}\sum_{k=1}^n EX_{nk}^2 I\left(a_n<|X_{nk}|\le t^{1/q}\right)\right)^{\delta/2} dt \doteq I_{44}' + I_{44}''.$
Since $\delta\ge p>q$, from (8) we have

(39) $I_{44}' = C\sum_{n=1}^\infty a_n^{-q}\left(\sum_{k=1}^n EX_{nk}^2 I\left(|X_{nk}|\le a_n\right)\right)^{\delta/2}\int_{a_n^q}^\infty t^{-\delta/q}\, dt \le C\sum_{n=1}^\infty \left(\sum_{k=1}^n \frac{EX_{nk}^2 I(|X_{nk}|\le a_n)}{a_n^2}\right)^{\delta/2} \le C\sum_{n=1}^\infty \left(\sum_{k=1}^n \frac{EX_{nk}^2}{a_n^2}\right)^{\delta/2} < \infty.$
Next we prove that $I_{44}'' < \infty$. To start with, we consider the case $1\le q\le 2$. Since $\delta>2$, by (15), we have

(40) $I_{44}'' \le C\sum_{n=1}^\infty a_n^{-q}\int_{a_n^q}^\infty \left(t^{-1}\sum_{k=1}^n E|X_{nk}|^q I\left(a_n<|X_{nk}|\le t^{1/q}\right)\right)^{\delta/2} dt \le C\sum_{n=1}^\infty a_n^{-q}\int_{a_n^q}^\infty \left(t^{-1}\sum_{k=1}^n E|X_{nk}|^q I\left(|X_{nk}|>a_n\right)\right)^{\delta/2} dt = C\sum_{n=1}^\infty a_n^{-q}\left(\sum_{k=1}^n E|X_{nk}|^q I\left(|X_{nk}|>a_n\right)\right)^{\delta/2}\int_{a_n^q}^\infty t^{-\delta/2}\, dt \le C\sum_{n=1}^\infty \left(\sum_{k=1}^n \frac{E|X_{nk}|^q I(|X_{nk}|>a_n)}{a_n^q}\right)^{\delta/2} < \infty.$
Finally, we prove that $I_{44}'' < \infty$ in the case $2<q<p$. Since $\delta>q$ and $\delta>2$, we have by (15) that

(41) $I_{44}'' \le C\sum_{n=1}^\infty a_n^{-q}\int_{a_n^q}^\infty \left(t^{-2/q}\sum_{k=1}^n EX_{nk}^2 I\left(|X_{nk}|>a_n\right)\right)^{\delta/2} dt = C\sum_{n=1}^\infty a_n^{-q}\left(\sum_{k=1}^n EX_{nk}^2 I\left(|X_{nk}|>a_n\right)\right)^{\delta/2}\int_{a_n^q}^\infty t^{-\delta/q}\, dt \le C\sum_{n=1}^\infty \left(\sum_{k=1}^n \frac{EX_{nk}^2 I(|X_{nk}|>a_n)}{a_n^2}\right)^{\delta/2} < \infty.$
Thus we get the desired result immediately. The proof is completed.
Corollary 6.
Let $\{X_{nk}, k\ge 1, n\ge 1\}$ be an array of rowwise $\varphi$-mixing mean zero random variables with $\sup_{n\ge 1}\sum_{k=1}^\infty \varphi_n^{1/2}(k) < \infty$, and let $q\ge 1$. If, for some $\alpha>0$ and $v\ge 2$,

(42) $\max_{1\le k\le n} E|X_{nk}|^v = O(n^\alpha),$

where $(v/q)-\alpha > \max\{v/2, 2\}$, then, for any $\varepsilon>0$,

(43) $\sum_{n=1}^\infty n^{-1} E\left\{\max_{1\le j\le n}\left|\sum_{k=1}^j X_{nk}\right| - \varepsilon n^{1/q}\right\}_+^q < \infty.$
Proof.
Put $\Psi_k(|t|) = |t|^v$, $p = v+\delta$ with $\delta>0$, and $a_n = n^{1/q}$.
Since $v\ge 2$ and $(v/q)-\alpha > \max\{v/2, 2\}$, we have

(44) $\frac{\Psi_k(|t|)}{|t|^q} = |t|^{v-q} \uparrow, \quad \frac{\Psi_k(|t|)}{|t|^p} = \frac{|t|^v}{|t|^p} = \frac{1}{|t|^\delta} \downarrow \quad \text{as } |t| \uparrow \infty.$
It follows by (42) and $(v/q)-\alpha>2$ that

(45) $\sum_{n=1}^\infty \sum_{k=1}^n \frac{E\Psi_k(X_{nk})}{\Psi_k(a_n)} = \sum_{n=1}^\infty \sum_{k=1}^n \frac{E|X_{nk}|^v}{n^{v/q}} \le C\sum_{n=1}^\infty \frac{1}{n^{(v/q)-\alpha-1}} < \infty.$
Since $v\ge 2$, by Jensen's inequality it follows that

(46) $\sum_{k=1}^n \frac{E|X_{nk}|^2}{n^{2/q}} \le \sum_{k=1}^n \frac{\left(E|X_{nk}|^v\right)^{2/v}}{n^{2/q}} \le C\frac{1}{n^{(2/q)-(2\alpha/v)-1}}.$
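The first inequality in (46) is Jensen's (Lyapunov's) moment inequality $E|X|^2 \le (E|X|^v)^{2/v}$ for $v\ge 2$. A small numerical sketch (illustrative only; the uniform distribution on $\{1,2,3\}$ and $v=4$ are assumed sample values):

```python
# Check of the Jensen/Lyapunov step in (46): E|X|^2 <= (E|X|^v)^(2/v) for v >= 2.
# Assumed sample distribution: X uniform on {1, 2, 3}, with v = 4.
values = [1.0, 2.0, 3.0]
v = 4
m2 = sum(x**2 for x in values) / len(values)   # E|X|^2 = 14/3
mv = sum(x**v for x in values) / len(values)   # E|X|^4 = 98/3

assert m2 <= mv ** (2 / v)   # 14/3 ~ 4.667 <= sqrt(98/3) ~ 5.715
```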
Clearly $(2/q)-(2\alpha/v)-1 > 0$. Take $s>p$ such that $(s/2)\left((2/q)-(2\alpha/v)-1\right) > 1$. Therefore,

(47) $\sum_{n=1}^\infty \left[\sum_{k=1}^n \frac{E|X_{nk}|^2}{n^{2/q}}\right]^{s/2} < \infty.$
Combining Theorem 5 and (45)–(47), we can prove Corollary 6 immediately.
Remark 7.
Note that in this paper we consider the case $1\le q<p$, which has a wider scope than the case $q=1$, $p\ge 2$ in Gan et al. [14]. In addition, compared with sequences of $\varphi$-mixing random variables, arrays of rowwise $\varphi$-mixing random variables not only enjoy many related properties but also have a wider range of applications, so they are well worth studying.
Remark 8.
Under the conditions of Theorem 4, we have

(48) $\infty > \sum_{n=1}^\infty a_n^{-q} E\left\{\max_{1\le j\le n}\left|\sum_{k=1}^j X_{nk}\right| - \varepsilon a_n\right\}_+^q = \sum_{n=1}^\infty a_n^{-q}\int_0^\infty P\left\{\max_{1\le j\le n}\left|\sum_{k=1}^j X_{nk}\right| - \varepsilon a_n > t^{1/q}\right\} dt \ge \sum_{n=1}^\infty a_n^{-q}\int_0^{\varepsilon^q a_n^q} P\left\{\max_{1\le j\le n}\left|\sum_{k=1}^j X_{nk}\right| - \varepsilon a_n > \varepsilon a_n\right\} dt = \varepsilon^q \sum_{n=1}^\infty P\left\{\max_{1\le j\le n}\left|\sum_{k=1}^j X_{nk}\right| > 2\varepsilon a_n\right\}.$

Then we can obtain (11) directly. In this case, condition (10) is not needed. In particular, for $p=2$, the conditions of Theorem 4 are weaker than those of Theorem A, so Theorem 4 generalizes and improves it.
Remark 9.
Note that Theorem A only considers $q=1$, while Theorem 5 considers $q\ge 1$. In addition, (14) implies (11), so Theorem 5 generalizes the corresponding result of Theorem A.