L2-convergence of Yosida approximation for semi-linear backward stochastic differential equation with jumps in infinite dimension

Hani Abidi (Department of Computer Science and Applied Mathematics, Esprit School of Business, Tunis, Tunisia)
Rim Amami (Department of Basic Sciences, Deanship of Preparatory Year and Supporting Studies, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia)
Roger Pettersson (Department of Mathematics, Linnaeus University, Vaxjo, Sweden)
Chiraz Trabelsi (Department of Sciences and Technologies, Centre Universitaire de Mayotte, Mayotte, France) (IMAG Montpellier, Montpellier, France)

Arab Journal of Mathematical Sciences

ISSN: 1319-5166

Article publication date: 18 January 2024


Abstract

Purpose

The main motivation of this paper is to present the Yosida approximation of a semi-linear backward stochastic differential equation in infinite dimension. Under suitable assumptions and conditions, an L2-convergence rate is established.

Design/methodology/approach

The authors establish a result concerning the L2-convergence rate of the solution of backward stochastic differential equation with jumps with respect to the Yosida approximation.

Findings

The authors establish a convergence rate of the Yosida approximation to the semi-linear backward stochastic differential equation in infinite dimension.

Originality/value

In this paper, the authors present the Yosida approximation of a semi-linear backward stochastic differential equation in infinite dimension. Under suitable assumptions and conditions, an L2-convergence rate is established.

Citation

Abidi, H., Amami, R., Pettersson, R. and Trabelsi, C. (2024), "L2-convergence of Yosida approximation for semi-linear backward stochastic differential equation with jumps in infinite dimension", Arab Journal of Mathematical Sciences, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/AJMS-09-2023-0024

Publisher

Emerald Publishing Limited

Copyright © 2023, Hani Abidi, Rim Amami, Roger Pettersson and Chiraz Trabelsi

License

Published in the Arab Journal of Mathematical Sciences. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) license. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this license may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

Backward stochastic differential equations (BSDEs) were first introduced by Pardoux and Peng [1], who proved the existence and uniqueness of adapted solutions under suitable square-integrability assumptions on the coefficients and on the terminal condition. Since then, several authors have been attracted to this area and have provided many applications, such as stochastic games and optimal control [2–4], partial differential equations [5] and numerical approximation [6].

The main motivation of this paper is to establish a convergence rate of the Yosida approximation to the semi-linear backward stochastic differential equation with jumps in infinite dimension. More precisely, let H be a separable Hilbert space with inner product ⟨·, ·⟩_H and H* its dual space. Let V be a uniformly convex Banach space such that V ⊂ H continuously and densely. For its dual space V*, it follows that H* ⊂ V* continuously and densely. Then, by the identification of H and H* via the Riesz isomorphism, we get

(1) \quad V \subset H \subset V^*.

(V, H, V*) is called a Gelfand triple.

Following [7], we introduce a bounded linear operator A: D(A) = V → V*, where D(A) = {v ∈ V, Av ∈ H}. Using [8], we introduce the Yosida approximation A_λ, λ > 0, of A defined as

(2) \quad A_\lambda x := \frac{1}{\lambda}\, J(x - J_\lambda x),

where J: V → V* is the duality mapping given in Definition 2.1, and J_λ: V → V, the resolvent of the operator A, is defined by

(3) \quad J_\lambda x := (J + \lambda A)^{-1} J x.

This Yosida approximation is used to approximate the following semi-linear backward stochastic differential equation in infinite dimension:

(4) \quad dY_t = A Y_t\,dt + f(t, Y_t, Z_t, Q_t)\,dt + Z_t\,dW_t + \int_E Q_t(x)\,\tilde{N}(dt, dx), \qquad Y_T = X \in H,

where W is a cylindrical Wiener process and \tilde{N} is the compensated Poisson random measure. Using the family of approximating equations

dY_t^\lambda = A_\lambda Y_t^\lambda\,dt + f(t, Y_t^\lambda, Z_t^\lambda, Q_t^\lambda)\,dt + Z_t^\lambda\,dW_t + \int_E Q_t^\lambda(x)\,\tilde{N}(dt, dx), \qquad Y_T^\lambda = X \in H,

where λ > 0 and A_λ is the Yosida approximation, we establish the existence and uniqueness of the solution of (4).

Many works have been devoted to BSDEs in infinite-dimensional spaces, see for instance [9–11].

Hu and Peng [10] proved the existence and uniqueness of the solution (Y, Z) of such semi-linear backward stochastic evolution equations. This kind of equation appears in many topics, for instance in the works of Bensoussan [12, 13] and of Hu and Peng [14] for the case with no jumps, which study maximum principles for stochastic control systems in infinite dimensional spaces and the theory of optimal control and controllability for stochastic partial differential equations.

Existence and uniqueness of a strong solution of (4) was obtained in Ref. [7] by considering a special case of a backward stochastic evolution equation for Hilbert space valued processes. This, in turn, is studied by taking finite dimensional projections and then taking the limit. This is the Galerkin approximation method which has been used by several authors (See, e.g. Ref. [15]).

Yosida approximations of stochastic differential equations in infinite dimension have been studied in Refs. [16–20]. The authors consider Yosida approximations of various classes of stochastic differential equations with Poisson jumps.

The authors in Ref. [21] prove the existence and uniqueness of a solution for a class of backward stochastic differential equations driven by a geometric Brownian motion with a sub-differential operator, by means of the Moreau-Yosida approximation method (see Ref. [22] for this method). Using approximation tools, they provide a probabilistic interpretation for the viscosity solutions of a kind of non-linear variational inequalities.

In the same area, the authors in Ref. [23] deal with a class of mean-field backward stochastic differential equations with a sub-differential operator corresponding to a lower semi-continuous convex function. Using Yosida approximation tools, they establish the existence and uniqueness of the solution. As an application, they give a probabilistic interpretation for the viscosity solutions of a class of non-local parabolic variational inequalities.

The authors in Ref. [24] propose and analyze multivalued stochastic differential equations (MSDEs) with maximal monotone operators driven by semimartingales with jumps. They introduce some methods of approximation of solutions of MSDEs based on discretization of the processes and on Yosida approximation of the monotone operator. Their paper studies the general problem of stability of solutions of MSDEs with respect to the convergence of the driving semimartingales.

Bahlali et al. [25] deal with reflected backward stochastic differential equations (RBSDEs) with both monotone and locally monotone coefficients and square-integrable terminal data. Existence and uniqueness of the solution are established under a polynomial growth condition on the coefficient, using Yosida approximation tools. The authors also give an application to the homogenization of multivalued partial differential equations.

The aim of our paper differs from the one proposed in Ref. [26], as it concentrates on BSDEs instead of SDEs. It also differs from the approach described in Ref. [7] by integrating the idea of L2-convergence of the Yosida approximation. This integration offers a possible technique for solving multivalued differential equations.

This paper is composed of four sections. Section 2 introduces some notation, the Yosida approximation approach and preliminary results. Section 3 establishes a result concerning the L2-convergence rate of the solution of the backward stochastic differential equation with jumps with respect to the Yosida approximation. In Section 4, we carry out a convergence rate of the Yosida approximation to the semi-linear backward stochastic differential equation in infinite dimension.

2. Preliminaries and notations

Let (Ω, F, P) be a probability space with a filtration (F_t)_{t ∈ [0,T]} ⊂ F. Let Ξ, H be two separable Hilbert spaces and H* the dual space of H. Let V be a Banach space dense in H. Let us assume that V is uniformly convex with uniformly convex dual V*. It follows that H* ⊂ V* continuously and densely. Then, by the identification of H and H* via the Riesz isomorphism, we get

V \subset H \subset V^*.
The Milman-Pettis theorem (see, e.g. Yosida [[27], p. 127]) states that every uniformly convex Banach space is reflexive. So, V is a reflexive Banach space.

Following [28], we introduce a cylindrical Wiener process in Ξ as a family (W(t), t ≥ 0) of linear mappings Ξ → L2(Ω) such that:

  1. For every h ∈ Ξ, {W(t)h, t ≥ 0} is a real (continuous) Wiener process;

  2. For every h, k ∈ Ξ and t, s ≥ 0, \mathbb{E}\left[ W(t)h \cdot W(s)k \right] = (t \wedge s)\,(h, k)_\Xi.
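As a purely illustrative aside (not part of the original text), the covariance identity in item 2 can be checked by a short Monte Carlo sketch for a finite-dimensional truncation of the cylindrical Wiener process; the dimension, the vectors h, k, the times and the sample size below are arbitrary assumptions.

```python
import numpy as np

# Monte Carlo sketch of E[W(t)h W(s)k] = (t ^ s)(h, k)_Xi for a cylindrical
# Wiener process, truncated to Xi = R^d with the canonical basis.
# Dimensions, vectors and sample size are illustrative assumptions only.
rng = np.random.default_rng(3)
d, n_paths = 6, 200_000
t, s = 0.7, 0.4                                   # here t > s, so t ^ s = s
h = rng.standard_normal(d)
k = rng.standard_normal(d)

# W(t)h = sum_n beta_n(t) (h, e_n) with independent scalar Brownian motions beta_n.
beta_s = np.sqrt(s) * rng.standard_normal((n_paths, d))                # beta_n(s)
beta_t = beta_s + np.sqrt(t - s) * rng.standard_normal((n_paths, d))   # beta_n(t)
Wt_h = beta_t @ h
Ws_k = beta_s @ k
print("sample E[W(t)h W(s)k]:", np.mean(Wt_h * Ws_k))
print("(t ^ s)(h, k)_Xi     :", min(t, s) * float(h @ k))
```

The two printed numbers should agree up to Monte Carlo error, which is the content of item 2 for the truncated process.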

Let (E, \mathcal{B}(E)) be a measurable space, where E is a topological vector space. Furthermore, let ξ(t) be a Lévy process on E and denote by ν(dx) the Lévy measure of ξ. Denote by L2(ν) the L2-space of square-integrable H-valued measurable functions associated with ν.

Set p(t) = Δξ(t) = ξ(t) − ξ(t−). Then p = {p(t), t ∈ D_p} is a stationary Poisson point process on E with characteristic measure ν. Denote by N(dt, dx) the Poisson counting measure associated with the Lévy process,

N(t, A) = \sum_{s \in D_p,\ s \le t} \mathbf{1}_A(p(s)),

and by \tilde{N}(dt, dx) = N(dt, dx) - dt\,\nu(dx) the compensated Poisson random measure. The filtration is defined as follows:

\mathcal{F}_t = \sigma\big( W_s,\ N(s, A);\ A \in \mathcal{B}(E),\ s \le t \big), \qquad t \ge 0.

We denote by \mathcal{P} the predictable σ-field on Ω × [0, T]. Introduce now the following spaces:

  1. L2(0, T, H): the set of all F_t-progressively measurable processes taking values in H, such that

\|x\| = \left( \mathbb{E}\int_0^T |x(t)|^2\,dt \right)^{1/2} < \infty.

  2. L_2(Ξ, H): the set of Hilbert-Schmidt operators from Ξ to H, that is,

L_2(\Xi, H) = \left\{ \psi \in L(\Xi, H)\ \Big|\ \sum_{n=1}^{\infty} |\psi e_n|_H^2 < \infty \right\},

where \{e_n\}_{n=1}^{\infty} is an orthonormal basis of Ξ. The set L_2(Ξ, H) is a Hilbert space.

  3. L2(ν): the L2-space of square-integrable H-valued measurable functions Q: E → H associated with ν, that is,

|Q|_{L^2(\nu)}^2 = \int_E |Q(x)|_H^2\,\nu(dx) < \infty.

Moreover, besides the above hypotheses on the cylindrical Wiener process, we are given:

  1. A positive number T > 0;

  2. A map f: [0, T] ×Ω × V × L2(Ξ, H) × L2(ν) → H.

  3. A final data X \in L^2(\Omega, \mathcal{F}_T, H).

  4. A bounded linear operator A: D(A) = V → V*, where D(A) = {v ∈ V, Av ∈ H}. We assume that the operator A is monotone, meaning:

(5) \quad {}_{V}\langle v, Av \rangle_{V^*} \ge 0, \qquad \forall v \in D(A).

Now, we assume the following useful hypothesis denoted by Hyp.1:

  1. f is measurable from \mathcal{P} \otimes \mathcal{B}(H) \otimes \mathcal{B}(L_2(\Xi, H)) \otimes \mathcal{B}(L^2(\nu)) to \mathcal{B}(H), and \mathbb{E}\int_0^T |f(s, 0, 0, 0)|_H^2\,ds < +\infty.

  2. There exists a constant C > 0 such that, \mathbb{P}-almost surely and for almost every t ∈ [0, T], the following holds for all Y^1, Y^2 ∈ H, Z^1, Z^2 ∈ L_2(Ξ, H) and Q^1, Q^2 ∈ L^2(ν):

|f(t, Y^1, Z^1, Q^1) - f(t, Y^2, Z^2, Q^2)|_H \le C\left( |Y^1 - Y^2|_H + \|Z^1 - Z^2\|_{L_2(\Xi, H)} + |Q^1 - Q^2|_{L^2(\nu)} \right).

In general, the duality mapping defined below is multivalued.

Definition 2.1.

The duality mapping J: V → V* is defined by:

(6) \quad J(x) = \left\{ x^* \in V^*\ \big|\ {}_{V^*}\langle x^*, x \rangle_V = |x|_V^2 = |x^*|_{V^*}^2 \right\}, \qquad x \in V.

Under the above hypotheses on V and V*, we get the following result:

Theorem 2.2.

[20] Let V be a Banach space. If V* is strictly convex, then the duality mapping J: V → V* is single-valued.

For the detailed proof, see Theorem 1.2 in Ref. [8].

Definition 2.3.

The inverse mapping J−1: V* → V is defined by:

(7) \quad J^{-1}(x^*) = \left\{ y \in V\ \big|\ {}_{V^*}\langle x^*, y \rangle_V = |x^*|_{V^*}^2 = |y|_V^2 \right\}, \qquad x^* \in V^*.

The inverse mapping J−1: V* → V is single-valued. For the proof, see [[29], Proposition 32.22] and [[20], Proposition 3.13.].

We will now provide an approximation of the operator A, as mentioned in Ref. [8].

Definition 2.4.

For every x ∈ V and λ > 0, the Yosida approximation of A is defined by the operator Aλ: V → V* as

(8) \quad A_\lambda x := \frac{1}{\lambda}\, J(x - J_\lambda x),

where the resolvent J_λ: V → V of the operator A is defined by J_λ x = x_λ, with x_λ the unique solution to the equation:

(9) \quad 0 = J(x_\lambda - x) + \lambda A x_\lambda.

The uniqueness of xλ was proved by [20] [Proposition 3.17. p. 36]. According to [8] [Proposition 1.3], Aλ is single-valued, monotone, bounded on bounded subsets and semi-continuous from V to V*. The resolvent can be written as

(10) \quad J_\lambda x := (J + \lambda A)^{-1} J x.
Lemma 2.5.

Equation (8) can be reformulated as:

(11) \quad A_\lambda(x) = (A^{-1} + \lambda J^{-1})^{-1} x, \qquad \forall x \in V.

Proof. Let x ∈ V and let J_λ(x) be the resolvent of the operator A defined by equation (10). By the definition of the Yosida approximation and the homogeneity of J^{-1} (see Ref. [20]), Equation (8) can be written as

J_\lambda(x) = x - \lambda J^{-1}(A_\lambda(x)).

Using the fact that Aλ(x) = A(Jλ(x)) for all x ∈ V ([[20], Proposition 3.19]) and inserting this into the resolvent equation (9), we obtain Aλ(x) = A(x − λJ−1(Aλ(x))) or equivalently, x = (A−1 + λJ−1)(Aλ(x)). Since Aλ is single-valued, we conclude (11).
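As a minimal numerical illustration (added here, not taken from the original analysis), the identities in Definition 2.4 and Lemma 2.5 can be checked in the simplest finite-dimensional Hilbert setting V = H = V* = R^n, where the duality mapping J reduces to the identity; the matrix A and the vector x below are arbitrary assumptions.

```python
import numpy as np

# Finite-dimensional Hilbert-space illustration: V = H = V* = R^n, J = identity.
# A is a monotone (positive semi-definite) matrix chosen only for illustration.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T                        # symmetric positive semi-definite, hence monotone
x = rng.standard_normal(4)
I = np.eye(4)

for lam in [1.0, 0.1, 0.01, 0.001]:
    J_lam_x = np.linalg.solve(I + lam * A, x)        # resolvent: J_lambda x = (I + lam A)^{-1} x
    A_lam_x = (x - J_lam_x) / lam                    # Yosida approximation, Definition 2.4 with J = I
    A_lam_x_alt = np.linalg.solve(np.linalg.inv(A) + lam * I, x)   # Lemma 2.5: (A^{-1} + lam J^{-1})^{-1} x
    print(f"lam = {lam:6.3f}   "
          f"|A_lam x - (A^-1 + lam I)^-1 x| = {np.linalg.norm(A_lam_x - A_lam_x_alt):.2e}   "
          f"|A_lam x - A x| = {np.linalg.norm(A_lam_x - A @ x):.2e}")
```

In this setting A_λ = A(I + λA)^{-1}, which is exactly the classical Hilbert-space form of the Yosida approximation recalled in Example 4.6 below, and the second printed column illustrates the convergence A_λ x → Ax as λ → 0.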

3. Yosida approximation

Let H be a separable Hilbert space and V a reflexive Banach space, continuously and densely embedded in H. We identify H with its dual space H*, but not V with its dual space V*. Then, we get

V \subset H \subset V^*.

We denote by |·|_V, |·|_{V*}, |·|_H the norms in V, V* and H, respectively, and by ⟨·, ·⟩ the duality product between V and V*. We introduce the following mapping:

A: \Omega \to L(V, V^*),

which verifies the following coercivity condition (L1):

There exist c_1 ≥ 0, c_2 ∈ \mathbb{R} such that for all v ∈ V, t ∈ [0, T], we have

(L1) \quad 2\,{}_{V^*}\langle Av, v \rangle_V + c_1 |v|_H^2 \ge c_2 |v|_V^2.

In this section, we are interested in the Yosida approximation of the following semi-linear backward stochastic differential equation in infinite dimension:

(12) \quad dY_t = A Y_t\,dt + f(t, Y_t, Z_t, Q_t)\,dt + Z_t\,dW_t + \int_E Q_t(x)\,\tilde{N}(dt, dx), \qquad Y_T = X \in H.

Let us consider the family of approximating equations of (12):

(13) \quad dY_t^\lambda = A_\lambda Y_t^\lambda\,dt + f(t, Y_t^\lambda, Z_t^\lambda, Q_t^\lambda)\,dt + Z_t^\lambda\,dW_t + \int_E Q_t^\lambda(x)\,\tilde{N}(dt, dx), \quad \lambda > 0, \qquad Y_T^\lambda = X \in H.
Remark 3.1.

Note that, for all λ > 0, the operator A_λ is linear and bounded [[8], Proposition 2.2]; hence it is checked by the standard Picard–Lindelöf iteration method [7] that the triplet (Y^λ, Z^λ, Q^λ) is a classical solution of (13) and that it verifies, for all t ∈ [0, T],

(14) \quad Y_t^\lambda = X - \int_t^T \left[ A_\lambda Y_u^\lambda + f(u, Y_u^\lambda, Z_u^\lambda, Q_u^\lambda) \right] du - \int_t^T Z_u^\lambda\,dW_u - \int_t^T \int_E Q_u^\lambda(x)\,\tilde{N}(du, dx).
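The following is a drastically simplified, purely deterministic sketch (no Wiener or jump terms, so Z ≡ 0 and Q ≡ 0) of the Picard-type fixed-point iteration behind (14); the matrix playing the role of A_λ, the driver f, the terminal value X and the time grid are illustrative assumptions only.

```python
import numpy as np

# Deterministic toy version of (14) with Z = Q = 0:
#   Y_t = X - \int_t^T ( A_lam Y_u + f(u, Y_u) ) du,
# solved by Picard iteration on a uniform time grid (all data illustrative).
T, n = 1.0, 200
t = np.linspace(0.0, T, n + 1)
dt = T / n
A_lam = np.array([[1.0, 0.3], [0.0, 0.5]])              # illustrative bounded operator
X = np.array([1.0, -1.0])                               # terminal value
f = lambda u, y: np.sin(u) * np.ones_like(y) + 0.1 * y  # illustrative Lipschitz driver

Y = np.tile(X, (n + 1, 1))                              # initial guess: Y^(0) identically X
for k in range(50):
    drift = np.array([A_lam @ Y[i] + f(t[i], Y[i]) for i in range(n + 1)])
    Y_new = np.empty_like(Y)
    Y_new[n] = X                                        # terminal condition
    for i in range(n - 1, -1, -1):                      # integrate the previous iterate backward in time
        Y_new[i] = Y_new[i + 1] - dt * drift[i]
    diff = np.max(np.abs(Y_new - Y))
    Y = Y_new
    if diff < 1e-12:                                    # fixed point reached up to the tolerance
        break
print("Picard iterations:", k + 1, "  Y_0 =", Y[0])
```

In the full equation (13) the same fixed-point argument is carried out with the stochastic and jump integrals, which is the construction referred to in this remark.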

The following result establishes the existence and the uniqueness of the solution of (12).

Theorem 3.2.

[[7], Theorem 4.1] Assume that XL2(Ω,FT,H). Under Hypothesis Hyp.1 and Condition (L1), equation (12) has a unique progressively measurable process solution (Y, Z, Q) ∈ H × L2(Ξ, H) × L2(ν) such that:

  1. \mathbb{E}\left[ \int_0^T |Y_t|_H^2\,dt \right] < \infty, \quad \mathbb{E}\left[ \int_0^T \|Z_t\|_{L_2(\Xi,H)}^2\,dt \right] < \infty, \quad \mathbb{E}\left[ \int_0^T |Q_t|_{L^2(\nu)}^2\,dt \right] < \infty.

  2. Y_t = X - \int_t^T \left[ A Y_s + f(s, Y_s, Z_s, Q_s) \right] ds - \int_t^T Z_s\,dW_s - \int_t^T \int_E Q_s(x)\,\tilde{N}(ds, dx).

The following results will be used to prove our main result about the L2 convergence rate.

Remark 3.3.

The coercivity condition (L1) of the operator A is transferred to its Yosida approximation A_λ; this follows directly from [[17], Lemma 3.10] and [[17], Proof of Proposition 5.1]. There exist \tilde{c}_1 \ge 0, \tilde{c}_2 > 0 such that for all v ∈ V, t ∈ [0, T],

2\,{}_{V^*}\langle A_\lambda v, v \rangle_V + \tilde{c}_1 |v|_H^2 \ge \tilde{c}_2 |v|_V^2.

Lemma 3.4.

Under Condition (L1) and Hyp.1, there exists C > 0 such that for all λ > 0, we have

(15) \quad \sup_{t \in [0,T]} \mathbb{E}|Y_t^\lambda|_H^2 + \int_t^T \mathbb{E}\|Z_s^\lambda\|_{L_2(\Xi,H)}^2\,ds + \int_t^T \mathbb{E}\left[ |Y_s^\lambda|_V^2 \right] ds + \int_t^T \mathbb{E}|Q_s^\lambda|_{L^2(\nu)}^2\,ds \le C.

Proof. For fixed λ > 0, we can apply the Itô formula to |Y_t^\lambda|_H^2 and, taking expectation, we obtain:

\mathbb{E}|Y_t^\lambda|_H^2 + \int_t^T \mathbb{E}\|Z_s^\lambda\|_{L_2(\Xi,H)}^2\,ds + \int_t^T \mathbb{E}|Q_s^\lambda|_{L^2(\nu)}^2\,ds \le \mathbb{E}|Y_T|_H^2 - 2\int_t^T \mathbb{E}\,{}_{V^*}\langle A_\lambda Y_s^\lambda, Y_s^\lambda \rangle_V\,ds - 2\int_t^T \mathbb{E}\langle f(s, Y_s^\lambda, Z_s^\lambda, Q_s^\lambda), Y_s^\lambda \rangle_H\,ds.

Then, by using the coercivity condition (L1) of A_λ and the Cauchy–Schwarz inequality with α_1 > 0, we get

\mathbb{E}|Y_t^\lambda|_H^2 + \mathbb{E}\int_t^T \|Z_s^\lambda\|_{L_2(\Xi,H)}^2\,ds + \int_t^T \mathbb{E}|Q_s^\lambda|_{L^2(\nu)}^2\,ds \le \mathbb{E}|Y_T|_H^2 + \frac{1}{\alpha_1}\int_t^T \mathbb{E}|f(s, Y_s^\lambda, Z_s^\lambda, Q_s^\lambda)|_H^2\,ds + \alpha_1\int_t^T \mathbb{E}|Y_s^\lambda|_H^2\,ds + \int_t^T \left[ -\tilde{c}_2\,\mathbb{E}|Y_s^\lambda|_V^2 + \tilde{c}_1\,\mathbb{E}|Y_s^\lambda|_H^2 \right] ds.

Then, by using Hyp.1, we obtain:

\mathbb{E}|Y_t^\lambda|_H^2 + \int_t^T \mathbb{E}\|Z_s^\lambda\|_{L_2(\Xi,H)}^2\,ds + \int_t^T \mathbb{E}|Q_s^\lambda|_{L^2(\nu)}^2\,ds \le \mathbb{E}|Y_T|_H^2 + \frac{C}{\alpha_1}\int_t^T \mathbb{E}\left( |Y_s^\lambda|_H^2 + \|Z_s^\lambda\|_{L_2(\Xi,H)}^2 + |Q_s^\lambda|_{L^2(\nu)}^2 \right) ds + \int_t^T \left[ -\tilde{c}_2\,\mathbb{E}|Y_s^\lambda|_V^2 + \tilde{c}_1\,\mathbb{E}|Y_s^\lambda|_H^2 \right] ds + \frac{C}{\alpha_1}\int_t^T \mathbb{E}|f(s, 0, 0, 0)|_H^2\,ds + \alpha_1\int_t^T \mathbb{E}|Y_s^\lambda|_H^2\,ds.

Therefore, for α_1 large enough, we obtain:

\mathbb{E}|Y_t^\lambda|_H^2 + \left( 1 - \frac{C}{\alpha_1} \right)\int_t^T \mathbb{E}\|Z_s^\lambda\|_{L_2(\Xi,H)}^2\,ds + \tilde{c}_2\int_t^T \mathbb{E}|Y_s^\lambda|_V^2\,ds + \left( 1 - \frac{C}{\alpha_1} \right)\int_t^T \mathbb{E}|Q_s^\lambda|_{L^2(\nu)}^2\,ds \le \mathbb{E}|Y_T|_H^2 + C_3\int_t^T \mathbb{E}|Y_s^\lambda|_H^2\,ds + C_3\int_t^T \mathbb{E}|f(s, 0, 0, 0)|_H^2\,ds,

where C_3 = \alpha_1 + \tilde{c}_1 + \frac{C}{\alpha_1} is independent of λ. By the Gronwall lemma, we finally obtain the bound (15).

The following remark plays a fundamental role in the convergence rate of Yosida approximation.

Remark 3.5.

According to [[8], Proposition 2.2], A_λ verifies the boundedness condition

\|A_\lambda x\|_{V^*} \le \|A x\|_{V^*}

for all x ∈ D(A). Using the fact that D(A) = V, we get

\|A_\lambda Y_s^\lambda\|_{V^*}^2 \le C\,\|Y_s^\lambda\|_V^2.

Under Condition (L1) and Hyp.1, we then obtain, by applying Lemma 3.4:

(16) \quad \limsup_{\lambda \to 0} \int_t^T \mathbb{E}\left[ |A_\lambda Y_s^\lambda|_{V^*}^2 \right] ds < \infty.

4. Convergence of Yosida approximation

In this section, we prove a convergence rate of the Yosida approximation to the following semi-linear backward stochastic differential equation in infinite dimension:

(17) \quad dY_t = A Y_t\,dt + f(t, Y_t, Z_t, Q_t)\,dt + Z_t\,dW_t + \int_E Q_t(x)\,\tilde{N}(dt, dx), \qquad Y_T = X \in H.
Proposition 4.1.

Let (Y^λ, Z^λ, Q^λ) be the solution of the approximating backward stochastic differential equation (13), and assume that Hyp.1 holds. Let λ, μ > 0. Then there exists D > 0 such that:

\sup_{t \in [0,T]} \mathbb{E}|Y_t^\lambda - Y_t^\mu|_H^2 + \int_t^T \mathbb{E}\|Z_s^\lambda - Z_s^\mu\|_{L_2(\Xi,H)}^2\,ds + \int_t^T \mathbb{E}|Q_s^\lambda - Q_s^\mu|_{L^2(\nu)}^2\,ds \le D(\lambda + \mu).

Proof. Let us denote by Y_t^\lambda and Y_t^\mu two Yosida approximations of

(18) \quad dY_t = A Y_t\,dt + f(t, Y_t, Z_t, Q_t)\,dt + Z_t\,dW_t + \int_E Q_t(x)\,\tilde{N}(dt, dx), \qquad Y_T = X \in H.

By the Itô formula and taking expectation, we get

\mathbb{E}|Y_t^\lambda - Y_t^\mu|_H^2 + \int_t^T \mathbb{E}\|Z_s^\lambda - Z_s^\mu\|_{L_2(\Xi,H)}^2\,ds + \int_t^T \mathbb{E}|Q_s^\lambda - Q_s^\mu|_{L^2(\nu)}^2\,ds = -2\int_t^T \mathbb{E}\,{}_{V}\langle Y_s^\lambda - Y_s^\mu, A_\lambda Y_s^\lambda - A_\mu Y_s^\mu \rangle_{V^*}\,ds - 2\int_t^T \mathbb{E}\langle Y_s^\lambda - Y_s^\mu, f(s, Y_s^\lambda, Z_s^\lambda, Q_s^\lambda) - f(s, Y_s^\mu, Z_s^\mu, Q_s^\mu) \rangle_H\,ds.

By definition of Aλ and the bijectivity of Jλ, we have I = Jλ + J−1(λAλ). Hence:

{}_{V}\langle Y_s^\lambda - Y_s^\mu, A_\lambda Y_s^\lambda - A_\mu Y_s^\mu \rangle_{V^*} = {}_{V}\langle (J_\lambda Y_s^\lambda + J^{-1}(\lambda A_\lambda Y_s^\lambda)) - (J_\mu Y_s^\mu + J^{-1}(\mu A_\mu Y_s^\mu)), A_\lambda Y_s^\lambda - A_\mu Y_s^\mu \rangle_{V^*} = {}_{V}\langle J_\lambda Y_s^\lambda - J_\mu Y_s^\mu, A_\lambda Y_s^\lambda - A_\mu Y_s^\mu \rangle_{V^*} + {}_{V}\langle J^{-1}(\lambda A_\lambda Y_s^\lambda) - J^{-1}(\mu A_\mu Y_s^\mu), A_\lambda Y_s^\lambda - A_\mu Y_s^\mu \rangle_{V^*}.

By using Lemma 2.5, we obtain A_λ = A J_λ and A_μ = A J_μ. Then, by the monotonicity of A (5) and the fact that J^{-1} is the duality map from V* to V** = V, the first term above is non-negative, so we get

{}_{V}\langle Y_s^\lambda - Y_s^\mu, A_\lambda Y_s^\lambda - A_\mu Y_s^\mu \rangle_{V^*} \ge {}_{V}\langle J^{-1}(\lambda A_\lambda Y_s^\lambda) - J^{-1}(\mu A_\mu Y_s^\mu), A_\lambda Y_s^\lambda - A_\mu Y_s^\mu \rangle_{V^*}

= \frac{1}{\lambda}\,{}_{V}\langle J^{-1}(\lambda A_\lambda Y_s^\lambda), \lambda A_\lambda Y_s^\lambda \rangle_{V^*} + \frac{1}{\mu}\,{}_{V}\langle J^{-1}(\mu A_\mu Y_s^\mu), \mu A_\mu Y_s^\mu \rangle_{V^*}

\quad - {}_{V}\langle J^{-1}(\lambda A_\lambda Y_s^\lambda), A_\mu Y_s^\mu \rangle_{V^*} - {}_{V}\langle J^{-1}(\mu A_\mu Y_s^\mu), A_\lambda Y_s^\lambda \rangle_{V^*}

\ge \lambda |A_\lambda Y_s^\lambda|_{V^*}^2 + \mu |A_\mu Y_s^\mu|_{V^*}^2 - \mu |A_\mu Y_s^\mu|_{V^*} |A_\lambda Y_s^\lambda|_{V^*} - \lambda |A_\mu Y_s^\mu|_{V^*} |A_\lambda Y_s^\lambda|_{V^*} \ge -\frac{\lambda + \mu}{2}\left( |A_\lambda Y_s^\lambda|_{V^*}^2 + |A_\mu Y_s^\mu|_{V^*}^2 \right),

where we have used the elementary inequality 2ab ≤ a^2 + b^2. Taking expectation and using the Lipschitz condition of Hyp.1 on f, we get

\mathbb{E}|Y_t^\lambda - Y_t^\mu|_H^2 + \int_t^T \mathbb{E}\|Z_s^\lambda - Z_s^\mu\|_{L_2(\Xi,H)}^2\,ds + \int_t^T \mathbb{E}|Q_s^\lambda - Q_s^\mu|_{L^2(\nu)}^2\,ds \le (\lambda + \mu)\left[ \int_t^T \mathbb{E}|A_\lambda Y_s^\lambda|_{V^*}^2\,ds + \int_t^T \mathbb{E}|A_\mu Y_s^\mu|_{V^*}^2\,ds \right] - 2\int_t^T \mathbb{E}\langle Y_s^\lambda - Y_s^\mu, f(s, Y_s^\lambda, Z_s^\lambda, Q_s^\lambda) - f(s, Y_s^\mu, Z_s^\mu, Q_s^\mu) \rangle_H\,ds

\le \alpha\int_t^T \mathbb{E}|Y_s^\lambda - Y_s^\mu|_H^2\,ds + \frac{1}{\alpha}\int_t^T \mathbb{E}|f(s, Y_s^\lambda, Z_s^\lambda, Q_s^\lambda) - f(s, Y_s^\mu, Z_s^\mu, Q_s^\mu)|_H^2\,ds + (\lambda + \mu)\left[ \int_t^T \mathbb{E}|A_\lambda Y_s^\lambda|_{V^*}^2\,ds + \int_t^T \mathbb{E}|A_\mu Y_s^\mu|_{V^*}^2\,ds \right]

\le \alpha\int_t^T \mathbb{E}|Y_s^\lambda - Y_s^\mu|_H^2\,ds + \frac{C}{\alpha}\int_t^T \mathbb{E}\left[ |Y_s^\lambda - Y_s^\mu|_H^2 + \|Z_s^\lambda - Z_s^\mu\|_{L_2(\Xi,H)}^2 + |Q_s^\lambda - Q_s^\mu|_{L^2(\nu)}^2 \right] ds + (\lambda + \mu)\left[ \int_t^T \mathbb{E}|A_\lambda Y_s^\lambda|_{V^*}^2\,ds + \int_t^T \mathbb{E}|A_\mu Y_s^\mu|_{V^*}^2\,ds \right]

\le \left( \alpha + \frac{C}{\alpha} \right)\int_t^T \mathbb{E}|Y_s^\lambda - Y_s^\mu|_H^2\,ds + \frac{C}{\alpha}\int_t^T \mathbb{E}\left[ \|Z_s^\lambda - Z_s^\mu\|_{L_2(\Xi,H)}^2 + |Q_s^\lambda - Q_s^\mu|_{L^2(\nu)}^2 \right] ds + (\lambda + \mu)\left[ \int_t^T \mathbb{E}|A_\lambda Y_s^\lambda|_{V^*}^2\,ds + \int_t^T \mathbb{E}|A_\mu Y_s^\mu|_{V^*}^2\,ds \right].

Then, we obtain

(19) \quad \mathbb{E}|Y_t^\lambda - Y_t^\mu|_H^2 \le \mathbb{E}|Y_t^\lambda - Y_t^\mu|_H^2 + \int_t^T \mathbb{E}\|Z_s^\lambda - Z_s^\mu\|_{L_2(\Xi,H)}^2\,ds + \int_t^T \mathbb{E}|Q_s^\lambda - Q_s^\mu|_{L^2(\nu)}^2\,ds \le \left( \alpha + \frac{C}{\alpha} \right)\int_t^T \mathbb{E}|Y_s^\lambda - Y_s^\mu|_H^2\,ds + B_{\lambda,\mu},

where

B_{\lambda,\mu} = \frac{C}{\alpha}\int_t^T \mathbb{E}\left[ \|Z_s^\lambda - Z_s^\mu\|_{L_2(\Xi,H)}^2 + |Q_s^\lambda - Q_s^\mu|_{L^2(\nu)}^2 \right] ds + (\lambda + \mu)\left[ \int_t^T \mathbb{E}|A_\lambda Y_s^\lambda|_{V^*}^2\,ds + \int_t^T \mathbb{E}|A_\mu Y_s^\mu|_{V^*}^2\,ds \right].
Using the Gronwall lemma, this shows that \mathbb{E}|Y_t^\lambda - Y_t^\mu|_H^2 \le B_{\lambda,\mu}\,e^{C_1(T-t)}, which, plugged into the inequality (19), provides

\mathbb{E}|Y_t^\lambda - Y_t^\mu|_H^2 + \int_t^T \mathbb{E}\|Z_s^\lambda - Z_s^\mu\|_{L_2(\Xi,H)}^2\,ds + \int_t^T \mathbb{E}|Q_s^\lambda - Q_s^\mu|_{L^2(\nu)}^2\,ds \le B_{\lambda,\mu}\left( 1 + C_1(T-t)e^{C_1(T-t)} \right) \le B_{\lambda,\mu}\left( 1 + C_2(T-t) \right),

where C_1 = \alpha + \frac{C}{\alpha} and C_2 = C_1 e^{C_1 T}. Then, we have

\int_t^T \mathbb{E}\|Z_s^\lambda - Z_s^\mu\|_{L_2(\Xi,H)}^2\,ds + \int_t^T \mathbb{E}|Q_s^\lambda - Q_s^\mu|_{L^2(\nu)}^2\,ds \le \mathbb{E}|Y_t^\lambda - Y_t^\mu|_H^2 + \int_t^T \mathbb{E}\|Z_s^\lambda - Z_s^\mu\|_{L_2(\Xi,H)}^2\,ds + \int_t^T \mathbb{E}|Q_s^\lambda - Q_s^\mu|_{L^2(\nu)}^2\,ds \le \left( 1 + C_2(T-t) \right)\left\{ \frac{C}{\alpha}\int_t^T \mathbb{E}\left[ \|Z_s^\lambda - Z_s^\mu\|_{L_2(\Xi,H)}^2 + |Q_s^\lambda - Q_s^\mu|_{L^2(\nu)}^2 \right] ds + (\lambda + \mu)\left[ \int_t^T \mathbb{E}|A_\lambda Y_s^\lambda|_{V^*}^2\,ds + \int_t^T \mathbb{E}|A_\mu Y_s^\mu|_{V^*}^2\,ds \right] \right\}.

By subtraction, we have:

\left( 1 - (1 + C_2(T-t))\frac{C}{\alpha} \right)\int_t^T \mathbb{E}\left[ \|Z_s^\lambda - Z_s^\mu\|_{L_2(\Xi,H)}^2 + |Q_s^\lambda - Q_s^\mu|_{L^2(\nu)}^2 \right] ds \le (1 + C_2(T-t))(\lambda + \mu)\left[ \int_t^T \mathbb{E}|A_\lambda Y_s^\lambda|_{V^*}^2\,ds + \int_t^T \mathbb{E}|A_\mu Y_s^\mu|_{V^*}^2\,ds \right].

For α larger than (1 + C_2 T)C, this provides that there exists D > 0 such that

\int_t^T \mathbb{E}\left[ \|Z_s^\lambda - Z_s^\mu\|_{L_2(\Xi,H)}^2 + |Q_s^\lambda - Q_s^\mu|_{L^2(\nu)}^2 \right] ds \le D(\lambda + \mu)\left[ \int_t^T \mathbb{E}|A_\lambda Y_s^\lambda|_{V^*}^2\,ds + \int_t^T \mathbb{E}|A_\mu Y_s^\mu|_{V^*}^2\,ds \right].

Plugging this into (19) and using the Gronwall bound again, we deduce that

\mathbb{E}|Y_t^\lambda - Y_t^\mu|_H^2 \le D(\lambda + \mu)\left[ \int_t^T \mathbb{E}|A_\lambda Y_s^\lambda|_{V^*}^2\,ds + \int_t^T \mathbb{E}|A_\mu Y_s^\mu|_{V^*}^2\,ds \right].

Since A_λ and A_μ verify the boundedness condition introduced in Remark 3.5, the integrals on the right-hand side are bounded uniformly in λ and μ, and the result holds.

Remark 4.2.

By using Lemma 3.4 and Proposition 4.1, as λ goes to 0, the triplet (Y^λ, Z^λ, Q^λ) converges to a triplet (Y, Z, Q) in the space L2(Ω, H) × L2(Ξ, H) × L2(ν).

The following theorem shows that the limit (Y, Z, Q) is a solution of equation (12).

Theorem 4.3.

Under Hyp.1, we have

(20) \quad \sup_{t \in [0,T]} \mathbb{E}|Y_t - Y_t^\lambda|_H^2 + \int_t^T \mathbb{E}\|Z_s - Z_s^\lambda\|_{L_2(\Xi,H)}^2\,ds + \int_t^T \mathbb{E}|Q_s^\lambda - Q_s|_{L^2(\nu)}^2\,ds \longrightarrow 0 \quad \text{as } \lambda \to 0,
where (Y, Z, Q) ∈ L2(Ω, H) × L2(Ξ, H) × L2(ν) is the unique solution of (12).

Proof. Proposition 4.1 shows that (Y^λ)_{λ>0} and (Z^λ)_{λ>0} are predictable Cauchy families in the complete spaces L2(Ω, H) and L2(Ξ, H), and that (Q^λ)_{λ>0} is a progressively measurable Cauchy family in L2(ν). Hence there exist processes Y, Z and Q, F-progressively measurable, such that (Y^λ)_{λ>0}, (Z^λ)_{λ>0} and (Q^λ)_{λ>0} converge, respectively, to Y in L2(Ω, H), Z in L2(Ξ, H) and Q in L2(ν).

Now, it is sufficient to prove that this triplet (Y, Z, Q) coincides with the solution of (12). To this end, using (14), we estimate

\mathbb{E}\left\| Y_t - X + \int_t^T \left[ A Y_u + f(u, Y_u, Z_u, Q_u) \right] du + \int_t^T Z_u\,dW_u + \int_t^T \int_E Q_u(x)\,\tilde{N}(du, dx) \right\|_H^2

\le 2\,\mathbb{E}|Y_t - Y_t^\lambda|_H^2 + 2\,\mathbb{E}\left\| Y_t^\lambda - X + \int_t^T \left[ A Y_u + f(u, Y_u, Z_u, Q_u) \right] du + \int_t^T Z_u\,dW_u + \int_t^T \int_E Q_u(x)\,\tilde{N}(du, dx) \right\|_H^2

\le 8\left[ \mathbb{E}\left\| \int_t^T (A Y_u - A_\lambda Y_u^\lambda)\,du \right\|_H^2 + \mathbb{E}\left\| \int_t^T \left( f(u, Y_u, Z_u, Q_u) - f(u, Y_u^\lambda, Z_u^\lambda, Q_u^\lambda) \right) du \right\|_H^2 + \mathbb{E}\left\| \int_t^T (Z_u - Z_u^\lambda)\,dW_u \right\|_H^2 + \mathbb{E}\left\| \int_t^T \int_E (Q_u(x) - Q_u^\lambda(x))\,\tilde{N}(du, dx) \right\|_H^2 \right] + 2\,\mathbb{E}|Y_t - Y_t^\lambda|_H^2

= 8\left[ I_1 + I_2 + I_3 + I_4 \right] + 2 I_5.

We estimate each term separately. First note that, thanks to the Hille–Yosida approximation and [[17], Lemma 3.9], we have

(21) \quad \lim_{\lambda \to 0} A_\lambda x = A x, \quad \text{for all } x \in D(A).

Then

I_1 \le 2\,\mathbb{E}\left\| \int_t^T A_\lambda (Y_u - Y_u^\lambda)\,du \right\|_H^2 + 2\,\mathbb{E}\left\| \int_t^T (A - A_\lambda) Y_u\,du \right\|_H^2 \le \int_t^T \left[ \tilde{c}_2\,\mathbb{E}|Y_u - Y_u^\lambda|_V^2 + \tilde{c}_1\,\mathbb{E}|Y_u - Y_u^\lambda|_H^2 \right] du \le C\int_t^T \mathbb{E}|Y_u - Y_u^\lambda|_H^2\,du \longrightarrow 0

as λ → 0, using (21) and Proposition 4.1. The term I_2 is estimated by applying the Lipschitz condition together with the Cauchy–Schwarz inequality, and this yields

I_2 \le 2\,\mathbb{E}\left\| \int_t^T \left( f(u, Y_u, Z_u, Q_u) - f(u, Y_u^\lambda, Z_u^\lambda, Q_u^\lambda) \right) du \right\|_H^2 \le 2(T-t)\,\mathbb{E}\int_t^T \left| f(u, Y_u, Z_u, Q_u) - f(u, Y_u^\lambda, Z_u^\lambda, Q_u^\lambda) \right|_H^2 du \le C(T-t)\,\mathbb{E}\int_t^T \left[ |Y_u - Y_u^\lambda|_H^2 + \|Z_u - Z_u^\lambda\|_{L_2(\Xi,H)}^2 + |Q_u - Q_u^\lambda|_{L^2(\nu)}^2 \right] du \longrightarrow 0

as λ → 0.

Finally, the terms I_3, I_4 and I_5 are handled by Proposition 4.1. Then the result holds.

Corollary 4.4.

Assume that Hyp.1 holds, then there exists a unique triplet (Y, Z, Q) ∈ L2(Ω, H) × L2(Ξ, H) × L2(ν) which satisfies (12), such that:

(22) \quad \sup_{t \in [0,T]} \mathbb{E}|Y_t - Y_t^\lambda|_H^2 + \int_t^T \mathbb{E}\|Z_s - Z_s^\lambda\|_{L_2(\Xi,H)}^2\,ds + \int_t^T \mathbb{E}|Q_s^\lambda - Q_s|_{L^2(\nu)}^2\,ds \le C\lambda.

Proof. Thanks to Proposition 4.1, we compute:

\sup_{t \in [0,T]} \mathbb{E}|Y_t^\lambda - Y_t|_H^2 + \int_t^T \mathbb{E}\|Z_s - Z_s^\lambda\|_{L_2(\Xi,H)}^2\,ds + \int_t^T \mathbb{E}|Q_s^\lambda - Q_s|_{L^2(\nu)}^2\,ds \le 2\sup_{t \in [0,T]} \mathbb{E}|Y_t^\lambda - Y_t^\mu|_H^2 + 2\sup_{t \in [0,T]} \mathbb{E}|Y_t^\mu - Y_t|_H^2 + 2\int_t^T \mathbb{E}\|Z_s^\mu - Z_s\|_{L_2(\Xi,H)}^2\,ds + 2\int_t^T \mathbb{E}\|Z_s^\lambda - Z_s^\mu\|_{L_2(\Xi,H)}^2\,ds + 2\int_t^T \mathbb{E}|Q_s^\mu - Q_s|_{L^2(\nu)}^2\,ds + 2\int_t^T \mathbb{E}|Q_s^\lambda - Q_s^\mu|_{L^2(\nu)}^2\,ds \le 2D(\lambda + \mu) + 2\sup_{t \in [0,T]} \mathbb{E}|Y_t^\mu - Y_t|_H^2 + 2\int_t^T \mathbb{E}\|Z_s^\mu - Z_s\|_{L_2(\Xi,H)}^2\,ds + 2\int_t^T \mathbb{E}|Q_s^\mu - Q_s|_{L^2(\nu)}^2\,ds.

Then, letting μ go to zero, Theorem 4.3 yields:

\sup_{t \in [0,T]} \mathbb{E}|Y_t - Y_t^\lambda|_H^2 + \int_t^T \mathbb{E}\|Z_s - Z_s^\lambda\|_{L_2(\Xi,H)}^2\,ds + \int_t^T \mathbb{E}|Q_s^\lambda - Q_s|_{L^2(\nu)}^2\,ds \le 2D\lambda.
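As a purely illustrative sanity check of the order of the bound (22) (and not a verification of the stochastic result itself), one can compare, in a deterministic noise-free toy with f ≡ 0, the backward flow generated by a monotone matrix A with the flow generated by its Yosida approximation A_λ; the matrix, terminal vector and values of λ below are assumptions made only for this experiment.

```python
import numpy as np

# Deterministic, noise-free toy (f = 0, Z = Q = 0): the exact backward equation
# dY_t = A Y_t dt, Y_T = X, has solution Y_t = exp(-A (T - t)) X, and replacing A
# by its Yosida approximation A_lam = (I + lam A)^{-1} A gives Y_t^lam.
# The squared sup-error is then compared with the order of the bound (22).
rng = np.random.default_rng(1)
n, T = 5, 1.0
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)       # symmetric positive definite, hence monotone (illustrative)
X = rng.standard_normal(n)
I = np.eye(n)

def flow(M, s, v):
    """exp(-M s) v for a symmetric matrix M, via eigen-decomposition."""
    w, U = np.linalg.eigh(M)
    return U @ (np.exp(-w * s) * (U.T @ v))

ts = np.linspace(0.0, T, 101)
for lam in [0.1, 0.05, 0.025, 0.0125]:
    A_lam = np.linalg.solve(I + lam * A, A)   # Yosida approximation of A (symmetric here)
    err2 = max(np.sum((flow(A, T - t, X) - flow(A_lam, T - t, X)) ** 2) for t in ts)
    print(f"lambda = {lam:7.4f}   sup_t |Y_t - Y_t^lam|^2 = {err2:.3e}   ratio to lambda = {err2 / lam:.3e}")
```

In this smooth deterministic toy the squared error actually decays like λ², so the printed ratio tends to zero, which is consistent with (and stronger than) the Cλ bound in (22).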

Example 4.5.

Let Λ ⊂ \mathbb{R}^d be an open set, and denote by C_0^\infty(Λ) the set of all infinitely differentiable real-valued functions defined on Λ with compact support. For u ∈ C_0^\infty(Λ), let us define

\|u\|_{1,2} := \left( \int_\Lambda \left( |u(\xi)|^2 + |\nabla u(\xi)|^2 \right) d\xi \right)^{1/2}.

Let us define H_0^{1,2}(Λ) as the completion of C_0^\infty(Λ) with respect to ‖·‖_{1,2}. Then, for A = −Δ and the Gelfand triple H_0^{1,2}(Λ) ⊂ L^2(Λ) ⊂ (H_0^{1,2}(Λ))^*, A satisfies (L1).

Proof. For the detailed proof, we refer to [28] [p. 62].
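The coercivity in Example 4.5 can also be visualised with a simple finite-difference check in dimension d = 1 (an illustration added here, not part of the original example); with c_1 = c_2 = 1, condition (L1) for A = −Δ reduces to 2∫|v′|² + ∫|v|² ≥ ∫|v|² + ∫|v′|², i.e. ∫|v′|² ≥ 0. The grid and the random test functions below are arbitrary assumptions.

```python
import numpy as np

# Discrete check of (L1) for Example 4.5 with d = 1, Lambda = (0, 1), A = -Laplacian
# with Dirichlet boundary conditions, using c1 = c2 = 1 (illustrative choices).
rng = np.random.default_rng(2)
m = 200                                        # number of interior grid points
h = 1.0 / (m + 1)
for _ in range(5):
    v = rng.standard_normal(m)                 # random interior values
    v_ext = np.concatenate(([0.0], v, [0.0]))  # Dirichlet boundary: v = 0 at the endpoints
    grad = np.diff(v_ext) / h                  # forward differences, approximates v'
    Av_dot_v = np.sum(grad ** 2) * h           # <-Laplacian v, v> = |v'|_{L2}^2 (discrete summation by parts)
    v_L2 = np.sum(v ** 2) * h                  # |v|_{L2}^2
    v_V2 = v_L2 + np.sum(grad ** 2) * h        # |v|_{1,2}^2 = |v|_{L2}^2 + |v'|_{L2}^2
    assert 2 * Av_dot_v + 1.0 * v_L2 >= 1.0 * v_V2 - 1e-12
print("discrete (L1) verified with c1 = c2 = 1 for random test functions")
```

The check passes trivially because, after the discrete summation by parts, the inequality reduces to the non-negativity of the discrete Dirichlet energy.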

Example 4.6.

[27] Consider the case V = H = V*. If A is Lipschitz, the Yosida approximation [[8], Proposition 2.3] is given by

(23) \quad A_\lambda x = \frac{1}{\lambda}(x - J_\lambda x), \qquad x \in H,

where the resolvent J_λ of A is defined on H by

(24) \quad J_\lambda = (I + \lambda A)^{-1}.

Then A satisfies (L1).

Proof. For more details, we refer to [[28], p. 59].

Example 4.7.

[28] Let p > 2 and Γ ⊂ \mathbb{R}^n. Let V := L^p(Γ), H := L^2(Γ) and V^* := L^{\frac{p}{p-1}}(Γ), and define A: D(A) = V → V^* by Au := -u|u|^{p-2}, u ∈ V. Then A satisfies (L1).

Proof. For a detailed proof, we refer to [[28], p. 61].

References

1Pardoux E, Peng S. Adapted solution of a backward stochastic differential equation. Syst Control Lett. 1990; 14(1): 55-61. doi: 10.1016/0167-6911(90)90082-6.

2Abidi H, Amami R, Pontier M. Infinite horizon impulse control problem with jumps and continuous switching costs. Arab J Math Sci. 2022; 28(1): 2-36. doi: 10.1108/ajms-10-2020-0088.

3Hamadene S, Ouknine Y. Reflected backward stochastic differential equation with jumps and random obstacle. Electron J Probab. 2003; 8–2: 1-20. doi: 10.1214/ejp.v8-124.

4Hamadene S, Hassani M. BSDEs with two reflecting barriers driven by a Brownian and a Poisson noise and related Dynkin game. Electron J Probab. 2006; 11: 121-45. Paper No. 5. doi: 10.1214/ejp.v11-303.

5Pardoux E, Peng S. Backward stochastic differential equations and quasilinear parabolic partial differential equations. Stochastic Differential Equations and their Applications. Lect Not Cont Inf Sci. 1992; 176: 200-17. Springer.

6Abidi H, Pettersson R. Spatial convergence for semi-linear backward stochastic differential equations in Hilbert space: a mild approach. Comput Appl Mathematics. 2020; 39(2): 1-11. doi: 10.1007/s40314-020-1121-0.

7Oksendal B, Proske F, Zhang T. Backward stochastic partial differential equation with jump and application to optimal control of random jump fields. Stochastics. 2006; 77(5): 381-99. doi: 10.1080/17442500500213797.

8Barbu V. Analysis and control of nonlinear infinite dimensional systems. Mathematics Sci Eng. 1993; 190: 1-476.

9Guatteri G, Tessitore G. On the backward stochastic Riccati equation in infinite dimensions. SIAM J Control Optim. 2005; 44(1): 159-94. doi: 10.1137/s0363012903425507.

10Hu Y, Peng S. Adapted solution of a backward semilinear stochastic evolution equation. Stochastic Anal Appl. 1991; 9(4): 445-59. doi: 10.1080/07362999108809250.

11Tessitore G. Existence, uniqueness and space regularity of the adapted solution of a backward SPDE. Stochastic Anal Appl. 1996; 14(4): 461-86. doi: 10.1080/07362999608809451.

12Bensoussan A. Stochastic maximum principle for distributed parameter systems. J Franklin Inst. 1983; 315(5-6): 387-406. doi: 10.1016/0016-0032(83)90059-5.

13Bensoussan A. Lectures on stochastic control. In: Mitter SK, Moro A (Eds). Nonlinear filtering and stochastic control. Springer; 1982. Lecture Notes in Mathematics; 972.

14Hu Y, Peng S. Maximum principle for semilinear stochastic evolution control systems. Stochastics Stochastic Rep. 1990; 33(3-4): 159-80. doi: 10.1080/17442509008833671.

15Bensoussan A. Stochastic maximum principle for systems with partial information and application to the separation principle. Applied stochastic analysis. Amsterdam: Gordon and Breach; 1991. 157-72.

16Govindan TE. Yosida approximations of stochastic differential equations in infinite dimensions and applications. Springer; 2016.

17Liu W, Stephan M. Yosida approximations for multivalued stochastic partial differential equations driven by Lévy noise on a Gelfand triple. J Math Anal Appl. 2014; 410(1): 158-78. doi: 10.1016/j.jmaa.2013.08.016.

18Pettersson R. Yosida approximation for multivalued SDEs. Stochastics Stoch Rep. 1995; 52(1-2): 107-20. doi: 10.1080/17442509508833965.

19Pettersson R. Existence theorem and Wong-Zakai approximation for multivalued SDEs. Prob Math Stat. 1997; 17: 22-45.

20Stephan M. Yosida approximations for multivalued stochastic differential equations on Banach spaces via a Gelfand triple. Thesis University Bielefeld; 2012.

21Yang F, Ren Y, Hu L. Multi-valued backward stochastic differential equations driven by G-Brownian motion and its applications. Math Methods Appl Sci. 2017; 40(13): 4696-708. doi: 10.1002/mma.4335.

22Hintermüller M, Schiela A, Wollner W. The length of the primal-dual path in Moreau-Yosida-based path-following methods for state constrained optimal control. SIAM J Optim. 2014; 24(1): 108-26. doi: 10.1137/120866762.

23Lu W, Ren Y, Hu L. Mean-field backward stochastic differential equations with subdifferential operator and its applications. Stat Probab Lett. 2015; 106: 73-81. doi: 10.1016/j.spl.2015.06.022.

24Maticiuc L, Rascanu A, Slominski L. Multivalued monotone stochastic differential equations with jumps. Stochastics Dyn. 2017; 17(3): 1750018. doi: 10.1142/s0219493717500186.

25Bahlali K, Essaky EH, Ouknine Y. Reflected backward stochastic differential equation with locally Monotone coefficient. Stochastic Anal Appl. 2004; 22(4): 939-70. doi: 10.1081/sap-120037626.

26Wei L, Matthias S. Yosida approximations for multivalued stochastic partial differential equations driven by Lévy noise on a Gelfand triple. J Math Anal Appl 2014; 410(2014): 158-78.

27Brezis H, Crandall MG. Uniqueness of solutions of the initial-value problem for ut − Δϕ(u) = 0. J Math Pures Appl. 1979; 58: 153-63.

28Prévôt C, Röckner M. A concise course on stochastic partial differential equations. Lecture Notes in Mathematics. Springer-Verlag Berlin Heidelberg; 2007.

29Zeidler E. Nonlinear functional analysis and its applications. II/A. New York: Springer-Verlag; 1990.

Acknowledgements

To the memory of Professor Habib Ouerdiane (1953-2023).

Corresponding author

Rim Amami can be contacted at: rabamami@iau.edu.sa
