Current surrogate modeling methods for time-dependent reliability analysis implement a double-loop procedure, with the computation of the extreme value response in the outer loop and global optimization over time in the inner loop. The computational effort of the double-loop procedure is quite high, even though efforts have been made to improve the efficiency of the inner loop. This paper proposes a single-loop Kriging (SILK) surrogate modeling method for time-dependent reliability analysis. The optimization loop used in current methods is completely removed in the proposed method. A single surrogate model is built for the purpose of time-dependent reliability assessment. Training points of the random variables and of time are generated at the same level instead of at two separate levels. The surrogate model is refined adaptively based on a learning function modified from time-independent reliability analysis and a newly developed convergence criterion. Strategies for building the surrogate model are investigated for problems with and without stochastic processes. Results of three numerical examples show that the proposed single-loop procedure significantly increases the efficiency of time-dependent reliability analysis without sacrificing accuracy.

## Introduction

Time-dependent reliability analysis computes the probability that a system or component fulfills its intended function over a time interval of interest without failure. Because it is related to system performance degradation [1,2], lifetime cost estimation [1], maintenance [3], lifetime testing [4], and system resilience [5], time-dependent reliability analysis has received considerable attention in recent decades. Its biggest challenge is performing the reliability analysis efficiently and accurately while accounting for the correlation of the response over time.

This paper focuses on component reliability, which has been extensively studied in recent years. For instance, the PHI2 method computes the time-variant reliability by estimating the upcrossing rate at each time instant [6]; Singh and Mourelatos [7] proposed a composite limit-state function method and investigated importance sampling [8], Markov chain Monte Carlo (MCMC) simulation [9], subset simulation [10], and total probability theory [11] for this estimation; Hagen and Tvedt [12,13] proposed a parallel system approach to solve time-dependent problems with binomial distributions; Li and Chen developed a reliability analysis method for dynamic response using a new probability density evolution approach [14]; along with these methods, Zhang and Du [15] derived analytical expressions for the upcrossing rate of function generator mechanisms; Du [16] proposed an envelope function method based on first-order approximation; Hu and Du investigated the upcrossing rate method [17], the joint upcrossing rate method [18], and a first-order sampling approach [19]; Wang and Wang [20,21] presented a nested extreme value response method to estimate time-dependent reliability; and Jiang et al. [22] studied time-dependent reliability assessment through stochastic process discretization.

From the above literature review, it is found that most current methods rely on a first-order approximation of the limit state, which is inaccurate for problems with nonlinear responses and multimodal statistical properties. In that situation, surrogate modeling-based methods become more promising than methods based on linearization at the most probable point (MPP). Among surrogate model-based time-dependent reliability analysis methods, a representative one is the nested extreme value response method proposed by Wang and Wang [20,21], in which the efficient global optimization (EGO) method [23] is embedded in an extreme value surrogate model to identify the extreme values. There are two loops in the nested method: the inner loop identifies the global extremes of the response, and the outer loop builds the extreme value surrogate model. This method was then improved by Hu and Du [24], who developed a mixed EGO method in the inner loop to identify the extreme values more efficiently and implemented an adaptive sampling approach in the outer loop to reduce the number of training points required to build the extreme value surrogate model. Using adaptive sampling in the outer loop for the extreme value surrogate modeling has also been investigated by Wang and Wang [25]. Surrogate modeling for time-dependent reliability analysis has also been studied by Schoefs et al. [26] and Wang et al. [27] using polynomial chaos expansion.

Analyzing the current surrogate modeling methods for time-dependent reliability analysis, it is found that they all implement a double-loop procedure. As discussed above, the outer loop constructs the extreme value surrogate model for the latent extreme value response of the limit-state function as a function of the random variables, and the inner loop performs the global optimization of the limit-state function over the time duration of interest for a given realization of the random variables. The double-loop procedure has two main drawbacks: (1) the accuracy of the global optimization in the inner loop affects the accuracy of the extreme value surrogate model in the outer loop, and (2) identifying the extreme value in the inner loop for problems with stochastic processes over a long time period is computationally expensive, since a realization of the stochastic process may have numerous peaks.

This paper develops the SILK surrogate modeling method for time-dependent reliability analysis in order to overcome the above drawbacks. The global optimization in the inner loop of current methods is completely removed in the proposed method; it is shown that global optimization in surrogate model-based time-dependent reliability analysis is not indispensable. The surrogate modeling strategy is investigated for problems with and without stochastic processes, which makes the proposed method applicable to general time-dependent reliability analysis problems instead of a special group of problems. The contributions of this paper are thus summarized as follows: (1) a novel perspective for surrogate modeling-based time-dependent reliability analysis; (2) a strategy for surrogate modeling for time-dependent reliability analysis with stochastic processes; and (3) a modified learning function for training the surrogate model.

The rest of the paper is organized as follows. Section 2 provides background concepts in time-dependent reliability analysis and related surrogate modeling approaches. Section 3 presents the proposed SILK surrogate method. Three numerical examples are used to demonstrate the proposed method in Sec. 4. Concluding remarks are provided in Sec. 5.

## Background

### Time-Dependent Reliability Analysis.

where $[t_0, t_e]$ is the time interval of interest, $t_0$ is the initial time instant, $t_e$ is the last time instant, "$\Pr\{\cdot\}$" denotes probability, and "$\exists$" means "there exists."

in which $R(t_0)$ is the reliability at the initial time instant and $v^+(t)$ is the upcrossing rate at time instant $t$. $v^+(t)$ is often estimated using the first-order reliability method (FORM) based on Rice's formula [17].

The upcrossing rate method based on Rice's formula is accurate when the failure probability is low and FORM is accurate for the time-instantaneous reliability analysis. When FORM is inaccurate due to nonlinear system response, surrogate modeling-based methods are a promising alternative.

### Surrogate Modeling for Time-Dependent Reliability Analysis.

Since the extreme value response is an unknown function, a surrogate model $\hat{g}_{\max}(\mathbf{X})$ is built and the time-dependent reliability is estimated based on it. The key issue is how to build $\hat{g}_{\max}(\mathbf{X})$. Due to the requirement for extreme value responses, the time-dependent reliability estimate based on Eqs. (3) and (4) is a double-loop procedure, summarized below.

*Outer loop*: Build a surrogate model $\hat{g}_{\max}(\mathbf{X})$.

Next, we briefly review the surrogate modeling methods that implement the double-loop procedure.

#### Double-Loop Procedure With Independent Maxima.

The double-loop procedure with independent maxima refers to the method presented in Ref. [20], which identifies the maxima independently in the inner loop using the EGO method [23]. In the rest of this paper, we call it the independent EGO method for ease of reference. The basic idea of the independent EGO method is summarized as follows:

*Outer loop*: Generate training points $\mathbf{x}_s = [\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \cdots, \mathbf{x}^{(n_t)}]$ for $\mathbf{X}$ and build a surrogate model $\hat{g}_{\max}(\mathbf{X})$ based on $\mathbf{x}^{(i)}$ and $\hat{g}_{\max}(\mathbf{x}^{(i)})$, $i = 1, 2, \cdots, n_t$.

*Inner loop*: For each training point $\mathbf{x}^{(i)}$, build a surrogate model $\hat{g}_i(t)$ to identify $\hat{g}_{\max}(\mathbf{x}^{(i)})$ independently.
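To make the double-loop structure concrete, the following Python sketch implements it for a toy problem. The limit-state function, the polynomial outer-loop surrogate (standing in for a Kriging model), and the dense time grid (standing in for the inner-loop EGO optimizer) are all illustrative assumptions, not the paper's actual examples.

```python
import numpy as np

# Hypothetical limit-state function, used only for illustration.
def g(x, t):
    return x * np.sin(2.0 * t) - 8.0

def double_loop_extreme_surrogate(x_train, t0, te, n_t=200, deg=3):
    """Double-loop sketch: the inner loop finds max_t g(x, t) for each
    training point (a dense time grid stands in for the EGO optimizer),
    and the outer loop fits a surrogate to the resulting extreme values."""
    t_grid = np.linspace(t0, te, n_t)
    g_max = np.array([g(x, t_grid).max() for x in x_train])   # inner loop
    coeffs = np.polyfit(x_train, g_max, deg)                  # outer loop
    return lambda x: np.polyval(coeffs, x)

# Fit the extreme-value surrogate from a handful of training points of X.
x_train = np.linspace(7.5, 12.5, 10)
g_max_hat = double_loop_extreme_surrogate(x_train, 1.0, 2.5)

# Time-dependent failure probability estimated from the surrogate
# (failure convention g > 0, as used later in this paper).
rng = np.random.default_rng(0)
samples = rng.normal(10.0, 0.5, 100_000)
pf = float(np.mean(g_max_hat(samples) > 0))
```

Note how every call to `g` happens inside the inner loop: this is exactly the coupling the single-loop method removes.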

#### Double-Loop Procedure With Simultaneous Maxima.

The double-loop procedure with simultaneous maxima, also called the mixed EGO method in Ref. [24], refers to the method that identifies the maxima simultaneously in the inner loop. In the mixed EGO method, an adaptive sampling approach is employed in the outer loop to reduce the number of training points required to build the surrogate model $\hat{g}_{\max}(\mathbf{X})$. In the inner loop, a surrogate model $\hat{g}(\mathbf{X}, t)$ is built to identify the extreme values of all the training points simultaneously. Every time a new training point is added in the outer loop, the surrogate model $\hat{g}(\mathbf{X}, t)$ is updated in the inner loop to identify the corresponding extreme value response of the new training point. The basic framework of the mixed EGO method is summarized as follows:

*Outer loop*: Build a surrogate model $\hat{g}_{\max}(\mathbf{X})$ using the adaptive sampling approach.

*Inner loop*: Build a surrogate model $\hat{g}(\mathbf{X}, t)$ to identify $\hat{g}_{\max}(\mathbf{x}^{(i)})$, $\forall i = 1, 2, \cdots, n_t$, simultaneously.

#### Drawback Analysis of the Double-Loop Procedure.

Although both the independent EGO and mixed EGO methods are more accurate and efficient than the upcrossing rate methods, as discussed in Sec. 1, all double-loop procedure methods share two main drawbacks, explained below.

- (1)
Since the extreme values identified in the inner loop are used to build the extreme value surrogate model $\hat{g}_{\max}(\mathbf{X})$ in the outer loop, the accuracy of the identified extreme values affects the accuracy of the outer-loop predictions at untrained sample points, which in turn affects the accuracy of the time-dependent reliability analysis.

- (2)
Current surrogate modeling methods for time-dependent reliability analysis are mainly developed for problems with a response function $G(t) = g(\mathbf{X}, t)$. These methods can be extended to problems with stochastic processes by using the Karhunen–Loève (KL) expansion [29,30]. Although the extension is straightforward, the number of function (NOF) evaluations increases significantly for problems with stochastic processes over a long time interval of interest. The reason is that a realization of the stochastic process may have numerous peaks (extremes), which places a large computational demand on the global optimization in the inner loop.

In Sec. 3, we propose a single-loop procedure for time-dependent reliability analysis, which completely removes the optimization inner loop in the double-loop procedure.

## Proposed SILK Method

### Basic Principle of Surrogate Modeling for Time-Dependent Reliability Analysis.

The basic idea of SILK is to build a single surrogate model $\hat{g}(\mathbf{X}, t)$ to perform time-dependent reliability analysis, instead of using a double-loop procedure. The main principle of SILK is explained below.

where "$\forall$" means "for all."

The above equations imply that the accuracy of time-dependent reliability analysis is mainly governed by $I_t(\mathbf{x}^{(i)})$, which is actually a one-dimensional classification problem for a given $\mathbf{X} = \mathbf{x}^{(i)}$ (as shown in Eqs. (7) and (8)). As shown in Fig. 2, the accuracy of this one-dimensional classification problem is mainly affected by the accuracy near the crossing points [31,32]. In other words, we may not really need the extreme value shown in Fig. 1.

Further analysis shows that Eq. (6) sometimes does not even require high accuracy near the crossing points. This differs from the one-dimensional classification problem, which requires accurate classification at every sample point. The time-dependent failure indicator function (Eq. (6)) can be accurately determined if there exists a failed point on the trajectory and the sign of that failed point is correctly classified, regardless of whether the signs of the crossing points are classified accurately. Equation (6) can be further divided into the following two cases:

Case 1: $g(\mathbf{x}^{(i)}, t) \leq 0$ for all time instants over $[t_0, t_e]$. In this case, the sign of the surrogate model prediction $\hat{g}(\mathbf{x}^{(i)}, t^{(j)})$ needs to be accurately classified for all $t^{(j)}$, $j = 1, 2, \cdots, N_t$, to obtain an accurate estimate of $I_t(\mathbf{x}^{(i)})$.

Case 2: There exists a time instant $t^* \in [t_0, t_e]$ such that $g(\mathbf{x}^{(i)}, t^*) > 0$. In this case, Eq. (6) is accurately estimated as long as the sign of $\hat{g}(\mathbf{x}^{(i)}, t^*) > 0$ is correctly classified at any such $t^*$.
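The indicator logic of Cases 1 and 2 reduces to a sign check over the discretized trajectory, which can be sketched in a few lines (the two trajectories below are invented purely to illustrate the two cases):

```python
import numpy as np

def time_dependent_failure_indicator(g_traj):
    """I_t for one trajectory g(x^(i), t^(j)), j = 1..Nt: the sample fails
    over [t0, te] if the response exceeds the limit state (g > 0) at any
    discretized time instant."""
    return int(np.any(np.asarray(g_traj) > 0))

t = np.linspace(0.0, 1.0, 101)
safe_traj = -1.0 - 0.5 * np.sin(6.0 * t)    # Case 1: g <= 0 at every instant
failed_traj = -0.2 + 0.5 * np.sin(6.0 * t)  # Case 2: g > 0 at some t*
```

For the failed trajectory, the indicator is determined by any single confidently classified failed instant; the crossing points themselves need not be resolved.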

### Surrogate Modeling of ĝ(X, t)

#### Initial Surrogate Modeling of ĝ(X, t).

where $x_i^{(j)}$ is the *j*th training point of the *i*th random variable.

in which $N(\cdot, \cdot)$ stands for the normal distribution, and $\hat{g}(\mathbf{x}, t)$ and $\sigma_{\hat{g}}^2(\mathbf{x}, t)$ are the expected value and variance of the prediction, obtained from Eqs. (A4) and (A5), respectively.

#### Stopping Criterion for Training.

In order to determine whether the surrogate model is well trained, for any given $\mathbf{X} = \mathbf{x}^{(i)}$ we first need to determine whether the conditions given in Eq. (9) are satisfied.

There are two types of conditions in Eq. (9). The first condition checks the sign, i.e., whether $\hat{g}(\mathbf{x}^{(i)}, t^{(j)}) > 0$ or $\hat{g}(\mathbf{x}^{(i)}, t^{(j)}) \leq 0$. The second condition checks whether this sign is accurately determined. In other words, the first condition performs the classification and the second condition checks the accuracy of the classification. The first condition can be easily checked from the surrogate model prediction. For the second condition, the widely used *U* function defined in the adaptive Kriging Monte Carlo simulation (AK-MCS) method [32] for time-independent reliability analysis is employed.

in which $\Phi(\cdot)$ is the cumulative distribution function of a standard normal variable.

If $U_{\min}(\mathbf{x}^{(i)}) > 2$, the trajectory $g(\mathbf{x}^{(i)}, t)$, $t \in [t_0, t_e]$, does not need further training; otherwise, a new training point is needed to refine the trajectory. The $U_{\min}(\mathbf{x}^{(i)})$ indicator applies to a single trajectory for a given $\mathbf{X} = \mathbf{x}^{(i)}$. Next, we discuss how to connect this indicator with the stopping criterion of the surrogate modeling.
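The trajectory-level indicator above can be sketched directly from the Kriging mean and standard deviation. The function names are ours; the *U* definition, $U = |\hat{g}|/\sigma_{\hat{g}}$, follows the AK-MCS learning function cited in the text.

```python
import numpy as np

def u_function(mean_pred, std_pred):
    """AK-MCS U learning function: U = |mean| / std of the Kriging
    prediction. U >= 2 corresponds to roughly Phi(2) = 97.7% confidence
    that the predicted sign is correct."""
    return np.abs(mean_pred) / np.maximum(std_pred, 1e-12)

def u_min_over_trajectory(mean_traj, std_traj):
    """U_min(x^(i)): the smallest U over the discretized time instants of
    one trajectory; the trajectory needs refinement only if U_min <= 2."""
    u = u_function(np.asarray(mean_traj), np.asarray(std_traj))
    j_min = int(np.argmin(u))
    return float(u[j_min]), j_min
```

For example, a trajectory whose predictions at three instants have means $[-3, 0.5, -2]$ and unit standard deviations yields $U_{\min} = 0.5$ at the second instant, so that trajectory would be flagged for refinement.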

where $N_{f_1} = \sum I_t(\mathbf{x}_{g_1}^{MCS})$, $N_{f_2} = \sum I_t(\mathbf{x}_{g_2}^{MCS})$, and $I_t(\mathbf{x}_{g_1}^{MCS})$ and $I_t(\mathbf{x}_{g_2}^{MCS})$ are obtained using Eq. (6) based on the mean prediction of the surrogate model $\hat{g}(\mathbf{X}, t)$.

The training of $\hat{g}(\mathbf{X}, t)$ stops if $\epsilon_r^{\max} < 5\%$. The above procedure determines whether the surrogate model is well trained (i.e., the stopping criterion). Next, we investigate how to identify a new training point to refine the surrogate model when it is not.

#### Identification of New Training Point.

If the stopping criterion given in Eq. (23) is not satisfied, a new training point needs to be identified to refine the surrogate model $\hat{g}(\mathbf{X}, t)$. Since $U_{\min}(\mathbf{x}^{(i)})$ corresponds to the point with the highest probability of a sign misclassification of $g(\mathbf{x}^{(i)}, t)$, we identify the new training point by minimizing $U_{\min}(\mathbf{x}^{(i)})$. A similar strategy is used in surrogate model-based time-independent reliability analysis in the AK-MCS method [32].

After the new training point is identified, the response function is evaluated at it, $\hat{g}(\mathbf{X}, t)$ is retrained, and the stopping criterion given in Eq. (23) is checked; this iteration continues until the stopping criterion is satisfied.

- (a) Additional criterion for the selection of a new training point

In addition to the above criterion for selecting new training points, we also introduce another criterion based on a correlation analysis between the potential new training points and the current training points.

Considering that the *U* function given in Eq. (16) is used to check the conditions given in Eq. (17), and the new training point is selected based on it, the training points may cluster when the surrogate model is not well trained (i.e., when $\sigma_{\hat{g}}(\mathbf{x}^{(i)}, t^{(j)})$ is large and similar across samples) [35]. Clustering of training points refers to the phenomenon that the selected training points are very close to each other. When this happens, the correlation matrix used in the Kriging prediction becomes ill-conditioned, and some of the clustered training points have a negligible effect on the accuracy improvement of the surrogate model (i.e., some of the training points are not useful). An example of training point clustering is given in example 2 in Sec. 4. To avoid the clustering issue, we define a correlation condition for the refinement of the surrogate model.

where $\rho(\mathbf{x}_{new}, \mathbf{x}_s)$ is computed by substituting the normalized $\mathbf{x}_{new}$ and $\mathbf{x}_s$ into Eq. (A4). The above correlation constraint prevents a new training point from being too close to the current training points. Note that the condition is applied only to new training points, to ensure they are not clustered with current ones. Based on our numerical examples, an empirical range of [0.95, 1] is suggested for the correlation threshold. A high threshold (e.g., 1) implies a loose correlation condition, whereas a low threshold implies a strict one.
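The correlation screen can be sketched as follows. A Gaussian (squared-exponential) kernel is used here as a stand-in for the Kriging correlation of Eq. (A4); the function names and the choice of kernel parameters are illustrative assumptions.

```python
import numpy as np

def gauss_correlation(a, b, theta):
    """Gaussian (squared-exponential) Kriging correlation between two
    normalized points; a stand-in for the correlation model of Eq. (A4)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.exp(-np.sum(theta * (a - b) ** 2)))

def passes_correlation_condition(x_new, x_train, theta, rho_thresh=0.95):
    """Accept a candidate training point only if its maximum correlation
    with the current training points is below the threshold, avoiding
    clustered points and an ill-conditioned correlation matrix."""
    rho = [gauss_correlation(x_new, xs, theta) for xs in x_train]
    return max(rho) < rho_thresh
```

A candidate nearly on top of an existing training point has correlation close to 1 and is rejected, while a candidate halfway between training points passes easily.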

So far, we have discussed the basic idea for the refinement of $\hat{g}(\mathbf{X}, t)$. Table 1 gives a detailed procedure for the identification of a new training point based on the conditions given in Eqs. (24)–(27).

Step | Description
---|---
1 | Generate the MCS samples $\mathbf{x}^{(i)}$, $i = 1, 2, \cdots, N$, and discretize $[t_0, t_e]$ into $t^{(j)}$, $j = 1, 2, \cdots, N_t$
2 | For each $\mathbf{x}^{(i)}$, $i = 1, 2, \cdots, N$:
(a) | Compute $U(\mathbf{x}^{(i)}, t^{(j)})$ and $\rho_{\max}(t^{(j)}) = \max\{\rho(\mathbf{x}_{temp}^{(j)}, \mathbf{x}_s)\}$, $\forall j = 1, 2, \cdots, N_t$, where $\mathbf{x}_{temp}^{(j)} = [\mathbf{x}^{(i)}, t^{(j)}]$ and $\mathbf{x}_s$ are the current training points
(b) | Identify the indices $id\_t = \mathrm{find}_{\forall j = 1, 2, \cdots, N_t}\{\rho_{\max}(t^{(j)}) \geq 0.95\}$ and let $U_{new} = U$, where $U$ is obtained from step (a)
(c) | Replace $U_{new}(\mathbf{x}^{(i)}, t^{(id\_t)})$ with a large number (e.g., 100)
(d) | Obtain $U_{\min}(\mathbf{x}^{(i)}) = \min_j\{U_{new}(\mathbf{x}^{(i)}, t^{(j)})\}$ and the associated time instant $t_{\min}^{(i)} = \arg\min_{j = 1, 2, \cdots, N_t}\{U_{new}(\mathbf{x}^{(i)}, t^{(j)})\}$
(e) | Check whether there exists a time instant $t^{(j)}$ such that $\hat{g}(\mathbf{x}^{(i)}, t^{(j)}) > 0$ and $U(\mathbf{x}^{(i)}, t^{(j)}) \geq 2$. If so, set $U_{\min}(\mathbf{x}^{(i)}) = u_e$, where $u_e$ is any number larger than 2
End | 
3 | Identify the new training point as $[\mathbf{x}^{(i_{\min})}, t_{\min}^{(i_{\min})}]$, where $i_{\min} = \arg\min_{i = 1, 2, \cdots, N}\{U_{\min}(\mathbf{x}^{(i)})\}$


Note: In step 2(c), $U_{new}$ is replaced with a large number due to the correlation condition (Eq. (27)), which controls the distance between current and new training points.
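The Table 1 procedure can be sketched in vectorized form as below. The array layout and function name are our assumptions; the inputs would come from the Kriging predictor and the correlation check of Eq. (27).

```python
import numpy as np

def identify_new_training_point(mean, std, rho_max, t_grid, rho_thresh=0.95):
    """Vectorized sketch of the Table 1 procedure. `mean`, `std`, and
    `rho_max` are (N, Nt) arrays holding the surrogate mean, surrogate
    standard deviation, and the maximum correlation of each candidate
    [x^(i), t^(j)] with the current training points."""
    u = np.abs(mean) / np.maximum(std, 1e-12)           # step 2(a)
    u_new = u.copy()
    u_new[rho_max >= rho_thresh] = 100.0                # steps 2(b)-2(c)
    j_min = np.argmin(u_new, axis=1)                    # step 2(d)
    u_min = u_new[np.arange(u_new.shape[0]), j_min]
    # Step 2(e): a trajectory with a confidently classified failed point
    # (mean > 0 and U >= 2) needs no refinement, so push U_min above 2.
    confident_fail = np.any((mean > 0.0) & (u >= 2.0), axis=1)
    u_min[confident_fail] = 10.0                        # any u_e > 2
    i_min = int(np.argmin(u_min))                       # step 3
    return i_min, float(t_grid[j_min[i_min]]), u_min

# Tiny illustrative input: two samples, three time instants.
mean = np.array([[-0.1, 0.5, -2.0], [-3.0, -4.0, -5.0]])
std = np.ones((2, 3))
rho_max = np.zeros((2, 3))
t_grid = np.array([0.0, 1.0, 2.0])
i_min, t_new, u_min = identify_new_training_point(mean, std, rho_max, t_grid)
```

Here the first sample has the smallest $U$ (0.1 at the first instant), so $[\mathbf{x}^{(1)}, t^{(1)}]$ would be the next training point.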

The above surrogate model training and stopping criterion are for problems without stochastic processes. The basic idea and main procedures, however, are applicable to problems both with and without stochastic processes. In Sec. 3.3, we investigate the extension of the proposed method to general problems with both random variables and stochastic processes.

### Extension to Problems With Stochastic Processes.

in which $\mu_{Y_i}(t)$ and $\sigma_{Y_i}(t)$ are the mean and standard deviation of the stochastic process, $\xi_j$, $j = 1, 2, \cdots, n_e$, are independent random variables, $\lambda_j$ and $f_j(t)$ are the eigenvalues and eigenfunctions of the covariance function of $Y_i(t)$, and $n_e$ is the number of eigenfunctions used to represent the stochastic process.

After the KL expansion, the response function for problems with stochastic processes becomes $G(t) = g(\mathbf{X}, \boldsymbol{\xi}, t)$, where $\boldsymbol{\xi}$ is a vector of random variables used to represent the stochastic processes. If the double-loop procedure is applied to this transformed response function, an extreme value surrogate model $\hat{g}_{\max}(\mathbf{X}, \boldsymbol{\xi})$ needs to be constructed. Since the number of random variables in $\boldsymbol{\xi}$ is usually very large (the number of terms in the KL expansion increases with the time duration of interest), $\hat{g}_{\max}(\mathbf{X}, \boldsymbol{\xi})$ is high-dimensional. Constructing a high-dimensional surrogate model usually requires a large number of training points and is difficult for current surrogate modeling methods. In the proposed method, a surrogate model is instead built directly for $G(t) = g(\mathbf{X}, \mathbf{Y}(t), t)$.
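The conversion from a KL coefficient vector $\boldsymbol{\xi}$ to a process realization $Y(t)$ can be sketched with a discrete KL expansion on the time grid. The Gaussian correlation kernel, constant mean and standard deviation, and truncation level below are assumed for illustration only.

```python
import numpy as np

def kl_realization(mu, sigma, corr_matrix, n_e, xi):
    """Discrete Karhunen-Loeve reconstruction on a time grid:
    Y(t) = mu(t) + sigma(t) * sum_{j=1}^{n_e} sqrt(lambda_j) f_j(t) xi_j,
    with eigenpairs taken from the discretized correlation matrix."""
    lam, vec = np.linalg.eigh(corr_matrix)
    order = np.argsort(lam)[::-1][:n_e]      # keep the n_e largest modes
    lam, vec = lam[order], vec[:, order]
    return mu + sigma * (vec * np.sqrt(lam)) @ xi

# Assumed example: zero-mean, unit-variance process with a Gaussian
# correlation kernel exp(-(t1 - t2)^2) over [0, 5].
t = np.linspace(0.0, 5.0, 64)
corr = np.exp(-(t[:, None] - t[None, :]) ** 2)
rng = np.random.default_rng(1)
y = kl_realization(0.0, 1.0, corr, n_e=6, xi=rng.standard_normal(6))
```

Each MCS sample of $\boldsymbol{\xi}$ maps to one trajectory $y(t^{(1)}), \ldots, y(t^{(N_t)})$ in this way, which is exactly the conversion used when evaluating candidates for $\hat{g}(\mathbf{X}, \mathbf{Y}, t)$.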

Based on the initial training points, an initial surrogate model $\hat{g}(\mathbf{X}, \mathbf{Y}, t)$ is built. To perform time-dependent reliability analysis based on $\hat{g}(\mathbf{X}, \mathbf{Y}, t)$, $N$ samples are first generated for $\mathbf{X}$ and $\boldsymbol{\xi}$ using MCS, and $[t_0, t_e]$ is discretized into $N_t$ time instants. The MCS samples of $\boldsymbol{\xi}$ are then converted into MCS samples of $\mathbf{Y}$ based on Eq. (28) and the discretized time instants $t^{(j)}$, $j = 1, 2, \cdots, N_t$. To determine the new training points for refining $\hat{g}(\mathbf{X}, \mathbf{Y}, t)$, the procedure presented in Table 1 is modified for problems with stochastic processes. Table 2 gives the procedure for identifying a new training point for $\hat{g}(\mathbf{X}, \mathbf{Y}, t)$.

Step | Description
---|---
1 | Generate the MCS samples $\mathbf{x}^{(i)}$ and $\boldsymbol{\xi}^{(i)}$, $i = 1, 2, \cdots, N$, and discretize the interval $[t_0, t_e]$ into $t^{(j)}$, $j = 1, 2, \cdots, N_t$
2 | For each $\mathbf{x}^{(i)}$ and $\boldsymbol{\xi}^{(i)}$, $i = 1, 2, \cdots, N$:
(a) | Convert $\boldsymbol{\xi}^{(i)}$ to $\mathbf{y}(t^{(1)}), \mathbf{y}(t^{(2)}), \cdots, \mathbf{y}(t^{(N_t)})$
(b) | Compute $U(\mathbf{x}^{(i)}, \mathbf{y}(t^{(j)}), t^{(j)})$ based on the surrogate model $\hat{g}(\mathbf{X}, \mathbf{Y}, t)$ and Eq. (16), $\forall j = 1, 2, \cdots, N_t$, and $\rho_{\max}(t^{(j)}) = \max\{\rho(\mathbf{x}_{temp}^{(j)}, \mathbf{x}_s)\}$, where $\mathbf{x}_{temp}^{(j)} = [\mathbf{x}^{(i)}, \mathbf{y}(t^{(j)}), t^{(j)}]$ and $\mathbf{x}_s$ are the current training points
(c) | Identify the indices $id\_t = \mathrm{find}_{\forall j = 1, 2, \cdots, N_t}\{\rho_{\max}(t^{(j)}) \geq 0.95\}$ and let $U_{new} = U$, where $U$ is obtained from step (b)
(d) | Replace $U_{new}(\mathbf{x}^{(i)}, \mathbf{y}(t^{(id\_t)}), t^{(id\_t)})$ with a large number (e.g., 100)
(e) | Obtain $U_{\min}(\mathbf{x}^{(i)}, \boldsymbol{\xi}^{(i)}) = \min_j\{U_{new}(\mathbf{x}^{(i)}, \mathbf{y}(t^{(j)}), t^{(j)})\}$ and the associated time instant $t_{\min}^{(i)} = \arg\min_j\{U_{new}(\mathbf{x}^{(i)}, \mathbf{y}(t^{(j)}), t^{(j)})\}$
(f) | Check whether there exists a time instant $t^{(j)}$ such that $\hat{g}(\mathbf{x}^{(i)}, \mathbf{y}(t^{(j)}), t^{(j)}) > 0$ and $U(\mathbf{x}^{(i)}, \mathbf{y}(t^{(j)}), t^{(j)}) \geq 2$. If so, set $U_{\min}(\mathbf{x}^{(i)}, \boldsymbol{\xi}^{(i)}) = u_e$, where $u_e$ is any number larger than 2
End | 
3 | Identify the new training point as $[\mathbf{x}^{(i_{\min})}, \boldsymbol{\xi}^{(i_{\min})}, t_{\min}^{(i_{\min})}]$, where $i_{\min} = \arg\min_{i = 1, 2, \cdots, N}\{U_{\min}(\mathbf{x}^{(i)}, \boldsymbol{\xi}^{(i)})\}$; convert $\boldsymbol{\xi}^{(i_{\min})}$ into $\mathbf{y}^{(i_{\min})}$ based on Eq. (28) and $t_{\min}^{(i_{\min})}$ to obtain the new training point $[\mathbf{x}^{(i_{\min})}, \mathbf{y}^{(i_{\min})}, t_{\min}^{(i_{\min})}]$


Similar to problems without stochastic processes, the stopping criterion in each iteration is computed using Eq. (22) and $U_{\min}(\mathbf{x}^{(i)}, \boldsymbol{\xi}^{(i)})$, $i = 1, 2, \cdots, N$. Since the proposed method constructs the surrogate model directly for $\hat{g}(\mathbf{X}, \mathbf{Y}, t)$ instead of $\hat{g}_{\max}(\mathbf{X}, \boldsymbol{\xi})$, the dimensionality of $\hat{g}(\mathbf{X}, \mathbf{Y}, t)$ is much lower than that of $\hat{g}_{\max}(\mathbf{X}, \boldsymbol{\xi})$. For example, suppose there are $n_X$ random variables ($\mathbf{X}$) and $n_Y$ stochastic processes ($\mathbf{Y}(t)$); the dimension of $\hat{g}(\mathbf{X}, \mathbf{Y}, t)$ is then $n_X + n_Y + 1$. If $n_\xi$ standard normal variables ($\boldsymbol{\xi}$) are used in the KL expansion of each stochastic process, the dimension of $\hat{g}_{\max}(\mathbf{X}, \boldsymbol{\xi})$ is $n_X + n_Y n_\xi + 1$, where $n_\xi$ is problem dependent and increases with the time duration of interest. The value of $n_\xi$ is usually larger than 5 and can grow very large (e.g., 1000). In example 3 presented in Sec. 4, the dimensions of $\hat{g}(\mathbf{X}, \mathbf{Y}, t)$ and $\hat{g}_{\max}(\mathbf{X}, \boldsymbol{\xi})$ are 5 and 10, respectively. This property makes the proposed method applicable to problems both with and without stochastic processes.

### Implementation Procedure.

We now summarize the implementation procedure of SILK. Table 3 gives the main steps of SILK by integrating Tables 1 and 2 into the implementation framework.

Step | Description
---|---
1 | Perform the KL expansion if stochastic processes are present in the problem
2 | Generate initial training points $\mathbf{x}_s$ (Eq. (10) or (28)) and evaluate the response at the training points. Set $q = 1$ and $\mathbf{x}_{MCS} = \emptyset$
3 | While $q = 1$ or $Cov_{p_f} > 0.02$, do: set $p = 1$
4 | Generate $N$ samples of $\mathbf{X}$ (or of $\mathbf{X}$ and $\boldsymbol{\xi}$ for problems with stochastic processes) and add the generated samples to $\mathbf{x}_{MCS}$
5 | While $p = 1$ or $\epsilon_r^{\max} > 0.05$, do: $p = p + 1$
6 | Construct the surrogate model $\hat{g}(\mathbf{X}, t)$ or $\hat{g}(\mathbf{X}, \mathbf{Y}, t)$
7 | Compute $U_{\min}(\mathbf{x}^{(i)})$ or $U_{\min}(\mathbf{x}^{(i)}, \boldsymbol{\xi}^{(i)})$, $i = 1, 2, \cdots, N_{MCS}$, where $N_{MCS}$ is the number of samples in $\mathbf{x}_{MCS}$, using the algorithm presented in Table 1 or 2
8 | Compute $\epsilon_r^{\max}$ using Eq. (23)
9 | Identify the new training point $[\mathbf{x}^{(i_{\min})}, t_{\min}^{(i_{\min})}]$ or $[\mathbf{x}^{(i_{\min})}, \mathbf{y}^{(i_{\min})}, t_{\min}^{(i_{\min})}]$ using the procedure in Table 1 or 2, and update the training points. End while (inner loop)
10 | Compute $p_f(t_0, t_e)$ using the surrogate model $\hat{g}(\mathbf{X}, t)$ or $\hat{g}(\mathbf{X}, \mathbf{Y}, t)$
11 | Compute the coefficient of variation $Cov_{p_f} = \sqrt{(1 - p_f(t_0, t_e))/(p_f(t_0, t_e) N_{MCS})}$, where $N_{MCS} = qN$ is the total number of MCS samples. Set $q = q + 1$. End while (outer loop)


Note that the threshold for the $Cov_{p_f}$ criterion is chosen as 0.02 in this paper. Other values, such as 0.05, can be used according to the requirements of the decision maker. This threshold accounts for the statistical uncertainty in MCS.
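The $Cov_{p_f}$ check is a one-line computation; the helper below sketches it using the standard MCS coefficient of variation formula (the function name is ours):

```python
import numpy as np

def cov_pf(pf, n_mcs):
    """Coefficient of variation of an MCS failure-probability estimate,
    Cov_pf = sqrt((1 - pf) / (pf * N_MCS)); sampling continues until this
    drops below the chosen threshold (0.02 in this paper)."""
    return float(np.sqrt((1.0 - pf) / (pf * n_mcs)))
```

For instance, with $p_f = 0.01$ the 0.02 threshold is met at about $2.5 \times 10^5$ samples, which shows why the sample pool $\mathbf{x}_{MCS}$ is grown in increments of $N$ in the outer loop rather than fixed in advance.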

## Numerical Examples

In this section, three numerical examples demonstrate the proposed method. Examples 1 and 2 do not have a stochastic process in the limit-state function, while example 3 has both random variables and a stochastic process. The NOF evaluations and the percentage error of the time-dependent failure probability estimate of the proposed SILK method are compared with those of the following four methods in the three examples.

- Rice: the upcrossing rate method based on Rice's formula, as reviewed in Sec. 2.1 and presented in Refs. [15] and [17]
- Independent EGO: the double-loop procedure with independent EGO (Sec. 2.2.1) [20,21]
- Mixed EGO: the double-loop procedure with mixed EGO (Sec. 2.2.2) [24]
- MCS: brute-force MCS performed on the original limit-state function

The percentage error is computed as $\epsilon = |p_f(t_0, t_e)-p_f^{MCS}(t_0, t_e)|/p_f^{MCS}(t_0, t_e)\times 100\%$, where $p_f^{MCS}(t_0, t_e)$ is the time-dependent failure probability estimate obtained from MCS.

### A Mathematical Example.

where $X$ is a random variable following a normal distribution, $X\sim N(10, 0.5^2)$.

We first solve this example using SILK. The time interval $[1, 2.5]$ is discretized into 100 time instants. Ten initial training points are generated for $X$ and $t$ using the Hammersley sampling approach [34]; the initial training points of $X$ are generated in the interval $[7.5, 12.5]$. A Kriging model with a constant trend function is used to build the initial surrogate model, and the surrogate model $\hat{g}(X, t)$ is then refined using SILK. Table 4 compares the results of SILK, the Rice formula-based upcrossing rate method, the double-loop procedure with independent EGO, the double-loop procedure with mixed EGO, and MCS. Note that the NOF of the proposed method is not an integer because SILK is run 20 times and the average results are reported to reduce the uncertainty in the MCS samples. The results of Rice, independent EGO, and mixed EGO are taken from Ref. [24]. The comparison shows that the proposed SILK method is much more efficient than the other methods. The accuracy of SILK is, however, slightly lower than that of the mixed EGO method, even though it satisfies the accuracy requirement defined in Table 3 ($\epsilon_r^{\max}\le 0.05$). Further analysis shows that SILK requires 22 NOF to achieve the same accuracy level as mixed EGO.
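The initial design of experiments in this example can be sketched with a minimal two-dimensional Hammersley sequence. This is our own small implementation using the bounds of this example (Ref. [34] describes the general method):

```python
def radical_inverse(i: int, base: int = 2) -> float:
    """Van der Corput radical inverse: reflect the base-`base` digits of i
    about the radix point (e.g., 3 = 11_2 -> 0.11_2 = 0.75)."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += f * (i % base)
        i //= base
        f /= base
    return inv

def hammersley_2d(n: int, bounds):
    """n Hammersley points in 2D scaled to bounds = [(lo1, hi1), (lo2, hi2)]:
    first coordinate i/n, second coordinate the base-2 radical inverse of i."""
    (lo1, hi1), (lo2, hi2) = bounds
    return [(lo1 + (hi1 - lo1) * i / n,
             lo2 + (hi2 - lo2) * radical_inverse(i)) for i in range(n)]

# Ten initial (X, t) training points: X in [7.5, 12.5], t in [1, 2.5]
points = hammersley_2d(10, [(7.5, 12.5), (1.0, 2.5)])
```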

### A Function Generator Mechanism.

A function generator mechanism as shown in Fig. 3 is used as our second example. This example is taken from Ref. [15].

in which $K_1=-2L_2L_4\sin(\theta\pi/180)$, $K_2=2L_4(L_1-L_2\cos(\theta\pi/180))$, and $K_3=L_1^2+L_2^2+L_4^2-L_3^2-2L_1L_2\cos(\theta\pi/180)$.

Table 5 gives the random variables of example 2.

Variable (mm) | Mean | Standard deviation | Distribution
---|---|---|---
$L_1$ | 100 | 0.1 | Normal
$L_2$ | 55.5 | 0.1 | Normal
$L_3$ | 144.1 | 0.1 | Normal
$L_4$ | 72.5 | 0.1 | Normal


We then perform time-dependent reliability analysis for this example using SILK, Rice, independent EGO, mixed EGO, and MCS. Table 6 compares the results of the different methods for two failure thresholds (i.e., $\theta_e=0.85\,\mathrm{deg}$ and $\theta_e=0.9\,\mathrm{deg}$). As in example 1, SILK is run 20 times and the average results are reported. The results indicate that SILK is more efficient and accurate than the other methods.

Method | NOF ($\theta_e=0.85\,\mathrm{deg}$) | $p_f(t_0, t_e)$ | Error (%) | NOF ($\theta_e=0.9\,\mathrm{deg}$) | $p_f(t_0, t_e)$ | Error (%)
---|---|---|---|---|---|---
SILK | 23.13 | 0.1195 | 1.10 | 22.4 | 0.0553 | 1.78
Rice | 8221 | 0.1315 | 11.25 | 8365 | 0.0546 | 3.02
Independent EGO | 142 | 0.1132 | 4.23 | 113 | 0.0579 | 2.84
Mixed EGO | 72 | 0.1140 | 3.55 | 74 | 0.0581 | 3.20
MCS | $2\times 10^8$ | 0.1182 | N/A | $2\times 10^8$ | 0.0563 | N/A


- (a) Parameter study

We also investigated the effects of the correlation threshold given in Eq. (27) on the results of time-dependent reliability analysis. Figure 4 shows the training points identified by SILK with and without the correlation condition; the correlation condition successfully avoids clustering of the training points.

We also ran SILK with different correlation thresholds to study the effect of the correlation condition on the accuracy and efficiency of time-dependent reliability analysis. Table 7 gives the NOF and percentage error of SILK (average results of 20 runs) for different values of the correlation threshold in the $\theta_e=0.9\,\mathrm{deg}$ case. It shows that controlling the correlation between the new training point and the current training points improves efficiency, with only a minor effect on accuracy provided the threshold is not set too low. Since the correlation in a Kriging surrogate model is governed by the distance between training points, controlling this correlation amounts to controlling the distance between new and current training points, which helps avoid the clustering issue.
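The correlation condition can be sketched as an acceptance test on candidate training points. This sketch assumes a Gaussian correlation function with fixed hyperparameters $\theta$; in SILK the correlation would come from the current Kriging model, and Eq. (27) supplies the threshold $\rho_{\max}$:

```python
import math

def gauss_corr(x1, x2, theta):
    """Gaussian correlation R(x1, x2) = exp(-sum_k theta_k * (x1_k - x2_k)^2)."""
    return math.exp(-sum(t * (a - b) ** 2 for t, a, b in zip(theta, x1, x2)))

def accept_candidate(candidate, training_points, theta, rho_max=0.95):
    """Accept a new training point only if its correlation with every current
    training point is below rho_max, preventing clustered training points."""
    return all(gauss_corr(candidate, x, theta) < rho_max
               for x in training_points)
```

A rejected candidate (e.g., one nearly coincident with an existing point, where the correlation approaches 1) would be replaced by the next-best point in the learning-function ranking.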

Correlation threshold ($\rho_{\max}$) | 0.90 | 0.93 | 0.95 | 0.97 | 0.98 | 0.99 | 0.999
---|---|---|---|---|---|---|---
NOF | 18.8 | 20.1 | 22.4 | 26.6 | 28.3 | 30.5 | 47.9
Error (%) | 4.97 | 3.55 | 1.78 | 2.13 | 0.18 | 1.78 | 0.71


In addition to the correlation threshold, we studied the effects of the number of initial training points $n_{in}$ and the number of discretization points of the time interval $N_t$ on the accuracy and efficiency of time-dependent reliability analysis. Tables 8 and 9 give the average results of 20 runs of SILK for different values of $n_{in}$ and $N_t$. They show that the effects of $n_{in}$ and $N_t$ on the results of time-dependent reliability analysis are very small; however, very small values of $n_{in}$ and $N_t$ may result in large errors in the reliability estimate. A recommended value of $n_{in}$ is given in Refs. [31] and [32] as $n_{in}=(n_r+1)(n_r+2)/2$, where $n_r$ is the number of random variables. For $N_t$, the results indicate that larger values are preferable.
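The recommended initial sample size can be computed directly; this small helper simply evaluates the formula from Refs. [31] and [32]:

```python
def n_initial(n_r: int) -> int:
    """Recommended number of initial training points:
    n_in = (n_r + 1) * (n_r + 2) / 2, with n_r random variables."""
    return (n_r + 1) * (n_r + 2) // 2
```

For example 2, with $n_r=4$ random variables, this gives $n_{in}=15$.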

### A Beam Subjected to Stochastic Load and Degradation.

A corroded beam (Fig. 5) subjected to stochastic load is adopted from Ref. [24] as our third example.

where $\xi_i$, $i=1, 2, \cdots, 7$, are independent random variables and $a_{ij}$, $b_{ij}$, $c_{ij}$, $\forall i, j = 1, 2, \ldots, 7$, are coefficients of the sine-wave basis functions. Details of Eq. (37) can be found in Ref. [24].

Table 10 gives the random variables of this example and Table 11 compares the results of the different methods. Similar conclusions can be drawn as in examples 1 and 2. SILK thus achieves both efficiency and accuracy for problems with and without stochastic processes.

Variable | Mean | Standard deviation | Distribution
---|---|---|---
$\sigma_u$ (Pa) | $2.4\times 10^8$ | $2\times 10^7$ | Normal
$a_0$ (m) | 0.2 | 0.01 | Normal
$b_0$ (m) | 0.04 | $4\times 10^{-3}$ | Normal
$\xi_1$ | 0 | 100 | Normal
$\xi_2$ | 0 | 50 | Normal
$\xi_3$ | 0 | 98 | Normal
$\xi_4$ | 0 | 121 | Normal
$\xi_5$ | 0 | 227 | Normal
$\xi_6$ | 0 | 98 | Normal
$\xi_7$ | 0 | 121 | Normal


## Conclusion

For problems with nonlinear responses, MPP-based time-dependent reliability analysis methods may have large errors in the reliability estimate due to the linearization of the response function at the MPP. In such situations, a surrogate model-based method is more promising. Current surrogate modeling methods for time-dependent reliability analysis implement a double-loop procedure, which is computationally expensive and can be unaffordable for problems with stochastic processes over a long time period.

This paper develops a SILK surrogate model method for time-dependent reliability analysis with and without stochastic processes. The proposed method generates training points and builds the surrogate model for random variables, stochastic processes, and time at the same level instead of at two levels. The global optimization in current methods is completely removed in the proposed method. The single-loop surrogate model is refined adaptively based on criteria developed according to the properties of the time-dependent problem. Three numerical examples demonstrated the effectiveness of the proposed method.

Removing the global optimization loop (inner loop) from time-dependent reliability analysis offers several advantages. First, in the independent EGO and mixed EGO methods, the NOF evaluations increase with the number of initial samples of the random variables in the outer loop. In the SILK method, a single surrogate model is trained and no global optimization is required; the NOF of SILK therefore does not increase with the number of initial random-variable samples. Second, the removal of the global optimization enables surrogate model-based time-dependent reliability analysis to be applied to problems with stochastic processes and over long time periods.

Since the proposed method builds on the Kriging surrogate modeling method, it inherits the limitations of the Kriging surrogate model: when Kriging cannot effectively model the time-dependent problem, or when the limit-state function is highly nonlinear, the proposed method becomes ineffective. The basic idea of the proposed method is, however, general and applicable to different types of surrogate models. Extending the proposed method to other kinds of surrogate models for time-dependent reliability analysis may be investigated in the future.

## Acknowledgment

The research reported in this paper was supported by the Air Force Office of Scientific Research (Grant No. FA9550-15-1-0018, Technical Monitor: Dr. David Stargel). The support is gratefully acknowledged.

### Appendix: Kriging Surrogate Model

The Kriging surrogate model expresses the response as $\hat{g}(\mathbf{x})=\mathbf{h}(\mathbf{x})^T\boldsymbol{\upsilon}+\epsilon(\mathbf{x})$, where $\boldsymbol{\upsilon}=[\upsilon_1, \upsilon_2, \cdots, \upsilon_p]^T$ is a vector of unknown coefficients, $\mathbf{h}(\mathbf{x})=[h_1(\mathbf{x}), h_2(\mathbf{x}), \cdots, h_p(\mathbf{x})]^T$ is a vector of regression functions, $\mathbf{h}(\mathbf{x})^T\boldsymbol{\upsilon}$ is the trend of the prediction, and $\epsilon(\mathbf{x})$ is assumed to be a Gaussian process with zero mean and covariance $Cov[\epsilon(\mathbf{x}_i), \epsilon(\mathbf{x}_j)]$.

The covariance is given by $Cov[\epsilon(\mathbf{x}_i), \epsilon(\mathbf{x}_j)]=\sigma_\epsilon^2 R(\mathbf{x}_i, \mathbf{x}_j)$, in which $\sigma_\epsilon^2$ is the process variance and $R(\cdot, \cdot)$ is the correlation function.

The unknown coefficients are estimated by generalized least squares as $\hat{\boldsymbol{\upsilon}}=(\mathbf{H}^T\mathbf{R}^{-1}\mathbf{H})^{-1}\mathbf{H}^T\mathbf{R}^{-1}\mathbf{g}$, where $\mathbf{R}$ is the correlation matrix with elements $R(\mathbf{x}_i, \mathbf{x}_j)$, $i, j=1, 2, \cdots, n_s$, $\mathbf{H}=[\mathbf{h}(\mathbf{x}_1)^T, \mathbf{h}(\mathbf{x}_2)^T, \cdots, \mathbf{h}(\mathbf{x}_{n_s})^T]^T$, and $\mathbf{g}=[g(\mathbf{x}_1), g(\mathbf{x}_2), \cdots, g(\mathbf{x}_{n_s})]^T$.

The mean of the Kriging prediction at a new point $\mathbf{x}$ is $\mu_{\hat{g}}(\mathbf{x})=\mathbf{h}(\mathbf{x})^T\hat{\boldsymbol{\upsilon}}+\mathbf{r}(\mathbf{x})\mathbf{R}^{-1}(\mathbf{g}-\mathbf{H}\hat{\boldsymbol{\upsilon}})$, where $\mathbf{r}(\mathbf{x})=[R(\mathbf{x}, \mathbf{x}_1), R(\mathbf{x}, \mathbf{x}_2), \cdots, R(\mathbf{x}, \mathbf{x}_{n_s})]$.
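The appendix formulas can be collected into a minimal ordinary Kriging predictor. This sketch assumes a constant trend ($h(\mathbf{x})=1$) and a Gaussian correlation function with given hyperparameters $\theta$; in practice $\theta$ and $\sigma_\epsilon^2$ are estimated, e.g., by maximum likelihood, which is omitted here:

```python
import numpy as np

def gauss_corr_matrix(X1, X2, theta):
    """Correlation matrix with entries exp(-sum_k theta_k * (x1_k - x2_k)^2)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2 * theta).sum(axis=-1)
    return np.exp(-d2)

def kriging_predict_mean(Xs, gs, Xnew, theta):
    """Mean prediction mu(x) = h(x)^T ups_hat + r(x) R^{-1} (g - H ups_hat),
    with ups_hat = (H^T R^{-1} H)^{-1} H^T R^{-1} g and H a column of ones."""
    ns = Xs.shape[0]
    R = gauss_corr_matrix(Xs, Xs, theta) + 1e-10 * np.eye(ns)  # small nugget
    H = np.ones((ns, 1))
    Rinv_H = np.linalg.solve(R, H)
    Rinv_g = np.linalg.solve(R, gs)
    ups = np.linalg.solve(H.T @ Rinv_H, H.T @ Rinv_g)  # GLS trend coefficient
    r = gauss_corr_matrix(Xnew, Xs, theta)
    return (ups + r @ np.linalg.solve(R, gs - H @ ups)).ravel()
```

At the training points the predictor interpolates the observed responses (up to the nugget), which is the property the adaptive refinement in SILK relies on.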