Abstract
To identify the underlying mechanisms of human motor control, parametric models are utilized. One way of employing these models is to infer the control intent, i.e., to estimate the motor control strategy. A well-accepted assumption is that human motor control is optimal; thus, the intent is inferred by solving an inverse optimal control (IOC) problem. The linear quadratic regulator (LQR) is a well-established optimal controller, and its inverse problem (ILQR) has been used in the literature to infer the control intent of a single subject. That implementation used a cost function with a gain penalty, minimizing the error between the LQR gain and a preliminarily estimated gain. We hypothesize that relying on an estimated gain may limit the optimization capability of ILQR. In this study, we derive an ILQR optimization with an output penalty, minimizing the error between the model output and the measured output. We tested the method on 30 healthy subjects who sat on a robotic seat capable of rotation. The task involved physical human–robot interaction, with a perturbation torque as the input and the lower and upper body angles as the output. Our method significantly improved the goodness of fit compared to the gain-penalty ILQR, while the dominant inferred intent was not statistically different between the two methods. To our knowledge, this work is the first to infer motor control intent for a sample of healthy subjects. This is a step closer to investigating control intent differences between healthy subjects and subjects with altered motor control, e.g., low back pain.
1 Introduction
System identification has been applied to the study of human motor control for several decades. System identification methods have enabled researchers to determine physiological parameters, control gains, and control bandwidths for various motor control tasks [1–4]. One subfield of system identification is inverse optimal control (IOC) theory. In IOC, a stabilizing feedback control law is first constructed for a given plant. Then, meaningful cost functions are retrieved based on state variables and control inputs [5–8]. These cost functions determine how much weight the controller assigns to the various states relative to the control effort. Several previous studies have estimated optimal control cost functions from human motion data in an effort to explain human control intent [8–11]. Unlike the general potential cost functions used in these studies, we propose a control-theoretic method built on the linear quadratic regulator (LQR) framework [12].
The inverse LQR (ILQR) problem has received some attention, with results available even for potentially unstable controllers [13]. Since controller stability is important when examining engineering or biological systems, we focus on the stable LQR problem. In addition, when the cross term S of the LQR cost function is included, any stabilizing controller K is optimal for some cost function [14]. However, we chose to exclude S from our LQR cost function for several reasons: it is rarely used when designing LQR controllers for real systems, and inverse results that include cross terms provide less meaningful information about the control intent than results with a direct separation of state and control costs (the principle of parsimony). As a result, in the remainder of this paper, we focus on the stable LQR problem with S = 0.
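For reference, the infinite-horizon LQR cost with the cross term included takes the standard form

$$J = \int_0^\infty \left(x^T Q x + 2\,x^T S u + u^T R u\right) dt, \qquad Q \succeq 0, \quad R \succ 0,$$

so that setting S = 0 cleanly separates the state penalty $x^T Q x$ from the control penalty $u^T R u$.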
There have been several studies on ILQR [15–17]. These studies (based on Ref. [15]; hereafter, gain-penalty ILQR) use two optimization steps, where the estimated control gain K from the first step is used in constructing the cost function for the second step. This is a relatively complex process, and errors can accumulate between the two steps. Moreover, the optimization capability may be limited because the experimental data are not used in the second step. This study aims at overcoming these limitations. Other studies have presented alternative approaches as well [18,19]; they used particle swarm optimization and an off-the-shelf genetic algorithm, respectively, neither of which provides an exact solution.
This study proposes an output-penalty ILQR method that uses the experimental data directly in the ILQR cost function. We applied normalization and auto-annealing techniques [20,21] to improve performance and speed. We hypothesize that the output-penalty ILQR will yield a better fit because it uses the experimental data; a better fit is highly desirable to increase trust in the model and hence in the inferred intent. To our knowledge, we are the first to build a framework intended to investigate the feedback control aspects of low back pain. Our long-term clinical question is what differences, in terms of feedback control, exist between healthy subjects and subjects with low back pain. We approach this question from an optimality perspective, i.e., each subject tries to optimize a quantity while performing the task. This work is a step toward investigating control intent differences between healthy subjects and subjects with altered motor control, e.g., low back pain patients.
2 Materials and Methods
2.1 Subjects.
Thirty healthy subjects participated in this study, with an average age of 33.2±12.0 yr (range = 18–59 yr), height of 168.2±9.2 cm, and weight of 76.3±14.2 kg. The group consisted of 11 males and 19 females. No participant had a neurological or musculoskeletal disorder affecting motor control. Ethical approval for the study was obtained from the MSU institutional review board, and subjects provided informed consent prior to enrollment.
2.2 Data Collection.
Each subject performed the seated balance test during three laboratory visits, and the test was identical at each visit. For this test, the subject sat on a back-drivable robotic seat capable of rotation about an axis perpendicular to the coronal plane (Figs. 1 and 2). The robotic seat was driven by a motor (C062C-13-3305, Kollmorgen, Radford, VA). Subjects were securely strapped to the seat with their knees and hips positioned at 90 deg and their arms crossed over their chests. They were instructed to balance on the seat while the robot provided both spring–damper action and torque perturbations. Additional details regarding the test setup can be found in Ref. [22].
Data acquisition during the test was conducted using a data acquisition board (PCI-DAS6036, Measurement Computing, Norton, MA) and a quadrature encoder board (PCI-QUAD04, Measurement Computing, Norton, MA) at a rate of 1000 samples/s. The recorded data included the seat angle (θ1), upper body angle (θ2), robot stiffness (k_r), robot damping (b_r), perturbation torque (u), total robot torque (T_r), and the subject's weight and height. The robot stiffness (k_r), damping (b_r), and amplitude of the perturbation torque were tuned for each subject before testing to normalize test difficulty between subjects [22,23]. For instance, each subject was asked to balance on the seat using a high stiffness value, and the operator then decreased it gradually until the subject could no longer balance. After fine-tuning the estimate of this critical stiffness, the experimental stiffness was set to twice this value; more details are in Ref. [22]. In less demanding settings, balance can be achieved with a wide range of control strategies, which may reflect individual preference rather than health status; to reduce this ambiguity, it is advisable to challenge trunk neuromuscular control maximally [24]. The perturbation torque (u) was designed as a pseudorandom ternary sequence with a power spectral density extending up to approximately 1.6 Hz. The measurement of θ1 was obtained from an encoder in the motor; previous work has shown that the encoder accurately reflects the lower body angle [22]. θ2 was measured using two string potentiometers (SP2-50, Celesco, Chatsworth, CA) attached at the fourth thoracic (T4) and fourth lumbar (L4) levels. Prior studies have demonstrated the reliability of the measurements obtained from the seated balance test (time-domain intraclass correlation coefficients: 0.98 for θ1 and 0.89 for θ2) [22].
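As an illustration of such an input signal, the sketch below generates a pseudorandom ternary torque profile. The switching rate, amplitude, and generator are illustrative assumptions; the exact sequence design is documented in Ref. [22], not here.

```python
import numpy as np

def ternary_perturbation(duration_s=30.0, fs=1000, switch_hz=3.2, amp=1.0, seed=0):
    """Sketch of a pseudorandom ternary perturbation: the torque holds one of
    three levels {-amp, 0, +amp} over each switching interval. Holding each
    level for 1/switch_hz s concentrates the excitation power below roughly
    half the switching rate (~1.6 Hz here). All parameters are illustrative."""
    rng = np.random.default_rng(seed)
    n_switch = int(duration_s * switch_hz)        # number of held segments
    levels = rng.integers(-1, 2, size=n_switch)   # draws from {-1, 0, 1}
    hold = int(fs / switch_hz)                    # samples per held segment
    return amp * np.repeat(levels, hold)          # zero-order-hold signal

u = ternary_perturbation()                        # 1000-sample/s torque trace
```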
2.3 Model and Preliminary Estimation.
The seated system was modeled as a two-segment inverted pendulum; in the equations of motion, g denotes gravitational acceleration. The lower body and robot seat below the fourth lumbar (L4) vertebra are lumped into a single rigid element with mass m1 and moment of inertia I1 with respect to its center of mass (COM), which lies at distance l1 from the pivot O of the seat. Similarly, the upper body above the L4 vertebra is lumped into a rigid element with mass m2 and moment of inertia I2 relative to its COM. The distance between the upper body COM and the L4 vertebra is l2, and the L4 vertebra itself is at distance l4 from the seat pivot O. The subjects apply a control torque T_h about the L4 vertebra and possess an intrinsic lumbar stiffness k_h and damping b_h. In addition to the torque disturbance u, a virtual stiffness k_r and a virtual damping b_r about the pivot point O are applied via feedback. The sum of these torques produces the total robot torque T_r about the pivot point O, where T_r = u − k_r·θ1 − b_r·θ̇1. The notation is summarized in Table 1.
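Although the full equations of motion are lengthy and omitted here, the linearized dynamics about the upright equilibrium take the standard second-order form (a structural sketch under small-angle assumptions, not the paper's exact derivation):

$$M\ddot{q} + B\dot{q} + Kq = E\tau, \qquad q = [\theta_1,\ \theta_2]^T, \quad \tau = [u,\ T_h]^T,$$

where the 2×2 inertia matrix M is assembled from (m1, I1, l1, m2, I2, l2, l4), B collects the damping terms (b_h, b_r), K collects the gravitational destabilization and the stiffness terms (k_h, k_r), and E maps the applied torques to the generalized coordinates.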
Notation | Description
---|---
θ1 | Angle of the lower body from vertical
θ2 | Angle of the upper body from vertical
m1 | Mass of the subject's lower body below the fourth lumbar vertebra (L4) and the seat
I1 | Moment of inertia of the subject's lower body and the seat about its center of mass
l1 | Distance between the pivot point and the lower body center of mass
m2 | Mass of the subject's upper body above L4
I2 | Moment of inertia of the subject's upper body about its center of mass
l2 | Distance between L4 and the upper body center of mass
l4 | Distance between the pivot and L4
T_h | Human control torque about L4
k_h | Intrinsic rotational stiffness about L4
b_h | Intrinsic rotational damping about L4
u | Perturbation torque about the pivot
k_r | Robot stiffness about the pivot
b_r | Robot damping about the pivot
T_r | Total robot torque about the pivot point, i.e., T_r = u − k_r·θ1 − b_r·θ̇1
The dynamical model of the physical human–robot interaction system was formulated as a closed-loop feedback control system. The block diagram of the closed-loop system is shown in Fig. 3. The plant P represents the mechanical dynamics of the system in Fig. 2, linearized about the upright equilibrium point [25]. The kinematic vector x = [θ1, θ2, θ̇1, θ̇2]^T represents the states of the plant P. Active human motor control was modeled as a gain feedback control law, T_h = −Ky (Fig. 3), where the kinematic vector y represents the output measurement in the seated balance test.
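A minimal sketch of this closed loop in Python is shown below. The matrix shapes and helper names are illustrative; the actual A and B matrices come from the linearized plant P in Ref. [25].

```python
import numpy as np
from scipy import signal

def simulate_closed_loop(A, B_u, B_h, K, u, t):
    """Simulate the Fig. 3 loop: plant x_dot = A x + B_u u + B_h T_h with human
    feedback T_h = -K x; states x = [th1, th2, th1_dot, th2_dot]."""
    A_cl = A - B_h @ K                           # feedback closes the loop
    C = np.eye(4)[:2]                            # measured outputs: th1, th2
    sys = signal.StateSpace(A_cl, B_u, C, np.zeros((2, 1)))
    _, y, _ = signal.lsim(sys, u, t)             # response to the perturbation
    return y                                     # columns: th1(t), th2(t)
```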
The model parameters were estimated by minimizing a normalized output error,

$$\hat{p} = \arg\min_{p} \sum_{k} \left\| W\left(\hat{y}_k(p) - y_k\right) \right\|^2,$$

where ŷ(p̂) is the output of the model using the parameter estimate p̂, y is the experimental measurement (θ1 and θ2), and W is a diagonal matrix constructed from the maximum absolute value of each measured output (used for normalization). Twenty initial points selected by a Latin hypercube method were used for estimating p̂.
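A minimal sketch of this multistart preliminary estimation follows, assuming a generic simulate(p) function that returns the model output for a candidate parameter vector; the bounds, optimizer, and normalization details are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import qmc

def fit_preliminary(simulate, y_meas, bounds, n_starts=20, seed=0):
    """Fit model parameters p by weighted output error, starting from 20
    Latin-hypercube points and keeping the best local solution."""
    W = np.diag(1.0 / np.max(np.abs(y_meas), axis=0))   # normalize by max |y|
    def cost(p):
        e = (simulate(p) - y_meas) @ W                  # weighted output error
        return np.sum(e ** 2)
    lo, hi = np.array(bounds).T
    sampler = qmc.LatinHypercube(d=len(lo), seed=seed)
    starts = qmc.scale(sampler.random(n_starts), lo, hi)
    results = [minimize(cost, p0, bounds=list(zip(lo, hi))) for p0 in starts]
    return min(results, key=lambda r: r.fun).x          # best of the 20 starts
```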
2.4 Inverse Linear Quadratic Regulator.
Our ILQR problem is to estimate the weighting matrices Q and R of the LQR using input–output measurements of the seated balance test. Here, the estimated weighting matrices are optimal in the sense that the information in the experimental data is used as much as possible. We will solve the general case of this ILQR problem, in which both weighting matrices Q and R are unknown.
The objective can be stated as follows.
The previous formulation of the ILQR [15] used a gain penalty, where the objective was minimizing the error between the LQR gain K(Q, R) and the preliminarily estimated gain K̂. In this study, we opt to use an output penalty, minimizing the error between the model output ŷ and the measured output y, in order to achieve a better fit between the model with LQR optimal feedback and the experimental measurements.
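In our notation, the two penalties can be contrasted as

$$J_{\mathrm{gain}}(Q,R) = \left\| K_{\mathrm{LQR}}(Q,R) - \hat{K} \right\|^2, \qquad J_{\mathrm{output}}(Q,R) = \sum_k \left\| \hat{y}_k(Q,R) - y_k \right\|^2,$$

where K̂ is the preliminarily estimated gain and ŷ_k(Q, R) is the model output at time step k under the LQR feedback computed from (Q, R); only the output penalty touches the measured data y_k.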
2.5 Gradient-Descent-Based Solution.
Here, the measurement noise is white and uncorrelated in time, and the true model parameter vector is as defined in Eq. (4).
Repeated substitution yields a nonrecursive solution for the model output at all time steps given the LQR gain K(Q, R); note that the first element of the stacked output sequence is determined by the initial condition.
By solving this Sylvester equation, we obtain the derivative of the Riccati solution, from which we determine the derivative of the cost with respect to each element of the weighting matrices. This provides the directional derivative used to minimize the cost in Eq. (11).
which is iterated a preset number of times. We introduced a projection rule for the weighting matrices to maintain Q positive semidefinite and R positive definite. If an update compromises these conditions, the rule adjusts the matrix to the nearest matrix complying with them.
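The sketch below illustrates one projected-gradient iteration. For brevity it uses finite differences over the diagonal of Q in place of the paper's exact Sylvester-equation derivatives, and the step size and output_cost callback are illustrative assumptions (R would be updated analogously, with a strictly positive eigenvalue floor).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def project_psd(M, eps=0.0):
    """Project a symmetric matrix onto {M : M >= eps*I} by clipping eigenvalues."""
    w, V = np.linalg.eigh((M + M.T) / 2)
    return V @ np.diag(np.maximum(w, eps)) @ V.T

def ilqr_step(Q, R, A, B, output_cost, lr=1e-2, h=1e-6):
    """One projected-gradient update of Q (diagonal entries only, for brevity).
    output_cost(K) should return the output-fit error ||y_hat(K) - y||^2
    evaluated against the experimental data."""
    def J(Qm, Rm):
        P = solve_continuous_are(A, B, Qm, Rm)   # Riccati solution for (Qm, Rm)
        K = np.linalg.solve(Rm, B.T @ P)         # corresponding LQR gain
        return output_cost(K)
    J0 = J(Q, R)
    grad = np.zeros_like(Q)
    for i in range(Q.shape[0]):                  # finite-difference gradient
        dQ = np.zeros_like(Q)
        dQ[i, i] = h
        grad[i, i] = (J(Q + dQ, R) - J0) / h
    return project_psd(Q - lr * grad)            # keep Q positive semidefinite
```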
2.6 Primary Outcomes and Comparisons.
The first primary outcome of interest is the human control intent. However, clear information about the intent may not be readily apparent from the estimated weighting matrix Q itself. To address this, we applied a similarity transformation to Q and the state x to obtain a diagonal matrix Λ, which provides a clear representation of the intent. The transformed state z is a linear combination of the elements of x. Let V be an orthogonal matrix whose columns are the eigenvectors of Q, and let Λ be a square diagonal matrix whose diagonal elements are the eigenvalues corresponding to each eigenvector. Then, to satisfy x^T Q x = z^T Λ z, the transformation is given by z = V^T x and Λ = V^T Q V. The eigenvector corresponding to the largest eigenvalue represents the most dominant linear combination of body angles and velocities during the task; in other words, it represents the most dominant human motor control intent.
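As a sketch, this eigendecomposition step amounts to the following (the state ordering [θ1, θ2, θ̇1, θ̇2] is our convention):

```python
import numpy as np

def dominant_intent(Q):
    """Extract the dominant intent from a symmetric PSD weighting matrix Q.
    With Q = V L V^T (V orthogonal, L diagonal), x^T Q x = z^T L z for
    z = V^T x, so the eigenvector with the largest eigenvalue is the most
    heavily weighted combination of [th1, th2, th1_dot, th2_dot]."""
    w, V = np.linalg.eigh(Q)          # eigenvalues in ascending order
    return w[-1], V[:, -1]            # largest eigenvalue and its eigenvector
```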
A higher VAF value indicates a better fit, with 100% indicating a perfect match between the estimated model output ŷ and the measured output y.
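The paper does not restate its VAF formula, so the sketch below uses the common definition VAF = (1 − var(y − ŷ)/var(y)) × 100%, per output channel:

```python
import numpy as np

def vaf(y, y_hat):
    """Variance accounted for, in percent, for each output column
    (e.g., th1 and th2); 100 means a perfect match."""
    return 100.0 * (1.0 - np.var(y - y_hat, axis=0) / np.var(y, axis=0))
```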
We set a VAF threshold of 50% to remove any case or subject from further analysis. This threshold was selected to balance the tradeoff between how well the model describes the data and how many cases/subjects would be excluded from the analysis. Although this is not the topic of this article, we would like to elaborate for future endeavors. Most balance biomechanics studies model the body as a single degree of freedom (DoF), with VAF values close to ours for θ1 [27,28]. In our group, we started with a two-degree-of-freedom model without reporting the VAF or another fit measure [3]. In general, it is expected that the fit of θ1 exceeds that of θ2, since we demonstrated that the reliability measures of θ1 were better than those of θ2 [22]. One reason for the low VAF of θ2 is that this DoF was not perturbed; therefore, its response was more confounded by other physiological or nonlinear control factors, such as muscle co-activation or gain scheduling.
To investigate the differences between the two ILQR methods (output penalty versus gain penalty), we employed repeated-measures multivariate analysis of variance (MANOVA) [29]; details about MANOVA's procedures and statistical calculations can be found in Ref. [30]. This allowed us to compare the multivariable weights in the dominant eigenvector, as well as the goodness of fit, between the two methods, treating the method as the repeated measure. In cases where MANOVA showed a significant difference, a univariate repeated-measures ANOVA was performed. We opted for a multivariate repeated-measures ANOVA instead of a paired t-test due to the nonscalar nature of the variables under consideration. Repeated-measures ANOVA is typically used when multiple measurements are taken on the same subjects across various conditions or time points; while most studies apply it to the same method at different time points, here the repeated measures are the different methods applied to the same subject/trial. Statistical significance was set at p ≤ 0.05. We conducted this statistical analysis using SPSS version 26 (IBM, Armonk, NY).
For ILQR, the quantities in Eqs. (15) and (17) had to be computed several times per iteration, which imposes a high computational load. Therefore, we downsampled the data from 30,000 to 300 samples per trial. The VAF difference before and after downsampling was at most 5%, indicating no meaningful loss in the fitting.
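One way to realize such a 100× downsampling with anti-alias filtering is sketched below; the paper does not state its exact resampling method, so this is an assumption.

```python
from scipy import signal

def downsample_100x(y):
    """Reduce a (30000, n) trial array to (300, n). Two cascaded
    decimate-by-10 stages apply anti-alias filtering while keeping each
    IIR filter stage well conditioned."""
    return signal.decimate(signal.decimate(y, 10, axis=0), 10, axis=0)
```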
3 Comparative Results and Discussion
We analyzed a total of 90 cases (30 subjects × 3 visits/subject). Of these, two subjects (six cases) were excluded due to invalid height or weight records. After conducting the preliminary estimation of the model parameters and applying the ILQR algorithm, 33 cases (19 subjects) remained for further analysis. We opted to draw insights from subjects whose data align well with the model's descriptions; including subjects with poor fit could obscure valuable observations obtained from subjects with good fit. We recognize that the inability to achieve a reasonable goodness of fit in certain subjects is a limitation of the current model. One plausible explanation is entrapment in a local minimum, as depicted in Fig. 4, where the weights at the initial point show minimal change from the final solution. Thus, we acknowledge that the current inverse LQR model exhibits limited generalizability across subjects. The analysis results for all subjects, excluding those with invalid records, can be found in the Supplemental Material on the ASME Digital Collection.
The output-penalty ILQR yielded an average VAF of 88.31% for θ1 and 66.92% for θ2, while the gain-penalty ILQR resulted in lower values of 87.40% for θ1 and 61.54% for θ2. (For reference, the VAF of the preliminary estimate was 85.89% and 56.47%, respectively.) Our model does not account for physiological nonlinearities and other unmodeled dynamics. Given that the VAF of θ1 is approximately 88%, the model may have reached its maximum capacity to describe the data, indicating limited potential for further improvement. Van Drunen et al. [27] implemented various biomechanical models for a 1DoF trunk task, reporting a maximum VAF of 89.5% in their relax task. Our VAF improvement for θ2 (relative to the preliminary estimate) was about four times greater than that for θ1, suggesting that there is still considerable room for enhancing the fit of θ2.
Regarding the first primary outcome, comparing the weights in the dominant eigenvector, the MANOVA showed no effect of method (p = 0.38) (Fig. 4). This suggests that there is no significant difference in the inferred (dominant) intent between the two ILQR methods. To investigate this further, we included the dominant eigenvector from the starting point of the ILQR algorithm. Comparing the three methods (the two ILQR methods and the starting point), repeated-measures MANOVA again showed no significant effect of method (p = 0.26) (Fig. 4). This indicates that the gradient-descent formulation of ILQR renders the inferred intent dependent on the starting point of the algorithm, regardless of the cost function. A detailed example of the first primary outcome can be found in the Supplemental Material on the ASME Digital Collection. We observed in Fig. 4 that three of the four weights exhibited much less variation than the remaining weight, which can also take on negative values. Consequently, we hypothesized that this weight is associated with different balance strategies within healthy subjects; further analysis of this will be addressed in future studies.
In contrast, the second primary outcome, the VAF, showed a significant effect of method (p = 0.004) in the MANOVA. Post hoc univariate ANOVAs on the VAF of θ1 and θ2 were also significant (p = 0.007 and 0.001, respectively) (Fig. 5). This confirms that the goodness of fit is better with the output-penalty ILQR, supporting our initial hypothesis. This outcome is expected, since the cost function in the output-penalty ILQR directly minimizes the output fit error.
Notably, the fitting of θ1 was significantly better than that of θ2 within each method (Fig. 5). This result is consistent with the reliability study of this experiment [22], which showed higher reliability measures for θ1 than for θ2. A potential explanation lies in the fact that θ1 is an actuated DoF with a known perturbation torque, whereas θ2 is not actuated, making its response more susceptible to unknown neural system noise. Moreover, θ2 exhibits smaller response amplitudes than θ1, resulting in a lower signal-to-noise ratio.
We acknowledge the capability of subjects to learn environmental dynamics. However, we decided to model only pure feedback pathways, cautious of the potential for overfitting with the inclusion of feedforward paths [31]. Furthermore, subjects' learning of environmental dynamics does not represent clinically significant differences [32].
The gain-penalty ILQR first uses the output data to estimate the model parameters (including the feedback gain K̂), and then optimizes Q and R so that the LQR gain matches the estimated gain, i.e., by minimizing the error between K(Q, R) and K̂ [15]. In this process of estimating K̂ first and obtaining Q and R from it later, the estimation error in K̂ can propagate. The output-penalty ILQR, on the other hand, uses the measured output directly to obtain Q and R, minimizing this error propagation and resulting in a better goodness of fit in terms of VAF.
Despite these significant findings, there are some limitations to this study. First, we did not collect electromyography data during the test, which could have provided valuable insights into muscle activations. Novice strategies might have relied more on muscle co-contraction, as previous research has shown that performance declines with increasing muscle co-contraction [33]. Future studies should investigate whether our observed kinematic patterns correspond to distinct electromyography patterns. Second, some cases were excluded due to poor fitting, particularly with regard to the upper body angle. Although we improved the fitting by introducing the weight W into the preliminary estimation and adopting the output-penalty ILQR, there is still room for improvement, and future research should explore methods to enhance the accuracy of the model fitting process. Third, our current model includes the physical human–robot interaction (a 2DoF pendulum) and linear feedback of positions and velocities, reflecting the proprioceptive feedback pathway; muscle dynamics and sensory delay were not explicitly modeled. However, in our previous research [34], where we incorporated muscle dynamics and sensory delay, we found that the inferred intent was predominantly influenced by the kinematic states rather than by delay or muscle dynamics. Therefore, we consider the current model a good approximation.
4 Conclusion
An output-penalty ILQR was derived in this paper. Unlike a previous study with gain-penalty ILQR [15], we used the experimental data directly in the cost function. Our method can be applied not only to the estimation of human control intent but also to the reverse engineering of black-box controllers. By employing the output-penalty ILQR method on the seated balance experimental data, we observed a meaningful enhancement in goodness of fit: the average VAF improvement for θ1 and θ2 was 1.04% and 8.74%, respectively, expressed as improvement ratios over the gain-penalty ILQR approach. There was no significant difference in the inferred (dominant) intent between the two methods, which we attribute to both methods' dependence on the starting point of the algorithm. This work is a step toward investigating control intent differences between healthy subjects and subjects with altered motor control, e.g., low back pain patients.
Acknowledgment
The contents are solely the responsibility of the authors and do not necessarily represent the official views of NCCIH.
Funding Data
National Center for Complementary and Integrative Health (NCCIH) at the National Institutes of Health (Grant No. U19AT006057; Funder ID: 10.13039/100008460).
National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00221762; Funder ID: 10.13039/501100003725).
Data Availability Statement
The datasets generated and supporting the findings of this article are available from the corresponding author upon reasonable request.