Solving optimal design problems through crowdsourcing faces a dilemma: On the one hand, human beings have been shown to be more effective than algorithms at searching for good solutions to certain real-world problems with high-dimensional or discrete solution spaces; on the other hand, the cost of setting up crowdsourcing environments, the uncertainty in the crowd's domain-specific competence, and the lack of commitment of the crowd contribute to the lack of real-world application of design crowdsourcing. We are thus motivated to investigate a solution-searching mechanism where an optimization algorithm is tuned based on human demonstrations of solution searching, so that the search can be continued after human participants abandon the problem. To do so, we model the iterative search process as a Bayesian optimization (BO) algorithm and propose an inverse BO (IBO) algorithm to find the maximum likelihood estimators (MLEs) of the BO parameters based on human solutions. We show through a vehicle design and control problem that the search performance of BO can be improved by recovering its parameters based on an effective human search. Thus, IBO has the potential to improve the success rate of design crowdsourcing activities, by requiring only good search strategies instead of good solutions from the crowd.

Introduction

Challenges and Opportunities for Design Crowdsourcing.

Optimal design problems often have large solution spaces and highly nonconvex objectives and constraints, inhibiting effective solution searching through existing optimization algorithms. Some of these problems, however, have been quite successfully (yet heuristically) solved by human beings. Notable examples include protein folding [1,2], RNA synthesis [3,4], genome sequence alignment [5], robot trajectory planning [6], and others [7–9]. The superior performance of some human beings at solving these problems demonstrates the advantages of human intelligence, which are supported by cognitive science and neuroscience findings [10] (see discussion in Sec. 5.1). However, despite a handful of success stories, applications of crowdsourcing to real-world design problems have yet to overcome several practical barriers. The cost of setting up problem-dependent crowdsourcing environments, the lack of commitment from crowd members, and uncertainty in domain-specific crowd competence have all contributed to its lack of adoption, while the growing availability of computation resources often makes straightforward optimization or brute-force search a more convenient approach.

Our earlier study [8] highlighted these challenges for design crowdsourcing: We gamified a vehicle design and control problem (called the "ecoRacer" problem in what follows) where the objective is to complete a track with the minimal energy consumption within a time limit, by finding the optimal final drive ratio of the vehicle and the control policy for acceleration and regenerative braking. The game was broadcast on social media and received more than 2000 plays from 124 unique players within the first month. Results showed that (1) the marginal improvement in the average game score of the crowd over an algorithm does not necessarily justify the high cost of developing crowdsourcing games and (2) only a few players were committed to the search for more than 50 iterations, and fewer still managed to outperform the computer-found solution at all (see summary in Fig. 1).

Fig. 1
(a) Summary of player participation and performance and (b) results from the game, showing that while most players failed to outperform the Bayesian optimization algorithm, some of them could identify good solutions early on. (Reproduced with permission from Ren et al. [8,11]. Copyright 2016 and 2015 by ASME.)

Nonetheless, human search results displayed search patterns significantly different from those of the algorithm. In particular, quite a few players showed rapid early improvement in performance, beyond the average performance of the computer, before they quit the game without reaching a solution close to the theoretical optimum. This observation is consistent with existing research (see, for example, Khatib et al. [2] on a human-designed protein folding algorithm having a short-term advantage over a standard algorithm) and suggests that while few people care to actually find the "best solution," their early demonstrations of how they search for a better solution may still be valuable. Specifically, we hypothesize that if a computer algorithm can be tuned to mimic these demonstrations, it can serve as a replacement for human solvers in their absence, continuing the search in an effective way without ever abandoning the problem.

Learning to Search.

This paper aims to test the previously mentioned hypothesis. We model a human solver's search behavior through Bayesian optimization (BO, also known as efficient global optimization) [12,13]. The algorithm iterates between two steps: (1) estimating the shape of the problem space, based on previous solutions and the corresponding performances, using a Gaussian process (GP) model [14] and (2) creating a new solution based on this estimate (details in Sec. 2). While BO is not provably the underlying mechanism humans use, we hypothesize that the algorithm can be tuned to mimic the results of successful human search strategies, specifically in comparison with other popular gradient- and nongradient-based optimization algorithms. The key assumption in modeling human search behavior through BO is the use of a GP to account for human beings' learning of input–output relationships (called "function learning" in psychology). This assumption is supported by various findings: In a recent review of function-learning models, Lucas et al. [15] showed that the two major schools of models, i.e., rule- and similarity-based, can be unified through a Gaussian process.1 As discussed in Wilson et al. [16], the evidence that Occam's Razor plays an important role in human prediction also suggests that GP is an appropriate model for function learning, as GP reduces model complexity by construction [17]. Empirically, Borji and Itti [18] showed that BO, with the use of GP, has the closest convergence performance to human searches when applied to one-dimensional (1D) optimization problems. In fact, many higher-dimensional problems that human beings naturally solve, such as locomotion planning, have also been successfully solved through the use of GP [19–22].

Under this modeling assumption, we investigate how BO parameters can be estimated so that the algorithm best matches a human solver's search trajectory, i.e., the sequence of solution-performance pairs. To this end, we introduce an inverse BO (IBO) algorithm to derive the maximum likelihood estimators (MLEs) for the BO parameters and discuss challenges in its implementation (see Sec. 3). Validation of the IBO algorithm takes two steps. We first use a simulation study to show that IBO can successfully estimate the BO parameters used in generating a search trajectory (Sec. 3.2). We then show through the ecoRacer problem that the search performance of BO can be improved when its parameters are modified by observing an effective human search and applying IBO (Sec. 4). The results provide evidence that IBO can accelerate a search using only good search strategies, without needing a large number of good human solutions. Thus, incorporating IBO in design crowdsourcing may lower the requirement on crowd commitment and so increase its chance of success. Limitations of the current IBO implementation, and how they might be relaxed, are discussed in depth in Sec. 5.

Related Work.

It is important to note that the focus of this paper is on the design of optimization algorithms aided by human demonstrations, rather than the derivation of qualitative explanations of the strengths and limitations of human design strategies. There have been numerous studies from the latter category in recent years (see Refs. [23–28] for example). This paper is also distinguished from studies that propose human-inspired optimization algorithms (see Refs. [29–31] for example), in that the learning of the optimization algorithm in our case is conducted by another algorithm, rather than by human researchers. From this aspect, our study is related to studies in learning-to-learn [32] where algorithms (e.g., for gradient-based optimization [33] and optimal control [34]) are tuned and controlled by a higher-level algorithm. In such work, however, the algorithms are often improved purely computationally through reinforcement learning (RL) by solving similar problems repeatedly. Due to the use of human demonstrations, our paper is also related to inverse reinforcement learning (IRL) (see discussion in Sec. 5.2), where human control strategies are used for defining and finding optimal control strategies.

Preliminaries on Bayesian Optimization

This section provides some background knowledge on BO to facilitate the discussion on IBO in Sec. 3.

Terminologies and Notations.

Let an optimization problem be $\min_{\mathbf{x}\in\mathcal{X}} f(\mathbf{x})$, where $\mathcal{X}\subseteq\mathbb{R}^p$ is the solution space. A search trajectory with $K$ iterations can be represented by $h_K := \langle X_K, \mathbf{f}_K\rangle$, where $X_K$ and $\mathbf{f}_K$ represent the collection of $K$ samples in $\mathcal{X}$ and their objective values, respectively. $h_0 := \langle X_0, \mathbf{f}_0\rangle$ represents an initial exploration set with $K_0$ samples. Human strategy is represented by algorithmic parameters $\lambda$ that govern the search behavior: During the search, each new solution $\mathbf{x}_{k+1}$ (for $k = 0, \ldots, K-1$) is determined by $h_k := \langle X_k, \mathbf{f}_k\rangle$ and $\lambda$ through maximizing a merit function with respect to $\mathbf{x}$: $\mathbf{x}_{k+1} = \arg\max_{\mathbf{x}\in\mathcal{X}} Q(\mathbf{x}; h_k, \lambda)$. The functional forms of the merit function $Q(\mathbf{x})$ will be introduced in Secs. 2.2 and 3. We also define $\Lambda := \mathrm{diag}(\lambda)$ and its estimator as $\hat{\Lambda} := \mathrm{diag}(\hat{\lambda})$.

The BO Algorithm.

We briefly review the BO algorithm, to explain how each new sample x is drawn based on the merit function Q(x), itself defined by previous samples. Knowing this procedure is necessary for understanding the inverse BO algorithm, where we estimate the most likely BO parameters for a given trajectory of samples.

Bayesian optimization contains two major steps in each iteration: For a collection of samples of a black-box function, a GP model is updated; the merit function is then formulated based on the GP model, and the next sample is chosen by maximizing the merit.

Model update: The algorithm first updates a GP model to predict objective values, based on the current observations $h_k$ and Gaussian parameters $\lambda$. Without considering random noise in evaluating the objective, the GP prediction can be derived as $\hat{f}(\mathbf{x}; h_k, \lambda) = b + \mathbf{r}^T \mathbf{R}^{-1}(\mathbf{f}_k - \mathbf{1}b)$, where $b = (\mathbf{1}^T\mathbf{R}^{-1}\mathbf{f}_k)/(\mathbf{1}^T\mathbf{R}^{-1}\mathbf{1})$, $\mathbf{r}$ is a column vector with elements $r_i = \exp\left(-(\mathbf{x}-\mathbf{x}_i)^T\Lambda(\mathbf{x}-\mathbf{x}_i)\right)$ for $i = 1,\ldots,k$, $\mathbf{R}$ is a symmetric matrix with $R_{ij} = \exp\left(-(\mathbf{x}_i-\mathbf{x}_j)^T\Lambda(\mathbf{x}_i-\mathbf{x}_j)\right)$ for $i,j = 1,\ldots,k$, and $\mathbf{1}$ is a column vector of ones. Without prior knowledge, the MLE of $\lambda$ for the GP model can be derived by solving
$$\hat{\lambda}_{\mathrm{GP}} = \arg\max_{\lambda}\;\left(-\frac{n}{2}\log\sigma^2 - \frac{1}{2}\log|\mathbf{R}|\right), \tag{1}$$
where $\sigma^2 = (\mathbf{f}_k - \mathbf{1}b)^T\mathbf{R}^{-1}(\mathbf{f}_k - \mathbf{1}b)/n$ is the MLE of the GP variance.

Sampling the solution space: The second step is to determine the next sample using the GP model. A common sampling strategy is to pick the new solution in $\mathcal{X}$ that maximizes the expected improvement from the current best objective value $f_{\min} := \min \mathbf{f}_k$ (assuming a minimization problem): $Q_{\mathrm{EI}}(\mathbf{x}; h_k, \lambda) = (f_{\min} - \hat{f})\,\Phi\left((f_{\min} - \hat{f})/\sigma\right) + \sigma\,\phi\left((f_{\min} - \hat{f})/\sigma\right)$. Here, $\Phi(\cdot)$ and $\phi(\cdot)$ are the cumulative distribution function and probability density function of the standard normal distribution, respectively. The new sample is thus obtained by solving
$$\mathbf{x}_{k+1} = \arg\max_{\mathbf{x}\in\mathcal{X}} Q_{\mathrm{EI}}(\mathbf{x}; h_k, \lambda). \tag{2}$$
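To make these two steps concrete, the following is a minimal NumPy sketch of the GP predictor and the expected-improvement merit function described above. It is not the implementation used in this work; the function and variable names are ours, and the predictive standard deviation uses a common simplified form that omits the correction from estimating $b$.

```python
import numpy as np
from scipy.stats import norm

def gp_predict(x, X, f, lam):
    """GP prediction at x given samples (X, f) and diagonal correlation
    parameters lam (a sketch of the model-update step)."""
    k = len(f)
    corr = lambda a, b: np.exp(-(a - b) @ (lam * (a - b)))
    R = np.array([[corr(X[i], X[j]) for j in range(k)] for i in range(k)])
    R += 1e-8 * np.eye(k)                          # jitter for numerical stability
    r = np.array([corr(x, X[i]) for i in range(k)])
    Rinv = np.linalg.inv(R)
    ones = np.ones(k)
    b = (ones @ Rinv @ f) / (ones @ Rinv @ ones)   # GP constant mean
    mean = b + r @ Rinv @ (f - b * ones)
    sigma2 = (f - b * ones) @ Rinv @ (f - b * ones) / k
    # simplified predictive variance; the term from estimating b is omitted
    std = np.sqrt(max(sigma2 * (1.0 - r @ Rinv @ r), 1e-12))
    return mean, std

def expected_improvement(x, X, f, lam):
    """Q_EI(x; h_k, lambda) for a minimization problem (sampling step)."""
    mean, std = gp_predict(x, X, f, lam)
    fmin = np.min(f)
    z = (fmin - mean) / std
    return (fmin - mean) * norm.cdf(z) + std * norm.pdf(z)
```

In a full BO loop, the next sample $\mathbf{x}_{k+1}$ would then be obtained by maximizing expected_improvement over $\mathcal{X}$, e.g., with a multistart gradient method as used in Sec. 3.2.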

Figure 2 demonstrates four iterations of BO in optimizing a 1D function, with the GP model and the expected improvement function updated in each iteration. Note that, similar to human search behavior, BO is a stochastic process: First, the choice of the new design is stochastic, with better designs being more likely to be chosen;2 and second, the initial exploration h0 can be stochastic when it is modeled by a random sampling scheme, e.g., Latin hypercube sampling (LHS, see Ref. [12] for details).

Fig. 2
Four iterations of BO on a 1D function. Obj: The objective function. GP: Gaussian process model. EI: Expected improvement function. Image is modified from Ref. [11].

Inverse Bayesian Optimization

We consider human solution search to consist of two stages: A few exploratory searches are first conducted to acquire a preliminary understanding of the problem, before the execution of BO follows. For example, a player may spend a few trials getting familiar with a new game before thinking about strategies to improve their score. IBO minimizes the sum of two costs corresponding to the exploration and BO stages, respectively. By doing so, it finds the most likely explanation of the underlying search strategy.

Specifically, IBO estimates $\lambda$, along with the size of the initial exploration set $K_0$, given the trajectory $h_K$. To do so, we introduce and minimize a cost function consisting of the exploration cost for $h_0$, denoted as $L_{\mathrm{INI}}$, and the BO cost for the rest of $h_K$, denoted as $L_{\mathrm{BO}}$. We define $L_{\mathrm{INI}} := -\log\left(D^{K_0} p(X_0)\right)$, where $p(X_0)$ is the joint probability of the exploration set and $D := |\mathcal{X}|$ is the size of the solution space; and $L_{\mathrm{BO}} := -\log\left(D^{K-K_0} p(h_K\setminus h_0 \mid h_0)\right) = -\sum_{k=K_0}^{K-1}\log\left(D\, p(\mathbf{x}_{k+1}\mid h_k)\right)$, where $p(\mathbf{x}_{k+1}\mid h_k)$ is the density for choosing $\mathbf{x}_{k+1}$ conditioned on $h_k$. Here, $\log(\cdot)$ stands for the natural logarithm.

The derivation of $L_{\mathrm{INI}}$ and $L_{\mathrm{BO}}$ is as follows: To calculate $L_{\mathrm{INI}}$, we assume that each new sample during the exploration phase, $\mathbf{x}_i$ for $i = 1,\ldots,K_0$, tends to maximize its minimum Euclidean distance $d(\mathbf{x}_i, X_{<i})$ to the previous samples $X_{<i}$; this is referred to as the max–min sampling scheme in what follows. Let the joint probability of the exploration set be $p(X_0) = p(\mathbf{x}_1)p(\mathbf{x}_2\mid\mathbf{x}_1)\cdots p(\mathbf{x}_{K_0}\mid X_{<K_0})$, where each conditional probability follows a Boltzmann distribution: $p(\mathbf{x}_i\mid X_{<i}) = \exp\left(\alpha_{\mathrm{INI}}\, d(\mathbf{x}_i, X_{<i})\right)/Z_{\mathrm{INI}}(X_{<i}, \alpha_{\mathrm{INI}})$. Here, the scalar $\alpha_{\mathrm{INI}}$ represents how strictly each sample from $X_0$ follows the max–min sampling scheme, and $Z_{\mathrm{INI}}(X_{<i}, \alpha_{\mathrm{INI}}) = \int_{\mathcal{X}}\exp\left(\alpha_{\mathrm{INI}}\, d(\mathbf{x}, X_{<i})\right)d\mathbf{x}$ is a partition function that ensures that $\int_{\mathcal{X}} p(\mathbf{x}_i\mid X_{<i})\,d\mathbf{x}_i = 1$. Note that the first sample in the exploration set is considered to be uniformly drawn, and thus its contribution to the cost (a constant) can be omitted.

To calculate $L_{\mathrm{BO}}$, the conditional probability density of sampling $\mathbf{x}\in\mathcal{X}$ based on the current $h_k$ can be similarly modeled as a Boltzmann distribution
$$p(\mathbf{x}\mid h_k) = \frac{\exp\left(\alpha_{\mathrm{BO}}\, Q_{\mathrm{EI}}(\mathbf{x}; h_k, \lambda)\right)}{Z_{\mathrm{BO}}(h_k, \lambda, \alpha_{\mathrm{BO}})} \tag{3}$$

where $Z_{\mathrm{BO}}(h_k, \lambda, \alpha_{\mathrm{BO}}) = \int_{\mathcal{X}}\exp\left(\alpha_{\mathrm{BO}}\, Q_{\mathrm{EI}}(\mathbf{x}; h_k, \lambda)\right)d\mathbf{x}$ is also a partition function. The parameter $\alpha_{\mathrm{BO}}$ plays a similar role to $\alpha_{\mathrm{INI}}$. For simplicity, we define $\tilde{l}_i := -\log\left(D\,p(\mathbf{x}_i\mid X_{<i})\right)$ and $l_k := -\log\left(D\,p(\mathbf{x}_{k+1}\mid h_k)\right)$, so that $L_{\mathrm{INI}} = \sum_{i=1}^{K_0}\tilde{l}_i$ and $L_{\mathrm{BO}} = \sum_{k=K_0}^{K-1} l_k$. A lower value of $\tilde{l}$ or $l$ represents a higher probability density of the current sample being drawn by max–min sampling or BO, respectively, and a value of zero indicates that the sample can be considered as uniformly drawn.
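As an illustration, the per-sample exploration cost $\tilde{l}_i$ could be evaluated as in the sketch below, which approximates $Z_{\mathrm{INI}}$ by plain Monte Carlo as described later in Sec. 3.1. This is a sketch under our own assumptions (a box-shaped solution space); the function and argument names are ours and not from the paper's implementation.

```python
import numpy as np

def maxmin_cost(x_i, X_prev, alpha_ini, lower, upper, n_mc=10000, rng=None):
    """l~_i = -log(D * p(x_i | X_<i)) under the max-min exploration model,
    with Z_INI approximated by Monte Carlo over the box X (a sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    D = np.prod(upper - lower)                                   # size of the solution space
    dist = lambda x: np.min(np.linalg.norm(X_prev - x, axis=1))  # d(x, X_<i)
    xs = rng.uniform(lower, upper, size=(n_mc, len(x_i)))
    # Z_INI = int_X exp(alpha_ini * d(x, X_<i)) dx  ~  D * average of the integrand
    Z_ini = D * np.mean([np.exp(alpha_ini * dist(x)) for x in xs])
    log_p = alpha_ini * dist(np.asarray(x_i, float)) - np.log(Z_ini)
    return -(np.log(D) + log_p)
```

The BO-phase cost $l_k$ can be assembled analogously once an estimate of $Z_{\mathrm{BO}}$ is available (Sec. 3.1 and Eq. (A1) in the Appendix).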

Inverse BO solves the following problem to derive λ̂:
$$\hat{\lambda}, \hat{\alpha}_{\mathrm{INI}}, \hat{\alpha}_{\mathrm{BO}}, \hat{K}_0 = \arg\min_{\lambda,\,\alpha_{\mathrm{INI}},\,\alpha_{\mathrm{BO}},\,K_0}\; L_{\mathrm{INI}} + L_{\mathrm{BO}} \tag{4}$$

Note that to find the optimal $K_0$ for any given $\alpha_{\mathrm{INI}}$, $\alpha_{\mathrm{BO}}$, and $\lambda$, one can first calculate the optimal $\tilde{l}_i$ and $l_k$ for $i, k = 2, \ldots, K$ with respect to $\alpha_{\mathrm{INI}}$, $\alpha_{\mathrm{BO}}$, and $\lambda$, and then scan $K_0 = 2, \ldots, K$ to find the lowest value of $L_{\mathrm{INI}} + L_{\mathrm{BO}}$. The scan starts at $K_0 = 2$ because it is not meaningful to initialize BO with a single sample.
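The scan over $K_0$ can be implemented directly once the per-sample costs have been evaluated for a given $(\lambda, \alpha_{\mathrm{INI}}, \alpha_{\mathrm{BO}})$. The sketch below assumes those costs have been stored in dictionaries keyed by the sample index; the function and variable names are ours.

```python
def scan_K0(l_tilde, l_bo, K):
    """Find K_0 in {2, ..., K} minimizing L_INI + L_BO (inner loop of Eq. (4)).

    l_tilde[j]: cost of the j-th sample (j = 2..K) under max-min exploration
    l_bo[j]:    cost of the j-th sample under BO given the preceding samples
    The first sample is treated as uniformly drawn and contributes zero cost."""
    best_cost, best_K0 = float("inf"), None
    for K0 in range(2, K + 1):
        L_ini = sum(l_tilde[j] for j in range(2, K0 + 1))   # samples 2..K0
        L_bo = sum(l_bo[j] for j in range(K0 + 1, K + 1))   # samples K0+1..K
        if L_ini + L_bo < best_cost:
            best_cost, best_K0 = L_ini + L_bo, K0
    return best_cost, best_K0
```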

Numerical Integration for ZBO.

The calculation of each $l$ requires an approximation of the integral $Z_{\mathrm{BO}}(h_k, \lambda, \alpha_{\mathrm{BO}})$, where the integrand $Q_{\mathrm{EI}}(\mathbf{x}; h_k, \lambda)$ is usually a highly nonconvex function with respect to $\mathbf{x}$, with function values dropping significantly around local maxima (see Fig. 2 for an example). Thus, we propose to approximate $Z_{\mathrm{BO}}$ with importance sampling using a customized proposal density function that combines a uniform distribution with density $p(\mathbf{x}) = 1/D$ and a multivariate normal distribution with density $q(\mathbf{x}) = (2\pi\sigma_I^2)^{-p/2}\exp\left(-\lVert\mathbf{x}-\boldsymbol{\mu}\rVert^2/(2\sigma_I^2)\right)$, where $\sigma_I$ and $\boldsymbol{\mu}$ are the parameters of $q(\mathbf{x})$. The uniform distribution is used to sample over $\mathcal{X}$, while the normal distribution helps to improve the approximation by capturing the potential peak at the current sample $\mathbf{x}_{k+1}$. Thus, we set $\boldsymbol{\mu} := \mathbf{x}_{k+1}$. Let $\mathbf{x}_i^u \in \mathcal{U}$ for $i = 1,\ldots,I$ and $\mathbf{x}_j^n \in \mathcal{N}$ for $j = 1,\ldots,J$ be samples from $p(\mathbf{x})$ and $q(\mathbf{x})$, respectively. The approximation $\hat{Z}_{\mathrm{BO}}$ can be calculated by
$$\hat{Z}_{\mathrm{BO}} = \frac{D}{I}\sum_{i=1}^{I}\frac{\exp\left(\alpha_{\mathrm{BO}}\, Q_{\mathrm{EI}}(\mathbf{x}_i^u)\right)}{1 + D\,q(\mathbf{x}_i^u)} + \frac{D}{J}\sum_{j=1}^{J}\frac{\exp\left(\alpha_{\mathrm{BO}}\, Q_{\mathrm{EI}}(\mathbf{x}_j^n)\right)}{1 + D\,q(\mathbf{x}_j^n)} \tag{5}$$

with the arguments of $Q_{\mathrm{EI}}$ omitted for simplicity. The derivation of Eq. (5) is presented in the Appendix. Note that this approximation works under the assumption that $\int_{\mathcal{X}} q(\mathbf{x})\,d\mathbf{x} \approx 1$, which is plausible as the normal distribution is designed to have a narrow spread to match the local peak at $\mathbf{x}_{k+1}$. In this paper, the shape of this normal distribution is set by $\sigma_I = 0.01$ universally. While the setting of $\sigma_I$ affects the variance of the approximation of $Z_{\mathrm{BO}}$, we found this setting to perform well in practice. For $Z_{\mathrm{INI}}$, since the minimum Euclidean distance function in a high-dimensional space with limited samples is a relatively smooth function, we use Monte Carlo sampling for its approximation.
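Under the assumptions stated above (a box-shaped $\mathcal{X}$ and $\sigma_I = 0.01$), one way to realize this estimator is sketched below. Here q_ei stands for $Q_{\mathrm{EI}}(\cdot\,; h_k, \lambda)$ and would be supplied by the GP code; the cost $l_k$ follows Eq. (A1) in the Appendix. The names and structure are ours, not the paper's implementation.

```python
import numpy as np

def approx_Z_BO(q_ei, x_next, lower, upper, alpha_bo,
                sigma_I=0.01, I=5000, J=5000, rng=None):
    """Importance-sampling estimate of Z_BO = int_X exp(alpha_bo * Q_EI(x)) dx,
    mixing a uniform proposal over the box X with a narrow normal proposal
    centered at the observed next sample x_next (a sketch of Eq. (5))."""
    rng = np.random.default_rng() if rng is None else rng
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    x_next = np.asarray(x_next, float)
    p, D = len(x_next), np.prod(upper - lower)
    xu = rng.uniform(lower, upper, size=(I, p))      # samples from p(x) = 1/D
    xn = rng.normal(x_next, sigma_I, size=(J, p))    # samples from q(x)

    def q_density(x):                                # narrow multivariate normal
        return (2 * np.pi * sigma_I ** 2) ** (-p / 2) * np.exp(
            -np.sum((x - x_next) ** 2, axis=1) / (2 * sigma_I ** 2))

    def batch(x, n):
        w = np.exp(alpha_bo * np.array([q_ei(xi) for xi in x]))
        return (D / n) * np.sum(w / (1.0 + D * q_density(x)))

    return batch(xu, I) + batch(xn, J)

def bo_cost(q_ei, x_next, lower, upper, alpha_bo, **kw):
    """l_k = log(Z_BO_hat) - log(D) - alpha_bo * Q_EI(x_next), cf. Eq. (A1)."""
    D = np.prod(np.asarray(upper, float) - np.asarray(lower, float))
    Z = approx_Z_BO(q_ei, x_next, lower, upper, alpha_bo, **kw)
    return np.log(Z) - np.log(D) - alpha_bo * q_ei(np.asarray(x_next, float))
```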

Simulation Studies.

As a validation step, we show that IBO can recover the parameters of a general BO given only an observed search trajectory. If IBO can determine the correct parameters (1) after only a few iterations, (2) in a high-dimensional problem space, and (3) from a wide range of trajectory/parameter settings, then it could be used to recover parameters for matching a BO algorithm to an observed human search.

We use a simulation study to show that, for a given search trajectory, IBO can correctly identify the true λ provided the trajectory is sufficiently different from a random search. In addition, the simulation indicates that learning from already-efficient search behavior (i.e., estimating λ through IBO of an observed effective search trajectory) can lead to better BO convergence than the more common self-improvement methods (i.e., updating λ̂ by maximizing the likelihood of the observations according to the GP model).

Simulation Settings and Results.

The simulation study is detailed as follows: We apply BO to a 30-dimensional Rosenbrock function constrained by $\mathcal{X} := [-2, 2]^{30}$. To initialize BO, we use LHS to draw ten samples from $\mathcal{X}$. BO terminates when the expected improvement for the next iteration is less than $10^{-3}$. At each iteration, the expected improvement is maximized using a multistart gradient descent algorithm [38] with 100 LHS initial guesses. A set of BO parameters, $\Lambda = 0.01I, 0.1I, 1.0I$, and $10.0I$, are used to perform the search, where $I$ is the identity matrix. For each of the four settings, 30 independent trials are recorded.
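For reference, a minimal sketch of this setup (the objective and the LHS initialization only, not the full BO loop) is given below using scipy's quasi-Monte Carlo module; the seed and variable names are our own choices.

```python
import numpy as np
from scipy.stats import qmc

def rosenbrock(x):
    """30-D Rosenbrock objective used in the simulation study."""
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

p, n_init = 30, 10
sampler = qmc.LatinHypercube(d=p, seed=0)
X0 = qmc.scale(sampler.random(n=n_init), [-2.0] * p, [2.0] * p)  # h_0 on [-2, 2]^30
f0 = np.array([rosenbrock(x) for x in X0])

# the four BO settings compared in the study: Lambda = c * I
lambda_settings = [0.01, 0.1, 1.0, 10.0]
```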

For each BO setting $\Lambda$, each candidate estimator $\hat{\Lambda}$, and each trajectory of length $K = 5, \ldots, 20$, we solve Eq. (4) using a grid search with $G_{\alpha_{\mathrm{BO}}} := \{0.01, 0.1, 1.0, 10.0\}$ and $G_{K_0} := \{2, \ldots, K\}$. We fix $\alpha_{\mathrm{INI}}$ to 1.0 and 10.0 and will discuss its influence on the estimation. Figure 3 presents the resulting minimal $L$ for all four cases and under all guesses. Each curve in each subplot shows how the minimal $L$ (with respect to $\alpha_{\mathrm{BO}}$ and $K_0$) changes as the search continues. The means and standard deviations of $L$ are calculated using the 30 trials. $Z_{\mathrm{INI}}$ is approximated using a sample size of 10,000. In approximating $Z_{\mathrm{BO}}$, samples from the normal and the uniform distributions are of equal sizes ($I = J = 5000$).

Fig. 3
The minimal cost L for search trajectory lengths N=5,...,20 with respect to GαBO and GK0. αINI is fixed to 1.0 and 10.0.

Analysis of the Results.

As summarized in Fig. 3, the major finding from this simulation study is that IBO can successfully recover the BO parameters in cases where BO does not resemble uniform random sampling of the design space. In the cases of Λ=0.01I, 0.1I, and 1.0I, we see that the correct choices of Λ̂ consistently lead to the lowest cost along the search process. After only one or two iterations, in nearly all cases, the correct parameter has the highest likelihood among the four candidate settings, and this remains the case along the search. However, under large BO parameters such as Λ=10.0I, the similarity between any two points in the design space becomes close to zero, leading to (almost) uniform uncertainty and expected improvement. This setting therefore reduces BO to a uniform random sampling scheme, and Fig. 3(d) shows that IBO does not perform well in this situation. To better understand the behavior of IBO under near-random searches, the curious reader may refer to the discussion of the properties of the costs l and l̃ in the Appendix.

Learning From Others Versus Self-Adaptation.

The study mentioned earlier showed that the correct BO setting λ can be learned through IBO. This subsection further demonstrates the advantage of "learning from others" (i.e., updating λ through IBO) over "self-adaptation" (i.e., finding the MLE of λ using hk). The settings follow the earlier study, and the results are shown in Fig. 4. First, to show the significant influence of λ on search effectiveness, we show the convergence of two fixed search strategies with Λ=0.01I and 10.0I. Note that while neither converges to the optimal solution within 50 iterations, the former is significantly more effective than the latter. For "self-adaptive BO," we use a grid search (GΛ={0.01I,0.1I,1.0I,10.0I}) to find Λ̂GP that maximizes Eq. (1) at each iteration and use Λ̂GP to find the next sample. We show in Fig. 4(b) the percentages of the four guesses being Λ̂GP along the search, using GΛ as the initial guesses for BO. The learning-from-others case starts with Λ=10.0I and uses IBO to derive Λ̂ from the trajectory produced by Λ=0.01I. From Figs. 3 and 4(b), we see that Λ̂GP does not converge to Λ=0.01I as quickly as IBO does, which explains why learning from others outperforms self-adaptation in Fig. 4(a). It is worth noting that this difference in performance may depend on the dimensionality of the problem, as the two strategies were found to have similar convergence performance when applied to two-dimensional functions. One potential explanation is that, in a lower-dimensional space, an effective Λ̂GP can be learned with a smaller number of samples.
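The "self-adaptation" baseline can be sketched as a grid search over the candidate settings using the concentrated GP log-likelihood of Eq. (1). The code below is our own minimal version, assuming a diagonal Λ = c·I; it is not the implementation used in the study.

```python
import numpy as np

def gp_concentrated_loglik(X, f, lam):
    """Concentrated GP log-likelihood (cf. Eq. (1)) for diagonal parameters lam."""
    k = len(f)
    diff = X[:, None, :] - X[None, :, :]
    R = np.exp(-np.einsum("ijk,k,ijk->ij", diff, lam, diff)) + 1e-8 * np.eye(k)
    Rinv = np.linalg.inv(R)
    ones = np.ones(k)
    b = (ones @ Rinv @ f) / (ones @ Rinv @ ones)
    sigma2 = (f - b * ones) @ Rinv @ (f - b * ones) / k
    return -0.5 * k * np.log(sigma2) - 0.5 * np.linalg.slogdet(R)[1]

def self_adapt(X, f, candidates=(0.01, 0.1, 1.0, 10.0)):
    """Pick Lambda_hat_GP = c*I that maximizes the likelihood of the observations."""
    p = X.shape[1]
    return max(candidates,
               key=lambda c: gp_concentrated_loglik(X, f, c * np.ones(p)))
```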

Fig. 4
(a) Comparison on BO convergence using four algorithmic settings: (orange) Λ=10.0I, (green) Λ=0.01I, (gray) the MLE of Λ is used for each new sample, and (red) the initial setting Λ=10.0I is updated by IBO using the trajectory from Λ=0.01I. (b) The percentages of estimated Λ̂MLE along the number of iterations, averaged over the cases with Λ={0.01I,0.1I,1.0I,10.0I} and 30 trials for each case.

Case Study

We now investigate how IBO may improve the performance of BO when applied to a vehicle design and control problem.

Fig. 5
Independent component analysis (ICA) bases learned from all human plays and the ecoRacer track. Vertical lines on the track correspond to the peak locations of the bases.

Dimension Reduction for Player's Control Signals.

The solution data from each game play consist of (1) the final gear ratio, (2) the recorded acceleration and braking signals, and (3) the corresponding game score. The length of a raw control signal matches that of the track, which has 18,160 distance steps. Encoding control signals to a low-dimensional space is feasible since common acceleration and braking patterns exist across all plays. In Ref. [11], this was done by introducing manually defined state-dependent basis functions (i.e., polynomials of the velocity of the car, slope of the track, distance to the terminal, remaining battery energy, and time spent) to parameterize the control signals. The underlying assumption that human players are aware of all the state-dependent bases is untested.

In this paper, we perform dimension reduction based on evidence that human beings often solve high-dimensional problems by performing problem abstraction and using a hierarchical search [39–43]. In the context of the ecoRacer game, we hypothesize that players segment the track into m discrete sections and make separate control decisions in each segment. Mathematically, this is equivalent to projecting the observed signals onto m independent bases, which can be elegantly addressed by independent component analysis (ICA) [44]. Compared with principal component analysis, where the bases minimize the covariance of the data, our ICA implementation (see results in Fig. 5) maximizes the Kullback–Leibler divergence between all pairs of bases and is more suitable for non-Gaussian signals, such as the control data from this game (i.e., the acceleration/braking signals across players at each step along the track are unlikely to follow a Gaussian distribution).

Much like principal component analysis, the choice of the number of ICA bases requires a balance between fidelity and practicality. While it is theoretically possible to find the "most likely" number of bases using information-theoretic criteria for model selection [45],3 we chose to use 30 bases because (1) over 95% of the variance is explained and (2) the resultant solution space (30 control variables and one design variable) is small enough for BO to be effective.
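A minimal sketch of this encoding step is given below. It uses scikit-learn's FastICA as a stand-in for the KL-based ICA formulation described above, and random data in place of the recorded control signals, purely to make the sketch self-contained.

```python
import numpy as np
from sklearn.decomposition import FastICA

# `signals` stands in for the (n_plays x 18160) matrix of recorded
# acceleration/braking values along the track; random data here for illustration only.
rng = np.random.default_rng(0)
signals = rng.random((124, 18160))

# FastICA is a stand-in for the ICA implementation described in the text.
ica = FastICA(n_components=30, random_state=0, max_iter=500)
scores = ica.fit_transform(signals)        # 30-D code for each play
bases = ica.components_                    # 30 independent bases over the track
approx = ica.inverse_transform(scores)     # reconstruction, to check fidelity
```

Each play would then be represented by its 30 ICA scores plus the final drive ratio, normalized before running BO or IBO as described in Sec. 4.2.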

Derivation of λ̂ and λ̂GP.

We apply IBO to two players, referred to as "P2" and "P3," who achieved the second and third highest scores within 31 and 73 plays, respectively, far fewer than the 150 plays by the top-scoring player. To do so, we first encode all the control solutions from the two players using the learned ICA bases. Together with the final drive ratios, all the solutions are then normalized to lie within $[-1, 1]^{31}$. IBO is performed separately on P2 and P3. We found that the probability for either player to have followed the max–min sampling scheme is lower than that of following BO, as the minimal values of $\tilde{l}(\mathbf{x}_k, \alpha_{\mathrm{INI}})$ for $k = 2, \ldots, 31$ (with respect to $\alpha_{\mathrm{INI}}$) are dominated by those of $l(\mathbf{x}_k, \alpha_{\mathrm{BO}})$. This means that the players were not likely to have performed an exploration before they started trying to improve their performance. This finding is reasonable, as the scoring mechanism in the ecoRacer game, just like in other racer games with fairly predictable vehicle dynamics, can be understood by the player early on. Therefore, the search for $\hat{\lambda}$ is performed by solving Eq. (4) with $\lambda \in [0.01, 10.0]^{31}$, $\alpha_{\mathrm{BO}} \in G_{\alpha_{\mathrm{BO}}}$, and the minimal number of initial samples ($K_0 = 2$) required for BO. For comparison purposes, we obtain $\hat{\lambda}_{\mathrm{GP}}$ using plays from P2, which represents a case where the BO parameters are fine-tuned to the observed game plays without trying to explain why these solutions were searched by the player.

Due to the nonconvexity of Eqs. (4) and (1), gradient-based searches from ten initial guesses are conducted to avoid inferior local solutions. Finite differences are used for gradient approximation. Both $\hat{\lambda}$ and $\hat{\lambda}_{\mathrm{GP}}$ are calculated offline and fixed during the execution of BO.
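A sketch of this multistart procedure is shown below using scipy's L-BFGS-B, which falls back to finite-difference gradients when no analytic gradient is supplied. Here ibo_cost is an assumed callable returning the objective of Eq. (4) for a given λ (with α_BO and K_0 handled internally, as described above); the names and bounds follow the setting in this section but the implementation is our own.

```python
import numpy as np
from scipy.optimize import minimize

def fit_lambda(ibo_cost, p, n_starts=10, rng=None):
    """Multistart gradient-based search for lambda_hat (a sketch).

    `ibo_cost(lam)` is assumed to return L_INI + L_BO of Eq. (4) for a given
    lambda vector of length p; gradients are taken by finite differences."""
    rng = np.random.default_rng() if rng is None else rng
    best = None
    for _ in range(n_starts):
        lam0 = 10.0 ** rng.uniform(-2.0, 1.0, size=p)     # random start in [0.01, 10]
        res = minimize(ibo_cost, lam0, method="L-BFGS-B",
                       bounds=[(0.01, 10.0)] * p)
        if best is None or res.fun < best.fun:
            best = res
    return best.x
```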

Comparison of BO Performance.

Figure 6 compares the BO performance under $\hat{\lambda}$ (for P2 and P3), $\hat{\lambda}_{\mathrm{GP}}$, and $\Lambda = I$. In each case, we start with the first two plays from the players and run 180 BO iterations. Similar to the simulation study, results are reported over 20 trials due to the stochastic nature of BO. Because of the small number of trials, bootstrap variance estimates are reported as the shaded bands around the averages in the figure. $\hat{\lambda}$ outperforms the other two settings consistently along the search with statistical significance. The BO performance when mimicking P2 is slightly better than that when mimicking P3.

Fig. 6
The residual of current best score versus the known best score, with settings λ̂ (IBO, red), λ̂GP (MLE, blue), and the default λ=I (green). Results are shown as averages over 30 trials. One-sigma confidence intervals are calculated via 5000 bootstrap samples. Red and black dots are scores from P2 and P3, respectively.

The result shows that BO can be improved noticeably by learning from P2 and P3. However, the players' search is not fully mimicked by IBO, as they improved much faster than the modified BO does, indicating that the proposed model still has room for improvement. Nevertheless, the IBO implementation still achieves the closest performance to the players' among all the BO instances, and it is the only algorithm that achieved better performance than the players' best play within 100 iterations. This result demonstrates the potential of IBO to continue an effective human search after the player quits, with improved search performance over a standard BO.

For completeness, we also note that in all cases, BO identifies the true optimal final drive ratio by the end of the search. We also qualitatively compare the best human solution with a high-scoring BO solution and the theoretically optimal solution in Fig. 7. The result indicates that while these control strategies yield similar scores, they are quantitatively different, although braking toward the end is observed as a common strategy. Human search data are documented on the webpage,4 where the best players' solution strategies are published.

Fig. 7
Qualitative comparison on control strategies from the theoretical optimal solution (top), one of the BO solutions (middle), and the best player solution (bottom)

Discussion

The study mentioned earlier provided a starting point for learning optimization algorithms based on human solution-search data. Yet, many pressing questions remain unanswered. This section will address a few notable ones. Some potential answers to these questions will rely on readers' familiarity with inverse reinforcement learning [19,47,48] (IRL, also called apprenticeship learning [49,50] and inverse optimal control [51]). To familiarize readers with this topic, a discussion on the connection between IBO and IRL is provided in Sec. 5.2.

Limitations and Potential Values of IBO.

In the case study, a strategy learned through IBO outperformed the default algorithms but has yet to reach the performance of the best human solver. This indicates potential room to further improve the algorithm. In the following, we discuss notable limitations of IBO. We shall also note that these limitations apply to the general problem of designing optimization algorithms through human demonstrations (called DO in what follows).

Model of human search strategies: Studies in cognitive science have put forth several core ingredients of human intelligence, including intuitive physics [52–55], problem decomposition skills [42,56,57], ability in learning-to-learn [58], and others [10]. While evidence has shown the connection between BO and human search [18], suitable models for human search strategies can be problem dependent. For example, for low-dimensional design problems, Egan et al. [59] showed that people adopting univariate search are more likely to achieve effective search. This result is supported by earlier psychological studies on how children perform scientific reasoning and thus may be useful to explain how people identify unfamiliar systems. However, univariate search may not reflect how people search for solutions in a familiar context (such as car driving) and with a large number of control and design variables to tune, as is the situation in the ecoRacer game. For such high-dimensional and physics-based design and control problems, a potentially reasonable human search model could be to incorporate human intuitive physics models into the evaluation of the expected improvement. Thus, instead of estimating GP parameters, one could estimate a statistical model of the state-space equations of the dynamical system, which influences the expected improvement. At a more abstract level, the fundamental challenge in understanding how a human search strategy should be modeled is the lack of knowledge about the functional form of the local objective (i.e., the Q-function) that governs the generation of new solutions during the search based on the current state (cumulative knowledge learned by the human solver). As we will discuss later in this section, this challenge is also a key topic in IRL. Not surprisingly, one notable solution from IRL to this problem is in fact to use nonparametric models such as GP [19,60].

Uncertainty in estimation: A limited number of demonstrations could be insufficient to provide a good estimate of the BO parameters, even when the underlying parameters are the effective ones. One potential solution to this could be to create a reward mechanism in the crowdsourcing setting, where the reward is determined by both the observed search effectiveness of each human solver and the uncertainty in the estimation of their search strategy. In the context of BO, this uncertainty can be measured by the covariance of the estimator, i.e., the Hessian of the cost function in Eq. (4). For people with effective searches yet high estimation uncertainty, we can solicit more solutions from them by offering rewards. It would also be interesting to understand the influence of the properties of the problem, e.g., the size of the solution space, on the convergence of the estimation.

Knowledge transferability: The third limitation concerns the transferability of knowledge (search strategies) learned from one task (an optimization problem) to others. This limitation also leads to the question of how the "effectiveness" of searches shall be measured, as we are not yet able to tell under what conditions a strategy with a high rate of improvement (such as P2's in ecoRacer) will continue to produce better solutions than other strategies in the long term. The same issue, however, exists in IRL: e.g., a control policy learned for pancake flipping does not guarantee optimal egg flipping due to the differences in physical properties between pancakes and eggs. One solution to this in IRL is to allow the policy to adjust to new problem settings, by correcting the state transition model according to the new observations. This solution may also be applied to IBO. In the context of ecoRacer, knowledge such as "starting acceleration at the beginning of the track" could be considered a universal strategy and requires less exploration, while the actual duration for executing this strategy may differ across problem settings. Therefore, it could be more effective for BO to adjust its parameters starting from the ones learned from human demonstrations on a similar problem, rather than learning from scratch.

To summarize, IBO could be a valuable tool for machines to mimic human search behavior when (1) the underlying human search mechanism follows BO, (2) the demonstration is sufficient for estimating the true BO parameters with low variances, and (3) the true optimal BO parameters for a long-term search can be estimated based on an effective short-term search.

The Difference Between Learning to Search and Learning a Solution.

The proposed IBO approach can be considered as a way to design optimization algorithms with human guidance and is mathematically similar to IRL. In order to explain the similarities and differences between the two, we first introduce the Markov decision process (MDP) and RL and make an analogy between an MDP and an optimization algorithm.

Preliminaries on MDP and RL.

An MDP is defined by a tuple $\langle S, A, T, R, \gamma, b_0\rangle$, where $S$ is a set of states, $A$ is a set of actions, the state transition function $T(s, a, s')$ determines the probability of changing from state $s$ to $s'$ when action $a$ is taken, $R(s, a)$ is the instantaneous reward of taking action $a$ at state $s$, $\gamma \in [0, 1)$ is the discount factor of future reward, and $b_0(s)$ specifies the probability of starting the process at state $s$. In RL, a control policy $\pi$ is a mapping from a state to an action, i.e., $\pi: S \rightarrow A$. The long-term value of $\pi$ for a starting state $s$ can be calculated by $V^{\pi}(s) = R(s, \pi(s)) + \gamma\sum_{s'\in S} T(s, \pi(s), s') V^{\pi}(s')$, and thus the value of $\pi$ over all possible starting states is the expectation $V^{\pi} = \sum_{s\in S} b_0(s) V^{\pi}(s)$. A common way to represent a control policy is to introduce a Q-function $Q(s, a; \lambda)$ with unknown control parameters $\lambda$, and let the policy be $a(s) = \arg\max_{a\in A} Q(s, a; \lambda)$. RL identifies the optimal $\lambda$ that maximizes $V^{\pi}$.
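For concreteness, the value of a fixed deterministic policy in a small, discrete MDP can be computed by solving the Bellman equations above as a linear system; the sketch below uses our own array conventions and is only an illustration of the preliminaries.

```python
import numpy as np

def policy_value(T, R, pi, gamma, b0):
    """V^pi and its expectation over the start distribution (a sketch).

    T[s, a, s']: transition probabilities   R[s, a]: instantaneous rewards
    pi[s]:       deterministic policy (integer actions)   b0[s]: start distribution"""
    S = R.shape[0]
    T_pi = T[np.arange(S), pi, :]            # T(s, pi(s), s')
    R_pi = R[np.arange(S), pi]               # R(s, pi(s))
    V = np.linalg.solve(np.eye(S) - gamma * T_pi, R_pi)   # Bellman equations
    return b0 @ V                            # V^pi = sum_s b0(s) V^pi(s)
```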

MDP Versus Optimization Algorithm.

An optimization algorithm defines a decision process: Its instantaneous reward is the improvement in the objective value achieved by each new sample, and the cumulative reward represents the total improvement in the objective within a finite number of iterations; its state contains the current solution (in X), the corresponding objective value, and potentially the gradient and higher-order derivatives of the objective function at the current solution; its action is the next solution to evaluate; and its state transition is governed by the optimization algorithm and its parameters. This is similar to MDP where the state transition is affected by the control parameters. The decision process defined by an optimization algorithm, however, is usually non-Markovian, as the new solutions rely on the entire search trajectory. Note that it is still possible to consider the optimization process as an MDP, by redefining the state as the continuously growing search trajectory, i.e., elements in the state set S shall represent all possible search trajectories, rather than samples in X.

IRL Versus IBO.

RL algorithms identify an optimal control policy for an MDP with a given reward function. However, real-world applications rarely have explicit definitions of rewards, e.g., the reward for "driving a car" cannot be explicitly defined, although people form control policies based on their inherent reward (preference). Therefore, a control policy for such applications can be learned more effectively from human demonstrations, which are assumed to be optimal according to the demonstrator's inherent reward. IRL techniques have thus been developed to identify the reward (and consequently, the Q-function and the optimal control policy) that explains human demonstrations, either by estimating the reward parameters so that the demonstrated policy has a higher value than any other policy by a margin [47,49,61,62] or by finding the maximum likelihood control parameters directly [48,63].

The IBO approach introduced in this paper is closely related to the latter type of IRL, and more precisely, to the maximum entropy method of Ziebart et al. [48]. Briefly, the maximum entropy IRL proposes the following MLE of the parameters $\lambda$ based on a set of demonstrations $h$:
$$\hat{\lambda} = \arg\max_{\lambda}\sum_{i}\left[R(s_i, a_i; \lambda) - \log Z_i(\lambda)\right] \tag{6}$$

where $Z_i(\lambda)$ is a partition function for the visited state $s_i$. One can notice the similarities between Eqs. (6) and (4): (1) Both are maximum likelihood parameter estimations related to an instantaneous cost, i.e., the reward in Eq. (6) and the expected improvement in IBO. (2) Both involve partition functions that are computationally expensive and dependent on the parameters $\lambda$. Due to this dependency, a direct Markov chain Monte Carlo sampling in the space of $\lambda$ (e.g., as in Ref. [63]) cannot be applied to optimize the likelihood function, since the partition values for two different samples of $\lambda$ do not cancel. Ziebart et al. discussed an alternative approach to address this computational challenge, using the "expected edge frequency calculation" algorithm that has a complexity of $O(N|S||A|)$ for each gradient calculation of the objective in Eq. (6), where $N$ is a large number [48]. However, this approach can be infeasible for the IBO estimation problem in Eq. (4) since (1) the space $\mathcal{X}$ is usually continuous and (2) even with a discretization of $\mathcal{X}$, the enormous size of $S$ and $A$ can easily make the calculation intractable, based on the discussion in Sec. 5.2.2.

Further, one should note that IRL and IBO use different assumptions about human demonstrations: Demonstrations in IRL are assumed to be near-optimal; thus, learning from them leads to an optimal control policy for an MDP. Demonstrations in IBO, on the other hand, are assumed to come from an effective search strategy, yet are not necessarily optimal; thus, learning from them leads to an optimization algorithm, rather than a solution. This difference affects the application of the two: IRL can be used when the machine is told to mimic existing solutions, by understanding why these solutions are considered good, e.g., it answers the question "why do people flip pancakes this way?"; IBO can be used when the machine is meant to mimic the process of searching for good solutions, by understanding how to evaluate the expected improvement of solutions, e.g., it answers the question "how did people figure out this way of pancake flipping?"

Conclusions

In this paper, we attempted to address a dilemma in design crowdsourcing: While human beings exhibit more advanced intelligence than machines in solving certain types of optimal design problems, soliciting valuable solutions through existing crowdsourcing mechanisms is not cost-effective due to the lack of control over crowd participation and the problem-specific qualification of the crowd. Based on the previous finding that more people demonstrate good search strategies than actually deliver good solutions, we proposed in this paper to mimic human search demonstrations by inversely learning a Bayesian optimization algorithm, so that a long-term search can be executed more effectively by the computer even when human solvers abandon the problem. Through simulation and case studies, we showed improved performance of BO when it is equipped with parameters learned from an effective human search. However, the significant performance gap between a human demonstrator and the proposed algorithm in the case study suggested room for improvement of the algorithm. Future investigation will focus on closing this gap by exploring more suitable cognitive models of human solution searching for specific types of optimal design problems.

Funding Data

  • National Science Foundation (Grant No. CMMI-1266184).

Appendix

Derivation of the Partition Function (ẐBO).
Here we provide the derivation of the approximation $\hat{Z}_{\mathrm{BO}}$ in Eq. (5). Let $p(\mathbf{x}) = 1/D$ and $q(\mathbf{x})$ be a uniform and a normal density function, respectively, $D$ be the size of $\mathcal{X}$, and $f(\mathbf{x})$ be the function to be integrated. Also, let $\mathcal{U}$ and $\mathcal{N}$ be the sample sets drawn from these two distributions, with sizes $I := |\mathcal{U}|$ and $J := |\mathcal{N}|$. We have
$$\int_{\mathcal{X}} f(\mathbf{x})\,d\mathbf{x} = \int_{\mathcal{X}} \frac{f(\mathbf{x})}{p(\mathbf{x}) + q(\mathbf{x})}\,p(\mathbf{x})\,d\mathbf{x} + \int_{\mathcal{X}} \frac{f(\mathbf{x})}{p(\mathbf{x}) + q(\mathbf{x})}\,q(\mathbf{x})\,d\mathbf{x} \approx \frac{1}{I}\sum_{\mathbf{x}\in\mathcal{U}} \frac{f(\mathbf{x})}{p(\mathbf{x}) + q(\mathbf{x})} + \frac{1}{J}\sum_{\mathbf{x}\in\mathcal{N}} \frac{f(\mathbf{x})}{p(\mathbf{x}) + q(\mathbf{x})} = \frac{D}{I}\sum_{\mathbf{x}\in\mathcal{U}} \frac{f(\mathbf{x})}{1 + D\,q(\mathbf{x})} + \frac{D}{J}\sum_{\mathbf{x}\in\mathcal{N}} \frac{f(\mathbf{x})}{1 + D\,q(\mathbf{x})},$$
where the approximation of the second integral uses $\int_{\mathcal{X}} q(\mathbf{x})\,d\mathbf{x} \approx 1$. Setting $f(\mathbf{x}) = \exp\left(\alpha_{\mathrm{BO}}\, Q_{\mathrm{EI}}(\mathbf{x}; h_k, \lambda)\right)$ gives Eq. (5).
IBO Behavior Under Near-Random Search
Properties of l and l̃.

From Sec. 3.1, the estimation of $l(\mathbf{x}, \alpha_{\mathrm{BO}})$ through importance sampling is
$$\hat{l}(\mathbf{x}, \alpha_{\mathrm{BO}}) = \log\hat{Z}_{\mathrm{BO}} - \log D - \alpha_{\mathrm{BO}}\, Q_{\mathrm{EI}}(\mathbf{x}; h_k, \lambda). \tag{A1}$$

l̂(x,αBO) has the following properties.

Property 1. αBO=0 leads to l̂(x,0)=0, indicating that x is uniformly sampled. One can see that the optimal cost of LBO is nonpositive, as one can always achieve LBO = 0 by considering samples to be uniformly drawn.

Property 2. When the expected improvement function is constant almost everywhere, i.e., Pr(QEI(x)=C)=1, we have Pr(l̂(x,αBO)=0)=1. This is because a uniformly drawn initial guess will almost surely satisfy the optimality condition for maximizing a constant function.

Property 3. Note that $1 + D\,q(\mathbf{x}_i) \approx 1$ for $\mathbf{x}_i \in \mathcal{U}$ due to the small $\sigma_I$ (see Sec. 3.1), and $\exp\left(\alpha_{\mathrm{BO}} Q_{\mathrm{EI}}(\mathbf{x}_i)\right)/(1 + D\,q(\mathbf{x}_i)) \approx 0$ for $\mathbf{x}_i \in \mathcal{N}$ for large $D$ and small $\alpha_{\mathrm{BO}}$. The partial derivative of $\hat{l}(\mathbf{x}, \alpha_{\mathrm{BO}})$ with respect to $\alpha_{\mathrm{BO}}$ can thus be approximated as
$$\frac{\partial \hat{l}(\mathbf{x}, \alpha_{\mathrm{BO}})}{\partial \alpha_{\mathrm{BO}}} \approx c(\alpha_{\mathrm{BO}})\sum_{\mathbf{x}_i\in\mathcal{U}}\exp\left(\alpha_{\mathrm{BO}} Q_{\mathrm{EI}}(\mathbf{x}_i)\right)\Delta a_i \tag{A2}$$

where $c(\alpha_{\mathrm{BO}}) > 0$ and $\Delta a_i := Q_{\mathrm{EI}}(\mathbf{x}_i) - Q_{\mathrm{EI}}(\mathbf{x})$. Here, we need to introduce a conjecture: Let $\bar{Q}_{\mathrm{EI}} := \int_{\mathcal{X}} Q_{\mathrm{EI}}(\mathbf{x})\,d\mathbf{x}/D$ be the average expected improvement, and $A := \int_{\mathcal{X}} \mathbb{1}\left(Q_{\mathrm{EI}}(\mathbf{x}) > \bar{Q}_{\mathrm{EI}}\right)d\mathbf{x}$ be the measure of the subspace where the expected improvement value is higher than $\bar{Q}_{\mathrm{EI}}$. The conjecture is that $A$ decreases from above $D/2$ to below $D/2$ as the BO sample size increases. In other words, a uniformly drawn sample has a more than 50% chance of having an expected improvement value higher than $\bar{Q}_{\mathrm{EI}}$ at the early stage of BO and a less than 50% chance at the late stage.

One piece of evidence for the conjecture is illustrated in Fig. 2: In the first iteration, $\bar{Q}_{\mathrm{EI}}$ is slightly lower than 0.5 while the majority of $\mathcal{X}$ has $Q_{\mathrm{EI}} > \bar{Q}_{\mathrm{EI}}$; in the fourth iteration, however, only a small region around the peak has $Q_{\mathrm{EI}} > \bar{Q}_{\mathrm{EI}}$. Using this conjecture, we can show that $\sum_{\mathbf{x}_i\in\mathcal{U}}\Delta a_i < 0$ when the sample size is small, and thus $\partial\hat{l}(\mathbf{x}, 0)/\partial\alpha_{\mathrm{BO}} < 0$. Together with Property 1, we have $\hat{l}(\mathbf{x}, \alpha_{\mathrm{BO}}) < 0$ for a small $\alpha_{\mathrm{BO}}$ and a small sample size.

Property 4. We notice that in this experiment, the discrepancy between LHS and the modeled max–min sampling scheme leads to overall high (positive) $\tilde{l}$ values, indicating that the samples are not likely to follow this scheme. This is consistent with the fact that LHS is not exactly the same as max–min sampling, at least until all of $h_0$ has been considered. We also see that negative $\tilde{l}$ values can be observed when $\alpha_{\mathrm{INI}}$ is low, suggesting that the LHS samples are better explained by a loosely executed max–min sampling scheme than by a strict one.

Discussion on Findings From Fig. 3.

We now summarize a complete list of findings based on these properties. Finding 1. A comparison between $\alpha_{\mathrm{INI}} = 1.0$ and $10.0$ leads to a finding consistent with Property 4. Since the samples are not likely to be drawn from a strictly executed max–min sampling scheme, the entire search trajectory is considered to be created by BO in the case of $\alpha_{\mathrm{INI}} = 10.0$. While the early samples (fewer than 10) can be considered as coming from max–min sampling when $\alpha_{\mathrm{INI}} = 1.0$ ($\tilde{l} < 0$), the low magnitude of $\tilde{l}$ causes this difference to be visible only in the case of $\Lambda = 10.0I$, where the magnitude of $l$ is also low.

Finding 2. IBO correctly identifies the true Λ within a few iterations after the initial exploration, except for the case of Λ=10.0I. To explain this exception, we first note that Λ=10.0I leads to an expected improvement function that is constant almost everywhere (except at the sampled locations, where QEI = 0), and thus BO reduces to uniform sampling. From Property 2, LBO = 0 almost surely when we have the correct guess of Λ. Also, recall from Property 4 that LINI > 0 when αINI is high. Together, these two facts explain why, with the correct guess of Λ=10.0I, we have L close to zero when αINI=10.0 and slightly negative when αINI=1.0.5

To explain the negative L values for the incorrect guesses of Λ, we use Property 3 to show that when the sample size is small and the expected improvement function is not flat, LBO < 0 for a small αBO, and thus L < 0. To summarize, Finding 2 suggests that for a search trajectory of limited length that resembles a random search, the proposed IBO approach will consider it to be derived from a BO that only loosely maximizes the expected improvement (Eq. (2)). However, this caveat is of little practical concern, since (1) a random search rarely outperforms BO with nontrivial settings and (2) a BO with a low αBO (instead of a high Λ) can equally well simulate a random search.

1

To be more accurate, the discussion in Ref. [15] is for function learning with continuous variables. While our case study involves discrete variables (acceleration and braking signals), the dimension reduction process converts these variables to continuous ones. See Sec. 4.

2

Numerically, this is because optimizing the nonconvex function QEI requires a nested global optimization routine, such as genetic algorithm, CMA-ES [35], DIRECT [36], and BARON [37]. Some implementations of these, e.g., genetic algorithm and CMA-ES, can be stochastic.

3

For completeness, we used principal component analysis (with 1000 components) as preprocessing to obtain the most likely number of ICA components under three suitable criteria, minimum description length, Akaike information criterion, and Kullback information criterion, as 187, 464, and 373, respectively, using the method from Ref. [45]. While these dimensionalities could make sense from a neurological perspective (e.g., given that the game takes 36 s, a decision interval of 36 s/187 ≈ 192 ms is close to the range of the time-frame of attentional blink, which is 200–500 ms [46]), the resultant high-dimensional solution spaces are unfavorable for BO.

5

But why does the guess of Λ=10.0I lead to significantly decreasing L in the other three cases? This is because in those cases, BO does not resemble random sampling, i.e., the sequences of samples are more clustered. When a new sample is among this cluster, its similarities to existing ones are nonzero even when a large Λ is assumed, due to the small Euclidean distance among the pairs. And in turn, the expected improvement function has peaks within the clusters and remains constant far away from them, rather than being a constant almost everywhere. As a result, the optimal value of l̂(x,αBO) with respect to αBO becomes negative, even when Λ is incorrectly guessed as 10.0I.

References

1. Cooper, S., Khatib, F., Treuille, A., Barbero, J., Lee, J., Beenen, M., Leaver-Fay, A., Baker, D., and Popović, Z., 2010, "Solve Puzzle for Science," Foldit, University of Washington, Seattle, WA, accessed July 26, 2017, http://fold.it
2. Khatib, F., Cooper, S., Tyka, M. D., Xu, K., Makedon, I., Popović, Z., Baker, D., and Players, F., 2011, "Algorithm Discovery by Protein Folding Game Players," Proc. Natl. Acad. Sci., 108(47), pp. 18949–18953.
3. Lee, J., Kladwang, W., Lee, M., Cantu, D., Azizyan, M., Kim, H., Limpaecher, A., Yoon, S., Treuille, A., and Das, R., 2014, "Solve Puzzle. Invent Medicine," Eterna, Carnegie Mellon University/Stanford University, Pittsburgh, PA/Stanford, CA, accessed July 26, 2017, http://eterna.cmu.edu
4. Lee, J., Kladwang, W., Lee, M., Cantu, D., Azizyan, M., Kim, H., Limpaecher, A., Yoon, S., Treuille, A., and Das, R., 2014, "RNA Design Rules From a Massive Open Laboratory," Proc. Natl. Acad. Sci., 111(6), pp. 2122–2127.
5. Kawrykow, A., Roumanis, G., Kam, A., Kwak, D., Leung, C., Wu, C., Zarour, E., Sarmenta, L., Blanchette, M., and Waldispühl, J., 2012, "Phylo: A Citizen Science Approach for Improving Multiple Sequence Alignment," PLoS One, 7(3), p. e31362.
6. Sung, J., Jin, S. H., and Saxena, A., 2015, "Robobarista: Object Part Based Transfer of Manipulation Trajectories From Crowd-Sourcing in 3D Pointclouds," preprint arXiv:1504.03071. https://arxiv.org/abs/1504.03071
7. Le Bras, R., Bernstein, R., Gomes, C. P., Selman, B., and Van Dover, R. B., 2013, "Crowdsourcing Backdoor Identification for Combinatorial Optimization," 23rd International Joint Conference on Artificial Intelligence (IJCAI), Beijing, China, Aug. 3–9, pp. 2840–2847. https://pdfs.semanticscholar.org/fdfb/1a3e026b8d57487c1e54ea044494a1056df6.pdf
8. Ren, Y., Bayrak, A. E., and Papalambros, P. Y., 2016, "EcoRacer: Game-Based Optimal Electric Vehicle Design and Driver Control Using Human Players," ASME J. Mech. Des., 138(6), p. 061407.
9. Schrope, M., 2013, "Solving Tough Problems With Games," Proc. Natl. Acad. Sci., 110(18), pp. 7104–7106.
10. Lake, B. M., Ullman, T. D., Tenenbaum, J. B., and Gershman, S. J., 2016, "Building Machines That Learn and Think Like People," preprint arXiv:1604.00289. https://arxiv.org/abs/1604.00289
11. Ren, Y., Bayrak, A. E., and Papalambros, P. Y., 2015, "EcoRacer: Game-Based Optimal Electric Vehicle Design and Driver Control Using Human Players," ASME Paper No. DETC2015-46836.
12. Jones, D., Schonlau, M., and Welch, W., 1998, "Efficient Global Optimization of Expensive Black-Box Functions," J. Global Optim., 13(4), pp. 455–492.
13. Brochu, E., Cora, V. M., and De Freitas, N., 2010, "A Tutorial on Bayesian Optimization of Expensive Cost Functions, With Application to Active User Modeling and Hierarchical Reinforcement Learning," preprint arXiv:1012.2599. https://arxiv.org/abs/1012.2599
14. Rasmussen, C. E., and Williams, C. K. I., 2006, Gaussian Processes for Machine Learning, MIT Press, Cambridge, MA.
15. Lucas, C. G., Griffiths, T. L., Williams, J. J., and Kalish, M. L., 2015, "A Rational Model of Function Learning," Psychon. Bull. Rev., 22(5), pp. 1193–1215.
16. Wilson, A. G., Dann, C., Lucas, C., and Xing, E. P., 2015, "The Human Kernel," Advances in Neural Information Processing Systems (NIPS), Montreal, QC, Canada, Dec. 7–12, pp. 2854–2862. https://papers.nips.cc/paper/5765-the-human-kernel.pdf
17. Rasmussen, C. E., and Ghahramani, Z., 2001, "Occam's Razor," Advances in Neural Information Processing Systems (NIPS), Vancouver, BC, Canada, Dec. 3–8, pp. 294–300. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.32.5075
18. Borji, A., and Itti, L., 2013, "Bayesian Optimization Explains Human Active Search," Advances in Neural Information Processing Systems (NIPS), Lake Tahoe, NV, Dec. 5–10, pp. 55–63. http://dl.acm.org/citation.cfm?id=2999611.2999618
19. Levine, S., Popovic, Z., and Koltun, V., 2011, "Nonlinear Inverse Reinforcement Learning With Gaussian Processes," Advances in Neural Information Processing Systems, pp. 19–27.
20. Deisenroth, M. P., Neumann, G., and Peters, J., 2013, "A Survey on Policy Search for Robotics," Found. Trends Rob., 2(1–2), pp. 1–142.
21. Calandra, R., Gopalan, N., Seyfarth, A., Peters, J., and Deisenroth, M. P., 2014, "Bayesian Gait Optimization for Bipedal Locomotion," International Conference on Learning and Intelligent Optimization (LION), Gainesville, FL, Feb. 16–21, pp. 274–290.
22. Cully, A., Clune, J., Tarapore, D., and Mouret, J.-B., 2015, "Robots That Can Adapt Like Animals," Nature, 521(7553), pp. 503–507.
23. Pretz, J. E., 2008, "Intuition Versus Analysis: Strategy and Experience in Complex Everyday Problem Solving," Mem. Cognit., 36(3), pp. 554–566.
24. Linsey, J. S., Tseng, I., Fu, K., Cagan, J., Wood, K. L., and Schunn, C., 2010, "A Study of Design Fixation, Its Mitigation and Perception in Engineering Design Faculty," ASME J. Mech. Des., 132(4), p. 041003.
25. Daly, S. R., Yilmaz, S., Christian, J. L., Seifert, C. M., and Gonzalez, R., 2012, "Design Heuristics in Engineering Concept Generation," J. Eng. Educ., 101(4), pp. 601–629.
26. Cagan, J., Dinar, M., Shah, J. J., Leifer, L., Linsey, J., Smith, S., and Vargas-Hernandez, N., 2013, "Empirical Studies of Design Thinking: Past, Present, Future," ASME Paper No. DETC2013-13302.
27. Björklund, T. A., 2013, "Initial Mental Representations of Design Problems: Differences Between Experts and Novices," Des. Stud., 34(2), pp. 135–160.
28. Egan, P., and Cagan, J., 2016, "Human and Computational Approaches for Design Problem-Solving," Experimental Design Research, Springer, Cham, Switzerland, pp. 187–205.
29. Cagan, J., and Kotovsky, K., 1997, "Simulated Annealing and the Generation of the Objective Function: A Model of Learning During Problem Solving," Comput. Intell., 13(4), pp. 534–581.
30. Landry, L. H., and Cagan, J., 2011, "Protocol-Based Multi-Agent Systems: Examining the Effect of Diversity, Dynamism, and Cooperation in Heuristic Optimization Approaches," ASME J. Mech. Des., 133(2), p. 021001.
31. McComb, C., Cagan, J., and Kotovsky, K., 2016, "Drawing Inspiration From Human Design Teams for Better Search and Optimization: The Heterogeneous Simulated Annealing Teams Algorithm," ASME J. Mech. Des., 138(4), p. 044501.
32. Thrun, S., and Pratt, L., 1998, "Learning to Learn: Introduction and Overview," Learning to Learn, Springer, Boston, MA, pp. 3–17.
33. Wang, J. X., Kurth-Nelson, Z., Tirumala, D., Soyer, H., Leibo, J. Z., Munos, R., Blundell, C., Kumaran, D., and Botvinick, M., 2016, "Learning to Reinforcement Learn," preprint arXiv:1611.05763. https://arxiv.org/abs/1611.05763
34. Andrychowicz, M., Denil, M., Gomez, S., Hoffman, M. W., Pfau, D., Schaul, T., and de Freitas, N., 2016, "Learning to Learn by Gradient Descent by Gradient Descent," Advances in Neural Information Processing Systems (NIPS), Barcelona, Spain, Dec. 5–10, pp. 3981–3989. https://papers.nips.cc/paper/6461-learning-to-learn-by-gradient-descent-by-gradient-descent
35. Hansen, N., Müller, S. D., and Koumoutsakos, P., 2003, "Reducing the Time Complexity of the Derandomized Evolution Strategy With Covariance Matrix Adaptation (CMA-ES)," Evol. Comput., 11(1), pp. 1–18.
36. Jones, D. R., Perttunen, C. D., and Stuckman, B. E., 1993, "Lipschitzian Optimization Without the Lipschitz Constant," J. Optim. Theory Appl., 79(1), pp. 157–181.
37. Sahinidis, N. V., 1996, "BARON: A General Purpose Global Optimization Software Package," J. Global Optim., 8(2), pp. 201–205.
38. Zhu, C., Byrd, R. H., Lu, P., and Nocedal, J., 1994, "L-BFGS-B: Fortran Subroutines for Large Scale Bound Constrained Optimization," Northwestern University, Evanston, IL, Report No. NAM-11. http://people.sc.fsu.edu/~inavon/5420a/lbfgsb.pdf
39. McGovern, A., Sutton, R. S., and Fagg, A. H., 1997, "Roles of Macro-Actions in Accelerating Reinforcement Learning," Grace Hopper Celebration of Women in Computing (GHC), San Jose, CA, Sept. 19–21, Vol. 1317. https://pdfs.semanticscholar.org/6c42/70b9ca7cc63a02ddae8974322ec5ea082743.pdf
40. McGovern, A., and Barto, A. G., 2001, "Automatic Discovery of Subgoals in Reinforcement Learning Using Diverse Density (Computer Science Department Faculty Publication Series)," International Conference on Machine Learning (ICML), Williamstown, MA, June 28–July 1, p. 8. https://pdfs.semanticscholar.org/7eca/3acd1a4239d8a299478885c7c0548f3900a8.pdf
41. Dietterich, T. G., 1998, "The MAXQ Method for Hierarchical Reinforcement Learning," 15th International Conference on Machine Learning (ICML), Madison, WI, July 24–27, pp. 118–126. https://pdfs.semanticscholar.org/fdc7/c1e10d935e4b648a32938f13368906864ab3.pdf
42. Kulkarni, T. D., Narasimhan, K. R., Saeedi, A., and Tenenbaum, J. B., 2016, "Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation," preprint arXiv:1604.06057. https://arxiv.org/abs/1604.06057
43. Botvinick, M., and Weinstein, A., 2014, "Model-Based Hierarchical Reinforcement Learning and Human Action Control," Philos. Trans. R. Soc. B, 369(1655), p. 20130480.
44. Stone, J. V., 2004, Independent Component Analysis, Wiley, Hoboken, NJ.
45. Hui, M., Li, J., Wen, X., Yao, L., and Long, Z., 2011, "An Empirical Comparison of Information-Theoretic Criteria in Estimating the Number of Independent Components of fMRI Data," PLoS One, 6(12), p. e29274.
46. Tombu, M. N., Asplund, C. L., Dux, P. E., Godwin, D., Martin, J. W., and Marois, R., 2011, "A Unified Attentional Bottleneck in the Human Brain," Proc. Natl. Acad. Sci., 108(33), pp. 13426–13431.
47. Ng, A. Y., and Russell, S. J., 2000, "Algorithms for Inverse Reinforcement Learning," 17th International Conference on Machine Learning (ICML), Stanford, CA, June 29–July 2, pp. 663–670. http://ai.stanford.edu/~ang/papers/icml00-irl.pdf
48. Ziebart, B. D., Maas, A. L., Bagnell, J. A., and Dey, A. K., 2008, "Maximum Entropy Inverse Reinforcement Learning," 23rd National Conference on Artificial Intelligence (AAAI), Chicago, IL, July 13–17, pp. 1433–1438. https://www.aaai.org/Papers/AAAI/2008/AAAI08-227.pdf
49. Abbeel, P., and Ng, A. Y., 2004, "Apprenticeship Learning Via Inverse Reinforcement Learning," 21st International Conference on Machine Learning (ICML), Banff, AB, Canada, July 4–8, p. 1. http://ai.stanford.edu/~ang/papers/icml04-apprentice.pdf
50. Abbeel, P., Coates, A., and Ng, A. Y., 2010, "Autonomous Helicopter Aerobatics Through Apprenticeship Learning," Int. J. Rob. Res., 29(13), pp. 1608–1639.
51. Dvijotham, K., and Todorov, E., 2010, "Inverse Optimal Control With Linearly-Solvable MDPs," 27th International Conference on Machine Learning (ICML), Haifa, Israel, June 21–24, pp. 335–342. https://homes.cs.washington.edu/~todorov/papers/DvijothamICML10.pdf
52. Spelke, E. S., Gutheil, G., Van de Walle, G., and Osherson, D., 1995, "The Development of Object Perception," An Invitation to Cognitive Science, Vol. 2, 2nd ed., MIT Press, Cambridge, MA.
53. Baillargeon, R., Li, J., Ng, W., and Yuan, S., 2009, "An Account of Infants' Physical Reasoning," Learning and the Infant Mind, Oxford University Press, New York, pp. 66–116.
54. Bates, C. J., Yildirim, I., Tenenbaum, J. B., and Battaglia, P. W., 2015, "Humans Predict Liquid Dynamics Using Probabilistic Simulation," 37th Annual Conference of the Cognitive Science Society (COGSCI), Pasadena, CA, July 22–25, pp. 172–177. http://www.mit.edu/~ilkery/papers/probabilistic-simulation-model.pdf
55. Gershman, S. J., Horvitz, E. J., and Tenenbaum, J. B., 2015, "Computational Rationality: A Converging Paradigm for Intelligence in Brains, Minds, and Machines," Science, 349(6245), pp. 273–278.
56. Fodor, J. A., 1975, The Language of Thought, Vol. 5, Harvard University Press, Cambridge, MA.
57. Biederman, I., 1987, "Recognition-by-Components: A Theory of Human Image Understanding," Psychol. Rev., 94(2), p. 115.
58. Harlow, H. F., 1949, "The Formation of Learning Sets," Psychol. Rev., 56(1), p. 51.
59. Egan, P., Cagan, J., Schunn, C., and LeDuc, P., 2015, "Synergistic Human-Agent Methods for Deriving Effective Search Strategies: The Case of Nanoscale Design," Res. Eng. Des., 26(2), pp. 145–169.
60. Choi, J., and Kim, K.-E., 2012, "Nonparametric Bayesian Inverse Reinforcement Learning for Multiple Reward Functions," Advances in Neural Information Processing Systems (NIPS), Lake Tahoe, NV, Dec. 3–8, pp. 305–313. https://papers.nips.cc/paper/4737-nonparametric-bayesian-inverse-reinforcement-learning-for-multiple-reward-functions
61. Ratliff, N. D., Bagnell, J. A., and Zinkevich, M. A., 2006, "Maximum Margin Planning," 23rd International Conference on Machine Learning (ICML), Pittsburgh, PA, June 25–29, pp. 729–736. http://martin.zinkevich.org/publications/maximummarginplanning.pdf
62. Syed, U., and Schapire, R. E., 2007, "A Game-Theoretic Approach to Apprenticeship Learning," Advances in Neural Information Processing Systems (NIPS), Vancouver, BC, Canada, Dec. 3–6, pp. 1449–1456. https://papers.nips.cc/paper/3293-a-game-theoretic-approach-to-apprenticeship-learning
63. Ramachandran, D., and Amir, E., 2007, "Bayesian Inverse Reinforcement Learning," Urbana, 51(61801), pp. 1–4. https://www.aaai.org/Papers/IJCAI/2007/IJCAI07-416.pdf