Solving optimal design problems through crowdsourcing faces a dilemma: On the one hand, human beings have been shown to be more effective than algorithms at searching for good solutions to certain real-world problems with high-dimensional or discrete solution spaces; on the other hand, the cost of setting up crowdsourcing environments, the uncertainty in the crowd's domain-specific competence, and the lack of commitment of the crowd contribute to the lack of real-world applications of design crowdsourcing. We are thus motivated to investigate a solution-searching mechanism where an optimization algorithm is tuned based on human demonstrations of solution searching, so that the search can be continued after human participants abandon the problem. To do so, we model the iterative search process as a Bayesian optimization (BO) algorithm and propose an inverse BO (IBO) algorithm to find the maximum likelihood estimators (MLEs) of the BO parameters based on human solutions. We show through a vehicle design and control problem that the search performance of BO can be improved by recovering its parameters based on an effective human search. Thus, IBO has the potential to improve the success rate of design crowdsourcing activities, by requiring only good search strategies instead of good solutions from the crowd.
Introduction
Challenges and Opportunities for Design Crowdsourcing.
Optimal design problems often have large solution spaces and highly nonconvex objectives and constraints, inhibiting effective solution searching through existing optimization algorithms. Some of these problems, however, have been quite successfully (yet heuristically) solved by human beings. Notable examples include protein folding [1,2], RNA synthesis [3,4], genome sequence alignment [5], robot trajectory planning [6], and others [7–9]. The superior performance of some human beings at solving these problems demonstrates the advantages of human intelligence, which are supported by cognitive science and neuroscience findings [10] (see discussion in Sec. 5.1). However, despite a handful of success stories, applications of crowdsourcing to real-world design problems have yet to overcome several practical barriers. The cost of setting up problem-dependent crowdsourcing environments, the lack of commitment from crowd members, and uncertainty in domain-specific crowd competence have all contributed to its lack of adoption, while the growing availability of computation resources often makes straightforward optimization or brute-force search a more convenient approach.
Our earlier study [8] highlighted these challenges for design crowdsourcing: We gamified a vehicle design and control problem (called the “ecoRacer” problem in what follows) where the objective is to complete a track with the minimal energy consumption within a time limit, by finding the optimal final drive ratio of the vehicle and the control policy for acceleration and regenerative braking. The game was broadcast on social media and received more than 2000 plays from 124 unique players within the first month. Results showed that (1) the marginal improvement in average game score of the crowd over an algorithm does not necessarily justify the high cost of developing crowdsourcing games and (2) only a few players were committed to the search for more than 50 iterations, and still fewer could outperform the computer-found solution at all (see summary in Fig. 1).
Nonetheless, human search results displayed a significantly different search pattern than that of the algorithm. In particular, quite a few players showed rapid early improvement in performance, beyond the average performance of the computer, before they quit the game without reaching a solution close to the theoretical optimum. This observation is consistent with existing research (see, for example, Khatib et al. [2] on a human-designed protein folding algorithm having a short-term advantage over a standard algorithm) and suggests that while few people care to actually find the “best solution,” their early demonstrations of how they search for a better solution may still be valuable. Specifically, we hypothesize that if a computer algorithm can be tuned to mimic these demonstrations, it can serve as a replacement for human solvers in their absence, searching in an effective way without ever abandoning the problem.
Learning to Search.
This paper aims to test the previously mentioned hypothesis. We model a human solver's search behavior through Bayesian optimization (BO, also known as efficient global optimization) [12,13]. The algorithm iterates between two steps: (1) Estimating the shape of the problem space, based on previous solutions and the corresponding performances, using a Gaussian process (GP) model [14] and (2) creating a new solution based on this estimate (details in Sec. 2). While BO is not provably the underlying mechanism humans use, we hypothesize that the algorithm can be tuned to mimic the results of successful human search strategies, specifically in comparison with other popular gradient- and nongradient-based optimization algorithms. The key assumption in modeling human search behavior through BO is the use of a GP to account for human beings' learning of input–output relationships (called “function learning” in psychology). This assumption is supported by various findings: In a recent review of function-learning models, Lucas et al. [15] showed that the two major schools of models, i.e., rule- and similarity-based, can be unified through a Gaussian process.1 As discussed in Wilson et al. [16], the evidence that Occam's Razor plays an important role in human prediction also suggests that GP is an appropriate model for function learning, as GP reduces model complexity by construction [17]. Empirically, Borji and Itti [18] showed that BO, with the use of GP, has the closest convergence performance to human searches when applied to one-dimensional (1D) optimization problems. In fact, many higher-dimensional problems that human beings naturally solve, such as locomotion planning, have also been successfully solved through the use of GP [19–22].
Under this modeling assumption, we investigate how BO parameters can be estimated for the algorithm to best match a human solver's search trajectory, i.e., the sequence of solution-performance pairs. To this end, we introduce an inverse BO (IBO) algorithm to derive the maximum likelihood estimators (MLEs) for BO parameters and discuss challenges in its implementation (see Sec. 3). Validation of the IBO algorithm takes two steps. We first use a simulation study to show that IBO can successfully estimate the BO parameters used in generating a search trajectory (Sec. 3.2). We then show through the ecoRacer problem that the search performance of BO can be improved when its parameters are modified based on observing an effective human search and implementing IBO (Sec. 4). The results provide evidence that IBO can accelerate a search using only good search strategies without needing a large number of good human solutions. Thus, incorporating IBO in design crowdsourcing may lower the requirement on crowd commitment and so increase its chance of success. Limitations of the current IBO implementation, and potential ways to relax them, will be discussed in depth in Sec. 5.
Related Work.
It is important to note that the focus of this paper is on the design of optimization algorithms aided by human demonstrations, rather than the derivation of qualitative explanations of the strengths and limitations of human design strategies. There have been numerous studies in the latter category in recent years (see Refs. [23–28] for example). This paper is also distinguished from studies that propose human-inspired optimization algorithms (see Refs. [29–31] for example), in that the learning of the optimization algorithm in our case is conducted by another algorithm, rather than by human researchers. In this respect, our study is related to studies in learning-to-learn [32] where algorithms (e.g., for gradient-based optimization [33] and optimal control [34]) are tuned and controlled by a higher-level algorithm. In such work, however, the algorithms are often improved purely computationally through reinforcement learning (RL) by solving similar problems repeatedly. Due to the use of human demonstrations, our paper is also related to inverse reinforcement learning (IRL) (see discussion in Sec. 5.2), where human control strategies are used for defining and finding optimal control strategies.
Preliminaries on Bayesian Optimization
This section provides some background knowledge on BO to facilitate the discussion on IBO in Sec. 3.
Terminologies and Notations.
Let an optimization problem be min f(x) subject to x ∈ X, where X is the solution space. A search trajectory with K iterations can be represented by hK = {XK, fK}, where XK and fK represent the collection of K samples in X and their objective values, respectively. h0 represents an initial exploration set with K0 samples. Human strategy is represented by algorithmic parameters λ that govern the search behavior: During the search, each new solution xk+1 (for k ≥ K0) is determined by hk and λ through maximizing a merit function with respect to x: xk+1 = arg max over x ∈ X of Q(x; hk, λ). The functional forms of the merit function will be introduced in Secs. 2.2 and 3. We also denote the underlying strategy parameters as λ and their estimator as λ̂.
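For concreteness, the notation above can be sketched as simple data structures; the names below (SearchTrajectory, merit, candidates) are illustrative assumptions, not identifiers from the original implementation.

```python
# Minimal sketch of the notation: h_K = (X_K, f_K) and the generic update
# x_{k+1} = argmax_x Q(x; h_k, lambda). Names here are illustrative only.
from dataclasses import dataclass
import numpy as np

@dataclass
class SearchTrajectory:
    X: np.ndarray  # (K, d) array of samples in the solution space
    f: np.ndarray  # (K,) array of objective values

    def prefix(self, k: int) -> "SearchTrajectory":
        """Return h_k, the first k solution-performance pairs."""
        return SearchTrajectory(self.X[:k], self.f[:k])

def next_sample(h_k: SearchTrajectory, lam, merit, candidates: np.ndarray) -> np.ndarray:
    """Pick x_{k+1} by maximizing the merit function Q(x; h_k, lambda) over a candidate set."""
    scores = np.array([merit(x, h_k, lam) for x in candidates])
    return candidates[np.argmax(scores)]
```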
The BO Algorithm.
We briefly review the BO algorithm, to explain how each new sample xk+1 is drawn based on the merit function (here, the expected improvement QEI), itself defined by previous samples. Knowing this procedure is necessary for understanding the inverse BO algorithm, where we estimate the most likely BO parameters for a given trajectory of samples.
Figure 2 demonstrates four iterations of BO in optimizing a 1D function, with the GP model and the expected improvement function updated in each iteration. Note that, similar to human searching behavior, BO is a stochastic process: First, the choice of the new design is stochastic, with better designs more likely to be chosen;2 and second, the initial exploration h0 can be stochastic when it is modeled by a random sampling scheme, e.g., Latin hypercube sampling (LHS, see Ref. [12] for details).
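As a concrete illustration of the two-step loop described above, the sketch below fits a GP to the current trajectory and picks the next sample by maximizing the expected improvement over a finite candidate set. It assumes a minimization problem and a scikit-learn GP; it is a minimal sketch, not the implementation used in the paper.

```python
# Sketch of one BO iteration: (1) fit a GP to h_k = (X, f); (2) pick x_{k+1}
# maximizing the expected improvement Q_EI. Assumes minimization and a finite
# candidate set in place of a continuous inner optimizer.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expected_improvement(X_cand, gp, f_best):
    """Q_EI(x): expected amount by which x improves on the incumbent f_best."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def bo_step(X, f, candidates, length_scale=1.0):
    """One iteration of BO on the trajectory (X, f); returns the new sample and its EI."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=length_scale), normalize_y=True)
    gp.fit(X, f)
    ei = expected_improvement(candidates, gp, f.min())
    return candidates[np.argmax(ei)], ei.max()
```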
Inverse Bayesian Optimization
We consider human solution search to consist of two stages: A few exploratory searches are first conducted to acquire a preliminary understanding of the problem, before the execution of BO follows. For example, a player may spend a few trials to get familiar with a new game, before thinking about strategies to improve his score. IBO minimizes the sum of two costs corresponding to the exploration and BO stages, respectively. By doing so, it finds the most likely explanation of the underlying search strategy.
Specifically, IBO estimates λ, along with the size of the initial exploration set K0, given the trajectory hK. To do so, we introduce and minimize a cost function consisting of the exploration cost for h0, denoted as LINI, and the BO cost for the rest of hK, denoted as LBO. We define LINI := −ln(P(h0)|X|^(K0−1)), where P(h0) is the joint probability of the exploration set and |X| is the size of the solution space; and LBO := −Σk ln(p(xk+1|hk)|X|), with the sum taken over k = K0, …, K−1, where p(xk+1|hk) is the density for choosing xk+1 conditioned on hk. Here, ln stands for natural logarithm.
The derivation of LINI and LBO is as follows: To calculate LINI, we assume that each new sample during the exploration phase, xi for i = 2, …, K0, tends to maximize its minimum Euclidean distance to the previous samples x1, …, xi−1; this is referred to as the max–min sampling scheme in what follows. Let the joint probability of the exploration set be P(h0) = ∏i p(xi|x1, …, xi−1), where each conditional probability follows a Boltzmann distribution: p(xi|x1, …, xi−1) = exp(αINI dmin(xi))/ZINI, with dmin(xi) the minimum Euclidean distance from xi to the previous samples. Here, the scalar αINI represents how strictly each sample from h0 follows the max–min sampling scheme, and ZINI is a partition function that ensures that the conditional density integrates to one over X. Note that the first sample in the exploration set is considered to be uniformly drawn, and thus, its contribution to the cost (a constant) can be omitted.
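A small sketch of the per-sample exploration cost implied by this model is given below. It assumes the solution space has been normalized to the unit hypercube and approximates the partition function ZINI by Monte Carlo; the normalization convention is a simplifying assumption rather than the paper's exact procedure.

```python
# Per-sample exploration cost under the Boltzmann max-min model:
# p(x_i | x_{1:i-1}) proportional to exp(alpha_INI * d_min(x_i)),
# with Z_INI approximated by Monte Carlo over [0, 1]^d (assumed normalized space).
import numpy as np

def min_dist(x, X_prev):
    """Minimum Euclidean distance from x to the previously explored samples."""
    return np.min(np.linalg.norm(X_prev - x, axis=1))

def exploration_cost(x_i, X_prev, alpha_ini, n_mc=10000, rng=None):
    """Cost of sample x_i relative to uniform sampling; zero means 'as likely as uniform'."""
    rng = rng or np.random.default_rng(0)
    u = rng.uniform(size=(n_mc, X_prev.shape[1]))
    log_num = alpha_ini * min_dist(x_i, X_prev)
    z = np.mean(np.exp([alpha_ini * min_dist(v, X_prev) for v in u]))  # ~ Z_INI on the unit cube
    return -(log_num - np.log(z))
```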
Analogously, the density for choosing each new sample during the BO stage is modeled by a Boltzmann distribution over the expected improvement, p(xk+1|hk) = exp(αBO QEI(xk+1; hk, λ))/ZBO, where ZBO is also a partition function. The parameter αBO plays a similar role to αINI. For simplicity, we define l̃i := −ln(p(xi|x1, …, xi−1)|X|) and lk := −ln(p(xk+1|hk)|X|), so that LINI = Σ l̃i and LBO = Σ lk. A lower value of l̃ or l represents a higher probability density of the current sample to be drawn by max–min sampling or BO, respectively, and a zero indicates that the sample can be considered as uniformly drawn.
Note that to find the optimal K0 for any given αINI, αBO, and λ, one can first calculate l̃i and lk for every sample in hK, with respect to αINI, αBO, and λ, and then scan K0 to find the lowest value of L = LINI + LBO. The scan starts at K0 = 2 because it is not meaningful to initialize BO with a single sample.
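The K0 scan can be sketched as below, assuming the per-sample costs l̃ and l have already been computed (for instance with routines like the ones sketched above); the indexing convention is an assumption made for illustration.

```python
# Sketch of the K_0 scan: split the trajectory so that L = L_INI + L_BO is minimized.
# expl_cost[i] is the exploration cost of sample i (sample 0 is uniformly drawn and free);
# bo_cost[i] is the BO cost of sample i conditioned on the i preceding samples (i >= 2).

def best_split(expl_cost, bo_cost, K):
    candidates = []
    for K0 in range(2, K):                                   # BO needs at least two initial samples
        L = sum(expl_cost[1:K0]) + sum(bo_cost[K0:K])
        candidates.append((L, K0))
    L_min, K0_opt = min(candidates, key=lambda t: t[0])
    return K0_opt, L_min
```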
Numerical Integration for ZBO.
ZBO is approximated through importance sampling, with a proposal density that mixes a narrow normal distribution centered at the observed sample xk+1 with a uniform distribution over X (Eq. (5)), with the arguments of QEI omitted for simplicity. The derivation of Eq. (5) is presented in the Appendix. Note that this approximation works under the assumption that the integrand is sharply peaked around the observed sample, which is plausible as the normal distribution is designed to have a narrow spread to match the local peak at xk+1. In this paper, the shape of this normal distribution is set by a fixed σI universally. While the setting of σI affects the variance of the approximation of ZBO, we found this setting to perform well in practice. For ZINI, since the minimum Euclidean distance function in a high-dimensional space with limited samples is a relatively smooth function, we use Monte Carlo sampling for its approximation.
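A sketch of such an importance-sampling estimate is given below; the equal-weight normal/uniform mixture proposal, the unit-hypercube domain, and the value of sigma_i are assumptions made for illustration, since the paper's exact constants are not reproduced here.

```python
# Importance-sampling estimate of Z_BO = integral over [0,1]^d of exp(alpha_BO * Q_EI(x)) dx,
# using an equal-weight mixture of a narrow normal at the observed sample and a uniform proposal.
# sigma_i is a placeholder value; q_ei is any callable returning the expected improvement at x.
import numpy as np
from scipy.stats import multivariate_normal

def z_bo_importance(q_ei, alpha_bo, x_peak, dim, sigma_i=0.01, n_each=5000, rng=None):
    rng = rng or np.random.default_rng(0)
    cov = sigma_i ** 2 * np.eye(dim)
    x_norm = rng.multivariate_normal(x_peak, cov, size=n_each)   # samples near the peak
    x_unif = rng.uniform(size=(n_each, dim))                     # samples covering the domain
    x_all = np.vstack([x_norm, x_unif])
    q = 0.5 * multivariate_normal(mean=x_peak, cov=cov).pdf(x_all) + 0.5 * 1.0  # mixture density
    inside = np.all((x_all >= 0.0) & (x_all <= 1.0), axis=1)     # integrand is zero outside [0,1]^d
    ei = np.array([q_ei(x) for x in x_all])
    w = np.where(inside, np.exp(alpha_bo * ei) / q, 0.0)
    return float(np.mean(w))
```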
Simulation Studies.
As a validation step, we show that IBO can recover the parameters of a general BO given only an observed search trajectory. If IBO can determine the correct parameters (1) after a small number of iterations, (2) in a high-dimensional problem space, and (3) from a wide range of trajectory/parameter settings, then it could be used to recover parameters for matching a BO algorithm to an observed human search.
We use a simulation study to show that, for a given search trajectory, IBO can correctly identify the true λ provided the trajectory is sufficiently different from a random search. In addition, the simulation indicates that learning from already-efficient search behavior (i.e., estimating λ by applying IBO to an observed effective search trajectory) can lead to better BO convergence than the more common self-improvement methods (i.e., updating λ by maximizing the likelihood of the observations according to the GP model).
Simulation Settings and Results.
The simulation study is detailed as follows: We apply BO to a 30-dimensional Rosenbrock function constrained to a hypercube X. To initialize BO, we use LHS to draw ten samples from X. BO terminates when the expected improvement for the next iteration falls below a small threshold. At each iteration, the expected improvement is maximized using a multistart gradient descent algorithm [38] with 100 LHS initial guesses. Four settings of the BO parameters, each of the form λ = cI, where I is the identity matrix, are used to perform the search. For each of the four settings, 30 independent trials are recorded.
For each BO setting λ, each candidate estimator λ̂, and each trajectory length, we solve Eq. (4) using a grid search over αBO and K0. We fix αINI to 1.0 and 10.0 and will discuss its influence on the estimation. Figure 3 presents the resulting minimal L for all the four cases and under all guesses. Each curve in each subplot shows how the minimal L (with respect to αBO and K0) changes as the search continues. The means and standard deviations of L are calculated using the 30 trials. ZINI is approximated using a sample size of 10,000. In approximating ZBO, samples from the normal and the uniform distributions are of equal sizes.
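The estimation itself reduces to a grid search, sketched below. The grid values and the trajectory_cost helper (which would wrap the per-sample cost computation and the K0 scan sketched earlier) are placeholders, not the settings used in this study.

```python
# Sketch of the IBO grid search: for each candidate lambda (here a GP length-scale)
# and each alpha_BO on a grid, evaluate the minimal total cost L (over K_0) and keep
# the overall minimizer. `trajectory_cost` is an assumed helper, not the paper's code.
import itertools

def ibo_grid_search(h_K, lambda_grid, alpha_bo_grid, alpha_ini, trajectory_cost):
    results = []
    for lam, a_bo in itertools.product(lambda_grid, alpha_bo_grid):
        K0, L = trajectory_cost(h_K, lam, a_bo, alpha_ini)   # minimal L over K_0 for this setting
        results.append((L, lam, a_bo, K0))
    L_min, lam_hat, a_bo_hat, K0_hat = min(results, key=lambda t: t[0])
    return {"lambda": lam_hat, "alpha_BO": a_bo_hat, "K0": K0_hat, "L": L_min}
```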
Analysis of the Results.
Based on the results summarized in Fig. 3, the major finding from this simulation study is that IBO can successfully recover the BO parameters in cases where BO does not resemble uniform random sampling of the design space. In the three cases with smaller BO parameters, we see that the correct choice of λ̂ consistently leads to the lowest cost along the search process. After only one or two iterations, in nearly all the cases, the correct parameter has the highest likelihood among the four candidate guesses, and this remains the case along the search. However, under large BO parameters such as λ = 10.0I, the similarity between any two points in the design space becomes close to zero, leading to (almost) uniform uncertainty and expected improvement. Therefore, this setting reduces BO to a uniform random sampling scheme. Figure 3(d) shows that IBO does not perform well in this situation. To better understand the behavior of IBO under near-random searches, a curious reader may find a discussion on the properties of the costs l and l̃ in the Appendix.
Learning From Others Versus Self-Adaptation.
The study mentioned earlier showed that the correct BO setting λ can be learned through IBO. This subsection further demonstrates the advantage of “learning from others” (i.e., updating λ̂ through IBO) over “self-adaptation” (i.e., finding the MLE of λ using hk). The settings follow the study mentioned earlier and the results are shown in Fig. 4. First, to show the significant influence of λ on search effectiveness, we show the convergence of two fixed search strategies, one with a small λ and one with λ = 10.0I. Note that while neither converges to the optimal solution within 50 iterations, the former is significantly more effective than the latter. For “self-adaptive BO,” we use a grid search to find λ̂GP, the setting that maximizes Eq. (1) at each iteration, and use λ̂GP to find the next sample. We show in Fig. 4(b) the percentages of the four guesses being selected as λ̂GP along the search. The learning-from-others case uses IBO to derive λ̂ from the trajectory produced by the effective fixed strategy. From Figs. 3 and 4(b), we see that λ̂GP does not converge to the effective setting as quickly as IBO does, which explains why learning from others outperforms self-adaptation in Fig. 4(a). It is worth noting that this difference in performance may depend on the dimensionality of the problem, as the two strategies were found to have similar convergence performance when applied to two-dimensional functions. One potential explanation is that, in a lower-dimensional space, an effective λ can be learned with a smaller number of samples.
Case Study
We now investigate how IBO may improve the performance of BO when applied to a vehicle design and control problem.
Dimension Reduction for Player's Control Signals.
The solution data from each game play consist of (1) the final gear ratio, (2) the recorded acceleration and braking signals, and (3) the corresponding game score. The length of a raw control signal matches that of the track, which has 18,160 distance steps. Encoding control signals to a low-dimensional space is feasible since common acceleration and braking patterns exist across all plays. In Ref. [11], this was done by introducing manually defined state-dependent basis functions (i.e., polynomials of the velocity of the car, slope of the track, distance to the terminal, remaining battery energy, and time spent) to parameterize the control signals. The underlying assumption that human players are aware of all the state-dependent bases is untested.
In this paper, we perform dimension reduction based on evidence that human beings often solve high-dimensional problems by performing problem abstraction and using a hierarchical search [39–43]. In the context of the ecoRacer game, we hypothesize that players segment the track into m discrete sections and make separate control decisions in each segment. Mathematically, this is equivalent to projecting the observed signals onto m independent bases, which can be elegantly addressed by independent component analysis (ICA) [44]. Compared with principal component analysis, where the bases minimize the covariance of the data, our ICA implementation (see results in Fig. 5) maximizes the Kullback–Leibler divergence between all pairs of bases and is more suitable for non-Gaussian signals, such as the control data from this game (i.e., the acceleration/braking signals across players at each step along the track are unlikely to follow a Gaussian distribution).
Much like principal component analysis, the choice of the number of ICA bases requires a balance between fidelity and practicality. While it is theoretically possible to find the “most likely” number of bases using information-theoretic criteria for model selection [45],3 we chose to use 30 bases because (1) over 95% of the variance is explained and (2) the resultant solution space (30 control variables and one design variable) is small enough for BO to be effective.
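The encoding step can be sketched with an off-the-shelf ICA implementation as below; the array shapes, the scikit-learn FastICA call, and the variable names are illustrative assumptions rather than the pipeline actually used for the ecoRacer data.

```python
# Sketch of the control-signal encoding: project each play's raw per-distance-step
# acceleration/braking signal onto 30 independent components, then represent a play
# by its 30 ICA coefficients plus the final drive ratio (handled separately).
import numpy as np
from sklearn.decomposition import FastICA

def encode_controls(signals: np.ndarray, n_components: int = 30):
    """signals: (n_plays, 18160) array, one row of control values per play."""
    ica = FastICA(n_components=n_components, whiten="unit-variance", random_state=0)
    codes = ica.fit_transform(signals)        # (n_plays, 30) coefficients on the learned bases
    return codes, ica

def decode_controls(codes: np.ndarray, ica: FastICA) -> np.ndarray:
    """Reconstruct approximate control signals from their low-dimensional codes."""
    return ica.inverse_transform(codes)
```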
Derivation of λ̂ and λ̂GP.
We apply IBO to two players, referred to as “P2” and “P3,” who achieved the second and third highest scores within 31 and 73 plays, respectively, far fewer than the 150 plays from the achiever of the highest score. To do so, we first encode all the control solutions from the two players using the learned ICA bases. Together with the final drive ratios, all the solutions are then normalized to a common bounded range. IBO is performed separately on P2 and P3. We found that the probability for either player to have followed the max–min sampling scheme is lower than that of following BO, as the minimal values of l̃ for the early plays (with respect to αINI) are dominated by those of l. This means that the players were not likely to have performed an exploration before they started trying to improve their performance. This finding is reasonable, as the scoring mechanism in the ecoRacer game, just like in other racer games with fairly predictable vehicle dynamics, can be understood by the player early on. Therefore, the search for λ̂ is performed by solving Eq. (4) with the exploration stage omitted and a minimal number of initial samples (K0 = 2) required for BO. For comparison purposes, we obtain λ̂GP using plays from P2, which represents a case where BO parameters are fine-tuned by the observed game plays, without trying to explain why these solutions were searched by the player.
Comparison of BO Performance.
Figure 6 compares the BO performance under λ̂ (derived separately for P2 and P3), λ̂GP, and a default BO setting. In each case, we start with the first two plays from the players and run 180 BO iterations. Similar to the simulation study, results are reported using 20 trials due to the stochastic nature of BO. Due to the small trial number, bootstrap variance estimators are reported as the shades around the averages in the figure. λ̂ outperforms the other settings consistently along the search with statistical significance. The BO performance when mimicking P2 is slightly better than that when mimicking P3.
The result shows that BO can be improved noticeably by learning from P2 and P3. However, the players' search is not fully mimicked by IBO, as they improved much faster than the modified BO does, indicating that the proposed model still has room for improvement. Nevertheless, the IBO implementation still achieves the closest performance to the players' among all the BO instances, and it is the only algorithm that achieved better performance than the players' best play within 100 iterations. This result demonstrates the potential of IBO to continue an effective human search after the player quits, with improved search performance over a standard BO.
For completeness, we also note that in all the cases, the BO identifies the true optimal final drive ratio at the end of the search. We also qualitatively compare the best human solution with a high-scoring BO solution, along with the theoretically optimal solution, in Fig. 7. The result indicates that while these control strategies yield similar scores, they are quantitatively different, although braking toward the end is observed as a common strategy. Human search data are documented on the webpage,4 where the best players' solution strategies are published.
Discussion
The study mentioned earlier provided a starting point for learning optimization algorithms based on human solution-search data. Yet, many pressing questions remain unanswered. This section will address a few notable ones. Some potential answers to these questions will rely on readers' familiarity with inverse reinforcement learning [19,47,48] (IRL, also called apprenticeship learning [49,50] and inverse optimal control [51]). To familiarize readers with this topic, a discussion on the connection between IBO and IRL is provided in Sec. 5.2.
Limitations and Potential Values of IBO.
From the case study, a strategy learned through IBO outperformed default algorithms but has yet to reach the performance of the best human solver. This indicates potential room to further improve the algorithm. In the following, we discuss notable limitations of IBO. We shall also note that these limitations apply to the general problem of designing optimization algorithms through human demonstrations (called DO in what follows).
Model of human search strategies: Studies in cognitive science have put forth several core ingredients of human intelligence, including intuitive physics [52–55], problem decomposition skills [42,56,57], ability in learning-to-learn [58], and others [10]. While evidence has shown the connection between BO and human search [18], suitable models for human search strategies can be problem dependent. For example, for low-dimensional design problems, Egan et al. [59] showed that people adopting univariate search are more likely to achieve effective search. This result is supported by earlier psychological studies on how children perform scientific reasoning and thus may be useful to explain how people identify unfamiliar systems. However, univariate search may not reflect how people search for solutions in a familiar context (such as car driving) and with a large number of control and design variables to tune, as is the situation of the ecoRacer game. For such high-dimensional and physics-based design and control problems, a potentially reasonable human search model could be to incorporate human intuitive physics models into the evaluation of the expected improvement. Thus instead of estimating GP parameters, one could estimate a statistical model of the state-space equations of the dynamical system, which influences the expected improvement. At a more abstract level, the fundamental challenge in understanding how a human search strategy should be modeled is the lack of knowledge about the functional form of the local objective (i.e., the Q-function) that governs the generation of new solutions during the search based on the current state (cumulative knowledge learned by the human solver). As we will discuss later in this section, this challenge is also a key topic in IRL. Not surprisingly, one notable solution from IRL to this problem is in fact to use nonparametric models such as GP [19,60].
Uncertainty in estimation: A limited amount of demonstrations could be insufficient to provide a good estimation of the BO parameters, even when the underlying strategy is an effective one. One potential solution could be to create a reward mechanism in the crowdsourcing setting, where the reward is determined by both the observed search effectiveness of each human solver and the uncertainty in the estimation of their search strategy. In the context of BO, this uncertainty can be measured by the covariance of the estimator, i.e., the Hessian of the cost function in Eq. (4). For people with effective searches yet high estimation uncertainty, we can solicit more solutions from them by offering rewards. It would also be interesting to understand the influence of the properties of the problem, e.g., the size of the solution space, on the convergence of the estimation.
Knowledge transferability: The third limitation concerns the transferability of knowledge (search strategies) learned from one task (an optimization problem) to others. This limitation also leads to the question of how the “effectiveness” of searches shall be measured, as we are not yet able to tell under what conditions a strategy that has a high rate of improvement (such as P2 in ecoRacer) will continue to produce better solutions than other strategies in the long term. The same issue, however, exists in IRL: e.g., a control policy learned for pancake flipping does not guarantee optimal egg flipping due to the differences in physical properties between pancakes and eggs. One solution to this in IRL is to allow the policy to adjust to new problem settings, by correcting the state transition model according to new observations. This solution may also be applied to IBO. In the context of ecoRacer, knowledge such as “starting acceleration at the beginning of the track” could be considered a universal strategy and requires less exploration, while the actual duration for executing this strategy may differ across problem settings. Therefore, it could be more effective for BO to adjust its parameters based on the ones learned from human demonstrations on a similar problem, rather than learning from scratch.
To summarize, IBO could be a valuable tool for machines to mimic human search behavior when (1) the underlying human search mechanism follows BO, (2) the demonstration is sufficient for estimating the true BO parameters with low variances, and (3) the true optimal BO parameters for a long-term search can be estimated based on an effective short-term search.
The Difference Between Learning to Search and Learning a Solution.
The proposed IBO approach can be considered as a way to design optimization algorithms with human guidance and is mathematically similar to IRL. In order to explain the similarities and differences between the two, we first introduce the Markov decision process (MDP) and RL and make an analogy between an MDP and an optimization algorithm.
Preliminaries on MDP and RL.
An MDP is defined by a tuple (S, A, T, R, γ, D), where S is a set of states, A is a set of actions, the state transition function T(s, a, s′) determines the probability of changing from state s to state s′ when action a is taken, R(s, a) is the instantaneous reward of taking action a at state s, γ is the discount factor of future reward, and D specifies the probability of starting the process at state s0. In RL, a control policy π is a mapping from a state to an action, i.e., a = π(s). The long-term value of π for a starting state s0 can be calculated by Vπ(s0) = E[Σt γ^t R(st, π(st))], and thus, the value of π over all possible starting states is the expectation V(π) = E over s0 ∼ D of Vπ(s0). A common way to represent a control policy is to introduce a Q-function Q(s, a; θ) with unknown control parameters θ, and let the policy be π(s) = arg max over a of Q(s, a; θ). RL identifies the optimal θ that maximizes V(π).
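To make these definitions concrete, the sketch below implements a Q-function-greedy policy and a Monte Carlo estimate of its value; the environment interface (reset/step returning state, reward, done) and the function names are assumptions for illustration only.

```python
# Sketch of the RL preliminaries: pi(s) = argmax_a Q(s, a; theta), and V(pi) estimated as the
# average discounted return over rollouts from the start-state distribution D.
import numpy as np

def greedy_policy(q_fn, actions):
    """Return pi(s) = argmax_a Q(s, a; theta) for a given Q-function and discrete action set."""
    def pi(state):
        return max(actions, key=lambda a: q_fn(state, a))
    return pi

def estimate_value(env, pi, gamma=0.99, n_rollouts=100, horizon=200):
    """Monte Carlo estimate of V(pi); assumes env.reset() -> s and env.step(a) -> (s, r, done)."""
    returns = []
    for _ in range(n_rollouts):
        s, total, discount = env.reset(), 0.0, 1.0
        for _ in range(horizon):
            s, r, done = env.step(pi(s))
            total += discount * r
            discount *= gamma
            if done:
                break
        returns.append(total)
    return float(np.mean(returns))
```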
MDP Versus Optimization Algorithm.
An optimization algorithm defines a decision process: Its instantaneous reward is the improvement in the objective value achieved by each new sample, and the cumulative reward represents the total improvement in the objective within a finite number of iterations; its state contains the current solution (in X), the corresponding objective value, and potentially the gradient and higher-order derivatives of the objective function at the current solution; its action is the next solution to evaluate; and its state transition is governed by the optimization algorithm and its parameters. This is similar to an MDP where the state transition is affected by the control parameters. The decision process defined by an optimization algorithm, however, is usually non-Markovian, as new solutions rely on the entire search trajectory. Note that it is still possible to consider the optimization process as an MDP, by redefining the state as the continuously growing search trajectory, i.e., elements in the state set shall represent all possible search trajectories, rather than samples in X.
IRL Versus IBO.
RL algorithms identify an optimal control policy for an MDP with a given reward function. However, real-world applications rarely have explicit definitions of rewards, e.g., the reward for “driving a car” cannot be explicitly defined, although people form control policies based on their inherent reward (preference). Therefore, control policies for such applications can be learned more effectively through demonstrations by human beings, which are assumed to be optimal according to the inherent reward of the demonstrator. IRL techniques have thus been developed to identify the reward (and consequently, the Q-function and the optimal control policy) that explains human demonstrations, either by estimating the reward parameters so that the demonstrated policy has a higher value than any other policy by a margin [47,49,61,62] or by finding the maximum likelihood control parameters directly [48,63].
In the maximum likelihood formulation, the probability of a demonstrated action is modeled as proportional to the exponentiated reward (Eq. (6)), where Z(si) is a partition function for the visited state si. One can notice the similarities between Eqs. (6) and (4): (1) Both are maximum likelihood parameter estimations related to an instantaneous cost, i.e., the reward in Eq. (6) and the expected improvement in IBO. (2) Both involve partition functions that are computationally expensive and dependent on the parameters to be estimated. Due to this dependency, a direct Markov-chain Monte Carlo sampling in the parameter space (e.g., as in Ref. [63]) cannot be applied to optimize the likelihood function, since the partition values for two different parameter samples do not cancel. Ziebart et al. discussed an alternative approach to address this computational challenge, using the “expected edge frequency calculation” algorithm, whose complexity for each gradient calculation of the objective in Eq. (6) scales with N|S||A|, where N is a large number [48]. However, this approach can be infeasible for the IBO estimation problem in Eq. (4) since (1) the space X is usually continuous and (2) even with a discretization of X, the enormous sizes of the resulting state and action sets can easily make the calculation intractable, based on the discussion in Sec. 5.2.2.
Further, one should notice that IRL and IBO use different assumptions about human demonstrations: Demonstrations in IRL are assumed to be near-optimal. Thus, learning from them leads to an optimal control policy for an MDP. Demonstrations in IBO, on the other hand, are assumed to come from an effective search strategy, yet are not necessarily optimal. Thus, learning from them leads to an optimization algorithm, rather than a solution. This difference affects the application of the two: IRL can be used when the machine is told to mimic existing solutions, by understanding why these solutions are considered good, e.g., it answers the question “why do people flip pancakes this way?”; IBO can be used when the machine is meant to mimic the process of searching for good solutions, by understanding how to evaluate the expected improvement of solutions, e.g., it answers the question “how did people figure out this way of pancake flipping?”
Conclusions
In this paper, we attempted to address a dilemma in design crowdsourcing: While human beings exhibit more advanced intelligence than machines in solving certain types of optimal design problems, soliciting valuable solutions through existing crowdsourcing mechanisms is not cost-effective due to the lack of control over crowd participation and the problem-specific qualifications of the crowd. Based on the previous finding that more people demonstrate good search strategies than reach good solutions, we proposed in this paper to mimic human search demonstrations by inversely learning a Bayesian optimization algorithm, so that a long-term search can be executed more effectively by the computer even when human solvers abandon the problem. Through simulation and case studies, we showed improved performance of BO when it is equipped with parameters learned from an effective human search. However, the significant performance gap between the human demonstrators and the proposed algorithm in the case study suggests room for improvement of the algorithm. Future investigation will focus on closing this gap by exploring more suitable cognitive models of human solution searching for specific types of optimal design problems.
Funding Data
National Science Foundation (Grant No. CMMI-1266184).
Appendix
Derivation of the Partition Function (ẐBO).
IBO Behavior Under Near-Random Search
Properties of l and l̃.
From Sec. 3.1, the unbiased estimation of ZBO through importance sampling is given by Eq. (5). The resulting cost l has the following properties.
Property 1. αBO = 0 leads to l = 0, indicating that the sample is considered to be uniformly drawn. One can see that the optimal cost of LBO is nonpositive, as one can always achieve LBO = 0 by considering the samples to be uniformly drawn.
Property 2. When the expected improvement function is constant almost everywhere, i.e., QEI(x) = c for almost all x in X, we have LBO = 0 almost surely. This is because a uniformly drawn initial guess will almost surely satisfy the optimality condition for maximizing a constant function.
Property 3. When the sample size is small and the expected improvement function is not flat, LBO < 0 for a small αBO. Here, we need to introduce a conjecture: Let Q̄EI be the average expected improvement over X, and A be the measure of the subspace where the sampled expected improvement value is higher than Q̄EI. A decreases from above half of |X| to below half of |X| along the increase of the BO sample size. In other words, a uniformly drawn sample has more than 50% of chance to have an expected improvement value higher than Q̄EI at the early stage of BO and less than 50% at the late stage.
One piece of evidence for the conjecture is illustrated in Fig. 2: In the first iteration, Q̄EI is slightly lower than 0.5 while the majority of X has QEI(x) > Q̄EI; in the fourth iteration, however, only a small region around the peak has QEI(x) > Q̄EI. Using this conjecture, we can show that l < 0 with high probability when the sample size is small, and thus LBO < 0. Together with Property 1, we have LBO < 0 for a small αBO and a small sample size.
Property 4. We notice that in this experiment, the discrepancy between LHS and the modeled max–min sampling scheme leads to overall high (positive) l̃ values, indicating that the samples are not likely to follow this scheme. This is consistent with the fact that LHS is not exactly the same as max–min sampling, at least until all of h0 have been considered. We also see that negative values can be observed when αINI is low, suggesting that the LHS samples can be better explained by a loosely executed max–min sampling scheme than by a strict one.
Discussion on Findings From Fig. 3.
We now summarize a complete list of findings based on these properties. Finding 1. A comparison between αINI = 1.0 and 10.0 leads to a finding consistent with Property 4. Since the samples are not likely to be drawn from a strictly executed max–min sampling scheme, the entire search trajectory is considered to be created from BO in the case of αINI = 10.0. While the early samples (fewer than 10) can be considered as coming from max–min sampling when αINI = 1.0, the low magnitude of l̃ causes this difference to be visible only in the case of λ = 10.0I, where the magnitude of l is also low.
Finding 2. IBO correctly identifies the true λ within a few iterations after the initial exploration, except for the case of λ = 10.0I. To explain this exception, we first note that λ = 10.0I leads to an expected improvement function that is constant almost everywhere (except for the sampled locations, where QEI = 0), and thus, BO reduces to uniform sampling. From Property 2, LBO = 0 almost surely when we have the correct guess of λ̂. Also, recall from Property 4 that LINI > 0 when αINI is high. The above two together explain why, with the correct guess of λ̂, we have L close to zero when αINI is high and slightly negative when αINI is low.5
To explain the negative L values for the incorrect guesses of λ̂, we use Property 3 to show that when the sample size is small and the expected improvement function is not flat, LBO < 0 for a small αBO, and thus L < 0. To summarize, Finding 2 suggests that for a search trajectory with a limited length that resembles a random search, the proposed IBO approach will consider it as being derived from a BO that loosely solves Eq. (4). However, this caveat is of little practical concern, since (1) a random search rarely outperforms BO with nontrivial settings and (2) a BO with a low αBO (instead of a large λ) can equally simulate a random search.
For completeness, we retained 1000 principal components as preprocessing and obtained the most likely numbers of ICA components under three suitable criteria, minimum description length, Akaike information criterion, and Kullback information criterion, as 187, 464, and 373, respectively, using the method from Ref. [45]. While these dimensionalities could make sense from a neurological perspective (e.g., given that the game takes 36 s, the corresponding decision interval is close to the time-frame of attentional blink, which is 200–500 ms [46]), the resultant high-dimensional solution spaces are unfavorable for BO.
But why does the guess of λ̂ = 10.0I lead to significantly decreasing L in the other three cases? This is because in those cases, BO does not resemble random sampling, i.e., the sequences of samples are more clustered. When a new sample is among this cluster, its similarities to existing ones are nonzero even when a large λ is assumed, due to the small Euclidean distances among the pairs. In turn, the expected improvement function has peaks within the clusters and remains constant far away from them, rather than being constant almost everywhere. As a result, the optimal value of l with respect to αBO becomes negative, even when λ̂ is incorrectly guessed as 10.0I.