WO2000020983A1 - A system and method for determining production plans and for predicting innovation - Google Patents

A system and method for determining production plans and for predicting innovation

Info

Publication number
WO2000020983A1
WO2000020983A1 (PCT/US1999/022911)
Authority
WO
WIPO (PCT)
Prior art keywords
production
recipe
cost
production recipe
operations
Application number
PCT/US1999/022911
Other languages
French (fr)
Other versions
WO2000020983A8 (en)
Inventor
Philip Auerswald
Stuart A. Kauffman
Jose Lobo
Karl Shell
Original Assignee
Bios Group Lp
Application filed by Bios Group Lp filed Critical Bios Group Lp
Priority to AU62836/99A (publication AU6283699A)
Publication of WO2000020983A1
Publication of WO2000020983A8

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 - Administration; Management
    • G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling

Definitions

  • the preferred embodiment is not restricted to myopic recipe selection.
  • FIG. 1 is a flow diagram illustrating the method of a preferred embodiment.
  • a set of one or more production recipes is defined, each recipe comprised of n operations.
  • a technological graph Γ is constructed whose nodes are the recipe vectors ω.
  • a measure of distance between the recipe vectors is defined.
  • a production recipe trial on Γ is defined to be a recipe vector within a distance δ ∈ {1, 2, ..., n} of a currently adopted recipe vector.
  • productivity gain or loss is defined in terms of an estimated progress ratio, and at step 150 production recipes are evaluated by measuring the productivity gain or loss of trials on Γ.
  • production is assumed to involve n distinct engineering operations.
  • the finiteness of the set Ω of recipe vectors has serious consequences.
  • a model based on a finite space of recipes does not permit long-run productivity growth.
  • the model with a finite set of recipes is, however, quite appropriate for modeling the intermediate term productivity improvements observed in the manufacture of specific goods (such as a particular model of a microprocessor) over their relatively short product lives (measured in months, years or — at the very most — decades).
  • the unit labor cost of operation i, Θi(ω), is a random variable whose distribution function is defined on ℝ+.
  • the random variables Θi(ω) and Θi(ω′) are not necessarily independent. In fact, Θi depends on the instructions, ωi, for operation i and possibly on (some of) the instructions for the other operations, ω−i. (With minor abuse of notation, one could then have denoted the unit labor costs of operation i by Θi(ωi; ω−i), or more simply, Θi(ω).)
  • Θ(ω) is the unit cost of production employing recipe ω.
  • Θ(ω) is a random variable. If ω is allowed to vary over Ω, then Θ(ω) is a random field.
  • a random field is a slight generalization of a stochastic process to allow the argument (in this case ω) to be a vector (as opposed to being a scalar such as "time").
  • when n = 1, Θ(ω) is an ordinary stochastic process.
  • θi(ω) denotes the realization of the random variable Θi(ω); similarly, θ(ω) denotes the realization of Θ(ω).
  • (1) n, the number of production operations
  • (2) s, the number of possible instructions or settings for each operation
  • (3) e, the externality parameter that gives the number of operations whose settings affect the costs of one operation
  • (4) δ, the maximum step size per trial
  • (5) β, the number of trials made on the shop floor per measured batch
  • (6) T, the length of the production run.
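For concreteness, the sketches that follow in this section collect these six parameters in one structure. This is a minimal illustrative sketch, not the patent's implementation: the class name, the field names, the rendering of the trials-per-batch parameter as beta, and the base values shown are all assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelParams:
    """The six model parameters listed above (names are illustrative)."""
    n: int        # (1) number of production operations
    s: int        # (2) possible instructions/settings per operation
    e: int        # (3) externality: how many operations' settings affect each operation's cost
    delta: int    # (4) maximum number of operations changed per trial
    beta: float   # (5) (average) number of trials per measured batch
    T: int        # (6) length of the production run, in trials

# an arbitrary base case, for illustration only
base = ModelParams(n=100, s=4, e=2, delta=1, beta=1.0, T=1000)
```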
  • the appropriate metric might be altogether different.
  • the distance would in this case be small, since every chemist is aware that the elements chlorine and fluorine are likely to react similarly because they are from the same column of the Periodic Table of the Elements.
  • Nl(ω) = {ω′ ∈ Ω : d(ω, ω′) ≤ l}, where l is a positive integer.
  • Nδ(ω) 310 is a δ-neighborhood of ω 210.
  • the nodes or vertices 410 of Γ are the recipes ω ∈ Ω.
  • the edges 420 of Γ connect a given recipe to recipes distance 1 away (i.e., to the elements of N1(ω)).
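A minimal sketch of one natural choice for this machinery: the distance d(ω, ω′) is taken to be the number of operations whose settings differ, so that N1(ω) contains exactly the recipes reachable by changing one setting. As the chlorine/fluorine bullets above note, the appropriate metric for a given application might be altogether different; everything here is illustrative.

```python
import itertools
import random

def distance(omega, omega_prime):
    """d(ω, ω′): the number of operations whose settings differ."""
    return sum(1 for a, b in zip(omega, omega_prime) if a != b)

def neighborhood(omega, l, s):
    """N_l(ω): all recipes within distance l of ω (ω itself excluded).

    Settings are taken to be {0, ..., s-1}; full enumeration is feasible
    only for small n and s, which is why the walk sketched later samples
    trials instead of enumerating them.
    """
    n = len(omega)
    result = []
    for k in range(1, l + 1):
        for ops in itertools.combinations(range(n), k):
            # all ways of re-setting the chosen k operations to different settings
            choices = [[v for v in range(s) if v != omega[i]] for i in ops]
            for new_vals in itertools.product(*choices):
                trial = list(omega)
                for i, v in zip(ops, new_vals):
                    trial[i] = v
                result.append(tuple(trial))
    return result

def sample_trial(omega, delta, s, rng=random):
    """Draw a trial recipe within distance δ of ω.

    Uniform over step sizes 1..δ, then uniform over recipes at that step
    size; this is NOT uniform over N_δ(ω), just one convenient choice
    among the trial distributions contemplated in this section.
    """
    k = rng.randint(1, delta)
    trial = list(omega)
    for i in rng.sample(range(len(omega)), k):
        trial[i] = rng.choice([v for v in range(s) if v != trial[i]])
    return tuple(trial)
```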
  • the input requirement field (LRF) is a full description of all the basic technological possibilities facing the firm: it is the random field Θ(ω), defined over the vertices of the technological graph Γ.
  • the set of sub-recipes cost-relevant to operation i is a projection of the sⁿ recipes into sᵉ sub-recipes. There are nsᵉ such sub-recipes in all.
  • e is an inverse measure of the correlation between Θ(ω′) and Θ(ω) for ω′ close to ω.
  • the corresponding landscape θ(ω) (a realization of the LRF) is typically "correlated," or "smooth," for small values of e, while θ(ω) is typically "uncorrelated," or "rugged," for large values of e.
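The bullets above pin down enough structure for a concrete realization: operation i's cost depends on its own setting and on the settings of e other cost-relevant operations, each realized operation cost lies in [0, 1/n], and the recipe cost is their sum. The sketch below realizes such a field in the spirit of NK landscape models; the uniform distribution and the lazy memoization are implementation conveniences assumed here, not the patent's specification.

```python
import random

class Landscape:
    """One realization θ(ω) of the input requirement field (LRF), NK-style.

    Operation i's cost depends on its own setting and on the settings of the
    e other operations that are cost-relevant to it.  Each sub-recipe's cost
    is drawn once, uniformly from [0, 1/n], and memoized, so the landscape is
    a fixed realization; the sum over the n operations then lies in [0, 1].
    """

    def __init__(self, n, s, e, seed=0):
        assert e <= n - 1, "each operation can have at most n-1 cost-relevant others"
        self.n, self.s, self.e = n, s, e
        self.rng = random.Random(seed)
        # fixed externality connections: which other operations affect operation i
        self.links = [self.rng.sample([j for j in range(n) if j != i], e)
                      for i in range(n)]
        self.cache = {}

    def op_cost(self, i, omega):
        """θi(ω): depends only on operation i's cost-relevant sub-recipe."""
        sub = (i, omega[i]) + tuple(omega[j] for j in self.links[i])
        if sub not in self.cache:
            self.cache[sub] = self.rng.uniform(0.0, 1.0 / self.n)
        return self.cache[sub]

    def cost(self, omega):
        """θ(ω): unit labor requirement of recipe ω, in [0, 1]."""
        return sum(self.op_cost(i, omega) for i in range(self.n))
```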
  • if β is a positive integer, it is interpreted as the number of trials per measured batch.
  • more generally, β > 0 is interpreted as the average number of trials per measured batch; β > 1 if there are more trials than measured batches, and β < 1 if there are fewer.
  • the trials parameter β is likely to be higher for airframes than for computer chips, because the number of operations, n, is likely to be higher in the airframe case.
  • the trials parameter β also depends on management practices, corporate culture, and worker psychology. In a tight production environment, there would be fewer defects, but fewer trials. In a looser production environment, there would be more defects, but also more trials. Hence management is likely to be keen to attempt to control β if possible to achieve an optimal balance between "sloppiness" and "inventiveness."
  • a trial can be interpreted in at least three (non-exclusive) ways.
  • the first interpretation is that the trial is a small-scale experiment in production to which the firm does not fully commit. This would be a model of R&D in which the only cost of the R&D activity is the missed opportunity for investigating alternative recipes during the period in question.
  • a natural alternative is to assume that the firm must commit to the new production recipe in order to sample it: if recipe ωt is chosen during production run t, then the labor requirement for run t will be the realization of Θ(ωt). In this case, unit costs may actually increase from one period to the next, i.e., cost retrogressions could occur.
  • a third interpretation is possible.
  • Each sub-unit begins a production run at time t with an assumed labor requirement θ(ωt−1), the unit cost for run t − 1.
  • the associated labor requirement for this production sub-unit is then θ(ω′), the realization of Θ(ω′).
  • the average per-batch cost of production is close to θ(ωt−1), the unit cost of the (pre-trial) reigning recipe. Consequently, in this scenario, one can think of firms trying new recipes without substantial sacrifices in current labor requirements.
  • This third interpretation is utilized in the preferred embodiment, because it allows simulations to be based on an existing, tested computer program.
  • the probability distribution of trials could be different. For example, a scheme that loads more probability on the recipes closer to ωt−1 would in general be more realistic. Such a modification, a matter of weighting the probability distribution as a function of the distance from ωt−1, would be obvious to one of ordinary skill in the art.
  • the density is 0 for the extreme labor requirements, 0 and 1; otherwise it is positive. If the sub-recipe ωi is currently available, then Θi(ωi) is a degenerate random variable, in which all of the probability is massed on a scalar θi(ω) ∈ [0, 1/n]. If the recipe ω is currently available, then Θ(ω) is a degenerate random variable, in which all of the probability is massed on a scalar θ(ω) ∈ [0, 1].
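Under the third interpretation above, the cost-reduction dynamics of FIG. 5 reduce to a walk: each period a trial recipe within distance δ of the reigning recipe is sampled, and it replaces the reigning recipe only if its realized unit cost is lower. A minimal sketch; the cost argument can be, for example, the Landscape.cost method from the previous sketch, and the trial draw is the same non-uniform convenience as in sample_trial above.

```python
import random

def production_run(cost, omega0, T, delta, s, seed=0):
    """Walk the landscape for T trials, adopting a trial only if it is cheaper.

    `cost` maps a recipe tuple to its realized unit labor requirement, e.g.
    Landscape(...).cost from the sketch above.  Returns one unit cost per
    trial: under the third interpretation, each batch is produced (almost
    entirely) with the reigning recipe, so the reigning cost is recorded.
    """
    rng = random.Random(seed)
    omega = tuple(omega0)
    current = cost(omega)
    history = []
    for _ in range(T):
        trial = list(omega)
        k = rng.randint(1, delta)                      # change at most δ settings
        for i in rng.sample(range(len(omega)), k):
            trial[i] = rng.choice([v for v in range(s) if v != trial[i]])
        c = cost(tuple(trial))
        if c < current:                                # myopic accept/reject rule
            omega, current = tuple(trial), c
        history.append(current)
    return history
```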
  • ℓt = θ(ωt) is the unit labor requirement for production run t.
  • Yt−1 = y1 + y2 + ... + yt−1 is cumulative output up to (but not including) run t. The experience-curve specification is then ℓt = a(Yt−1)⁻ᵇ.
  • b > 0 is the learning coefficient
  • a > 0 is the labor needed to produce the first batch of the good.
  • one airframe is equal to one measured batch.
  • Yt would be the serial number of the last airframe in production run t.
  • the learning coefficient represents the rate at which the productivity increases as the firm acquires "experience.”
  • a commonly used measure of productivity improvement is the progress ratio, p = 2⁻ᵇ: the factor by which unit cost is multiplied when cumulative output doubles.
  • a small progress ratio is an indicator of rapid cost improvement, while a higher progress ratio indicates less cost improvement.
  • ℓt−1 is the average unit labor requirement (or cost) of producing the goods with serial numbers greater than Yt−2 and less than or equal to Yt−1.
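Estimating the experience curve from a simulated (or observed) cost series is then a one-variable OLS regression of log ℓt on log Yt−1, with the progress ratio following as p̂ = 2^(−b̂). A dependency-free sketch, assuming one measured batch per observation so that the batch serial number stands in for cumulative output.

```python
import math

def estimate_progress_ratio(costs):
    """OLS fit of log ℓt = log a − b·log Yt−1; returns (b̂, p̂) with p̂ = 2**(−b̂).

    Assumes one measured batch per observation, so the batch serial number t
    stands in for cumulative output.
    """
    xs = [math.log(t) for t in range(1, len(costs) + 1)]
    ys = [math.log(c) for c in costs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b_hat = -(sxy / sxx)          # b̂ = |slope| when the fitted line slopes down
    return b_hat, 2.0 ** (-b_hat)
```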
  • in Table 1 (FIG. 6) are displayed some estimates of the learning coefficients and the progress ratios from firm-level experience curves (by industry). From this and the review articles by Conley (1970) (P. Conley, Experience curves as a planning tool, IEEE Spectrum, 7:63-68, 1970), Yelle (1979) (L.E. Yelle, The learning curve: Historical review and comprehensive survey, Decision Sciences, 10, January 1979), and Dutton and Thomas (1984) (J.M. Dutton and A. Thomas, Treating progress functions as a managerial opportunity, Academy of Management Review, 9:235-247, 1984), it can be seen that estimated progress ratios vary widely across firms and industries, clustering around 80%.
  • the most fundamental unit of analysis is a single realization of the production run, examples of which are displayed in FIGS. 20-23.
  • a "point” in one of these figures gives the normalized log of the unit labor requirement for the currently prevailing technology.
  • the horizontal axis is the log of time (or, equivalently, the log of the accumulated number of quality-control batches to date).
  • a production run is then a "walk" on a landscape (i.e., a realization θ(ω) of the LRF Θ(ω)).
  • the line in one of these figures is the OLS fit to the points in the figure.
  • the landscape θ(ω) and the method of "walking" on this landscape have now been completely defined. All that remains to be specified is the starting point (on the landscape) for the walk. It should be clear to those of ordinary skill in the art that for some applications, the starting point might be given by information about the production experience of competitors or suggestions from the firm's R&D department. In the absence of such prior information, one can merely pick one recipe randomly (with uniform probabilities over the sⁿ recipes in the set Ω) to be the starting point.
  • the log (unless otherwise indicated, log denotes the natural logarithm) of the labor requirement is re-normalized so that the adjusted log of the initial unit labor requirement is 1.0.
  • adjusted log labor requirement = 1 + log ℓ.
  • the adjusted log labor requirement is negative for ℓ < 0.36787. Negative values of this convenient measure cause no problems (though it would be economic nonsense if unadjusted ℓ were to be negative).
  • b̂ is the OLS estimate of the learning parameter b, i.e., b̂ is the absolute value of the slope of the regression line.
  • ℓT is the labor requirement after T trials (or the "terminal" labor requirement, for short).
  • the terminal labor requirement ℓT is usually (but not always!) inversely related to b̂: usually, the lower the final labor requirement, the steeper is the experience curve. If there were no specification error of the experience curve, this would always be the case and ℓT would be an uninteresting statistic.
  • b̂ would then be small because OLS would heavily weight the asymptote in this case.
  • the fact that z is small indicates that the estimated learning coefficient b̂ (and hence the estimated progress ratio p̂) might be misleading.
  • the measure z is preferable to direct measures of plateauing (e.g., average plateau length) because it is less sensitive to the distortions caused by the presence of a long final plateau.
  • the transient is of greater interest than the steady state, since in most real world cases, the rate of product replacement (due to, say, a new, superior product) is rapid relative to the exhaustion of productivity improvements for the original product.
  • the total number of improvements observed is weighted by 1/T, so that the measure will reflect, not the absolute number of improvements found, but rather the likelihood that a new observation will be a productivity improvement.
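Computed on a cost history, the statistic is just the share of observations that improve on their predecessor. A sketch over the history produced by the walk sketched earlier:

```python
def improvement_share(costs):
    """z: the fraction of trials whose outcome improves on the reigning cost.

    An inverse measure of plateauing: long runs without improvement push z
    toward zero.
    """
    steps = list(zip(costs, costs[1:]))
    return sum(1 for prev, cur in steps if cur < prev) / len(steps)
```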
  • in FIGS. 20-23 only one production run was used.
  • FIGS. 24-28 present averaged data of multiple production runs based on a given vector of parameters.
  • FIG. 24 is nearly the same as FIG. 20 except that in FIG. 24 the data is averaged over 20 separate production runs.
  • the vertical axis in FIG. 24 measures the adjusted log labor requirement of the industry average over 20 firms or, alternatively, the expected adjusted log labor requirement of a single firm.
  • the average walk shown in FIG. 25 is even smoother.
  • FIG. 26 is the 20-firm average run using the same rugged-landscape parameters as FIG. 22.
  • FIG. 27 is the 50-firm average run based on the same rugged landscape data. Plateauing is further reduced; 15.7% of the trials result in improvements.
  • the estimated progress ratio for each of the 3 cases (FIGS. 22, 26, 27) is about 98%. It is difficult to judge the SFS effect when there is so much plateauing, but it is positive and one could argue that the effect is constant across FIGS. 22, 26, and 27.
  • FIG. 28 is the 20-firm average related to the single firm walk in FIG. 23. The averaging reduces plateauing (increases z), has no effect on the progress ratio or the terminal labor requirement, and seems to reduce the SFS effect but only slightly.
  • computing 20 different realizations means running the simulation program using the same parameter set, but with 20 different random seeds.
  • a new random seed yields a new realization of the externality connection among the operations, a new realization of the landscape θ(ω), a new starting point on the landscape, and hence a new experience curve.
  • the "maximum degree of randomness" between different realizations of the experience curve was chosen.
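A sketch of the averaging procedure, reusing the helpers from the sketches above (ModelParams, Landscape, production_run). Each seed re-realizes the externality connections, the landscape, and the uniform random starting recipe, and each walk is renormalized to 1 + log(ℓ/ℓ0) so that, as described above, the adjusted log labor requirement starts at 1.0.

```python
import math
import random

def averaged_curve(params, runs=20):
    """Mean adjusted-log-labor-requirement path over `runs` independent seeds."""
    paths = []
    for seed in range(runs):
        rng = random.Random(seed)
        scape = Landscape(params.n, params.s, params.e, seed=seed)
        omega0 = tuple(rng.randrange(params.s) for _ in range(params.n))
        costs = production_run(scape.cost, omega0, params.T,
                               params.delta, params.s, seed=seed)
        c0 = costs[0]
        # renormalize so each walk's adjusted log labor requirement starts at 1.0
        paths.append([1.0 + math.log(c / c0) for c in costs])
    # average across seeds, trial by trial
    return [sum(p[t] for p in paths) / runs for t in range(params.T)]
```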
  • the focused parameter set was constructed from the above four base case vectors by varying the six parameters one at a time. Summary statistics for the focused parameter set are given in Table 3 (FIG. 8). There are 173 parameter vectors in the focused set. For each vector there are 20 runs, so the total number of runs is 3,460.
  • the method of choosing parameters would be that described in FIG. 29.
  • the base parameter vector is given as row 3, column 5 of the simple 10 × 10 matrix recipe set in FIG. 29.
  • the set of focused parameters is the union of the set of recipes having a row-3 component with the set of recipes having a column-5 component.
  • the focused parameter set is in the shaded "cross.”
  • the advantage of the focused parameter method is that one begins with a set of reasonable base parameters and then tests the sensitivity of the predictions to each of these parameters varied one at a time from each of the base vectors.
  • the parameters are often set at "extreme" values to test parameter sensitivity.
  • the number of parameter vectors in the set of random parameters is 250. There are 20 runs per vector. Hence there are 5000 runs in all for the random parameter set. Summary statistics for the random parameter set are given in Table 4 (FIG. 9).
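A sketch of the two constructions, reusing ModelParams from above: focused_set varies one parameter at a time away from a base vector (the shaded "cross" of FIG. 29), while random_set draws each parameter independently. The candidate value ranges are left to the caller and are not the patent's.

```python
import random
from dataclasses import replace

def focused_set(base, ranges):
    """One-at-a-time variation away from `base`: the shaded 'cross' of FIG. 29.

    `ranges` maps a ModelParams field name to the candidate values for it.
    """
    vectors = [base]
    for field, values in ranges.items():
        vectors += [replace(base, **{field: v})
                    for v in values if v != getattr(base, field)]
    return vectors

def random_set(ranges, count=250, seed=0):
    """Independent uniform draws of every parameter; `ranges` must cover all six fields."""
    rng = random.Random(seed)
    return [ModelParams(**{f: rng.choice(vs) for f, vs in ranges.items()})
            for _ in range(count)]
```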
  • the effect of varying a particular parameter on one of the predictions (or "results") of the model typically depends on the interplay of two effects: (1) the effect of the parameter change on the size of the recipe space and (2) the effect of the change on the trial (or, recipe sampling) mechanics. Typically, these effects are too complicated to permit an analytic solution, especially since the bias is on the short term and the medium term. Computation is called for.
  • the first effect of increasing n is to increase the number of recipes, sⁿ. This effect (especially for large T) should tend to increase long-term productivity improvement.
  • increasing n decreases (especially when e is small) the expected cost reduction of one-step or few-step changes in the recipe on unit labor costs, because — given the assumption of additive unit costs — with more operations each operation contributes less to the overall cost. This suggests that increasing n might decrease the rate of short-term (and medium-term) productivity improvement.
  • the sample means, p̄ (over 20 runs), and the sample standard deviations, sp, are plotted. It was observed that for small n the effect of increasing n is to decrease p̄ (i.e., to increase the mean rate of productivity improvement), but for larger n, the effect on p̄ of increasing n is positive. As n becomes even larger, the effect on p̄ of increasing n attenuates.
  • the standard deviation, sp, is decreasing in n.
  • Varying T provides a method for analyzing the curvature mis-specification of the experience curve. See FIG. 35.
  • the statistic z (percent of trials that result in improved productivity) is an inverse measure of plateauing. Increasing the number of recipes by increasing n decreases plateauing, with no discernible effect on the standard deviation sz (see FIG. 36). Increasing the number of recipes by increasing s decreases plateauing but increases (except for large values of s) the standard deviation sz (see FIG. 37). Increasing e, in general, increases plateauing (see FIG. 38). Increasing δ, in general, increases plateauing, but for small e, s, and δ, it appears that increasing δ reduces plateauing (see FIG. 39). The positive curvature effect is pronounced in FIGS. 36-39.
  • in Table 5 (FIG. 10) and Table 6 (FIG. 11) are summarized the results of OLS scoring for prediction of p̄.
  • the R²s are not very high. This is probably because the functional form being used is highly mis-specified. Eyeball scoring indicated that the interaction effects of the parameters can be quite subtle and that the effects of varying even a single parameter are not monotone. Nonetheless, the t tests yield high levels of significance for most of the parameters.
  • the R²s and the t tests are more favorable for the random set. This is probably because the random set does not suffer from "FIG. 29 bias," i.e., (1) the data is more dispersed for the random set and (2) the focused set conditions more on the interesting small values for which the results tend to be non-monotone in the parameters.
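A sketch of the OLS scoring itself: regress a per-vector outcome (p̄, sp, z̄, and so on) on the six parameters plus an intercept and read off the fit. numpy's least-squares routine is used for brevity; the t statistics reported in the tables would come from the usual coefficient standard errors, omitted here.

```python
import numpy as np

def ols_score(param_rows, outcomes):
    """Regress a per-vector outcome (e.g. p̄ over its 20 runs) on the parameters.

    param_rows: sequence of (n, s, e, delta, beta, T) tuples.
    Returns (coefficients including intercept, R²).
    """
    X = np.column_stack([np.ones(len(param_rows)),
                         np.array(param_rows, dtype=float)])
    y = np.asarray(outcomes, dtype=float)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return coef, r2
```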
  • Increasing n reduces the predicted sp.
  • Increasing s increases the sp predicted from focused parameters.
  • Increasing e decreases the predictions of sp.
  • the predicted effects of varying δ differ between the two parameter sets.
  • Increasing β decreases the prediction of sp.
  • FIG. 40 discloses a representative computer system 4010 in conjunction with which the embodiments of the present invention may be implemented.
  • Computer system 4010 may be a personal computer, workstation, or a larger system such as a minicomputer.
  • the present invention is not limited to a particular class or model of computer.
  • the representative computer system 4010 of FIG. 40 includes a central processing unit (CPU) 4012, a memory unit 4014, one or more storage devices 4016, an input device 4018, an output device 4020, and a communication interface 4022.
  • a system bus 4024 is provided for communications between these elements.
  • Computer system 4010 may additionally function through use of an operating system such as Windows, DOS, or UNIX.
  • Storage devices 4016 may illustratively include one or more floppy or hard disk drives, CD-ROMs, DVDs, or tapes.
  • Input device 4018 comprises a keyboard, mouse, microphone, or other similar device.
  • Output device 4020 is a computer monitor or any other known computer output device.
  • Communication interface 4022 may be a modem, a network interface, or other connection to external electronic devices, such as a serial or parallel port.
  • the learning-by-doing model is sufficiently rich to match the reported progress ratios from estimated experience curves.
  • certain modifications are routine. For example, if one has data or priors on the values of p̄, sp, z̄, sz, c̄2, sc2, or any of the other predictions for a particular plant, firm or industry producing a specific good, then one can search for parameters n, s, e, δ, and β that predict the data.
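The search that bullet describes can be sketched as a brute-force calibration over candidate parameter vectors, scoring each by the distance between its simulated statistics and the observed ones. Everything below is illustrative and reuses the helpers sketched above; the two-statistic squared-error score is an arbitrary choice.

```python
import random

def calibrate(observed_p, observed_z, candidates, runs=20):
    """Pick the parameter vector whose simulated p̄ and z̄ best match the data.

    `candidates` is an iterable of ModelParams (e.g. from random_set above).
    """
    best, best_err = None, float("inf")
    for params in candidates:
        ps, zs = [], []
        for seed in range(runs):
            rng = random.Random(seed)
            scape = Landscape(params.n, params.s, params.e, seed=seed)
            omega0 = tuple(rng.randrange(params.s) for _ in range(params.n))
            costs = production_run(scape.cost, omega0, params.T,
                                   params.delta, params.s, seed=seed)
            ps.append(estimate_progress_ratio(costs)[1])
            zs.append(improvement_share(costs))
        err = ((sum(ps) / runs - observed_p) ** 2
               + (sum(zs) / runs - observed_z) ** 2)
        if err < best_err:
            best, best_err = params, err
    return best
```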

Abstract

The present invention relates generally to design and evaluation of research and development, technology transfer, and learning-by-doing, and more particularly to the determination of production plans and the prediction of innovation. A preferred embodiment comprises a method for determining a production plan comprising the steps of: defining a plurality of production recipes such that each of said production recipes is a vector of n operations; selecting a current one of the production recipes; evaluating the current production recipe to determine its cost; modifying the current production recipe to create a trial production recipe (130); evaluating the trial production recipe to determine its cost (140, 150); and assigning the trial production recipe to the current production recipe if the cost of the trial production recipe is less than the cost of the current production recipe.

Description

A SYSTEM AND METHOD FOR DETERMINING PRODUCTION PLANS AND
FOR PREDICTING INNOVATION
FIELD OF THE INVENTION
The present invention relates generally to design and evaluation of research and development, technology transfer, and learning-by-doing, and more particularly to the determination of production plans and the prediction of innovation.
BACKGROUND OF THE INVENTION
According to the neoclassical model, a production plan is merely a point in input-output space. The neoclassical model has been extended to accommodate inter-temporal features such as the variability over time of factory supplies, uncertainty about the production process, and uncertainty about prices. The neoclassical model of production is not, however, fully dynamic since it does not provide a microeconomic basis for explaining technological evolution due, for example, to learning-by-doing, education and training, research and development, or technology transfer.
In this regard, macroeconomics is ahead of its microeconomic foundations. In his celebrated article on learning-by-doing, Arrow (1962) (K.J. Arrow, The economic implications of learning-by-doing, Review of Economic Studies, 29:155-73, 1962) accounted for the observed fact that unit production costs could fall even in the absence of capital accumulation and R&D effort. Arrow attributed the increased productivity to learning-by-doing on the shop floor by production workers and managers. Arrow modeled learning-by-doing as a positive macroeconomic production externality: increases in "manufacturing experience" — as measured (for example) by cumulative gross investment — lead to increased productivity. Several other macro models of technological progress were based on some production externality. See, for example, Clemhout and Wan (1965) (S. Clemhout and H. Wan Jr., Learning-by-doing and infant industry protection, Review of Economic Studies, 32:233-240, 1965), Shell (1967) (K. Shell, A model of inventive activity and capital accumulation, in K. Shell, editor, Essays on the Theory of Optimal Economic Growth, pages 67-85, 1967), Stokey (1988) (N.L. Stokey, Learning by doing and the introduction of new goods, Journal of Political Economy, 96(4):701-719, 1988); Romer (1990) (P.M. Romer, Endogenous technological change, Journal of Political Economy, 98(5):S71-S102, 1990); and Lucas (1993) (R.E. Lucas Jr., Making a miracle, Econometrica, 61(2):251-272, March 1993).
Another class of (not unrelated) macro models of technological evolution was based on non-conventional factors of production. Uzawa (1965) (H. Uzawa, Optimum technical change in an aggregative model of economic growth, International Economic Review, 1965) introduced in a simple growth model "human capital," the stock of which can be increased by devoting resources to education. In the hands of Lucas (1988) (R.E. Lucas Jr., On the mechanics of economic development, Journal of Monetary Economics, 22:3-42, 1988); Caballe and Santos (1993) (J. Caballe and M. Santos, On endogenous growth with physical and human capital, Journal of Political Economy, 101(6):1042-1067, December 1993) and others, this human capital model (with clearly modeled externalities) became a staple for analyzing productivity growth. Shell (1966) (K. Shell, Toward a theory of inventive activity and capital accumulation, American Economic Review, 56(2):62-68, May 1966); Shell (1967); Shell (1973) (K. Shell, Inventive activity, industrial organization and economic growth, in J.A. Mirrlees and N.H. Stern, editors, Models of Economic Growth, pages 77-100, 1973); Romer (1986) (P.M. Romer, Increasing returns and long-run growth, Journal of Political Economy, 94:1002-1037, 1986); and Romer (1990) introduced into growth models the macro variable "technological knowledge." The Shell-Romer model combines "technological knowledge" (or "stock of patents") with production externalities and increasing returns-to-scale to analyze the role of industrial organization in growth, the dependence of growth on initial conditions, and other important macro problems.
Macroeconomic models based on production externalities and/or non-conventional inputs have been useful in raising important issues about public policy toward technology and in explaining observed increases in aggregate output, but the inadequacy of the
microeconomic foundations of these models is a serious problem for the analysis of production.
The neoclassical economic model of production (see e.g., Debreu (1959, Chapter 3) (G. Debreu, Theory of Value: An Axiomatic Analysis of Economic Equilibrium, 1959), Arrow and Hahn (1971, Chapter 3) (K.J. Arrow and F. Hahn, General Competitive Analysis,
1971), and the references to the literature therein) is a reduced-form model of existing technological possibilities. Each firm is endowed with a technology set — a set of technologically feasible input-output combinations. These technology sets are assumed to be fixed parameters of the neoclassical economy.
A simple example with one input and one output is described in FIG. 16. A neoclassical production plan is a point (x, y) where x ≥ 0 is the quantity of the input and where y ≥ 0 is the quantity of the output. The (shaded) set T is the set of all feasible neoclassical production plans. The northwest boundary of T is the production possibility frontier (PPF). Points on the PPF represent the "efficient" production plans according to the neoclassical model, since no other plan can be found that gives either more output for the same input or less input for the same output. In this example, the PPF is linear, i.e., production exhibits constant returns to scale. The production function is y = θx, where the positive scalar θ is the slope of the PPF.
From the viewpoint of neoclassical production models, production plan A in FIG. 16 is "efficient." Production plan B is "inefficient." In fact, A strictly "dominates" B: pair A yields more output with less input than pair B. A and B are nearby (or similar) production plans. C is distant from A and B.
Now look at FIG. 16 from the engineering point of view. Suppose that engineers tell us that there are only two known processes (Recipe 1 and Recipe 2) for producing this output. Recipe 1 supports all pairs (x, y) ≥ 0 that satisfy y ≤ θx, where θ is a positive scalar. Recipe 2 supports all pairs (x, y) ≥ 0 that satisfy y ≤ θ′x, where θ′ < θ is a positive scalar. The production pairs A and C lie on the ray y = θx, while pair B lies on the ray y = θ′x (indicated in FIG. 16 by the dashed line).
Suppose Recipe 1 is from an engineering viewpoint very different from Recipe 2, even though pair A is close to pair B — would it ever be rational to produce at B? The answer is yes. Suppose that Recipe 2 is relatively untried. Using Recipe 2 might lead to the discovery of recipes close to Recipe 2, but with lower production costs than Recipe 1 (and Recipe 2). A case could be made in some circumstances for using only Recipe 2 and in other circumstances for using both Recipe 1 and Recipe 2 simultaneously.
According to the neoclassical model, production pairs A and B are close (or similar), while the production pairs A and C are far apart (or dissimilar). But as measured by their recipes, A and B are by assumption far apart (or dissimilar) if Recipe 2 is used for B, while plans A and C must be based on precisely the same recipe, Recipe 1. Furthermore, the neoclassical model does not accurately represent the opportunities facing the production manager. He must jointly choose the recipe and the production plan. Having chosen Recipe 1, only a production pair from the PPF should be chosen. Having chosen Recipe 2, only production pairs from the dashed ray in FIG. 16 should be chosen. All other pairs in T (i.e., those not satisfying y = θx or y = θ′x) result from waste of some output or some input when using one or both of the two basic engineering processes.
SUMMARY OF THE INVENTION
A preferred embodiment comprises a method for determining a production plan comprising the steps of: defining a plurality of production recipes such that each of said production recipes is a vector of n operations; selecting a current one of said production recipes; evaluating said current production recipe to determine its cost; modifying said current production recipe to create a trial production recipe; evaluating said trial production recipe to determine its cost; and assigning said trial production recipe to said current production recipe if said cost of said trial production recipe is less than said cost of said current production recipe.
A second preferred embodiment comprises a method for predicting technological innovation comprising the steps of: defining a model comprising: a plurality of production recipes such that each of said production recipes is a vector of n operations; and a plurality of model parameters; and executing said model comprising the steps of: selecting a current production recipe; evaluating said current production recipe to determine its cost; modifying said current production recipe to create a trial production recipe; and assigning said trial production recipe to said current production recipe if said cost of said trial production recipe is less than said cost of said current production recipe.
BRIEF DESCRIPTION OF THE DRAWINGS
The following detailed description, given by way of example, will best be understood in conjunction with the accompanying drawings in which:
FIG. 1 is a flowchart diagram illustrating the general steps of a preferred embodiment. FIG. 2 illustrates an analytical representation of a production recipes model used in a preferred embodiment.
FIG. 3 illustrates the δ-neighborhoods used in a preferred embodiment.
FIG. 4 illustrates a technological graph.
FIG. 5 illustrates the analytical representation of cost reduction dynamics used in a preferred embodiment.
FIG. 6 is a table (Table 1) of estimated progress ratios for various industries.
FIG. 7 is a table (Table 2) summarizing the parameters used in a preferred embodiment.
FIG. 8 is a table (Table 3) summarizing the results of a 20-run per parameter vector experiment run on a focused parameter set.
FIG. 9 is a table (Table 4) summarizing the results of a 20-run per parameter vector experiment on a random parameter set.
FIG. 10 is a three part table (Table 5) illustrating the prediction of the sample mean of the estimated progress ratios by OLS scoring from the focused parameter set. FIG. 11 is a three part table (Table 6) illustrating the prediction of the sample mean of the estimated progress ratios by OLS scoring from the random parameter set.
FIG. 12 is a three part table (Table 7) illustrating the prediction of the sample standard deviation of the estimated progress ratios by OLS scoring from the focused parameter set. FIG. 13 is a three part table (Table 8) illustrating the prediction of the sample standard deviation of the estimated progress ratios by OLS scoring from the random parameter set.
FIG. 14 is a three part table (Table 9) illustrating the prediction of the mean improvement percentage per measured batch by OLS scoring from the random parameter set. FIG. 15 is a three part table (Table 10) illustrating the prediction of the mean curvature mis-specification by OLS scoring from the random parameter set. FIG. 16 illustrates weaknesses of neoclassical production theory.
FIG. 17 is a histogram of estimated firm progress ratios.
FIG. 18 illustrates the phenomenon of plateauing.
FIG. 19 illustrates the SFS (Slow Fast Slow) Mis-specification. FIG. 20 is a graph of a base parameter experience curve, after one production run.
FIG. 21 is a graph of a smooth landscape, after one production run.
FIG. 22 is a graph of a rugged landscape, after one production run.
FIG. 23 is a graph of a landscape with n = 10, e = 1, and s = 100, after one production run. FIG. 24 is a graph of the base parameter experience curve, with the same parameters as in the graph in FIG. 20, but averaged over twenty production runs.
FIG. 25 is a graph of a smooth landscape, with the same parameters as in the graph in FIG. 21, but averaged over twenty production runs.
FIG. 26 is a graph of a rugged landscape, with the same parameters as in the graph in FIG. 22, but averaged over twenty production runs.
FIG. 27 is a graph of a rugged landscape, with the same parameters as in the graphs in FIGS. 22 and 26, but averaged over fifty production runs.
FIG. 28 is a graph of a landscape with the same parameters as in the graph in FIG. 23, but averaged over twenty production runs. FIG. 29 illustrates a way in which parameters are chosen in the preferred embodiment.
FIG. 30 illustrates effects of varying n, the number of operations in the recipe, on the sample mean of the estimated progress ratios (p̄) and the sample standard deviation of the estimated progress ratios (sp). FIG. 31 illustrates effects of varying s, the number of possible states for each operation in the recipe, on the sample mean of the estimated progress ratios (p̄) and the sample standard deviation of the estimated progress ratios (sp).
FIG. 32 illustrates effects of varying e, the number of other operations that affect the cost of each operation, on the sample mean of the estimated progress ratios (p̄) and the sample standard deviation of the estimated progress ratios (sp) (in the case s = 2). FIG. 33 illustrates effects of varying e, the number of other operations that affect the cost of each operation, on the sample mean of the estimated progress ratios (p̄) and the sample standard deviation of the estimated progress ratios (sp) (in the case s = 10).
FIG. 34 illustrates effects of varying δ, the maximum number of operations changed per trial, on the sample mean of the estimated progress ratios (p̄) and the sample standard deviation of the estimated progress ratios (sp).
FIG. 35 illustrates effects of varying T, the number of trials, on the sample mean of the estimated progress ratios (p̄) and the sample standard deviation of the estimated progress ratios (sp). FIG. 36 illustrates plateauing effects of varying n, the number of operations, on the sample mean of the estimated progress ratios (p̄) and the standard deviation in the percent of trials that result in improved productivity (sz).
FIG. 37 illustrates plateauing effects of varying s, the number of possible states for each operation in the recipe, on the mean percentage of trials that result in improved productivity (z̄), the standard deviation in the percent of trials that result in improved productivity (sz), and other parameters.
FIG. 38 illustrates plateauing effects of varying e, the number of other operations that affect the cost of each operation, on the mean percentage of trials that result in improved productivity (z̄) and the standard deviation in the percent of trials that result in improved productivity (sz).
FIG. 39 illustrates plateauing effects of varying δ, the maximum number of operations changed per trial, on the mean percentage of trials that result in improved productivity (z̄) and the standard deviation in the percent of trials that result in improved productivity (sz). FIG. 40 discloses a representative computer system 4010 in conjunction with which the embodiments of the present invention may be implemented.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENT
A preferred embodiment is a system and method for modeling technological innovation. One of the more important aspects of a preferred embodiment is in the description of the production plan. To the usual input-output specification is added a description of the underlying engineering recipe employed. Describing how one recipe is related to another then allows one to build models that suggest which types of technologies are likely to be uncovered in the course of ordinary shop-floor operations (learning-by-doing), which R&D programs are most likely to be successful, and which types of technologies are ripe for transfer from one firm (or economy) to another.
In a preferred embodiment, a production recipe is described by a vector of basic production operations such as heating, mixing, stirring, shaping, boxing, internal factory transportation, and so forth. For given outputs, the input requirements for each of the operations depend on the instruction (or setting) given for that operation and the instructions given for some of the other operations. Hence the embodiment allows for production externalities within the firm.
A specific application of the (more general) production recipes approach (along with the nascent technology approach) of the preferred embodiment is the construction of a model of shop-floor learning-by-doing. It is assumed that the firm employs a single input to produce a single output and that, for a given fixed recipe, this process entails constant returns to scale. It is also assumed that the firm's output stream is predetermined. The method allows for deviations from the currently reigning technology, but assumes that such production trials (or "errors") are not directly controlled by the firm. It is assumed that a newly discovered recipe is either accepted or rejected merely on the basis of current cost efficiency relative to that of the reigning technology.
By correctly choosing the basic parameters of the model, one is able to match the basic statistics and important qualitative phenomena from observed experience curves — including the mean progress ratios (an inverse measure of the slope of the experience curve) and their standard deviations, plateauing (runs without improvements), curvature bias, and sensitivity to the length of the production run. The disclosed method of modeling of recipes works for one input, one output, constant returns cases for which the focus is either the short term or the intermediate term.
A preferred embodiment comprises the general steps of applying the machinery of the production recipes model to a specific problem (in a preferred embodiment, the learning-by-doing problem), refining the parameters of the model to fit the details of the problem, and then utilizing the model to predict which nascent technologies are the most promising. The subject disclosure describes a system and method for constructing a microeconomic model of technological evolution. To the existing (or "currently available") technologies of the neoclassical production model, nascent technologies are added, which include both undiscovered technologies and forgotten technologies. One might be skeptical about any modeling of undiscovered technologies. While existing technologies can be verified by current engineering practice, undiscovered technologies cannot. On the other hand, practicing production engineers and business managers are not reluctant to base important business decisions on forecasts of technological progress in the firm's manufacturing operations. In fact, one of the most reliable analytic tools in production management is the engineering experience curve (or "learning" curve), which projects existing unit production costs for a given product into its future unit production costs. Among production engineers, marketing managers, business executives, and even corporate directors, empirical learning curves are far better known and more frequently used than are empirical production functions or empirical cost functions.
A production recipe ω is a complete list of engineering instructions for producing given outputs from given inputs. In the preferred embodiment it is assumed that the firm uses a single input to produce a single output. It is also assumed that, given the recipe choice, there is no waste in production. It is assumed that in production run t, the firm produces yt ≥ 0 units (production engineers often use a precise (but different for each output class) unit of measurement called the "batch") of output by employing ηt > 0 units of input (hereafter "labor") based on the recipe ωt. It is also assumed that there is no uncertainty about the production process. Let ℓt = ηt/yt be unit labor cost. Then ℓt = C(yt; ωt), where, for fixed ωt, C is the average cost function. If ℓ first falls and then rises as yt is increased, then the average cost curve is U-shaped. If ℓ is independent of yt, then there are constant returns to scale.
The method of representing technologies utilized in a preferred embodiment is an improvement on the approach taken in several methods of modeling technological innovation in which there are two types of technologies, advanced and backward (and two types of firms, also advanced and backward) (see, e.g., Shell (1973) and Grossman and Helpman (1991) (G.M. Grossman and E. Helpman, Innovation and Growth in the Global Economy, 1991)). Advanced firms have access to both the advanced and backward technologies, but backward firms are restricted to the backward technology. Let ωa be the advanced recipe and ωb be the backward recipe. In this literature, the strong non-crossing (in the absence of non-crossing, there might not be a most advanced (or a most backward) recipe) assumption is made, so
0 < C(y; ωa) < C(y; ωb) < ∞ for each y > 0.
For the advanced firm, the set of recipes Ωa is given by Ωa = {ωa, ωb}.
For the backward firm, the set of recipes, Ωb, is given by
Ωb = {ωb}.

In the subject specification, the above very simple model is generalized to allow the firm to choose not merely from (at most) two recipes, but instead from a large general set of recipes, Ω. The formal model in what follows is restricted to constant returns to scale, so that the unit labor requirement depends only on the recipe employed. The labor requirement for a given recipe is not typically known with certainty. Instead, there is associated with each recipe ω a probability measure over the set of labor requirements. Although it is also assumed that there is a (relatively small) subset of currently available recipes ΩCA ⊂ Ω, and that the respective labor requirements for each of these recipes are known with certainty, this assumption is not critical.
In the preferred embodiment, a production recipe is described by a vector of basic production operations such as heating, mixing, stirring, shaping, boxing, internal factory transportation, and so forth. For given outputs, the input requirements for each of the operations depend on the instruction (or setting) given for that operation and on the instructions given for some of the other operations. Hence one allows for production externalities within the firm. These intra-firm production externalities are crucial to the analysis.
The method of a preferred embodiment is to use the production recipes model to provide a microeconomic model for observed learning-by-doing in production, and then to use computer simulations to discover which nascent technologies are the most promising. A complete model of production is specified (the IRF, which includes inputs, outputs, and recipes, as well as current technologies and nascent technologies) and then "closed" so as to model shop-floor productivity improvements and hence to explain the observed empirical features of the firm's experience curve.

There are three reasons for doing this: (1) The learning-by-doing model is important in its own right for economic theory and economic policy. It would be worthwhile to understand the micro sources of the productivity increases and what promotes them, rather than merely representing this process as a fixed macroeconomic externality. (2) Empirical experience-curve analysis is central to management science and management practice. It would be desirable if these experience curves could be explained in terms of basic microeconomics. (3) The method of production recipes (and nascent technologies) is quite general, with possible applications for modeling R&D, basic research, and technology transfer.

In the disclosed preferred embodiment, the general method is illustrated by a concrete application. There are three reasons why learning-by-doing is a good candidate for the preferred embodiment: (1) Empirical studies of engineering experience curves are abundant. (2) Learning-by-doing permits one to be relatively less sophisticated in modeling the purposiveness of the economic agents, so that one can focus — for the time being — on the relatively sophisticated model of technology. (3) The one-input/one-output learning-by-doing model allows one to use — with only minor modifications — a tested computer simulation program.
In the production recipes/nascent technology approach of a preferred embodiment, in which a model of shop-floor learning-by-doing is constructed to determine promising nascent technologies, it is assumed that the firm employs a single input to produce a single output and that, for a given fixed recipe, this process entails constant returns to scale. It is also assumed that the firm's output stream is predetermined. The method allows for deviations from the currently reigning technology, but assumes that such production trials (or "errors") are not directly controlled by the firm. It is assumed that a newly discovered recipe is either accepted or rejected merely on the basis of its current cost efficiency relative to that of the reigning technology.
These strong assumptions allow the employment of a variant of Kauffman's NK model (see Kauffman and Levin (1987) (S. Kauffman and S. Levin, Toward a general theory of adaptive walks on rugged landscapes, Journal of Theoretical Biology, 1987) and Kauffman (1988, 1993) (S. Kauffman, The evolution of economic webs, in P.W. Anderson, K.J. Arrow, and D. Pines, editors, The Economy as an Evolving Complex System, 1988; S. Kauffman, Origins of Order: Self-Organization and Selection in Evolution, 1993)) to analyze the dynamics of manufacturing costs. The NK model was originally designed for analyzing asexual biological evolution. In the evolutionary biology interpretation, it is assumed that the "fitness" of a creature can be represented by a scalar. The corresponding assumptions for learning-by-doing are the single output, the single input, and constant returns to scale, which together allow the scalar "fitness" to be replaced by the scalar "current technological efficiency" (the inverse of current unit production cost). In the biological interpretation, it is assumed that genetic changes occur at random and that fitter creatures immediately replace those that are less fit. In the present interpretation, the corresponding assumptions are that shop-floor trials take place at random and that the reigning recipe is replaced by the new recipe if and only if the new recipe is more efficient in the short run, i.e., recipe selection is myopic.
The preferred embodiment is not restricted to myopic recipe selection. It would certainly be routine for one skilled in the art to extend the model of learning-by-doing to allow for both foresight in selection of the reigning recipe(s) and some control by the firm of the rate and direction of experimentation. For the latter, there must be costs of experimenting. These would include the output losses from pilot-project retrogressions, the opportunity costs of sampling other recipes, and additional resource costs of experimenting with distant recipes.
A preferred embodiment works for one-input, one-output, constant-returns cases for which the focus is either the short term or the intermediate term.
FIG. 1 is a flow diagram illustrating the method of a preferred embodiment. At step 100 a set of one or more production recipes, each recipe comprised of n operations, is represented as a set of recipe vectors {ω} in Rⁿ. At step 110 a technological graph Γ is constructed whose nodes are the recipe vectors ω. At step 120 a measure of distance between the recipe vectors is defined. At step 130 a production recipe trial on Γ is defined to be a recipe vector within a distance δ ∈ {1, 2, ..., n} of a currently adopted recipe vector. At step 140 productivity gain or loss is defined in terms of an estimated progress ratio, and at step 150 production recipes are evaluated by measuring the productivity gain or loss of trials on Γ.

In a preferred embodiment, production is assumed to involve n distinct engineering operations. The recipe ω 210 can then be represented by ω = (ω¹, ..., ωⁱ, ..., ωⁿ) ∈ Rⁿ, where ωⁱ represents the instructions for operation i, for i = 1, ..., n. See FIG. 2. It is assumed that for each operation i the set of possible instructions is discrete. These choices may be "qualitative" (e.g., whether to use a conveyor belt or a fork-lift truck for internal transport) or they may be "quantitative" (e.g., the setting of the temperature or other knob on a machine). In the latter case, the variable being adjusted is approximated by discrete settings (think of the knob "clicking" from one setting to another). In particular, it is assumed that ωⁱ is an integer which satisfies ωⁱ ∈ {1, ..., s} for i = 1, ..., n, where s 220 is a positive integer. Hence the number of recipes (we shall sometimes refer to the recipe vectors and the recipes themselves interchangeably, since context will clarify whether actual recipes or vectors in Rⁿ are being referred to) is finite and given by #Ω = sⁿ.
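To make this representation concrete, the following minimal sketch enumerates a small recipe space (Python is used here for illustration only; it is not the language of the original simulation program, and the names n, s, and recipes are illustrative):

from itertools import product

n = 3   # number of distinct engineering operations (illustrative value)
s = 4   # number of possible settings per operation (illustrative value)

# A recipe is a vector of n integer settings, each drawn from {1, ..., s}.
recipes = list(product(range(1, s + 1), repeat=n))

# The recipe space Omega is finite, with exactly s**n elements.
assert len(recipes) == s ** n   # here, 4**3 = 64 recipes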
The finiteness of the set Ω = {recipe vectors ω} has serious consequences. A model based on a finite space of recipes does not permit long-run productivity growth. The model with a finite set of recipes is, however, quite appropriate for modeling the intermediate term productivity improvements observed in the manufacture of specific goods (such as a particular model of a microprocessor) over their relatively short product lives (measured in months, years or — at the very most — decades).
It is assumed that the unit labor cost of operation i, φⁱ(ω), is a random variable whose distribution function is defined on R₊. Consider two distinct recipes, ω and ω'. The random variables φⁱ(ω) and φⁱ(ω') are not necessarily independent. In fact, φⁱ depends on the instructions, ωⁱ, for operation i and possibly on (some of) the instructions for the other operations, ω⁻ⁱ. (With minor abuse of notation, one could then have denoted the unit labor costs of operation i by φⁱ(ωⁱ; ω⁻ⁱ), or more simply, φⁱ(ω).) The labor requirements are assumed to be additive; hence

φ(ω) = Σᵢ₌₁ⁿ φⁱ(ω),

where φ(ω) is the unit cost of production employing recipe ω. For ω fixed, φ(ω) is a random variable. If ω is allowed to vary over Ω, then φ(ω) is a random field. A random field is a slight generalization of a stochastic process to allow the argument (in this case ω) to be a vector (as opposed to being a scalar such as "time"). For the special case in which n = 1, φ(ω) is then an ordinary stochastic process.

Denote by ℓⁱ(ω) the realization of the random variable φⁱ(ω). The realization of the random variable φ(ω) is ℓ(ω) = Σᵢ₌₁ⁿ ℓⁱ(ω). If ω varies over Ω, the family of realizations ℓ(ω) is called the landscape (of the random field φ(ω)).
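As an illustration of the additive structure, the following sketch draws one realization ℓ(ω) for every recipe in a tiny recipe space, in the special case in which each operation's cost depends only on its own setting (the e = 1 case defined below). The uniform-[0, 1/n] draw anticipates a distributional assumption stated later in this disclosure, and all names are illustrative:

import random
from itertools import product

n, s = 3, 4                  # illustrative sizes
rng = random.Random(0)

# e = 1 special case: operation i's cost depends only on its own setting.
# phi[i][setting] holds a realized cost drawn uniformly on [0, 1/n].
phi = [[rng.uniform(0, 1.0 / n) for _ in range(s + 1)] for _ in range(n)]

def realized_cost(recipe):
    # Additive labor requirements: l(recipe) is the sum over operations.
    return sum(phi[i][recipe[i]] for i in range(n))

# The landscape is the family of realizations over all of Omega.
landscape = {r: realized_cost(r) for r in product(range(1, s + 1), repeat=n)}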
There are six basic parameters in the disclosed model of shop-floor learning-by-doing: (1) n, the number of production operations, (2) s, the number of possible instructions or settings for each operation, (3) e, the externality parameter that gives the number of operations whose settings affect the costs of one operation, (4) δ, the maximum step size per trial, (5) τ, the number of trials made on the shop floor per measured batch, and (6) T, the length of the production run.
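For use in the sketches that follow, the six basic parameters can be gathered into a single structure; this is a hedged illustration only (the field names are not taken from the original program):

from dataclasses import dataclass

@dataclass
class LearningParams:
    n: int        # number of production operations
    s: int        # number of possible instructions (settings) per operation
    e: int        # externality parameter: each operation's cost depends on
                  # its own setting and on the settings of (e - 1) others
    delta: int    # maximum step size (operations changed) per trial
    tau: float    # number of trials per measured batch
    T: int        # length of the production run, in trials

# The "base" parameter vector later used for FIG. 20 in this disclosure:
base = LearningParams(n=100, s=10, e=5, delta=1, tau=1, T=1000)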
The above discussion shows how the neoclassical notion of technological distance can be misleading. One needs a measure of distance that captures the similarity or dissimilarity of the underlying production processes rather than the relative efficiencies of production pairs or their relative scales. "Distance" between recipes will, of course, depend on the application. If one takes a shop-floor perspective — as is done in the learning-by-doing embodiment of the method — then ω is near ω' if these recipes are the same except for, say, one temperature setting. If moving from ω to ω' represents the substitution of fluorine for chlorine where fluorine was not formerly in use, then one would probably think of ω and ω' as very far apart in the shop-floor metric. But for an R&D problem, the appropriate metric might be altogether different. In the chemistry research lab, for example, the distance would in this case be small, since every chemist is aware that the elements chlorine and fluorine are likely to react similarly because they are from the same column of the Periodic Table of the Elements.
The preferred embodiment assumes that the set Ω can be described so that distances are meaningful from the appropriate technological perspective. A formal definition follows.
Definition (Distance): The distance d(ω, ω') between the recipes ω and ω' is the minimum number of operations which must be changed in order to convert ω to ω'. Since changing operations is symmetric, d(ω, ω') = d(ω', ω). Example: Assume that ω and ω' differ only in the ith component. Then d(ω, ω') = 1 when ωⁱ = 1 and (ω')ⁱ = 2, or when ωⁱ = 1 and (ω')ⁱ = 37.
This definition of distance makes the most sense when the instructions are merely qualitative. If instead the instructions can be represented by ordinal settings (such as temperature), then the distance notion should be different. If the instructions for operation i in the above example had been temperature settings, then the recipe with its ith entry equal to 2 would be closer to ω than the recipe with its ith entry equal to 37. In particular, 2°C is closer to 1°C than is 37°C. If settings are ordered, then a wise strategy for the firm might be to change, if possible, the setting in the same direction that led to the most recent improvement. If 2°C is an improvement over 1°C, perhaps the next trial should be 3°C. Introduction of ordinal settings and more complicated distance measures is also within the claimed scope of the preferred embodiment. In general, the requirement is that the set Ω have some reasonable notion of distance imposed on it.

Definition (Neighbors): Let Nᵢ(ω) be the set of i-neighbors of recipe ω:

Nᵢ(ω) = {ω' ∈ Ω | d(ω, ω') = i}, where i is a positive integer.

Let Nδ(ω) = {ω' ∈ Ω | 1 ≤ d(ω, ω') ≤ δ} be the set of recipes at least distance one from ω but not more than distance δ ∈ {1, ..., n} from ω. See FIG. 3. Nδ(ω) 310 is a δ-neighborhood of ω 210.
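In code, this distance is simply the Hamming distance between recipe vectors, and a trial can be drawn from the δ-neighborhood as sketched below. This is an illustrative sketch: drawing the step size and changed positions uniformly is uniform within each distance class, but not exactly uniform over all of Nδ(ω), whose distance classes differ in size:

import random

def distance(w1, w2):
    # Minimum number of operations to change to convert w1 into w2.
    return sum(1 for a, b in zip(w1, w2) if a != b)

def random_trial(w, delta, s, rng=random):
    # Draw a recipe at distance between 1 and delta from w.
    n = len(w)
    k = rng.randint(1, delta)                 # number of operations to change
    positions = rng.sample(range(n), k)       # which operations to change
    trial = list(w)
    for i in positions:
        # The new setting must differ from the old, so distance grows by 1.
        trial[i] = rng.choice([v for v in range(1, s + 1) if v != w[i]])
    return tuple(trial)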
With the above definition of distance between recipes, it is straightforward to construct the technological graph Γ (see FIG. 4). The nodes or vertices 410 of Γ are the recipes ω ∈ Ω. The edges 420 of Γ connect a given recipe to recipes distance 1 away (i.e., to the elements of N₁(ω)). Next comes the definition of the input requirement field, which is a full description of all the basic technological possibilities facing the firm.
Definition (IRF): The input requirement field (IRF) is the random field φ(ω) (defined over the vertices of the technological graph Γ).
In order to obtain concrete results, one must further specify the IRF. In the next description, the relationship between the random variables φ(ω) and φ(ω'), which in general are not independent, is specified. After that, the functional forms for the φⁱ(ω) are specified. Except for some of the underlying stochastic structure, this specifies the IRF, but it does not "close the model." To do that, one needs a complete algorithm of which recipes are chosen for production, what is learned by the firm from its experience, and how much of this is remembered.

It is assumed that the costs of a given operation depend on the chosen setting for that operation and possibly on the settings for some (but not necessarily all) of the other operations. Define the connectivity indicator eⁱⱼ by

eⁱⱼ = 1 if the choice of setting for operation i affects the labor requirement for operation j, and eⁱⱼ = 0 otherwise,

for i, j = 1, ..., n. Since the choice of the setting for the ith operation always affects the costs for the ith operation, eⁱᵢ = 1 for i = 1, ..., n. The number eⁱ of operations with costs affected by operation i is given by

eⁱ = Σⱼ₌₁ⁿ eⁱⱼ

for i = 1, ..., n, while the number eᵢ of operations that affect the costs of operation i is given by

eᵢ = Σⱼ₌₁ⁿ eʲᵢ

for i = 1, ..., n. Define the set Eᵢ, the set of operations cost-relevant to operation i, by

Eᵢ = {j | eʲᵢ = 1}

for i = 1, ..., n.
The simplifying assumption is made that each operation is cost-affected by (e − 1) other operations, so that #Eᵢ = eᵢ = e for i = 1, ..., n, where e ∈ {1, ..., n}. Under this assumption, the labor requirement of any given operation is affected by the settings for that operation and the settings for exactly (e − 1) other operations. Therefore, there are exactly sᵉ permutations of the settings that affect the costs of operation i. Each of these is a sub-recipe cost-relevant to operation i. Let {i₁, ..., iₑ} denote the elements in the set Eᵢ = {j | eʲᵢ = 1} for i = 1, ..., n. Then denote by (ωⁱ¹, ..., ωⁱᵉ) a sub-recipe cost-relevant to operation i (where for convenience i₁ is defined to be equal to i) for i = 1, ..., n. The set of sub-recipes cost-relevant to operation i is a projection of the sⁿ recipes into sᵉ sub-recipes. There are nsᵉ such sub-recipes in all. The stochastic unit labor requirement for operation i based on the sub-recipe (ωⁱ¹, ..., ωⁱᵉ) can be written as φⁱ(ωⁱ¹, ..., ωⁱᵉ) or, with only slight abuse of notation, φⁱ(ωⁱᵉ) for i = 1, ..., n.
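A sketch of this construction follows: for each operation i, the (e − 1) other cost-relevant operations are drawn uniformly without replacement (as also described later in this disclosure), and realized sub-recipe costs are stored in a table keyed by the sub-recipe, so that at most n·sᵉ values are ever held rather than sⁿ. The uniform-[0, 1/n] draw anticipates the nascent-recipe assumption stated below, and the lazy (on-demand) drawing is an implementation convenience, not part of the disclosed method:

import random

def make_cost_structure(n, s, e, seed=0):
    # E[i]: the e operations cost-relevant to operation i (i itself first).
    # tables[i]: maps a cost-relevant sub-recipe to a realized cost phi_i.
    rng = random.Random(seed)
    E, tables = [], []
    for i in range(n):
        others = rng.sample([j for j in range(n) if j != i], e - 1)
        E.append([i] + others)
        tables.append({})
    return E, tables, rng

def op_cost(i, recipe, E, tables, n, rng):
    # Realized cost of operation i; the same sub-recipe always returns
    # the same value, since realizations are stored once drawn.
    key = tuple(recipe[j] for j in E[i])
    if key not in tables[i]:
        tables[i][key] = rng.uniform(0, 1.0 / n)
    return tables[i][key]

def total_cost(recipe, E, tables, n, rng):
    # Additive labor requirement: the realization l(recipe).
    return sum(op_cost(i, recipe, E, tables, n, rng) for i in range(n))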
The parameter e plays a crucial role in the preferred embodiment. If e = 1, there are no (intra-firm) external effects among the operations. Each of the operations could as well have taken place in separate firms, since in this case there can be no gains from coordination. With e = 1, one would also expect the two random variables φ(ω) and φ(ω') to be highly correlated if ω' is close to ω, since by definition ω' and ω would have many instructions in common and hence φⁱ(ω) = φⁱ(ω') for most i. The larger the parameter e, the less correlation one would expect between φ(ω') and φ(ω), even for ω' close to ω. This is because the change in the instructions for one operation affects the costs of several other operations. Hence e is an inverse measure of the correlation between φ(ω') and φ(ω) for ω' close to ω. The corresponding landscape ℓ(ω) (a realization of the IRF) is typically "correlated," or "smooth," for small values of e, while ℓ(ω) is typically "uncorrelated," or "rugged," for large values of e.
This concludes the disclosure of the general production recipes model; next the specification illustrates the preferred embodiment by applying the general model to the problem of modeling learning-by-doing.
It is assumed that shop floor workers follow the recipe (or blueprint) provided by management, but from time to time, they make small modifications in the current recipe. These modifications are referred to as "trials" in the subject specification. Depending on the context or the interpretation, the trials may also be thought of as "errors" or "informal experiments."
A trial occurs when (1) at least one operation i in the production recipe ω is modified and (2) that modification is observed and evaluated by the firm (perhaps by the quality-control engineers). It is assumed that modifications occur during the production of a batch, and that observation and evaluation occur when the production of the batch is completed. It is further assumed that there is only one trial per "quality-control batch." This assumption is not restrictive: since the quality-control batch size, B, may or may not equal the measured batch size, B̄, used in data-gathering, the batch deflator, τ, is introduced. τ satisfies τ = B̄/B, where B is defined so that exactly one trial is made during the production of the quality-control batch. Hence if τ is a positive integer, it is interpreted as the number of trials per measured batch. In general, τ > 0 is interpreted as the average number of trials per measured batch. Note that τ ≥ 1 if B̄ ≥ B, and τ < 1 if B̄ < B.
One would expect τ to depend on the particular type of manufactured good. The trials parameter τ is likely to be higher for airframes than for computer chips, because the number of operations, n, is likely to be higher in the airframe case. The trials parameter τ also depends on management practices, corporate culture, and worker psychology. In a tight production environment, there would be fewer defects, but fewer trials. In a looser production environment, there would be more defects, but also more trials. Hence management is likely to be keen to attempt to control τ, if possible, to achieve an optimal balance between "sloppiness" and "inventiveness."
A trial can be interpreted in at least three (non-exclusive) ways. The first interpretation is that the trial is a small-scale experiment in production to which the firm does not fully commit. This would be a model of R&D in which the only cost of the R&D activity is the missed opportunity for investigating alternative recipes during the period in question. A natural alternative is to assume that the firm must commit to the new production recipe in order to sample it: if recipe ωₜ is chosen during production run t, then the labor requirement for run t will be the realization of φ(ωₜ). In this case, unit costs may actually increase from one period to the next, i.e., cost retrogressions could occur.
A third interpretation is possible. One can think of the firm operating a large number of distinct production sub-units in parallel. Each sub-unit begins a production run at time t with an assumed labor requirement ℓ(ωₜ₋₁), the unit cost for run t − 1. During the course of production run t, a trial in one production sub-unit leads to production by that sub-unit using recipe ω'. The associated labor requirement for this production sub-unit is then ℓ(ω'), the realization of φ(ω'). However, because there are many production sub-units, the average per-batch cost of production is close to ℓ(ωₜ₋₁), the unit cost of the (pre-trial) reigning recipe. Consequently, in this scenario, one can think of firms trying new recipes without substantial sacrifices in current labor requirements. This third interpretation is utilized in the preferred embodiment, because it allows simulations to be based on an existing, tested computer program.
The next description is of the trial dynamics and the memory of the firm. Production trials occur on the shop floor at the rate of 1 per quality-control batch B (or τ per measured batch B̄). It is assumed that the trial recipe is at distance at least one but no greater than δ ∈ {1, 2, ..., n} from the currently adopted recipe ω. It is assumed that the probability of a trial is uniform over the neighborhood Nδ(ω). (See FIG. 3 for an illustration of a neighborhood Nδ(ω) where ω = (1,3), s = 4, n = 2, and δ = 2.) The probability distribution of trials could be different. For example, a scheme that loads more probability on the recipes closer to ω would in general be more realistic. Such a modification, which would be a matter of weighting the probability distribution as a function of the distance from ωₜ₋₁, would be obvious to one of ordinary skill in the art. The quantitative effect of this change would be to reduce the big-step effect for any given δ, but the qualitative effects would not be altered. That is, the extent of modification of existing recipes is limited to recipes that differ from the currently prevailing recipe by no more than τδ operations — the number of trials per production run multiplied by the number of operations of the recipe that can be altered in a single trial.
It is assumed that the recipe adoption process is very simple. The firm is myopic: if ωₜ₋₁ is the prevailing recipe and ω'ₜ₋₁ is the trial recipe, then the prevailing technology for period t, ωₜ, will be given by

ωₜ = ωₜ₋₁ if ℓ(ωₜ₋₁) < ℓ(ω'ₜ₋₁),
ωₜ = ω'ₜ₋₁ if ℓ(ωₜ₋₁) > ℓ(ω'ₜ₋₁), and     (1)
Prob(ωₜ = ωₜ₋₁) = Prob(ωₜ = ω'ₜ₋₁) = 1/2 if ℓ(ωₜ₋₁) = ℓ(ω'ₜ₋₁).

The system (1) defines the cost-reduction dynamics (see FIG. 5). This is a process in which the firm moves from vertex to vertex of the technological graph Γ that underlies the IRF. If δ = 1, then the firm moves along the edges of the graph Γ to the next vertex.

For the nascent sub-recipes, it is assumed that the random variables φⁱ(ωⁱᵉ) are independent and identically distributed (i.i.d.) and uniform on [0, 1/n]. Hence if all the sub-recipes of ω are nascent, then the support of the random variable φ(ω) is contained in [0, 1]. The density function of φ(ω) is then n-shaped and symmetric about ½, the modal labor requirement. The density is 0 for the extreme labor requirements, 0 and 1; otherwise it is positive. If ωⁱᵉ is currently available, then φⁱ(ωⁱᵉ) is a degenerate random variable, in which all of the probability is massed on a scalar ℓⁱ(ω) ∈ [0, 1/n]. If the recipe ω is currently available, then φ(ω) is a degenerate random variable, in which all of the probability is massed on a scalar ℓ(ω) ∈ [0, 1]. If m of the sub-recipes of the recipe ω are nascent (while the remaining n − m sub-recipes are currently available), then the support of φ(ω) is [ℓ, ℓ + (m/n)], where ℓ is a scalar in [0, (n − m)/n]. Probability (n − m)/n is massed on ℓ. On the interval (ℓ, ℓ + (m/n)), the probability density function (p.d.f.) is n-shaped and symmetric if m > 1. (If m = 2, then the p.d.f. on (ℓ, ℓ + (m/n)) is a "tent.")
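Combining the pieces sketched above gives a minimal version of the cost-reduction dynamics (1). This is an illustrative reconstruction, not the original Levitan/Auerswald program, and the helper functions are the hypothetical ones sketched earlier:

import random

def learning_walk(n, s, e, delta, T, seed=0):
    # Myopic walk per system (1): the trial replaces the reigning recipe
    # iff it is cheaper; an exact tie (probability zero under continuous
    # costs) is broken by a fair coin flip.
    E, tables, rng = make_cost_structure(n, s, e, seed)
    w = tuple(rng.randint(1, s) for _ in range(n))   # uniform random start
    c_w = total_cost(w, E, tables, n, rng)
    costs = []
    for _ in range(T):
        trial = random_trial(w, delta, s, rng)
        c_t = total_cost(trial, E, tables, n, rng)
        if c_t < c_w or (c_t == c_w and rng.random() < 0.5):
            w, c_w = trial, c_t
        costs.append(c_w)
    return costs   # one unit labor requirement per trial (quality batch)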
The assumption that the support of φ(ω) is contained in [0, 1] is not innocuous. The fact that a zero labor requirement is possible could in principle be disturbing, but it is assumed that the probability of such a free lunch is zero. The assumed boundedness of unit cost is more serious. In actuality, many untried recipes turn out to be totally useless, i.e., with the realization ℓ(ω) = +∞. The assumption of boundedness from above of unit labor requirements is an optimistic assumption about prior beliefs on untried recipes.

A formal description of experience curves and a review of the (largely empirical) literature on them is now given. The statistical outputs of the existing studies were used as inputs for the simulations of the disclosed embodiment. The existing experience curve analyses are not redone; instead the calculated regression coefficients and their standard errors are used as observations. Few empirical phenomena in production economics or management science are as well documented as the experience curve. It depicts the decline in the firm's per-unit labor requirements (or costs) with the cumulative production of a given manufactured good. Following Wright's (1936) (T.P. Wright, Factors affecting the cost of airplanes, Journal of the Aeronautical Sciences, 3:122-128, 1936) study of the airframe industry, this pattern has been investigated for many different goods and industries.
The usual parametric form of the experience curve is the power curve,

ℓₜ = a(Yₜ₋₁)⁻ᵇ,

where ℓₜ = ℓ(ωₜ) is the unit labor requirement for production run t, Yₜ₋₁ = Σₛ₌₁ᵗ⁻¹ yₛ is the cumulative output up to (but not including) run t, b > 0 is the learning coefficient, and a > 0 is the labor needed to produce the first batch of the good. In the case of airframes, one airframe is equal to one measured batch. In this case, Yₜ would be the serial number of the last airframe in production run t. The learning coefficient represents the rate at which productivity increases as the firm acquires "experience."
There is an impressive amount of empirical work on the engineering experience curve. Wright's law is that unit cost ℓₜ is related to cumulated output Yₜ₋₁ by the power function ℓₜ = a(Yₜ₋₁)^(−1/3), so that the progress ratio p is given by p = 2^(−1/3) ≈ 0.79. If one allows for a more general power law, one gets that

ℓₜ = a(Yₜ₋₁)⁻ᵇ,     (2)

where b > 0 and p = 2⁻ᵇ. The progress ratio p is a decreasing function of the exponent b, the learning coefficient. Post-Wright empirical studies suggest that observed progress ratios live in the range of about 60-95% with a mode of about 81-82%. The existing empirical literature reports not only mean progress ratios but also their standard deviations.
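The relation between the learning coefficient and the progress ratio can be checked with a few lines (an illustrative sketch; the numerical values match the worked examples in the next paragraph):

import math

def progress_ratio(b):
    # p = 2**(-b): fraction of unit cost remaining when output doubles.
    return 2.0 ** (-b)

def learning_coefficient(p):
    # The inverse relation: b = -log2(p).
    return -math.log2(p)

print(round(progress_ratio(1.0 / 3.0), 2))    # 0.79 (Wright's Law)
print(round(progress_ratio(0.30), 2))         # 0.81 (the modal estimate)
print(round(100 * (1 - 0.85)))                # an "85% curve" cuts cost 15%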
A commonly used measure of productivity improvement is the progress ratio. The progress ratio p is related to the learning coefficient b by p = 2⁻ᵇ, or b = −log₂(p). The percentage cost reduction when cumulative output is doubled is 100(1 − p). If b = 1/3, then p ≈ 0.79 ("Wright's Law"), so the percentage cost reduction is about 21%. If b = 0.30, then p ≈ 0.81 (see the mode in FIG. 17). A small progress ratio is an indicator of rapid cost improvement, while a higher progress ratio indicates less cost improvement. For example, a p value of 0.85 (an "85% curve") means that per-unit cost falls by 15 percent when cumulative output is doubled; p = 0.60 means per-unit cost falls by 40% when cumulative output is doubled; p = 1 means that unit cost is constant; and p > 1 means that unit cost is increasing. The power law given in Equation (2) yields the straight line in log/log space

log ℓₜ = log a − b log Yₜ₋₁,
where ℓₜ is the average unit labor requirement (or cost) of producing the goods with serial numbers greater than Yₜ₋₂ and less than or equal to Yₜ₋₁. In Table 1 (FIG. 6) are displayed some estimates of the learning coefficients and the progress ratios from firm-level experience curves (by industry). From this and the review articles by Conley (1970) (P. Conley, Experience curves as a planning tool, IEEE Spectrum, 7:63-68, 1970), Yelle (1979) (L.E. Yelle, The learning curve: Historical review and comprehensive survey, Decision Sciences, 10, January 1979), Dutton and Thomas (1984) (J.M. Dutton and A. Thomas, Treating progress functions as a managerial opportunity, Academy of Management Review, 9(2):235-247, 1984), Muth (1986) (J. Muth, Search theory and the manufacturing progress function, Management Science, 32:948-962, 1986), and Argote and Epple (1990) (L. Argote and D. Epple, Learning curves in manufacturing, Science, 247:920-924, 1990), one concludes that the salient characteristics of experience curves are: (1) The distribution of progress ratios ranges from 55% (rapid progress) to 105% (slow — indeed negative! — progress) and "centers" on about 81-82%. See Table 1 (FIG. 6) and FIG. 17. (2) Distinct production processes and goods are associated with their own ranges of values for the estimated progress ratio p. See Table 1 (FIG. 6). (3) There is variation in progress ratios not only among firms in the same industry, but also among different plants operated by a single firm. See Alchian (1963) (A. Alchian, Reliability of progress curves in airframe production, Econometrica, 31:679-693, 1963), Dutton and Thomas (1984, pp. 236-239), and Epple, Argote and Devadas (1991) (D. Epple, L. Argote, and R. Devadas, Organizational learning curves: A method for investigating intra-plant transfer of knowledge acquired through learning-by-doing, Organization Science, 2(1):58-70, February 1991). (4) The specification of the OLS (ordinary least-squares) statistical model, ℓₜ = a(Yₜ₋₁)⁻ᵇεₜ with log εₜ ~ N(0, σ), is imperfect in (at least) two ways: (A) There are "plateau effects" in the observed data (see Abernathy and Wayne (1974) (W.J. Abernathy and K. Wayne, Limits of the learning curve, Harvard Business Review, September-October 1974)): (i) improvements occur after relatively long stretches of constant labor requirements, and (ii) improvements in labor productivity cease beyond some (sufficiently large) cumulative output (see Conway and Schultz (1959) (R.W. Conway and A. Schultz Jr., The manufacturing progress function, Journal of Industrial Engineering, 10:39-54, 1959) and Baloff (1971) (N. Baloff, Extension of the learning curve - some empirical results, Operational Research Quarterly, 22:329-340, 1971)). The (hypothetical) empirical experience curve of FIG. 18 illustrates both types of plateauing. The labor requirement ℓ is falling from t = 0 to t = 10, but not strictly monotonically. There is an interior plateau at batches 3-5. Productivity improvements cease after batch 7, providing a terminal plateau. (B) There is curvature mis-specification (see Levhari and Sheshinski (1973) (D. Levhari and E. Sheshinski, Experience and productivity in the Israeli diamond industry, Econometrica, 41:239-253, 1973) and Epple et al. (1991, pp. 65-69)). Instead of a straight line in log/log space, the data suggest that an S-shaped curve would often (but not always) fit better: often the data suggest concavity of the function over the early batches, but convexity of the function over the later batches. This could be called the SFS phenomenon: cost improvement is first Slower, then Faster, and finally Slower than suggested by the straight-line log/log fit. See FIG. 19. (In many observed production runs and many experiments, the right-hand tail of the productivity plot is truncated before the suggested convex range of the function can be observed.) (5) Industry experience curves (in which the data on cost as a function of cumulative output are averaged over several firms) are smoother than the corresponding single-firm experience curves, which in turn are smoother than single-plant experience curves. There are fewer plateaus and the lengths of the interior plateaus are shorter for the averaged data. See the survey by Dutton and Thomas (1984).
An analysis of the comparative dynamics of the disclosed model of shop-floor learning-by-doing is given next. In particular, the effects of varying the following basic parameters of the model are described: the number of operations n; the number of instructions per operation s; the externality parameter e; the maximum number of steps per trial δ; the number of trials per batch τ; and the length of the production run T. The effects are reported on the two basic predictions of the model, namely the sample mean p̄ of the estimated progress ratios and the sample standard deviation sₚ of the estimated progress ratios, and on two measures of model mis-specification, namely plateauing (or its inverse, the improvement percentage z) and curvature (or SFS) in log/log space. The basic parameters of the model are summarized in Table 2 (FIG. 7). The third column gives the range over which each parameter is defined. Non-integer values of τ would have been possible. The interpretation of (say) τ = 1/3 is that a trial occurs once in every three production runs.
Computations were performed on a Dell Dimension XPS Pro 200 PC with a Pentium Pro 200 MHz processor and the Windows 95 operating system (version 4.00.950 B). The core program used was written by Bennett Levitan, building on work by William Macready and Terry Jones. Regressions were performed, summary statistics were computed, and plots were generated with SPSS Windows version 7.5 and Matlab Windows version 4.0. The routines used to compute and generate the simulations from the random parameter set were written by Auerswald. Both Levitan's and Auerswald's programs incorporate a random number generator written by Terry Jones, based on the algorithm of Knuth (1981, pp. 171-72).
The most fundamental unit of analysis is a single realization of the production run, examples of which are displayed in FIGS. 20-23. A "point" in one of these figures gives the normalized log of the unit labor requirement for the currently prevailing technology. The horizontal axis is the log of time (or, equivalently, the log of the accumulated number of quality-control batches to date). A production run is then a "walk" on a landscape (i.e., a realization ℓ(ω) of the IRF φ(ω)). The line in one of these figures is the OLS fit to the points in the figure.
There are many approaches to defining in detail the e "externality" connections from one operation in a given recipe to other operations in that recipe. If one had some engineering information about these connections, one could use this prior information. In the absence of engineering priors, one may draw for each production run t the (e − 1) connections to operation i uniformly (without replacement) from the set of all operations other than i.
Fixing these e connections, a preferred embodiment first computes and stores the realizations of the sᵉ random variables φⁱ(ωⁱ¹, ..., ωⁱᵉ) (where operation i is fixed but the sub-recipe (ωⁱ¹, ..., ωⁱᵉ) varies). It does this for each of the operations i = 1, ..., n. The number of vectors needed to compute the landscape ℓ(ω) is thus reduced from sⁿ to nsᵉ. (In the case of n = 10, s = 3, e = 3, one gets that sⁿ = 3¹⁰ > 270 = (10)3³ = nsᵉ.)
The landscape ℓ(ω) and the method of "walking" on this landscape have now been completely defined. All that remains to be specified is the starting point (on the landscape) for the walk. It should be clear to those of ordinary skill in the art that for some applications, the starting point might be given by information about the production experience of competitors or by suggestions from the firm's R&D department. In the absence of such prior information, one can merely pick one recipe randomly (with uniform probabilities over the sⁿ recipes in the set Ω) to be the starting point. In a preferred embodiment, the log (unless otherwise indicated, log denotes the natural logarithm) of the labor requirement is re-normalized so that the adjusted log of the initial unit labor requirement is 1.0. The relationship between the labor requirement ℓ and the adjusted log labor requirement is given by: adjusted log labor requirement = 1 + log ℓ. The adjusted log labor requirement is negative for ℓ < 0.36787. Negative values of this convenient measure cause no problems (though it would be economic nonsense if unadjusted ℓ were to be negative).
For each realization of a single production run, the estimated progress ratio p̂ is given by

p̂ = 2^(−b̂),

where b̂ is the OLS estimate of the learning parameter b, i.e., b̂ is the absolute value of the slope of the regression line. If τ = 1, then the labor requirement at t, ℓₜ, and cumulative output at t, Yₜ, correspond to what has been used for estimating b and p in the known art on experience curves. If τ ≠ 1, then Yₜ will not be equal to the cumulative number of trials that define time in the model. The relationship between t (cumulative number of trials), Yₜ (cumulative output), and τ (number of trials per measured batch) is

Yₜ = t/τ.

In general, Yₜ is the appropriate measure of "economic time" for a given production run. If τ = 1, "calendar time" t and "economic time" Yₜ are the same. Otherwise, t must be adjusted to measure economic time. In computational terms, if t is fixed, then increasing τ decreases the number of simulated points. Trials take place and labor requirements are modified at each date t, but not all modifications are recorded. For example, with τ = 20 (and B = 1 as assumed), calculation of the per-unit labor requirement would not occur until after the 20th unit was produced, then again not until the 40th unit, and so on. Under the assumption of T = 1000 and τ = 1, 1,000 data points in a given simulated experience curve were observed, but with τ = 20 only 50 data points would be observed.
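A minimal sketch of this estimation step (Python with NumPy; the names are illustrative, and τ is assumed here to be a positive integer) converts a per-trial cost series to economic time Yₜ = t/τ, keeps one observation per measured batch, and fits the log/log regression:

import numpy as np

def estimate_progress_ratio(costs, tau):
    # costs: one unit labor requirement per trial; tau: trials per batch.
    costs = np.asarray(costs, dtype=float)
    t = np.arange(1, len(costs) + 1)
    observed = (t % tau == 0)          # record one point per measured batch
    Y = t[observed] / tau              # economic time: cumulative output
    slope, _ = np.polyfit(np.log(Y), np.log(costs[observed]), 1)
    b_hat = -slope                     # estimated learning coefficient
    return b_hat, 2.0 ** (-b_hat)      # (b-hat, p-hat)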
Another quantity that can be used in measuring cumulative increases in productivity is ℓ_T, the labor requirement after T trials (or the "terminal" labor requirement, for short). The focus is on the path of productivity increases, not on the initial labor requirement, so one normalizes ℓ₀ = 1. The terminal labor requirement ℓ_T is usually (but not always!) inversely related to b̂: usually, the lower the final labor requirement, the steeper is the experience curve. If there were no specification error of the experience curve, this would always be the case and ℓ_T would be an uninteresting statistic. Consider, however, the case in which there are huge productivity increases in the first few periods, after which the labor requirements asymptote. For T large, b̂ would then be small because OLS would heavily weight the asymptote in this case. Here the fact that ℓ_T is small indicates that the estimated learning coefficient b̂ (and hence the estimated progress ratio p̂) might be misleading.
In order to capture the extent of plateauing, one computes an (inverse) statistic z, the improvement percentage per measured batch, defined by

z = τ × (number of observed improvements) / T.

The measure z is preferable to direct measures of plateauing (e.g., average plateau length) because it is less sensitive to the distortions caused by the presence of a long final plateau. For experience curve analysis, the transient is of greater interest than the steady state, since in most real-world cases the rate of product replacement (due to, say, a new, superior product) is rapid relative to the exhaustion of productivity improvements for the original product. The total number of observed improvements is weighted by τ/T so that the measure reflects, not the absolute number of improvements found, but rather the likelihood that a new observation will be a productivity improvement.
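The statistic z can be computed directly from a simulated per-trial cost series (an illustrative sketch; counting a strict decrease between successive trials as an "observed improvement" is one reading of the definition above):

def improvement_percentage(costs, tau):
    # z = tau * (number of observed improvements) / T.
    T = len(costs)
    improvements = sum(1 for prev, cur in zip(costs, costs[1:]) if cur < prev)
    return tau * improvements / T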
Finally, in order to measure the extent of curvature mis-specification in the experience curve data, one estimates a second, quadratic specification of the learning model:

log ℓ = a₂ + b₂ log Y + c₂ (log Y)² + ε,

where ℓ is the labor requirement after cumulative production of Y units. The magnitude, sign, and level of significance of ĉ₂, the estimate of c₂, give an indication of the extent of curvature mis-specification in the standard log-log model. A negative and significant ĉ₂ would suggest that the log-log form overstates the rate of early, relative to later, productivity improvements. This is not a test of the full SFS effect. A negative value of ĉ₂ suggests that the observed run exhibits SF (first Slow and then Fast). Often the second S of SFS is outside the observed or calculated production run.

FIG. 20 represents a single learning curve for the "base" parameter vector: n = 100, s = 10, e = 5, τ = 1, δ = 1, T = 1000. These parameters were chosen to be reasonable for the described simulation. The priors about the validity of these parameters were not very strong. Hence sampling was done widely in the space of parameters, but in many of the experiments (173 experiments out of 423) the parameter space was sampled by moving only one parameter at a time while holding the others at one of its base values. This is a particular type of sensitivity analysis.
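The quadratic specification defined above (before the FIG. 20 discussion) can be estimated the same way as the linear fit sketched earlier; the sign of the estimate of c₂ is the curvature diagnostic (an illustrative NumPy sketch):

import numpy as np

def curvature_estimate(costs, tau):
    # Fit log(l) = a2 + b2*log(Y) + c2*log(Y)**2; return the c2 estimate.
    # A negative value suggests the SF (first Slow, then Fast) pattern.
    costs = np.asarray(costs, dtype=float)
    t = np.arange(1, len(costs) + 1)
    observed = (t % tau == 0)
    logY = np.log(t[observed] / tau)
    c2_hat, _, _ = np.polyfit(logY, np.log(costs[observed]), 2)
    return c2_hat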
In FIG. 20, the landscape is not perfectly smooth since e = 5 > 1. Plateauing is also evident throughout the production run. This is confirmed by the small value of z: only 6.2% of the trials lead to improvements in productivity. Overall productivity increase is moderate; this is confirmed by the estimated progress ratio of 87.5%. There is also a positive SFS (curvature) effect, but it is not strong.

In FIG. 21, two changes have been made in the parameters to set e = 1 and n = 1,000, so that the landscape is now as smooth as possible (because e = 1) and a single-step improvement is likely to be small (because n is large and e = 1). With the smooth landscape, plateauing (except for terminal plateauing) is so reduced that it cannot be detected by eyeballing the figure. This is confirmed by the value of z: 42.3% of the trials result in productivity improvements. The estimated progress ratio of 94.6% indicates a relatively slow rate of productivity improvement, but p̂ is upward biased because of the SFS effect, which seems to be more pronounced. (One must be careful, however, not to visually overweight the sparse plot in log units for the early periods (relative to the later periods) in evaluating the SFS effect; the later data points are more crowded together than the early data points.) If the quadratic model were fitted to the plot in FIG. 21, one would expect ĉ₂ to be significantly negative, since the plot suggests a concave function. (Note that the plot displays clearly the SF of SFS, but the final S is not displayed in the plot, because of the truncation at log batch number = 7.)

In FIG. 22, the externality parameter e has been increased to its maximum relative to n (e = n = 100). The landscape is very rugged because of the very large e. Consequently, plateauing is very strong (only 1% of the trials lead to improvements) and overall progress is very small (p̂ = 97.8%).
In FIG. 23, e = 1, so there are no externalities to cause a rough landscape. But n is reduced to 10 while s is increased to 100. The number of operations is few, so a change in any operation can be expected to have a large impact on the labor requirement. This is reflected by very rapid overall productivity improvement, which is confirmed by the value of the estimated progress ratio of 60.2%. Plateauing is evident (z is only 4.4%). The SFS effect is evident, although the caution against over-weighting the sparse early realizations also applies here.
In FIGS. 20-23, only one production run was used. Each of FIGS. 24-28 presents averaged data of multiple production runs based on a given vector of parameters. FIG. 24 is nearly the same as FIG. 20 except that in FIG. 24 the data are averaged over 20 separate production runs. The vertical axis in FIG. 24 measures the adjusted log labor requirement of the industry average over 20 firms or, alternatively, the adjusted log labor requirement of the firm average over 20 plants. The most important difference between the outputs in FIG. 24 and FIG. 20 is the degree of plateauing. The curve in FIG. 24 is far smoother than the curve in FIG. 20. Plateauing is evident in FIG. 20, while it is barely discernible in FIG. 24. This is confirmed by the z statistics: in the single-run case only 6.2% of the trials resulted in improvement, while in the 20-run average, 65% of the trials resulted in improvement. The estimated progress ratio (87.5%) for the single-firm walk is very close to the estimated progress ratio (86.5%) for the average walk of the 20 firms. The small, but positive, curvature effect seems to be the same for FIGS. 20 and 24.
FIG. 21 is based on a correlated landscape (e = 1) and hence the single run is quite smooth, but some plateauing is discernible. The average walk shown in FIG. 25 is even smoother. For the single run depicted in FIG. 21, p̂ = 94.6% and z = 42.3%. For the averaged run depicted in FIG. 25, p̂ = 94.6% and z = 99.9%. The strong positive curvature effects in the two figures are nearly identical.
The landscape behind FIG. 22 is very rugged (e = 100). Plateauing is dominant; only 1% of the trials result in improvements. FIG. 26 is the 20-firm average run using the FIG. 22 parameters. Plateauing is reduced; 7.8% of the trials result in improvement. FIG. 27 is the 50-firm average run based on the same rugged landscape data. Plateauing is further reduced; 15.7% of the trials result in improvements. The estimated progress ratio for each of the three cases (FIGS. 22, 26, 27) is about 98%. It is difficult to judge the SFS effect when there is so much plateauing, but it is positive and one could argue that the effect is constant across FIGS. 22, 26, and 27. FIG. 28 is the 20-firm average related to the single-firm walk in FIG. 23. The averaging reduces plateauing (increases z), has no effect on the progress ratio or the terminal labor requirement, and seems to reduce the SFS effect, but only slightly.
From these experiments, the following conclusions may be reached: (a) averaging profoundly reduces plateauing (and increases z); (b) averaging does not substantially affect the estimated progress ratio; and (c) averaging does not seem to have a strong effect on curvature or SFS. These conclusions are consistent with actual observations of firm and industry experience curves.
Many of the qualitative features of experience curves — ranges of values of p̂, presence of plateaus, different learning rates for the same or similar goods — can be discerned by examining single realizations of learning curves. However, to study the full effects of changes in the values of the underlying parameters on the predictions of the model, one needs to compute more than one realization per set of parameter values. Consequently, for each chosen parameter vector in the experiment, 20 independent realizations were computed.
In the simulations, computing 20 different realizations means running the simulation program using the same parameter set, but with 20 different random seeds. A new random seed yields a new realization of the externality connections among the operations, a new realization of the landscape ℓ(ω), and a new starting point on the landscape, and hence a new experience curve. Hence in this class of models the "maximum degree of randomness" between different realizations of the experience curve was chosen. (Even if the sets of connections Eᵢ (i = 1, ..., n), the realization of the landscape ℓ(ω), and the starting recipe ω₀ were all to be held constant, different realizations of the experience curve would still be possible (indeed almost certain) due to different sequences of recipe sampling.) The sample standard deviations computed are therefore likely to be biased upward.
The set of potentially interesting parameters is large. Computations were restricted to the following grid-like parameter space: n = 1, 10, 20, 50, 100, 500, 1000; s = 2, 10, 25, 50, 75, 100; e = 1, 2, 3, 5, 6, 7, 8, 9, 10, 25, 50, 75, 100 (but with the constraint that e ≤ n); δ = 1, 2, 4, 10, 25, 50, 75, 100 (but with the constraint that δ ≤ n); τ = 1, 10, 50, 100, 250 (but with the constraint that τ ≤ T); T = 100, 500, 1000, 5000.

Two subsets of parameters were used. The first subset, the focused set of parameter values, reflects the priors informed by a review of the empirical literature, introspection about production processes, and comparisons with the modeling in evolutionary biology. To achieve the rapid productivity increases that have been observed, rather small values of e and δ relative to n were focused on (in particular, e = 1 and 5, and δ = 1), as well as relatively low values of s (in particular, s = 2 and 10). Actually 5 is not a small value for the parameter e when n = 100 or n = 1,000. In fact, e − 1 gives the number of externalities per operation, so the number of externalities per recipe would be n(e − 1). The length of the run was frequently set at T = 1000. This was chosen to reduce or eliminate the effects of the terminal plateau. See the center panel in FIG. 35, which confirms this strategy. For s = 10, n = 100, e = 5, the mean progress ratio is smallest for T = 1000. For T > 1000 the effect of the terminal plateau is to increase the progress ratio (that is, to slow measured progress). For many runs, n = 100 was adopted for the number of operations. For many runs, τ = 1 was adopted. The base cases for the focused runs are then:

(n = 100, s = 2, e = 1, δ = 1, τ = 1, T = 1000)
(n = 100, s = 10, e = 1, δ = 1, τ = 1, T = 1000)
(n = 100, s = 2, e = 5, δ = 1, τ = 1, T = 1000)
(n = 100, s = 10, e = 5, δ = 1, τ = 1, T = 1000)

The focused parameter set was constructed from the above four base-case vectors by varying the six parameters one at a time. Summary statistics for the focused parameter set are given in Table 3 (FIG. 8). There are 173 parameter vectors in the focused set. For each vector there are 20 runs, so the total number of runs is 3,460.
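The one-at-a-time construction of the focused set can be sketched as follows (the grids and constraints are those listed above; the de-duplication details are illustrative, so the sketch need not reproduce the count of exactly 173 vectors):

GRIDS = {
    "n": [1, 10, 20, 50, 100, 500, 1000],
    "s": [2, 10, 25, 50, 75, 100],
    "e": [1, 2, 3, 5, 6, 7, 8, 9, 10, 25, 50, 75, 100],
    "delta": [1, 2, 4, 10, 25, 50, 75, 100],
    "tau": [1, 10, 50, 100, 250],
    "T": [100, 500, 1000, 5000],
}

BASE_CASES = [
    dict(n=100, s=2,  e=1, delta=1, tau=1, T=1000),
    dict(n=100, s=10, e=1, delta=1, tau=1, T=1000),
    dict(n=100, s=2,  e=5, delta=1, tau=1, T=1000),
    dict(n=100, s=10, e=5, delta=1, tau=1, T=1000),
]

def focused_set():
    # Vary each parameter one at a time from each base case, enforcing
    # the constraints e <= n, delta <= n, and tau <= T stated above.
    vectors = []
    for base in BASE_CASES:
        for key, values in GRIDS.items():
            for v in values:
                vec = dict(base, **{key: v})
                valid = (vec["e"] <= vec["n"] and vec["delta"] <= vec["n"]
                         and vec["tau"] <= vec["T"])
                if valid and vec not in vectors:
                    vectors.append(vec)
    return vectors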
If the parameter space were only of dimension 2 and there were only a single base case, the method of choosing parameters would be that described in FIG. 29. Say the base parameter vector is given as row 3, column 5 of the simple 10 × 10 matrix recipe set in FIG. 29. Then the set of focused parameters is the union of the set of recipes having a row-3 component with the set of recipes having a column-5 component. The focused parameter set is in the shaded "cross." The advantage of the focused parameter method is that one begins with a set of reasonable base parameters and then tests the sensitivity of the predictions to each of these parameters, varied one at a time from each of the base vectors. The parameters are often set at "extreme" values to test parameter sensitivity. On the other hand, as can be seen from FIG. 29, this selection of parameter points is clearly "statistically inefficient" and results could be sensitive to the choice of base parameter vectors. As a counter to these potential biases, experience curves based on a random set of parameter values were also simulated. The random parameter vectors were chosen as follows: n drawn uniformly from {1, ..., 1000}; s drawn uniformly from {1, ..., 100}; e drawn uniformly from {1, ..., min(10, n/2)}; δ drawn uniformly from {1, ..., min(10, n/2)}; τ drawn uniformly from {1, ..., 10}; and T = 1000.
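The random draws described above can be implemented directly (an illustrative sketch; reading min(10, n/2) as a floor, and never below 1, is an assumption for very small n):

import random

def draw_random_vector(rng):
    # One parameter vector per the stated uniform distributions; T is fixed.
    n = rng.randint(1, 1000)
    cap = max(1, min(10, n // 2))    # min(10, n/2), floored, at least 1
    return dict(
        n=n,
        s=rng.randint(1, 100),
        e=rng.randint(1, cap),
        delta=rng.randint(1, cap),
        tau=rng.randint(1, 10),
        T=1000,
    )

rng = random.Random(1999)            # illustrative seed
random_set = [draw_random_vector(rng) for _ in range(250)]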
The number of parameter vectors in the set of random parameters is 250. There are 20 runs per vector. Hence there are 5000 runs in all for the random parameter set. Summary statistics for the random parameter set are given in Table 4 (FIG. 9).
The parameters selected for the focused set suffer from "FIG. 29 bias"; the parameters selected for the random set do not. These are not the only differences between the two parameter sets. See Tables 3 and 4. For the parameters n and s, the ranges and the means are greater in the random set than in the focused set, but this is reversed for e and δ. In the focused set, T was varied between 100 and 5000, but in the random set T was fixed at 1000. The means of the predicted p̂s are similar, but the range and the standard deviations of the p̂s are much larger for the focused set. The mean of the zs, the range, and the standard deviation are larger for the focused set. Curvature predictions and labor requirements were not assembled for the focused set.
The effects of varying the parameters of the model were evaluated using two scoring methods. For the focused parameter set, judgment from studying all the relevant simulations was used. This is called here "the eyeball method of scoring" parameter effects. The eyeball method is very illuminating, but it does require judgment from the modeler. As a guard against human bias, simple OLS regression scoring was also used for the focused parameters (173 experiments) and the random parameters (250 experiments). The basic disadvantage of OLS scoring is that the results are "global": "extreme" parameter values are probably overweighted. Furthermore, subtle non-monotonic interrelationships are obscured by the simple functional forms used for OLS scoring.
The effect of varying a particular parameter on one of the predictions (or "results") of the model typically depends on the interplay of two effects: (1) the effect of the parameter change on the size of the recipe space, and (2) the effect of the change on the trial (or recipe-sampling) mechanics. Typically, these effects are too complicated to permit an analytic solution, especially since the emphasis is on the short term and the medium term. Computation is called for.
The first effect of increasing n is to increase the number of recipes, sⁿ. This effect (especially for large T) should tend to increase long-term productivity improvement. On the other hand, increasing n decreases (especially when e is small) the expected reduction in unit labor costs from one-step or few-step changes in the recipe, because — given the assumption of additive unit costs — with more operations each operation contributes less to the overall cost. This suggests that increasing n might decrease the rate of short-term (and medium-term) productivity improvement.
In FIG. 30, the sample means p̄ (over 20 runs) and the sample standard deviations sₚ are plotted. It was observed that for small n the effect of increasing n is to decrease p̄ (i.e., to increase the mean rate of productivity improvement), but for larger n, the effect on p̄ of increasing n is positive. As n becomes even larger, the effect on p̄ of increasing n attenuates. The standard deviation sₚ is decreasing in n.
The sole effect of increasing the parameter s on the rate of productivity improvement is through increasing the size of the recipe space, and hence through increasing long-run productivity improvement. From FIG. 31, it is apparent that this is confirmed. Increasing s substantially decreases p̄ while slightly increasing sₚ, but each effect eventually attenuates as s becomes very large. In particular, for s larger than 50 the effects on p̄ of increasing s are negligible.
The most obvious effect of increasing e is increasing the ruggedness of the landscape, thereby reducing the effectiveness of the myopic recipe-sampling procedure. As e increases, the number of local optima in the landscape increases, and thus the probability of being "trapped" increases. This reasoning suggests that p̄ would tend to be increasing in e. On the other hand, increasing the parameter e increases the number of cost-relevant sub-recipes (equal to nsᵉ), and thus reduces the expected value of the global minimum labor requirement. Furthermore, increasing e has the effect of speeding the rate of experimentation, since each trial modifies the contribution to the labor requirement of not one, but e different operations within the production recipe. For these last two (related) reasons, one might expect p̄ to tend to be decreasing in e.
Results for e are given in FIG. 32 and FIG. 33. If the recipe space is relatively small (s < 6, n = 100), then increasing e seems to decrease p̄ for e small, although the standard errors suggest that one should be cautious in making this conclusion. The smallest p̄ (and the lowest terminal labor requirements) seem to occur at values of e = 5. If the recipe space is larger (s > 6, n = 100), then p̄ and ℓ_T are clearly monotonically increasing in e. For s large, the transitory effects of e depend on how much progress has already been made. If ℓ(ωₜ) is above the expected value of φ(ω) over all of Ω, then increasing e (for small values of e) increases the expected rate of short-run productivity improvement. On the other hand, if ℓ(ωₜ) is below the expected value of φ(ω), then increasing e (for small values of e) decreases the expected rate of productivity improvement. See Auerswald (1998, Chapter 3) (P. Auerswald, Organizational Learning, Production Intranalities and Industry Evolution, University of Washington Ph.D. thesis, 1998). This is because the ruggedness is helpful in "bad" (high ℓ) neighborhoods, but hurtful in good (low ℓ) neighborhoods. This phenomenon is seen in the simulations. This argument also suggests that increasing e tends to increase the curvature mis-specification. This effect is present in the simulations.
Taking bigger steps on a given landscape is somewhat like walking with smaller steps on a more rugged landscape. Hence, increasing δ should be analogous to increasing e. Like e, increasing δ increases p, except in some cases with δ small; see FIG. 34. There are suggestions in the data that, for small δ and appropriately chosen values of the other parameters, p is decreasing in δ. Indeed, the parameters e and δ are close cousins. (They would be even closer if increasing δ increased the size of every "step" rather than merely increasing the maximum step size.)
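As a concrete reading of δ as a maximum step size, a trial recipe can be drawn by changing up to δ operations of the current recipe. The helper below is hypothetical: the uniform choice of the step size k and of the operations to change are assumptions, and a re-drawn setting may coincide with the old one, so the realized distance can be smaller than k.

```python
# Hypothetical sketch of drawing a trial within maximum step size delta.
import random

def trial_recipe(recipe, s, delta, rng):
    trial = recipe[:]
    k = rng.randint(1, delta)                    # step size drawn up to the maximum
    for i in rng.sample(range(len(recipe)), k):  # k distinct operations
        trial[i] = rng.randrange(s)              # re-draw each chosen setting
    return trial

rng = random.Random(0)
print(trial_recipe([0, 1, 2, 3, 0, 1], s=4, delta=3, rng=rng))
```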
The length of the production run would not affect p if the model were perfectly specified, in particular if a power law fitted the data well. Varying T therefore provides a method for analyzing the curvature mis-specification of the experience curve. See FIG. 35. For the case with a relatively small recipe space (s = 10, n = 100) and a relatively rugged landscape (e = 5), the SFS (slow-fast-slow) effect is pronounced: the progress ratio falls from T = 1 to about T = 1000 and then gradually rises out to T = 5000. This pattern is also suggested for the case s = 100, n = 10, e = 5, but the standard errors are too large for confidence. For the case s = 10, n = 1000, e = 1, the recipe space is very large and the landscape is smooth. The estimated mean progress ratio is monotonically decreasing (with small standard errors) over T = 1 to T = 5000. The recipe space is so large in this case that the effects of the second slow phase do not kick in sufficiently to increase p even at T = 5000.
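For reference, the progress ratio statistic used throughout this discussion can be estimated as follows. This is a minimal sketch assuming unit batches (y_t = 1, so cumulative output before run t is t - 1), which is one simple special case rather than the general estimator.

```python
# OLS estimate of the progress ratio p = 2**(-b_hat) in log/log space,
# assuming unit batches.
import numpy as np

def progress_ratio(labor_req):
    """labor_req holds the unit labor requirement of runs t = 1, 2, ..."""
    ell = np.asarray(labor_req[1:], dtype=float)   # drop run 1: log Y_0 = log 0 undefined
    Y = np.arange(1, len(ell) + 1)                 # cumulative output before each run
    b = -np.polyfit(np.log(Y), np.log(ell), 1)[0]  # slope of the experience curve
    return 2.0 ** (-b)

t = np.arange(1, 1001)
ell = 10.0 * t ** np.log2(0.8)   # toy power-law curve with an 80% progress ratio
print(progress_ratio(ell))       # prints a value close to 0.80
```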
The statistic z (the percent of trials that result in improved productivity) is an inverse measure of plateauing. Increasing the number of recipes by increasing n decreases plateauing, with no discernible effect on the standard deviation s_z (see FIG. 37). Increasing the number of recipes by increasing s decreases plateauing but increases (except for large values of s) the standard deviation s_z. Increasing e, in general, increases plateauing (see FIG. 38). Increasing δ, in general, increases plateauing, but for small e, s, and δ, it appears that increasing δ reduces plateauing (see FIG. 39).

The positive curvature effect is pronounced in FIGS. 21 & 25 (smooth landscape cases with n = 1000, s = 10, e = 1, δ = 1, τ = 1, T = 1000). Linear OLS in log/log space at first overstates the actual rate of productivity improvement and then understates it. Ultimately (because of the terminal plateau), linear OLS in log/log space can be expected to overstate the rate of productivity improvement. This is not surprising for smooth, many-recipe landscapes with n large and δ small. Because n is large, for each improvement the expected reduction in the labor requirement is small. Because the landscape is smooth, the probability of finding an improvement on any given trial is relatively high. The large recipe set implies that the stock of potential improvements is being exhausted at a very slow rate. Hence the resulting productivity plot is likely to be nearly linear in natural units, and therefore strongly concave to the origin in log/log space. This is in agreement with the results of the model and with observed experience curves. The data behind experience curves frequently show curvature bias, usually positive (indicating a concave function) but sometimes negative (indicating a convex function). In the discussion above on the effects of T on p and s_p, separate simulation evidence on curvature and SFS is also provided.

For the two parameter sets (focused and random), each of the predictions of the model was regressed on the parameters of the model. More efficient grids and pooling of data are known to those of ordinary skill in the art; see, e.g., Judd (1998, Chapter 9) (K. Judd, Numerical Methods in Economics, 1998) on acceleration techniques for Monte Carlo experiments. Various functional forms were employed; the results are not very sensitive to the functional form. Reported here are the regression results for cases in which the prediction in natural units is regressed on log values of the parameters.
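The two diagnostics just discussed can likewise be computed from a simulated cost path. In this sketch, z is approximated as the percent of production runs whose unit labor requirement falls (the text defines it per trial, so this is a simplification), and c2 is the coefficient on (log Y)^2 in a quadratic log/log fit; both function names are hypothetical.

```python
import numpy as np

def improvement_percentage(labor_req):
    """Percent of runs with a strict cost reduction (inverse plateauing measure)."""
    ell = np.asarray(labor_req, dtype=float)
    return 100.0 * np.mean(np.diff(ell) < 0)

def curvature_c2(labor_req):
    """Quadratic coefficient of the experience curve in log/log space."""
    ell = np.asarray(labor_req[1:], dtype=float)   # skip run 1 (log 0 undefined)
    logY = np.log(np.arange(1, len(ell) + 1))      # unit batches assumed
    return np.polyfit(logY, np.log(ell), 2)[0]     # c2 < 0 indicates concavity

t = np.arange(1, 1001)
ell = 10.0 * t ** np.log2(0.8)                     # pure power law: c2 close to zero
print(improvement_percentage(ell), curvature_c2(ell))
```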
In Table 5 (FIG. 10) and Table 6 (FIG. 11) are summarized the results of OLS scoring for the prediction of p. The R² values are not very high, probably because the functional form being used is highly mis-specified. Eyeball scoring indicated that the interaction effects of the parameters can be quite subtle and that the effects of varying even a single parameter are not monotone. Nonetheless, the t tests yield high levels of significance for most of the parameters. The R² values and the t tests are more favorable for the random set, probably because the random set does not suffer from "FIG. 29 bias," i.e., (1) the data are more dispersed for the random set and (2) the focused set conditions more on the interesting small values for which the results tend to be non-monotone in the parameters. Both OLS scorings predict that increasing either of the two cousin parameters, e and δ, will increase p. From the focused parameter set, the prediction is that increasing s decreases p; the prediction from the random parameter set is the same but at a lower level of significance. From the random set, there is a weak prediction that increasing τ increases p; from the focused set, the coefficient is not significant. This weak effect of τ on p is probably due to mis-specification of the experience curve: as τ is increased, initial productivity improvements can be so rapid that the terminal plateau is reached quickly. The estimated progress ratio then gives a downward-biased estimate of the "actual rate of productivity improvement." Another reason that the τ effects might appear to be weaker than expected is that the model of reporting and implementation of quality control trials might not be the best one. For the focused parameter set, increasing the parameter n decreases the predicted p. For the random parameter set, increasing n strongly increases p. This can be explained by the fact that the random parameter set contains bigger values of n than does the focused parameter set; see Tables 3 and 4. Eyeball scoring strongly suggests that p is not monotone in n: for smaller n, p is decreasing in n; for larger n, p is increasing in n. Tables 7-8 (FIGS. 12-13) report the prediction of the standard deviation s_p.
Increasing n reduces the predicted s_p. Increasing s increases the s_p predicted from the focused parameters. Increasing e decreases the predictions of s_p. The predicted effects of varying δ differ between the two parameter sets. Increasing τ decreases the prediction of s_p.
See Table 9 (FIG. 14). Increasing n increases the predicted mean improvement percentage z, while increasing e or δ decreases z. If c2 is negative, then the estimated quadratic experience curve is a concave function in log/log space. See Table 10. Increasing n, s, or τ decreases the predicted c2 and hence increases the curvature effect. Increasing e or δ increases the predicted c2 and hence decreases the curvature effect.
The observed values of p (from actual firms and industries) are likely to be biased downward, since estimates of the progress ratio also pick up the effects of increasing returns to scale in production and of the development (the D of R&D) activity devoted to improvements in production efficiencies.
It is straightforward to test the predictions of a model in the following way: (1) take some learning curves for a particular industry; (2) adjust the reported statistics (such as p and sp) for the numbers of observations in each run; then (3) find in the set of predictions the parameter sets for which the predictions are near the observed statistics.
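A minimal sketch of step (3) follows, assuming the simulation output has been tabulated as parameter set -> predicted statistics; the table values below are placeholders, not results from the model.

```python
import numpy as np

# (n, s, e, delta, tau) -> predicted (p, s_p); placeholder numbers only.
predictions = {
    (100, 10, 1, 1, 1): (0.82, 0.03),
    (100, 10, 5, 1, 1): (0.90, 0.02),
    (1000, 10, 1, 1, 1): (0.78, 0.01),
}

def nearest_parameters(observed, table, k=2):
    """Return the k parameter sets whose predictions are nearest the observations."""
    obs = np.asarray(observed, dtype=float)
    dist = {params: float(np.linalg.norm(np.asarray(stats) - obs))
            for params, stats in table.items()}
    return sorted(dist, key=dist.get)[:k]

print(nearest_parameters((0.80, 0.02), predictions))
```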
FIG. 40 discloses a representative computer system 4010 in conjunction with which the embodiments of the present invention may be implemented. Computer system 4010 may be a personal computer, workstation, or a larger system such as a minicomputer. However, one skilled in the art of computer systems will understand that the present invention is not limited to a particular class or model of computer.
As shown in FIG. 40, representative computer system 4010 includes a central processing unit (CPU) 4012, a memory unit 4014, one or more storage devices 4016, an input device 4018, an output device 4020, and communication interface 4022. A system bus 4024 is provided for communications between these elements. Computer system 4010 may additionally function through use of an operating system such as Windows, DOS, or UNIX. However, one skilled in the art of computer systems will understand that the present invention is not limited to a particular configuration or operating system.
Storage devices 4016 may illustratively include one or more floppy or hard disk drives, CD-ROMs, DVDs, or tapes. Input device 4018 comprises a keyboard, mouse, microphone, or other similar device. Output device 4020 is a computer monitor or any other known computer output device. Communication interface 4022 may be a modem, a network interface, or other connection to external electronic devices, such as a serial or parallel port.
While the subject invention has been particularly described with reference to a preferred embodiment, it will be appreciated by those of ordinary skill in the art that various changes and modifications may be made without departing from the spirit and scope of the method. For example, the learning-by-doing model is sufficiently rich to match the reported progress ratios from estimated experience curves. Those of ordinary skill in the art will recognize that certain modifications are routine. For example, if one has data or priors on the values of p, s_p, z, s_z, c2, s_c2, or any of the other predictions for a particular plant, firm or industry producing a specific good, then one can search for parameters n, s, e, δ and τ that predict the data. One might also have priors on some of the parameters based on engineering considerations or (especially in the case of T) market considerations. From this, one could come up with a best explanation of the observed data. How well the modified model fits observations and priors would measure the usefulness of the modified model of learning-by-doing.
It is intended that the appended claims be interpreted to cover the foregoing as well as equivalents thereto.

Claims

What is claimed is:
1. A method for determining a production plan comprising the steps of:
defining a plurality of production recipes such that each of said production recipes is a vector of n operations;
selecting a current one of said production recipes;
evaluating said current production recipe to determine its cost;
modifying said current production recipe to create a trial production recipe;
evaluating said trial production recipe to determine its cost; and
assigning said trial production recipe to said current production recipe if said cost of said trial production recipe is less than said cost of said current production recipe.
2. A method for determining a production plan as in claim 1 further comprising the steps of repeating said modifying said current production recipe step, said evaluating said trial production recipe step and said assigning said trial production recipe step.
3. A method for determining a production plan as in claim 1 wherein each of said operations of said production recipes has s possible settings.
4. A method for determining a production plan as in claim 1 wherein said cost of said production recipe is a labor cost.
5. A method for determining a production plan as in claim 3 further comprising the step of defining a distance between said current production recipe and said trial production recipe.
6. A method for determining a production plan as in claim 5 further comprising the step of defining a distance between a first one of said operations and a second one of said operations.
7. A method for determining a production plan as in claim 6 wherein said distance between said first one of said operations and said second one of said operations is defined as one if said first operation is not equal to said second operation and as zero otherwise.
8. A method for determining a production plan as in claim 6 wherein said distance between said first one of said operations and said second one of said operations is defined as the difference between said setting of said first operation and said setting of said second operation.
9. A method for determining a production plan as in claim 6 wherein said distance between said current production recipe and said trial production recipe is defined as a sum of said distance between said operations of said current production recipe and corresponding ones of said operations of said trial production recipe.
10. A method for determining a production plan as in claim 9 wherein said modifying step creates said trial production recipe in a neighborhood of said current production recipe.
11. A method for determining a production plan as in claim 10 further comprising the step of defining said neighborhood of said current production recipe as those of said plurality of production recipes that are within a maximum distance from said current production recipe.
12. A method for determining a production plan as in claim 1 further comprising the step of defining a connectivity indicator e_ij as 1 if said setting of said operation i affects said cost of said operation j and 0 otherwise.
13. A method for determining a production plan as in claim 12 further comprising the step of defining a number e_i of operations that affect said cost of operation i as a sum of said connectivity indicators e_ij over j from 1 to n.
14. A method for determining a production plan as in claim 13 further comprising the step of defining said number e_i as an externality parameter e.
15. A method for determining a production plan as in claim 1 wherein said modifying step is a non-committed experiment to said trial production recipe.
16. A method for determining a production plan as in claim 1 wherein said modifying step is a commitment to said trial production recipe.
17. A method for determining a production plan as in claim 1 wherein said plurality of production recipes comprise backward technologies.
18. A method for determining a production plan as in claim 1 wherein said plurality of production recipes comprise advanced technologies.
19. A method for determining a production plan as in claim 1 wherein said plurality of production recipes comprise nascent technologies.
20. A method for predicting technological innovation comprising the steps of:
defining a model comprising: a plurality of production recipes such that each of said production recipes is a vector of n operations; and a plurality of model parameters; and
executing said model comprising the steps of: selecting a current production recipe; evaluating said current production recipe to determine its cost; modifying said current production recipe to create a trial production recipe; and assigning said trial production recipe to said current production recipe if said cost of said trial production recipe is less than said cost of said current production recipe.
21. A method for predicting technological innovation as in claim 20 wherein said executing said model step further comprises the steps of: repeating said modifying said current production recipe step, said evaluating said trial production recipe step and said assigning said trial production recipe step.
22. A method for predicting technological innovation as in claim 20 wherein said plurality of model parameters comprise a possible settings parameter s defining a number of possible settings of said operations of said production recipes.
23. A method for predicting technological innovation as in claim 20 wherein said plurality of model parameters comprise a connectivity indicator e_ij defined as 1 if said setting of said operation i affects said cost of said operation j and 0 otherwise.
24. A method for predicting technological innovation as in claim 23 wherein said plurality of model parameters further comprise a number e_i of operations that affect said cost of operation i as a sum of said connectivity indicators e_ij over j from 1 to n.
25. A method for predicting technological innovation as in claim 24 wherein said number e_i is defined to be a predetermined externality parameter e.
26. A method for predicting technological innovation as in claim 20 wherein said plurality of model parameters comprise a recipe distance between said current production recipe and said trial production recipe.
27. A method for predicting technological innovation as in claim 26 wherein said plurality of model parameters further comprise an operation distance between a first one of said operations and a second one of said operations.
28. A method for predicting technological innovation as in claim 27 wherein said operation distance between said first one of said operations and said second one of said operations is defined as one if said first operation is not equal to said second operation and as zero otherwise.
29. A method for predicting technological innovation as in claim 27 wherein said operation distance between said first one of said operations and said second one of said operations is defined as the difference between said setting of said first operation and said setting of said second operation.
30. A method for predicting technological innovation as in claim 27 wherein said recipe distance between said current production recipe and said trial production recipe is defined as a sum of said operation distances between said operations of said current production recipe and corresponding ones of said operations of said trial production recipe.
31. A method for predicting technological innovation as in claim 26 wherein said plurality of model parameters further comprise a maximum distance from said current production recipe, said maximum distance defining a neighborhood of said current production recipe.
32. A method for predicting technological innovation as in claim 31 wherein said modifying said current production recipe step comprises the step of creating a trial production recipe within said neighborhood of said current production recipe.
33. A method for predicting technological innovation as in claim 20 wherein said plurality of model parameters further comprise a production run length.
34. A method for predicting technological innovation as in claim 33 wherein said executing said model step is performed for said production run length.
35. A method for predicting technological innovation as in claim 20 further comprising the step of defining values for said plurality of model parameters.
36. A method for predicting technological innovation as in claim 35 further comprising the step of defining at least one innovation prediction variable.
37. A method for predicting technological innovation as in claim 36 wherein said executing said model step determines values of said at least one innovation prediction variable corresponding to said values of said model parameters.
38. A method for predicting technological innovation as in claim 36 wherein said at least one innovation prediction variable comprises at least one estimated progress ratio.
39. A method for predicting technological innovation as in claim 38 wherein said at least one estimated progress ratio is defined as

p = 2^(-b̂),

wherein b̂ is the OLS estimate of the learning coefficient b, wherein the learning coefficient b represents the rate at which productivity increases as a firm acquires experience, and wherein

(a) b is the absolute value of the slope of the regression line in log/log space given by

log ℓ_t = log a - b log Y_{t-1},

where ℓ_t = φ(ω_t) is a unit labor requirement for production run t, and wherein a is the labor needed to produce a first batch of a good;

(b) Y_{t-1} is defined by Y_{t-1} = Σ_{s=1}^{t-1} y_s, a cumulative output up to (but not including) production run t;

(c) the trials parameter τ satisfies τ = B/B̃, wherein B is a measured batch size and B̃ is a quality control batch size; and

(d) φ(ω_t) is defined to be the realization of the random field φ(ω) at the production recipe ω_t used in run t, wherein φ(ω) is defined by

φ(ω) = Σ_{i=1}^{n} φ^i(ω),

wherein for each i, φ^i(ω) is the unit labor cost of operation i.
40. A method for predicting technological innovation as in claim 38 wherein said at least one innovation prediction variable further comprises a sample standard deviation of said at least one estimated progress ratio.
41. A method for predicting technological innovation as in claim 35 further comprising the step of defining at least one measure of model misspecification.
42. A method for predicting technological innovation as in claim 41 wherein said at least one measure of model misspecification comprises a plateau effect.
43. A method for predicting technological innovation as in claim 42 wherein said at least one measure of model misspecification comprises a curvature misspecification.
44. A method for predicting technological innovation as in claim 41 wherein said executing said model step determines values of said at least one measure of model misspecification corresponding to said values of said model parameters.
45. Computer executable software code stored on a computer readable medium, the code for determining a production plan, the code comprising:
code to define a plurality of production recipes such that each of said production recipes is a vector of n operations;
code to select a current one of said production recipes;
code to evaluate said current production recipe to determine its cost;
code to modify said current production recipe to create a trial production recipe;
code to evaluate said trial production recipe to determine its cost; and
code to assign said trial production recipe to said current production recipe if said cost of said trial production recipe is less than said cost of said current production recipe.
46. A programmed computer system for determining a production plan comprising at least one memory having at least one region storing computer executable program code and at least one processor for executing the program code stored in said memory, wherein the program code includes:
code to define a plurality of production recipes such that each of said production recipes is a vector of n operations;
code to select a current one of said production recipes;
code to evaluate said current production recipe to determine its cost;
code to modify said current production recipe to create a trial production recipe;
code to evaluate said trial production recipe to determine its cost; and
code to assign said trial production recipe to said current production recipe if said cost of said trial production recipe is less than said cost of said current production recipe.
47. Computer executable software code stored on a computer readable medium, the code for predicting technological innovation, the code comprising:
code to define a model comprising: a plurality of production recipes such that each of said production recipes is a vector of n operations; and a plurality of model parameters; and
code to execute said model comprising: code to select a current production recipe; code to evaluate said current production recipe to determine its cost; code to modify said current production recipe to create a trial production recipe; and code to assign said trial production recipe to said current production recipe if said cost of said trial production recipe is less than said cost of said current production recipe.
48. A programmed computer system for predicting technological innovation comprising at least one memory having at least one region storing computer executable program code and at least one processor for executing the program code stored in said memory, wherein the program code includes:
code to define a model comprising: a plurality of production recipes such that each of said production recipes is a vector of n operations; and a plurality of model parameters; and
code to execute said model comprising: code to select a current production recipe; code to evaluate said current production recipe to determine its cost; code to modify said current production recipe to create a trial production recipe; and code to assign said trial production recipe to said current production recipe if said cost of said trial production recipe is less than said cost of said current production recipe.
PCT/US1999/022911 1998-10-02 1999-10-01 A system and method for determining production plans and for predicting innovation WO2000020983A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU62836/99A AU6283699A (en) 1998-10-02 1999-10-01 A system and method for determining production plans and for predicting innovation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10312898P 1998-10-02 1998-10-02
US60/103,128 1998-10-02

Publications (2)

Publication Number Publication Date
WO2000020983A1 true WO2000020983A1 (en) 2000-04-13
WO2000020983A8 WO2000020983A8 (en) 2000-07-20

Family

ID=22293545

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1999/022911 WO2000020983A1 (en) 1998-10-02 1999-10-01 A system and method for determining production plans and for predicting innovation

Country Status (2)

Country Link
AU (1) AU6283699A (en)
WO (1) WO2000020983A1 (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4942527A (en) * 1987-12-11 1990-07-17 Schumacher Billy G Computerized management system
US5172313A (en) * 1987-12-11 1992-12-15 Schumacher Billy G Computerized management system
US5487133A (en) * 1993-07-01 1996-01-23 Intel Corporation Distance calculating neural network classifier chip and system
JPH07334576A (en) * 1994-06-10 1995-12-22 Kawasaki Steel Corp System for optimal purchase and distribution of material
US5799286A (en) * 1995-06-07 1998-08-25 Electronic Data Systems Corporation Automated activity-based management system

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001039069A1 (en) * 1999-11-24 2001-05-31 Ecorporateprinters Inc. Automated internet quoting and procurement system and process for commercial printing
US6330542B1 (en) * 1999-11-24 2001-12-11 Ecorporate Printers, Inc. Automated internet quoting and procurement system and process for commercial printing
US6952678B2 (en) 2000-09-01 2005-10-04 Askme Corporation Method, apparatus, and manufacture for facilitating a self-organizing workforce
EP3396475A1 (en) * 2017-04-26 2018-10-31 Siemens Aktiengesellschaft Method for optimizing formula parameters and control device operating according to this method
WO2021043712A1 (en) * 2019-09-06 2021-03-11 Bayer Aktiengesellschaft System for planning, maintaining, managing and optimizing a production process

Also Published As

Publication number Publication date
WO2000020983A8 (en) 2000-07-20
AU6283699A (en) 2000-04-26

Similar Documents

Publication Publication Date Title
Auerswald et al. The production recipes approach to modeling technological innovation: An application to learning by doing
Öztürk et al. Manufacturing lead time estimation using data mining
US6968326B2 (en) System and method for representing and incorporating available information into uncertainty-based forecasts
Flanagan et al. Life cycle costing and risk management
Azab et al. Simulation methods for changeable manufacturing
Aytac et al. Characterization of demand for short life-cycle technology products
US20120078678A1 (en) Method and system for estimation and analysis of operational parameters in workflow processes
Sharda et al. Robust manufacturing system design using multi objective genetic algorithms, Petri nets and Bayesian uncertainty representation
Yannibelli et al. A comparative analysis of NSGA-II and NSGA-III for autoscaling parameter sweep experiments in the cloud
Rafiee et al. A multistage stochastic programming approach in project selection and scheduling
Li et al. Reinforcement learning algorithms for online single-machine scheduling
Vignesh et al. Factors influencing lean practices in Super market services using interpretive structural modeling
Alibrandi et al. A decision support tool for sustainable and resilient building design
Chien et al. An integrated approach for IC design R&D portfolio decision and project scheduling and a case study
Mehdizadeh et al. A bi-objective multi-item capacitated lot-sizing model: two Pareto-based meta-heuristic algorithms
Magliocca et al. Integrating global sensitivity approaches to deconstruct spatial and temporal sensitivities of complex spatial agent-based models
Voynarenko et al. Applying fuzzy logic to modeling economic emergence
Sampath et al. A generalized decision support framework for large‐scale project portfolio decisions
WO2000020983A1 (en) A system and method for determining production plans and for predicting innovation
US20200226305A1 (en) System and method for performing simulations of uncertain future events
Rojas-Gonzalez et al. A stochastic-kriging-based multiobjective simulation optimization algorithm
Guerra et al. Incorporating uncertainty in financial models
Talbi et al. Application of optimization techniques to parameter set-up in scheduling
Kuzmina et al. Risk Analysis of the Company's Activities by Means of Simulation.
García-Aroca et al. An algorithm for automatic selection and combination of forecast models

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
AK Designated states

Kind code of ref document: C1

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: C1

Designated state(s): GH GM KE LS MW SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

WR Later publication of a revised version of an international search report
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase