INTRODUCTION

Food services prepare food rations or plates on a daily basis from raw materials, delivered to closed groups of users, see ^{5}. Productivity assessment has long been considered important for food service companies, see ^{3}. Data envelopment analysis (DEA) is the most classic form of evaluation of productive efficiency in this and other areas; interested readers can see ^{2} and ^{4}.

The stochastic approach to DEA allows for both deterministic and random variables (RVs), producing a single relative-to-best productivity index that relates all units under comparison, despite differing combinations of operating characteristics, provided that operating conditions are similar, see ^{6}. Because efficiency is conceptually dynamic and evolves over time, DEA models have been developed that capture the updating of inputs and outputs period after period, see ^{7}. However, very few approaches manage to study the random and dynamic aspects that can affect the efficiency frontier in the food-service industry. We therefore propose generalized linear autoregressive moving average (GLARMA) models, which incorporate ARMA components while transforming the mean of the data with a link function, in the line of generalized linear models (GLMs).

We propose a form of DEA that allows estimating the efficiency of food services with stochastic outputs related to predictive inputs over multiple periods, including the possibility of predicting what is most likely to occur in terms of future efficiency. This study is organized as follows: Section 2 exposes the proposed methodology, Section 3 illustrates it with a real-world case study, and Section 4 provides discussion, conclusions, limitations and future research.

METHODOLOGY

Static DEA efficiency model

In the deterministic DEA model, a system of vector inequalities characterizes the efficiency of a reference unit k in a cluster of N units, where each unit j, or DMU_j, has m inputs (X_{ij}) and s outputs (Y_{rj}):
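A standard input-oriented envelopment LP consistent with the description that follows (with θ the efficiency score, e an N-vector of ones, and the variable-returns condition e′λ = 1; the exact original display is reconstructed here as an assumption) is:

```latex
\min_{\theta,\,\lambda}\ \theta
\quad\text{s.t.}\quad
\sum_{j=1}^{N}\lambda_j X_{ij} \le \theta X_{ik},\ i=1,\dots,m;\qquad
\sum_{j=1}^{N}\lambda_j Y_{rj} \ge Y_{rk},\ r=1,\dots,s;\qquad
e'\lambda = 1,\quad \lambda \ge 0. \tag{1}
```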

where e is a column vector with N elements, each of which is unity, and the prime denotes transpose. The input (X_j) and output (Y_j) vectors (j = 1, 2, ..., N) are all observed; like the classical DEA model, this deterministic model assumes that there is no noise in the data and that the functional-form parameters are given a priori (Y = f(X, β)), see ^{2}, with an input-oriented model. The reference unit k is compared with the other (N − 1) units in the cluster. Let λ* = (λ_j*) and θ* be the optimal solution of the above deterministic frontier model with all slack variables zero. Then the reference unit k, or DMU_k, is technically efficient if θ* = 1 and the first two sets of inequalities in equation (1) hold with equality. Thus the optimal value θ* provides a measure of technical efficiency (TE); if θ* is positive but less than unity, the unit is not technically efficient at the 100% level. To characterize the overall efficiency of the reference unit DMU_k one sets up the linear programming (LP) model as follows:
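A standard formulation of this cost-minimizing LP, consistent with the notation of the surrounding text (reconstructed here as an assumption), is:

```latex
\min_{x,\,\lambda}\ q'x
\quad\text{s.t.}\quad
X\lambda \le x;\qquad
Y\lambda \ge Y_k;\qquad
e'\lambda = 1;\qquad
\lambda \ge 0,\ x \ge 0. \tag{2}
```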

where q is an m-element vector of input prices as observed in the competitive market and x is an input vector to be optimally decided by DMU_k along with the weights λ_j. Here X_k and Y_k are the observed input and output vectors for the reference unit k, whereas x is the unknown decision vector to be optimally determined. Let λ* and x* be the optimal solution of LP model equation (2) with all slacks zero. Then the minimal input cost is given by c_k* = q′x*, whereas the observed cost of the reference unit is c_k = q′X_k.
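The technical-efficiency LP described above can be sketched numerically. The following is a minimal illustration using `scipy.optimize.linprog` (function and data names are hypothetical, and the variable-returns constraint e′λ = 1 is assumed, in line with the formulation in the text):

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, k):
    """Input-oriented VRS DEA efficiency score for reference unit k.

    X: (m, N) matrix of inputs, Y: (s, N) matrix of outputs,
    columns indexing the N decision-making units (DMUs).
    """
    m, N = X.shape
    s = Y.shape[0]
    # decision vector z = [theta, lambda_1, ..., lambda_N]; minimize theta
    c = np.r_[1.0, np.zeros(N)]
    A_in = np.c_[-X[:, [k]], X]              # X lambda <= theta * X_k
    A_out = np.c_[np.zeros((s, 1)), -Y]      # Y lambda >= Y_k
    A_ub = np.r_[A_in, A_out]
    b_ub = np.r_[np.zeros(m), -Y[:, k]]
    A_eq = np.r_[[0.0], np.ones(N)].reshape(1, -1)  # e' lambda = 1 (VRS)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0.0, None)] * N)
    return res.x[0]

# tiny hypothetical example: two units, one input, one output
X_demo = np.array([[2.0, 1.0]])
Y_demo = np.array([[1.0, 1.0]])
score = dea_efficiency(X_demo, Y_demo, 0)
```

In this toy data, unit 1 produces the same output with half the input, so unit 0 obtains a score of 0.5 while unit 1 is efficient with a score of 1.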

The dual problems corresponding to equations (2) and (1) appear as follows:

and,

Let asterisks denote optimal values and let DMU_k be efficient. Then it follows from equation (3) that the production frontier for the k-th unit holds with equality; since γ* is sign-constrained, as long as the actual inputs X_k are not equal to their optimal levels x*, this efficiency gap may persist.

Dynamic efficiency with DEA model

Dynamic efficiency arises in most modern production processes because firms take several periods to adjust their inputs and outputs to the desired levels. When new technology is the source of productivity gains, firms may take several periods to learn about the new technology and adopt it fully. Consider a dynamic extension of the overall efficiency model, equation (4), in a simplified framework, see ^{6}. The overall efficiency model is then of the form:
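A standard dynamic cost-minimizing formulation consistent with the description that follows (the discounted-sum form is an assumption; symbols are defined below) is:

```latex
\min_{\{x(t),\,\lambda(t)\}}\
\sum_{t=1}^{T} (1+\rho)^{-t}\, q(t)'\, x(t)
\quad\text{s.t.}\quad
X(t)\lambda(t) \le x(t);\quad
Y(t)\lambda(t) \ge Y_k(t);\quad
e'\lambda(t) = 1;\quad
\lambda(t) \ge 0,\ t = 1,\dots,T. \tag{5}
```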

The decision variables are λ = λ(t) and x(t), and the observed data are the input and output vectors X = X_j(t) and Y = Y_j(t), where ρ is a positive rate of discount assumed to be known. As in equations (3) and (4), let asterisks indicate optimal values and let the reference unit k be efficient over time. Then it must satisfy the following equality:

Dynamic DEA model over time with outputs modelled by a GLARMA model

We introduce randomness in the dynamic DEA model through a generalized linear autoregressive moving average (GLARMA) regression model that relates the outputs to the input variables when they have stochastic behavior over time. The assumption of independent and identically distributed (IID) RVs for the outputs Y(t), t = 1, ..., T, can be violated in DEA. In the classical stochastic DEA setting, residuals are assumed for a regression model that relates output and inputs (ν), together with residuals that model inefficiency (γ), either additively (Y = f(X; β) + ν − γ) or multiplicatively (Y = f(X; β) exp(ν) exp(−γ)), with unknown parameters β determined via maximum likelihood estimation. GLARMA models, in contrast, are derived from the generalized linear model (GLM). Information on the past up to time t − 1, Ω(t − 1), is summarized in the state variable η(t). The systematic component of the GLARMA(p, q) model is described by a link function of the mean, η(t) = g(μ_Y(t)), where μ_Y(t) is the mean of the variable of interest conditional on the past information Ω(t − 1), given by
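One common GLARMA specification (in the line of Davis, Dunsmuir and Streett; the exact form used here is an assumption) writes the state as a regression term plus an ARMA-type recursion on standardized past residuals e(t):

```latex
\eta(t) = g\big(\mu_Y(t)\big) = X(t)^{\top}\beta + Z(t),
\qquad
Z(t) = \sum_{i=1}^{p} \phi_i \big[ Z(t-i) + e(t-i) \big]
     + \sum_{j=1}^{q} \theta_j \, e(t-j),
```

where e(t) = (Y(t) − μ_Y(t))/σ(t) are the standardized residuals.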

where φ and θ correspond to the AR and MA components of the model, of orders p and q, respectively. Note that β⊤ = (β0, β1, ..., βn) is a vector of coefficients associated with n observed covariates with dependence over time, denoted by X(t)⊤ = (X0, X1,t, X2,t, ..., Xn,t), with X0 = 1. These covariates can be inputs of the dynamic DEA model as well as other determinants of output, such as dummy variables that indicate the presence or absence of some attribute or condition. The link function of the GLARMA model is generally the identity or logarithmic function, and the corresponding model variance is assumed constant over time, see ^{1}.

This type of model therefore has the versatility to fit an expected value at each time t = 1, ..., T and also to forecast h steps ahead, t = T + 1, ..., T + h. In particular, if the link function is the identity, the GLARMA model for the expected value at each time t is expressed by:
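Under the identity link g(μ) = μ, the conditional mean reduces to (a reconstruction consistent with the recursion described above):

```latex
\mu_Y(t) = \mathrm{E}\big[Y(t)\,\big|\,\Omega(t-1)\big]
         = X(t)^{\top}\beta
         + \sum_{i=1}^{p} \phi_i \big[ Z(t-i) + e(t-i) \big]
         + \sum_{j=1}^{q} \theta_j \, e(t-j),
\qquad t = 1,\dots,T.
```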

and the forecast at T + 1 with the identity-link GLARMA model is expressed by:
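A one-step-ahead form consistent with the identity-link mean above (with estimated parameters, and unrealized residuals set to zero; reconstructed as an assumption) is:

```latex
\widehat{Y}(T+1) = \mathrm{E}\big[Y(T+1)\,\big|\,\Omega(T)\big]
                 = X(T+1)^{\top}\widehat{\beta}
                 + \sum_{i=1}^{p} \widehat{\phi}_i \big[ Z(T+1-i) + e(T+1-i) \big]
                 + \sum_{j=1}^{q} \widehat{\theta}_j \, e(T+1-j).
```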

**Data Set** We selected 10 food services that carry out business in closed care centers in the commune of Valparaiso, Chile, from 2001 to 2016. Based on the study of Reynolds and Thompson (2007), we selected the following variables for the dynamic stochastic DEA models, see Table 1:

Computational framework

We have implemented the methodology in the R software (http://www.r-project.org/). Specifically, we used the base package for descriptive statistics, and the glarma and forecast packages to model the expected output and to forecast food-service sales from the time-series data, respectively. We used the Benchmarking package to solve the programming problems in the stochastic dynamic DEA method based on equation (5).
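As a language-agnostic illustration of the identity-link GLARMA recursion that underlies the expected-value fits and one-step forecasts (here in Python rather than the R packages named above; all parameter values are hypothetical):

```python
import numpy as np

# Hypothetical identity-link GLARMA(1, 1)-style recursion with Gaussian
# observations; parameters and covariate are illustrative only.
rng = np.random.default_rng(1)
T = 120
beta0, beta1 = 50.0, 3.0           # intercept and covariate coefficient
phi1, theta1, sigma = 0.5, 0.2, 2.0

x = rng.normal(size=T + 1)         # covariate (e.g., a DEA input); x[T] is the future value
Z = np.zeros(T + 1)                # ARMA-type state
e = np.zeros(T + 1)                # standardized residuals
mu = np.zeros(T)
y = np.zeros(T)
for t in range(T):
    z_lag = Z[t - 1] if t > 0 else 0.0
    e_lag = e[t - 1] if t > 0 else 0.0
    Z[t] = phi1 * (z_lag + e_lag) + theta1 * e_lag
    mu[t] = beta0 + beta1 * x[t] + Z[t]      # identity link: eta(t) = mu(t)
    y[t] = mu[t] + sigma * rng.normal()      # simulated output (e.g., sales)
    e[t] = (y[t] - mu[t]) / sigma            # residual feeds the next state

# one-step-ahead forecast at T+1 (unrealized residual e(T+1) set to zero)
Z_next = phi1 * (Z[T - 1] + e[T - 1]) + theta1 * e[T - 1]
y_hat = beta0 + beta1 * x[T] + Z_next
```

The forecast `y_hat` plays the role of the predicted output that enters the dynamic DEA frontier for the future period.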

RESULTS

Figure 1 shows the autocorrelation function (ACF) plots for the sales time series of each food-service company. The plots indicate that practically all of the series exhibit time dependence in sales. Similar behaviour is observed in the partial ACF (PACF) plots, omitted for lack of space.

Figure 2 shows changes in the dynamic stochastic DEA frontier of 9 food services in selected years, as well as the forecast change in the efficiency frontier for a future year.

Table 2 reports, for the companies indicated, the best-fitting GLARMA models with normal-distribution parameters chosen by the Akaike information criterion (AIC), displaying: the conditional standard deviation (SD) over time of the normal distribution of DPUT, the first-order autoregressive coefficient (AR1), the coefficients of the explanatory variables shown in Table 1, and the intercept, each with its standard error (SE) in parentheses, together with the AIC value.

DISCUSSIONS

The present proposal is useful for evaluating and forecasting changes in the frontiers of efficiency in the area of food service.

Through GLARMA modeling of sales time series, which behave randomly over time and are seen as an output of the food-service production process, it is possible to predict the future results of these companies using process inputs as related predictors. The modeling offered allows updating the characterization of efficiency over time in this type of company, a fundamental tool for the strategic planning that managers perform, both at the operations level and at the general level. The model is very flexible regarding the diverse probability distributions that sales data may present.

In this respect, classical regression models impose important constraints through their assumptions, which are largely overcome with models such as GLARMA, in the GLM line, where variability is assumed directly on the response variable. The GLARMA statistical framework is robust, providing important tools for diagnosing, selecting, relating, fitting and forecasting. In the present work we have not gone into these aspects, so as not to deviate from the objective of constructing a dynamic stochastic DEA model applicable to food services. Nevertheless, deepening these themes in the areas of DEA and stochastic frontier analysis is an excellent avenue for future research.

The analysis of our results shows clearly that the efficiency frontier can change across periods (Figure 2); it does not remain static over time, nor in the forecast for the future. Also, we can compare the results of the different food-service companies analyzed, quantify their temporal variability, and relate them to the predictors that explain them, whether significantly or not, using the GLARMA coefficients for each company under analysis (Table 2).

The methodology presented has the limitation of having been developed as a univariate model: it considers only one response variable, unlike classical deterministic DEA, which was conceived to consider vectors of multiple inputs and outputs. This is an extension that must be explored to arrive at a multivariate stochastic model within the time-series framework.