Accuracy of Hourly Demand Forecasting of Micro Mobility for Effective Rebalancing Strategies

The imbalance in bike-sharing systems between supply and demand is significant. Therefore, these systems need to relocate bikes to meet customer needs. The objective of this research is to increase the efficiency of bike-sharing systems regarding rebalancing problems. The prediction of the demand for bike sharing can enhance the efficiency of a bike-sharing system for the operation process of rebalancing in terms of the information used in planning by proposing an evaluation of algorithms for forecasting the demand for bikes in a bike-sharing network. The historical, weather and holiday data from three distinct databases are used in the dataset and three fundamental prediction models are adopted and compared. In addition, statistical approaches are included for selecting variables that improve the accuracy of the model. This work proposes the accuracy of different models of artificial intelligence techniques to predict the demand for bike sharing. The results of this research will assist the operators of bike-sharing companies in determining data concerning the demand for bike sharing to plan for the future. Thus, these data can contribute to creating appropriate plans for managing the rebalancing process.

Key words: bike systems, demand forecasting, mobility service, modal efficiency

INTRODUCTION
Mobility services provide mobility options capable of having unprecedented effects on the sustainable development of urban planning [1]. A transportation network facilitates shared mobility choices that enable new mobility services. Most notably, it employs ingenuity, which many currently see as the most important aspect of new transportation services. Moreover, these services reflect the reality of road capacity, traffic conditions and urban planning, especially in the absence of strict rules and regulation. Mobility as a Service (MaaS) is a system that provides customers with a broad range of mobility services delivered by a mobility operator. The service provider handles and delivers transportation demands through a single interface [2, 3, 4].
Bike-sharing services are a kind of MaaS that have become particularly popular in recent years. Usually, a bike-sharing service is used by customers for a short period and distance, which is known as micro mobility. The systems use bikes with GPS tracking, which allow customers to rent bikes at a station. The customers can then return the bikes at any station using a smartphone application to keep track of the available spots. Bike-sharing systems still face certain obstacles because of the complicated and dynamic demand for bikes and the frequently unbalanced docks at each station. Moreover, bike-sharing systems suffer severely in the efficiency of service if there are no available bikes or docks at any one moment. The redistribution of bikes among all stations is frequently too expensive and unnecessary, as some stations have relatively low bike demand.
This research presents a methodology for the optimisation of the operations necessary to maintain a balanced bike-sharing network. The suggested approach is based on bike utilisation estimates, and a system is created to estimate the number of bikes in demand at network stations to optimise the necessary routing and rebalance the network based on these forecasts. The findings of the forecasting system employing real-world data from Jersey City are presented in this study.

LITERATURE REVIEW
The forecasting of the demand for bike sharing is an important part of the solution to the problem of bike-sharing relocation. To solve this issue successfully, the accuracy of such forecasts is equally critical. Several research studies have focused on demand forecasting in the context of bike sharing and have suggested methods for demand forecasting at each station. The basic models to solve the forecasting challenge included fundamental time series forecasting methodologies, such as exponential smoothing, the moving average model and the autoregressive integrated moving average model (ARIMA). For instance, Kaltenbrunner et al. [5] demonstrated that the number of bikes available in the stations of the community cycling program "Bicing" in Barcelona can be used to discover temporal and geographic movement patterns throughout the city, and analysed human mobility data in an urban region using the autoregressive moving average model. Sharma and Sikka [6] presented the accuracy of forecasting the daily count of rental bikes; they compared and contrasted different autoregressive algorithms and concluded that the ARIMA model was the most accurate in terms of fitting and predicting.
Artificial neural networks (ANNs) have been widely employed in time series forecasting in recent years. When it comes to fitting nonlinear data, ANNs have significant power and robustness since they can estimate a nonlinear function with arbitrary precision [7]. Recurrent neural networks (RNNs) have lately proved to be exceptionally successful in sequence prediction, with popular variations such as long short-term memory (LSTM) [8] and gated recurrent units (GRUs) [9]. Ljubenkov [10] demonstrated a convolutional neural network and an LSTM artificial RNN for urban city planning and for bike-sharing companies to save time and money by predicting bike flows for each node in a possible future subgraph configuration, thereby informing owners of bike-sharing systems to plan accordingly. Furthermore, Chen et al. [11] demonstrated a comparison to the random prediction approach; the proposed study used LSTM to model and forecast Chinese stock returns, thereby increasing the accuracy of stock prediction. Moreover, Fu et al. [12] applied LSTM and GRU neural network approaches to forecast short-term traffic flow, and the results showed that the LSTM and GRU outperform the ARIMA model.
Nevertheless, no single technique is appropriate for all problems, which is undesirable for operations. According to related work, Sharma and Sikka proposed that the ARIMA model produces high accuracy performance. In contrast, Chengcheng Xu et al. [13] demonstrated that the LSTM model provides better performance than the ARIMA model. In addition, Cho et al. [14] explained that the GRU is similar to the LSTM network. On this note, the question remains over which strategy is superior for predicting the demand for bike sharing. Therefore, our research focuses on the hourly demand for bike sharing to forecast future demand. The performance of ARIMA, LSTM and GRU is compared in this study.
METHODOLOGY
Framework overview
This work presents a comprehensive framework built to aid bike-sharing companies in managing their field operations. Ruffieux, Mugellini, and Abou Khaled [15] proposed the superiority of the multi-layer perceptron method for forecasting available bikes, which was created to improve the administration of operations for bike-sharing companies. However, this research focuses on the accuracy of machine learning models and on input factor considerations for accurate forecasting. Figure 1 gives an overview of the effective demand forecasting framework in a bike-sharing system. Since each of the monitors of the prediction system produces predictions on the operator's request, the rebalancing system creates a list of rebalancing tasks for operation. The rebalancing system is based on the forecasted information. The operators can then see the rebalancing missions that need to be completed, as well as a list that must be completed according to the rebalancing algorithm's shortest paths.

Fig. 1 Overview of framework

The forecast is an important part of the overall framework and its precision is critical for enabling bike-sharing companies to identify possible empty stations ahead of time and provide appropriate information to the rebalancing mechanism.
Many variables were considered in the experiment, consisting of the date, season, hour, holiday, weekday, weather, temperature, humidity and windspeed. Some factors may affect the forecasting process, and therefore the factors affecting the dependent variable were analysed. Regression analysis is an analytical statistic used to study causal relationships; the results of the data analysis are used to compare the direction of the influence of each predictor variable on the dependent variable. There is a method for selecting variables in the equation so that the equation can predict the criterion variables as well as possible. This research used stepwise multiple regression analysis to evaluate which of the input variables used in demand forecasting had significant effects on the dependent variable.

Data preparation
This work used data from the "Citi bike" bike-sharing system in Jersey City. The dataset between January 1st, 2020 and December 31st, 2020 contains 333,802 observations, which were used in the following prediction and are shown in Table 1.

Table 1 Description of dataset
Feature      Type            Measurement
Date         Month-day-year  1/1/2020 to 12/31/2020
Season       Nominal scale   Spring = 1, summer = 2, fall = 3, winter = 4
Hour         Ratio scale     0, 1, 2, ..., 23
Holiday      Nominal scale   Holiday = 1, workday = 0
Weekday      Nominal scale   Sunday = 0, Monday = 1, ..., Saturday = 6
Weather      Nominal scale   Sunny and fair = 1, foggy and cloudy = 2, showers = 3, snow = 4
Temperature  Ratio scale     Fahrenheit
Humidity     Ratio scale     %
Windspeed    Ratio scale     mph
Count        Ratio scale     0, 1, 2, ..., 338

The trip data had to be significantly altered before they could be used. The original trip data recorded the facts of each historical journey, arranged in chronological order. In comparison, the structure of the processed data frame is such that each row has the fields Station ID, Date, Hour, Departures and Arrivals. Additionally, weather [16] and holidays [17] were combined with the dataset. Regarding the meteorological data, we carried out a basic feature selection using intuition and retained the group of attributes most relevant to our research. Feature engineering also involved correcting data that had been transformed wrongly, which might otherwise result in erroneous conclusions. Afterward, the data were split for the experiment in a 70:30 ratio into training and testing data, respectively; the sizes of the training and testing sets are presented in Table 2.

Table 2 Measurement values in training and test sets
Dataset       Measurement values
Training set  6149 datasets
Test set      2635 datasets
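To make the preparation step concrete, the following is a minimal pandas sketch of how raw trip records can be aggregated into hourly departure counts per station, merged with weather and holiday features, and split 70:30 into training and test data. The file names and column labels (JC-trips.csv, starttime, start_station_id and so on) are illustrative assumptions, not the authors' actual preprocessing code.

```python
import pandas as pd

# Raw trip log: one row per rental (assumed column names)
trips = pd.read_csv("JC-trips.csv", parse_dates=["starttime"])
trips["date"] = trips["starttime"].dt.normalize()
trips["hour"] = trips["starttime"].dt.hour

# One row per (station, date, hour) with the number of departures in that hour
hourly = (trips.groupby(["start_station_id", "date", "hour"])
               .size()
               .reset_index(name="count"))

# Merge hourly weather and daily holiday flags (assumed file layouts)
weather = pd.read_csv("weather.csv", parse_dates=["date"])    # date, hour, weather, temperature, humidity, windspeed
holidays = pd.read_csv("holidays.csv", parse_dates=["date"])  # date, holiday
data = (hourly.merge(weather, on=["date", "hour"], how="left")
              .merge(holidays, on="date", how="left"))
data["holiday"] = data["holiday"].fillna(0)

# Chronological 70:30 split into training and test sets
split = int(len(data) * 0.7)
train, test = data.iloc[:split], data.iloc[split:]
```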
Input data analysis
Several factors that could be related to and affect the use of the bikes were considered. Nevertheless, including too many factors in forecasting may affect the model internally and cause inaccuracies. To generate a forecasting model, four data assumptions should be verified: normality, linearity, heteroscedasticity and multicollinearity. The major goal of this section is to choose appropriate controlled variables for forecasting. The statistical approach known as multiple linear regression was used to examine the correlation between a single response or dependent variable and two or more controlled or independent variables (Table 3).

Table 3 Summary of model
Model  R      R square  Adjusted R square  Std. error of estimate
1      0.436  0.190     0.190              38.2414
2      0.542  0.294     0.294              35.6995
3      0.588  0.345     0.345              34.38309
4      0.619  0.383     0.383              33.37007
5      0.621  0.386     0.386              33.29624
6      0.623  0.388     0.388              33.24388
a. Predictors: (Constant) and temperature
b. Predictors: (Constant), temperature and hour
c. Predictors: (Constant), temperature, hour and humidity
d. Predictors: (Constant), temperature, hour, humidity and season
e. Predictors: (Constant), temperature, hour, humidity, season and weekday
f. Predictors: (Constant), temperature, hour, humidity, season, weekday and weather

Stepwise regression is a popular approach [18, 19] and most statistical software offer it, which clearly illustrates its demand and, ironically, may inspire researchers to adopt it (Table 4).

Table 4 Stepwise multiple regression coefficient analysis of factors affecting bike usage
Variable     B        Std. error  Beta    t        Sig
(Constant)   -53.785  3.138               -17.138  0.000
Temperature  1.346    0.026       0.534   51.117   0.000
Hour         1.547    0.054       0.252   28.874   0.000
Humidity     -0.456   0.019       -0.219  -24.223  0.000
Season       11.463   0.489       0.242   23.444   0.000
Weekday      1.139    0.178       0.054   6.390    0.000
Weather      -4.024   0.752       -0.047  -5.355   0.000

Efroymson [20] presented automated stages to choose the explanatory factors for a multiple regression model from a set of candidate variables. The candidate variables are examined one by one at each stage, with the t statistics for the coefficients of the variables under consideration used in most cases. Forward selection and backward elimination are combined in the stepwise regression approach: controlled variables are added and removed as needed at each step. When no variable outside the model has a p-value less than the specified Enter value and no variable in the model has a p-value larger than or equal to the specified Remove value, the process has completed its operation.
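The combined forward-entry and backward-removal procedure described above can be sketched as follows. This is a simplified illustration built on statsmodels ordinary least squares; the Enter and Remove thresholds (0.05 and 0.10) and the variable names are assumptions for demonstration, not the exact values used in the study.

```python
import statsmodels.api as sm

def stepwise_select(X, y, p_enter=0.05, p_remove=0.10):
    """Forward entry / backward removal of predictors based on coefficient p-values."""
    selected = []
    while True:
        changed = False
        # Forward step: add the best remaining candidate if its p-value is below the Enter threshold
        remaining = [c for c in X.columns if c not in selected]
        if remaining:
            pvals = {c: sm.OLS(y, sm.add_constant(X[selected + [c]])).fit().pvalues[c]
                     for c in remaining}
            best = min(pvals, key=pvals.get)
            if pvals[best] < p_enter:
                selected.append(best)
                changed = True
        # Backward step: drop any included variable whose p-value is at or above the Remove threshold
        if selected:
            pvalues = sm.OLS(y, sm.add_constant(X[selected])).fit().pvalues.drop("const")
            worst = pvalues.idxmax()
            if pvalues[worst] >= p_remove:
                selected.remove(worst)
                changed = True
        if not changed:
            return selected

# Example: candidate inputs from Table 1 against the hourly count
# features = stepwise_select(train[["temperature", "hour", "humidity", "season",
#                                   "weekday", "weather", "holiday", "windspeed"]],
#                            train["count"])
```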
With the dependent variable log-transformed, the correlation analysis allowed for the rapid identification of dependencies. The values presented in Fig. 2 are the Pearson correlation coefficients. Variables statistically significant at p < 0.05 are indicated by *** and those statistically significant at p < 0.01 are indicated by **. The correlation between each pair of features is shown in Fig. 2. The results show that the count is positively correlated with the hour, weekday, temperature and windspeed, meaning that these variables move in the same direction as the count. The correlations between the count and the season, holiday, weather and humidity are negative, meaning that as one variable increases the other decreases.

Fig. 2 Coefficient of correlation for each feature
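The pairwise correlations summarised in Fig. 2 can be reproduced, for example, with SciPy; the data frame and column names below follow the assumed preparation sketch given earlier.

```python
from scipy.stats import pearsonr

features = ["hour", "weekday", "temperature", "windspeed",
            "season", "holiday", "weather", "humidity"]

# Pearson correlation of each candidate input with the hourly count, plus its p-value
for col in features:
    r, p = pearsonr(train[col], train["count"])
    print(f"{col:12s} r = {r:+.3f}  p = {p:.4f}")
```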
ARIMA
Exponential smoothing and ARIMA models are the strategies most often used for time series forecasting, and they take opposing approaches to the problem: ARIMA models attempt to reflect the autocorrelations of the data, whereas exponential smoothing approaches attempt to characterise the trend and seasonality of the data. In an autoregressive integrated moving average model, the future value of a variable is assumed to be a linear function of several earlier observations and random errors. The flowchart of the ARIMA model is shown in Fig. 3, where p is the number of lags used in the autoregressive part, i.e. the order of the autoregressive model AR(p); q is the number of lags used in the moving average part, i.e. the moving average order MA(q); and d is the degree of differencing, i.e. the number of times past values have been subtracted from the data.

Fig. 3 Flowchart of ARIMA model
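As an illustration, an ARIMA(p, d, q) model of the hourly demand series can be fitted with statsmodels as below; the order (2, 1, 2) is only a placeholder and is not the order identified in the paper.

```python
from statsmodels.tsa.arima.model import ARIMA

# Hourly demand series (assumed to come from the prepared data frame above)
series = train.groupby(["date", "hour"])["count"].sum().to_numpy()

model = ARIMA(series, order=(2, 1, 2))   # p AR lags, d differences, q MA lags
fitted = model.fit()
forecast = fitted.forecast(steps=24)      # forecast the next 24 hours
```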
LSTM
Examining the error flow in the RNN led to the development of the LSTM model [21]. LSTMs have been shown to work well in long-sequence applications. An LSTM layer's memory blocks are a collection of recurrently connected blocks that can be seen as a differentiable variant of the memory chips in digital computers. Each block has one or more memory cells (C_t) that are connected predictably. Three multiplicative units, the input (x_t), output (O_t) and forget (f_t) gates, provide continuous analogues of write, read and reset operations for the cells. When the input gate is activated, the input to the cells is multiplied by it, the output to the net is multiplied by the output gate and the prior cell values are multiplied by the forget gate. The gates are the sole means for the net to communicate with the cells, as shown by the structure of the LSTM in Fig. 4.

Fig. 4 Structure of LSTM

GRU
The GRU approach addresses the issue of timescale dependency by manipulating the flow of information with gate units utilising a variety of cycle units [12]. The GRU technique alters the LSTM's reset gate. It has two gates: a reset gate and an update gate, which controls how much information from the previous step is merged into the input and forget gates. The reset gate is linked to the preceding step's hidden state and, because the GRU lacks an output gate, it exposes all of the memory, as shown in Fig. 5.

Fig. 5 Structure of GRU
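A possible Keras realisation of the LSTM and GRU networks compared here is sketched below. The layer width, input window and dropout rate are assumptions; the Adam learning rate of 0.0001 and the 100 training epochs follow the settings reported in the Experiment section, and X_train/y_train stand for windowed input sequences and their target counts.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, GRU, Dropout, Dense
from tensorflow.keras.optimizers import Adam

def build_model(cell, n_steps=24, n_features=7):
    """Single recurrent layer (LSTM or GRU) followed by dropout and a linear output."""
    model = Sequential([
        cell(64, input_shape=(n_steps, n_features)),   # assumed width of 64 units
        Dropout(0.2),                                   # dropout layer against overfitting
        Dense(1),                                       # hourly demand estimate
    ])
    model.compile(optimizer=Adam(learning_rate=0.0001), loss="mse")
    return model

lstm_model = build_model(LSTM)
gru_model = build_model(GRU)
# lstm_model.fit(X_train, y_train, epochs=100, batch_size=32)
# gru_model.fit(X_train, y_train, epochs=100, batch_size=32)
```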
EXPERIMENT
This research aimed to find the optimal model for demand forecasting of a bike-sharing service. Time series forecasting is an essential field in machine learning since many forecasting problems have a time component. A time series is a collection of observations organised sequentially in time [22]. Time series forecasting has been widely used in a variety of fields, including finance, social sciences and engineering. The ARIMA, LSTM and GRU models were used in this experiment. For LSTM and GRU, the model structure is ordered from the input layer to the hidden layer; both are single neural network models. This experiment used the Adam optimiser [23] to train the models, with the learning rate and number of epochs set to 0.0001 and 100, respectively. To avoid overfitting, a dropout layer was introduced. Each model's experimental design has three replicates [24, 25], with the values averaged in the same way to represent the model. The prediction errors MAE, RMSE and R² were then used to determine how accurately demand had been predicted. From the experiments comparing the three models, the model with the highest accuracy was identified. However, in the correlation analysis, some variables were found to be insignificant. Therefore, an additional experiment was conducted to compare the efficiency of each model with different inputs by eliminating the insignificant variables. Stepwise multiple regression analysis was used to find the factors affecting bike usage. The candidate factors, namely temperature, hour, humidity, season, weekday, weather, holiday and wind speed, were examined to find the factors affecting cycling, as shown in Table 3. Based on the stepwise multiple regression analysis, the results revealed that the factors related to the use of bikes were temperature, hour, humidity, season, weekday and weather. The factors that did not significantly affect the number of bikes used were holiday and wind speed. Therefore, the experiment was conducted after eliminating these factors.

RESULTS
The selection of input variables was carried out with the stepwise regression method. The resulting statistics show which control variables (temperature, hour, humidity, season, weekday and weather) best fit the data. Furthermore, the R² coefficient of determination pertains to the model that was fitted to the data. The summary of the models is given in Table 3. Comparing the models, model 6 showed the highest R² and adjusted R²; thus, model 6 is the best of all the models for fitting the data. As shown in Table 4, the coefficients were obtained from the input data analysis with the stepwise regression method.
This work then adopted a forecasting model for future periods in terms of hourly demand by testing and comparing the three models (ARIMA, LSTM and GRU). We constructed the prediction models and generated hourly forecasts. For the evaluation of the selected algorithms, a detailed approach was devised. The three models were evaluated using the same training and test data, as well as the same set of input parameters. In this study, the results of the models are expressed as the RMSE, MAE and R². The prediction errors of the ARIMA, LSTM and GRU models are shown in Table 5.

Table 5 Comparison of forecast errors of different models
Model  R²    RMSE   MAE
ARIMA  0.32  37.5   24.75
LSTM   0.62  17.58  15.73
GRU    0.77  16.90  15.59
GRU*   0.82  15.44  13.45
*Selected input variables using stepwise regression

Under this assessment, the GRU model had lower prediction errors, outperforming both the ARIMA and LSTM models. Additionally, Fig. 6 depicts some of the predicted findings from the testing dataset.

Fig. 6 Comparison of forecast results between actual and predicted values of each model

Furthermore, it was found that when the input variables selected by the stepwise multiple regression analysis were used instead of all the input data, the accuracy of the forecast improved.
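For reference, the three error measures reported in Table 5 can be computed for any of the fitted models as in this short sketch; y_true and y_pred stand for the held-out hourly counts and a model's forecasts.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def report_errors(y_true, y_pred):
    """MAE, RMSE and R^2 as used in Table 5."""
    mae = mean_absolute_error(y_true, y_pred)
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))
    r2 = r2_score(y_true, y_pred)
    return mae, rmse, r2

# mae, rmse, r2 = report_errors(y_test, gru_forecast)
```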
DISCUSSION
When this approach is followed, the stepwise multiple regression models were substantially more successful. The results of this stepwise application show that employing theoretical expert opinion to select the initial set of input data for forecasting is beneficial.
Other approaches for selecting features have been presented [26]. Beyond procedural regression, the number of features needed to cope with the error grows as the number of features increases, which may not be practicable. It also leads to oversampling and incorrect predictions, owing to unfavourable variable combinations. When feature selection methods are evaluated using out-of-sample data, recursive feature elimination with cross-validation is one of the approaches with a higher chance of being effective.

CONCLUSIONS
Artificial intelligence methods were used to investigate a model for anticipating the demand for bike sharing, with the aim of rebalancing bike-sharing networks by improving the efficiency of demand forecasts for the associated operators. This study compared three models: ARIMA, LSTM and GRU. Using MAE, RMSE and R², we compared the performance of the models. Furthermore, the "Citi bike" dataset experiment in Jersey City combined weather and holiday data. The technique can forecast the demand for bike sharing more precisely by combining such statistics with general information. The experimental results reveal that the GRU model outperforms the ARIMA and LSTM models in terms of efficiency. This research used a stepwise technique for selecting variables, and the accuracy of the resulting model can be very important. The presence of a large number of possible indicators can reduce the accuracy of the model; these problems arise when there are more variables than necessary that may have no relationship with or influence on the dependent variable. However, future studies may require more information regarding the factors that influence the demand for bike sharing, or other statistical techniques to be applied to the dataset.

REFERENCES
[1] Y.Z. Wong, D.A. Hensher, and C. Mulley, "Mobility as a service (MaaS): Charting a future context", Transp. Res. Part A Policy Pract., vol. 131, pp. 5-19, 2020.
[2] D.A. Hensher, C. Mulley, C. Ho, Y. Wong, G. Smith, and J.D. Nelson, Understanding Mobility as a Service (MaaS): Past, present and future. Elsevier, 2020.
[3] S. Heikkilä et al., "Mobility as a service – a proposal for action for the public administration, case Helsinki", 2014.
[4] S. Hietanen, "Mobility as a Service", The new transport model, vol. 12, no. 2, pp. 2-4, 2014.
[5] A. Kaltenbrunner, R. Meza, J. Grivolla, J. Codina, and R. Banchs, "Urban cycles and mobility patterns: Exploring and predicting trends in a bicycle-based public transport system", Pervasive Mob. Comput., vol. 6, no. 4, pp. 455-466, 2010.
[6] Priyavrat, N. Sharma, and G. Sikka, "Autoregressive Techniques for Forecasting Applications", in 2021 2nd International Conference on Secure Cyber Computing and Communications (ICSCCC), 2021.
[7] K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators", Neural Netw., vol. 2, no. 5, pp. 359-366, 1989.
[8] S. Hochreiter and J. Schmidhuber, "Long short-term memory", Neural Computation, vol. 9, no. 8, pp. 1735-1780, 1997.
[9] K.L. Clarkson, "Algorithms for closest-point problems (computational geometry)", Ph.D. dissertation, Stanford University, 1985.
[10] D. Ljubenkov, F. Kon, and C. Ratti, "Optimizing bike sharing system flows using graph mining, convolutional and recurrent neural networks", in 2020 IEEE European Technology and Engineering Management Summit (E-TEMS), 2020.
[11] K. Chen, Y. Zhou, and F. Dai, "A LSTM-based method for stock returns prediction: A case study of China stock market", in 2015 IEEE International Conference on Big Data (Big Data), 2015.
[12] R. Fu, Z. Zhang, and L. Li, "Using LSTM and GRU neural network methods for traffic flow prediction", in 2016 31st Youth Academic Annual Conference of Chinese Association of Automation (YAC), 2016.
[13] C. Xu, J. Ji, and P. Liu, "The station-free sharing bike demand forecasting with a deep learning approach and large-scale datasets", Transp. Res. Part C Emerg. Technol., vol. 95, pp. 47-60, 2018.
[14] K. Cho et al., "Learning phrase representations using RNN encoder-decoder for statistical machine translation", arXiv [cs.CL], 2014.
[15] S. Ruffieux, E. Mugellini, and O. Abou Khaled, "Bike usage forecasting for optimal rebalancing operations in bike-sharing systems", in 2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI), 2018.
[16] "Local weather forecast, news and conditions", Wunderground.com. [Online]. Available: http://www.wunderground.com. [Accessed: 05-Dec-2021].
[17] "CalendarDate.com", Calendardate.com. [Online]. Available: http://www.calendardate.com. [Accessed: 05-Dec-2021].
[18] S.H. Ewaid, S.A. Abed, and S.A. Kadhum, "Predicting the Tigris River water quality within Baghdad, Iraq by using water quality index and regression analysis", Environ. Technol. Innov., vol. 11, pp. 390-398, 2018.
[19] P.-F. Tsai et al., "A classification algorithm to predict chronic pain using both regression and machine learning – A stepwise approach", Appl. Nurs. Res., vol. 62, p. 151504, 2021.
[20] M.A. Efroymson, "Multiple regression analysis", A. Ralston and H.S. Wilf, Eds. New York: Wiley, 1960.
[21] V.E. Sathishkumar and Y. Cho, "A rule-based model for Seoul Bike sharing demand prediction using weather data", Eur. J. Remote Sens., vol. 53, no. sup1, pp. 166-183, 2020.
[22] G.E.P. Box et al., Time Series Analysis: Forecasting and Control. Hoboken, New Jersey: John Wiley & Sons, 2015.
[23] D.P. Kingma and J. Ba, "Adam: A method for stochastic optimization", arXiv preprint arXiv:1412.6980, 2014.
[24] V.E. Sathishkumar, J. Park, and Y. Cho, "Using data mining techniques for bike sharing demand prediction in metropolitan city", Comput. Commun., vol. 153, pp. 353-366, 2020.
[25] R Core Team, "R: A language and environment for statistical computing", Vienna, Austria: R Foundation for Statistical Computing, 2013. URL: http://www.R-project.org.
[26] Y. Fourcade, A.G. Besnard, and J. Secondi, "Paintings predict the distribution of species, or the challenge of selecting environmental predictors and evaluation statistics", Glob. Ecol. Biogeogr., vol. 27, no. 2, pp. 245-256, 2018.

Kanokporn Boonjubut
Shibaura Institute of Technology
Department of Functional Control Systems
Fukasaku 307, Minuma-ku
337-8570 Saitama-shi, Japan
e-mail: nb19103@shibaura-it.ac.jp

Hiroshi Hasegawa
Shibaura Institute of Technology
Department of Machinery and Control Systems
Fukasaku 307, Minuma-ku
337-8570 Saitama-shi, Japan
e-mail: h-hase@shibaura-it.ac.jp

© 2022 Author(s). This is an open access article licensed under the Creative Commons BY 4.0 (https://creativecommons.org/licenses/by/4.0/)



