Identification of Food/Nonfood Visual Stimuli from Event-Related Brain Potentials

Hindawi Applied Bionics and Biomechanics, Volume 2021, Article ID 6472586, 11 pages. https://doi.org/10.1155/2021/6472586

Research Article

Selen Güney (1), Sema Arslan (1), Adil Deniz Duru (2), and Dilek Göksel Duru (3)
(1) Marmara University, Institute of Health Sciences, Istanbul, Turkey
(2) Marmara University, Sports Science Faculty, Istanbul, Turkey
(3) Department of Molecular Biotechnology, Turkish-German University, Istanbul, Turkey

Correspondence should be addressed to Adil Deniz Duru; deniz.duru@marmara.edu.tr

Received 6 June 2021; Accepted 24 August 2021; Published 24 September 2021

Academic Editor: Francesca Cordella

Copyright © 2021 Selen Güney et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract. Although food consumption is one of the most basic human behaviors, the factors underlying nutritional preferences are not yet clear. Classification algorithms can help clarify these factors. This study was aimed at measuring electrophysiological responses to food/nonfood stimuli and applying classification techniques to discriminate the responses using a single-sweep dataset. Twenty-one right-handed male athletes with body mass index (BMI) levels between 18.5 and 25 (mean age: 21.05 ± 2.5 years) participated in this study voluntarily. The participants were asked to focus on food and nonfood images presented randomly on the monitor without performing any motor task, and EEG data were collected using a 16-channel amplifier with a sampling rate of 1024 Hz. The SensoMotoric Instruments (SMI) iView X RED eye-tracking system was used simultaneously with the EEG to measure the participants' attention to the presented stimuli. Three datasets were generated using the amplitude, time-frequency decomposition, and time-frequency connectivity metrics of the P300 and LPP components to separate food and nonfood stimuli. We implemented k-nearest neighbor (kNN), support vector machine (SVM), Linear Discriminant Analysis (LDA), Logistic Regression (LR), Bayesian, decision tree (DT), and Multilayer Perceptron (MLP) classifiers on these datasets. The response to food-related stimuli in the hunger state was discriminated from nonfood with an accuracy close to 78% for each dataset. These results motivate the use of classifier algorithms with features obtained from single-trial measurements in amplitude and time-frequency space instead of more complex ones such as connectivity metrics.
1. Introduction

Although food consumption is one of the most basic human behaviors, the factors underlying nutritional preferences are not yet apparent. Many factors, such as taste, texture, appearance, food deprivation, and the smell of a meal, play an essential role in the attention given to food [1–3]. Several studies point out increased attention to food-related stimuli, mainly due to food deprivation [4, 5]. To understand the neural foundations of a cognitive process such as the attention given to these types of stimuli, it is important to identify both the activated brain regions and the temporal microstructure of the information flow between these regions [6]. Even though imaging methods (Magnetic Resonance Imaging (MRI), Functional Magnetic Resonance Imaging (fMRI), and Positron Emission Tomography (PET)) are very useful for showing the changes in cerebral blood flow that occur during cognitive processing, hemodynamic responses are insufficient to explain the temporal dynamics of fast electrophysiological activity in the neural network [6, 7]. The electroencephalogram (EEG) has a high temporal resolution that allows measurement of the brain's electrical activity [8–10], which varies with the presence of visual, somatosensory, and auditory stimuli [1, 11]. Event-Related Potential (ERP) recordings consist of sudden voltage fluctuations in response to a stimulus [12, 13].

Researchers have observed several ERP components according to the time delay after the occurrence of a stimulus. For instance, the P300 component, measured as a positive waveform approximately 300 ms after the stimulus, has been extensively studied in the literature due to its potential to reveal the dynamics of cognitive processes [14–19]. Moreover, Late Positive Potentials (LPP), observed 550–700 ms after the stimulus, may be the projection of focused attention or detailed stimulus analysis, and they reflect the conscious stimulus recognition phase. The wavelet transform (WT) is one of the methods capable of estimating the ERP components. WT has a significant advantage over classical spectral analysis because it is suitable for the analysis of nonstationary signals in the time-frequency domain, and it can be used to analyze various transient events in biological signals through its representation and feature extraction structure [20]. Each ERP component derived by WT can be associated with different situations and tasks [21–24].

In several studies, ERP components have been elucidated in response to food stimuli. For instance, Hachl et al. [25] conducted a study with a group of subjects who ate their last meal 3 or 6 hours before the ERP measurements, using food images as stimuli. In another study, the effects of attention to food-related word stimuli in the absence of food were investigated [26]. Similarly, Channon and Hayward [27] investigated P300 and LPP responses to food and flower images in the hunger state. Furthermore, many researchers have conducted Stroop studies in which naming the color of food words is used as the task [28–31]. Moreover, Kitamura et al. [32] observed the effect of hypoglycemic glucose drink intake on the P300 response. In summary, the P300 component varies in response to food and nonfood stimuli in the hunger state.
This variation motivated us to investigate the differences that occur in the ERP components extracted from single-epoch electrical recordings.

In recent decades, the detection of mental status via EEG measurements has been performed using machine learning algorithms [33, 34]. In most studies, researchers computed features from the ongoing EEG time series and fed those features to classifiers to detect whether the subject is normal or not [35, 36]. This procedure requires known features, while the modern approach, deep learning, can learn the filters used to classify the labelled measured data. A broad review is given in [37], where brain signals were used as inputs in various problems, including seizure detection, emotion detection, motor imagery identification, and evoked potentials.

In addition, eye-tracking technology is used in attention studies to determine whether the participant pays attention to the presented stimulus. Eye tracking is the name given to a set of methods and techniques used to detect and record eye movement activity [38]. Studies have shown that eye-tracking data provide reliable measures of attention to the stimulus in complex situations [39, 40].

There are a few studies in the literature that classify food-related stimuli [32, 41], but none of the previous studies have examined electrophysiological responses to food-related stimuli using classification techniques. This study was aimed at measuring electrophysiological responses to food/nonfood stimuli and applying classification techniques to discriminate the responses using single-sweep time series.

2. Materials and Methods

2.1. Participants. Twenty-one right-handed male athletes with BMI levels between 18.5 and 25 (mean age: 21.05 ± 2.5 years) participated in this study voluntarily. All participants trained a minimum of 10 hours per week and competed in karate or rowing. None of the participants had a history of disordered food intake, head injuries, neurological or psychiatric disorders, or other illnesses.
2.2. Experimental Design. Participants were asked not to eat after 09.00 pm on the day before the test. We performed the EEG measurements at 09.00-10.00 am, before breakfast. Before the start of the experiment, we asked participants to focus on the food and nonfood images while avoiding large motor movements that could negatively affect the signal. We presented the stimuli randomly using in-house developed software. Standardized, contrast- and color-adjusted images were selected from the study of Charbonnier et al. to minimize the adverse effects of the food images on the ERP [42]. We separated the images according to their nutrient content [43] into five groups; since our aim was not to classify the responses by calorie content, we simply grouped the images as food and nonfood. In the experiment, each image was shown for 800 ms, with a 0.2 s interval between two adjacent stimuli, as shown in Figure 1. The number of neutral images was 28 × 5, while it was 73 × 5 for food images. The resolution of the images was adjusted to 1280 × 1024.

Figure 1: Graphical rendition of the task (each stimulus shown for 0.8 s, followed by a 0.2 s interval).
2.3. Data Collection. We used a 16-channel V-AMP amplifier (Brain Products, Germany) with a sampling rate of 1024 Hz. EEG was collected from the Fp1, Fp2, F3, Fz, F4, P3, P4, Pz, C3, C4, Cz, O1, O2, Oz, T7, and T8 channels, with two additional electrodes serving as reference and ground, as shown in Figure 2. Impedances of the channels were kept below 5 kΩ.

The SensoMotoric Instruments (SMI) iView X RED eye-tracking system was used simultaneously with the EEG. The 22" LCD screen with 1920 × 1080 resolution and the eye-tracker system are shown in Figure 3. The sampling frequency of the SMI eye-tracking system is 60 Hz, and it records eye movements with a 0.5-degree error.

Figure 2: Placement of the 16 channel electrodes on the scalp.

Figure 3: SensoMotoric Instruments (SMI) iView X RED eye tracker and 22" LCD 1920 × 1080 screen.
2.4. Data Analysis. Eye movements were analyzed with the SMI BeGaze (Behavioral and Gaze Analysis) software to check whether the subjects focused on the visual stimuli. Next, noisy components were removed from the EEG signal, and the relevant properties of the data were extracted using signal processing techniques. If the extracted features are not appropriate at this step, inaccurate findings can result; thus, it is necessary to find and extract suitable features from the raw signals to obtain accurate classification results [44, 45]. The last step is the use of various machine learning techniques (such as decision trees and support vector machines) to classify the EEG signal using the characteristics obtained from the feature extraction process.

Preprocessing is essential for improving the signal-to-noise ratio of the EEG. We applied a low-pass filter at 40 Hz and a high-pass filter at 0.1 Hz. Artifacts were marked on the EEG data and removed from further processing. After the preprocessing step, a total of 4754 single epochs remained. Next, the EEG data were epoched from 200 ms before to 800 ms after each stimulus marker.
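The filtering and epoching steps described above can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the authors' actual pipeline; the function names and filter order are our own assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 1024  # sampling rate (Hz), as in the recording setup


def bandpass(eeg, low=0.1, high=40.0, fs=FS, order=4):
    """Zero-phase 0.1-40 Hz band-pass filter applied channel-wise."""
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, eeg, axis=-1)


def epoch(eeg, stim_samples, fs=FS, pre=0.2, post=0.8):
    """Cut [-200 ms, +800 ms] windows around each stimulus marker."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    windows = [eeg[:, s - n_pre:s + n_post]
               for s in stim_samples
               if s - n_pre >= 0 and s + n_post <= eeg.shape[1]]
    return np.stack(windows)  # (n_epochs, n_channels, n_samples)


# synthetic 16-channel recording, 10 s long, with 8 stimulus markers
rng = np.random.default_rng(0)
eeg = rng.standard_normal((16, 10 * FS))
markers = np.arange(1024, 9 * FS, FS)
epochs = epoch(bandpass(eeg), markers)
print(epochs.shape)
```

Second-order-section (`sos`) filtering is used because a 0.1 Hz cutoff at fs = 1024 Hz is numerically fragile in transfer-function form.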
In the second step, for both food and nonfood images, the features were extracted from the data collected from the 21 subjects. The feature vector consists of both time and frequency domain features. The datasets of features obtained from the EEG for food and nonfood images are the amplitude, time-frequency power, and time-frequency connectivity metrics, formed as follows. DataSet1: 16 attributes (16 electrodes) × 4754 rows computed for the P300 and LPP amplitudes. DataSet2: the wavelet transform (WT) is used to compute 16 attributes (16 electrodes) × 4754 rows for each frequency band (delta, theta, alpha, beta, and gamma) for the P300 and LPP. DataSet3: wavelet coherence is applied to form 120 attributes (15 × (15 + 1)/2 electrode pairs) × 4754 rows in each frequency band (delta, theta, alpha, beta, and gamma) for the P300 and LPP.
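A DataSet2-style band-power feature can be sketched with a hand-rolled complex Morlet wavelet, as below. The wavelet parameters (five cycles, five frequencies per band) are our own assumptions for illustration, not the values used in the study.

```python
import numpy as np

FS = 1024
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 40)}


def morlet(freq, fs=FS, n_cycles=5):
    """Complex Morlet wavelet centred at `freq` Hz."""
    sigma_t = n_cycles / (2 * np.pi * freq)
    t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
    return np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))


def band_power(signal, band, fs=FS, n_freqs=5):
    """Mean time-frequency power of one channel within a frequency band."""
    lo, hi = BANDS[band]
    powers = []
    for f in np.linspace(lo, hi, n_freqs):
        coef = np.convolve(signal, morlet(f, fs), mode="same")
        powers.append(np.abs(coef) ** 2)
    return float(np.mean(powers))


# one 16-channel epoch -> 16 alpha-band attributes (one row of DataSet2)
rng = np.random.default_rng(1)
epoch = rng.standard_normal((16, 1024))
features = np.array([band_power(ch, "alpha") for ch in epoch])
print(features.shape)
```

Stacking such rows over all 4754 epochs yields the 16-attribute-per-band matrix described for DataSet2.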
The k-nearest neighbor (kNN), support vector machine (SVM), Linear Discriminant Analysis (LDA), Logistic Regression (LR), Bayesian classifier, decision tree (DT), and Multilayer Perceptron (MLP) classifiers were implemented on each dataset. The first classifier, kNN, is a nonparametric supervised learning algorithm: the test sample, described by its extracted features, is assigned to the most appropriate class according to its proximity to the k nearest neighbors [46]. The second classifier, SVM, uses a separating hyperplane to determine the classes; the chosen hyperplane is the one that maximizes the margin, i.e., the distance to the nearest training points of each class. As a linear classifier, LDA (also known as Fisher's LDA) can be viewed as an enhanced version of principal component analysis. The Bayesian classifier is a supervised statistical method that uses probability to assign the most likely class to a given example described by its feature vector. MLP is a classifier based on artificial neural networks. The logistic regression used in this study is a statistical technique for binary classification. In DT, a tree-like structure containing the classification rules is produced using the mutual information hidden in the dataset. All of these classifiers were implemented in Python using the scikit-learn package.
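The seven-classifier comparison could be set up in scikit-learn as sketched below. Synthetic data stands in for the real feature matrices, and the hyperparameters shown are scikit-learn defaults (plus small adjustments for runtime), not the settings reported in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# stand-in for DataSet1: 16 attributes per epoch, binary food/nonfood label
X, y = make_classification(n_samples=300, n_features=16, random_state=0)

classifiers = {
    "k-NN": KNeighborsClassifier(),
    "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
    "NB": GaussianNB(),
    "SVM": SVC(),
    "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                         random_state=0),
}

# 10-fold cross-validated accuracy per classifier, features standardized
scores = {}
for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)
    scores[name] = cross_val_score(pipe, X, y, cv=10).mean()
    print(f"{name}: {scores[name]:.2f}")
```

The 10-fold cross-validation mirrors the evaluation reported for Figure 9.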
3. Results

The heat maps of the food/nonfood images obtained from the eye-tracking data confirm that the participants focused their attention on the presented images during the study, as shown in Figures 4 and 5.

Figure 4: Heat map of food images.

Figure 5: Heat map of nonfood images.

The grand average ERP components obtained from the 21 subjects are summarized in terms of P300 and LPP amplitudes in Table 1 and Figure 6. We investigated the amplitude differences arising from the presence of the food and nonfood stimuli using paired t-tests for each electrode. The Oz and T7 electrodes differed significantly between food and nonfood stimuli in the absence of a multiple-test correction procedure, while none of the electrodes' LPP components differed between stimuli. This result motivated us to probe the mechanism of the measured ERP through frequency decomposition. The increased occipital P300 activity observed for food stimuli agrees with our previous studies [47].

After the frequency decomposition of the EEG time series, we computed statistical tests to elucidate the differences between food and nonfood stimuli. For the P300 component, the Pz (p < 0.032) and Oz (p < 0.002) electrodes in the delta band, T7 (p < 0.03) in the theta band, and Fp2 (p < 0.014) in the alpha band differed between food and nonfood stimuli. For the LPP, differences were observed only in the alpha band, for Fp2 (p < 0.038), Fz (p < 0.016), T7 (p < 0.025), and T8 (p < 0.041).

Furthermore, we computed the coherence between the electrodes in each frequency band and performed t-tests to check the significance of the differences between food and nonfood stimuli. In the theta band, the P300 coherence between Fp1 and Fp2 (p < 0.0003) and, in the delta band, the LPP coherence of Fp2-Fz (p < 0.00037) differed between stimuli. After this descriptive investigation of the features, we focused on the classification procedures.

We achieved accuracy values close to 80% for the discrimination of the electrophysiological responses to food-related versus nonfood stimuli in the hunger state, using various classification algorithms on the datasets. The classification accuracy values are summarized in Tables 2–4 for the amplitudes of P300/LPP (DataSet1), the time-frequency-derived components of P300/LPP (DataSet2), and the time-frequency connectivity metrics of the electrodes for P300/LPP (DataSet3), respectively. A sample topography image for P300 and LPP is shown in Figure 7, while topographies of the different time-frequency components are visualized in Figure 8. We repeated the classification procedures on individual subjects' data and report the results (mean and standard deviation) in Table 5. In Figure 9, the classification accuracy values of all algorithms are visualized.
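The electrode-wise paired comparison described above can be sketched with SciPy. The data here are synthetic, with an effect deliberately injected at Oz purely for illustration; this is not the study's data or exact procedure.

```python
import numpy as np
from scipy.stats import ttest_rel

CHANNELS = ["Fp1", "Fp2", "F3", "Fz", "F4", "P3", "P4", "Pz",
            "C3", "C4", "Cz", "O1", "O2", "Oz", "T7", "T8"]

# synthetic per-subject mean P300 amplitudes: (21 subjects, 16 channels)
rng = np.random.default_rng(2)
food = rng.normal(0.0, 1.0, (21, 16))
nonfood = rng.normal(0.0, 1.0, (21, 16))
food[:, CHANNELS.index("Oz")] += 2.0  # illustrative food > nonfood effect

# paired t-test per electrode (same subjects under both conditions)
pvals = {}
for i, ch in enumerate(CHANNELS):
    stat, p = ttest_rel(food[:, i], nonfood[:, i])
    pvals[ch] = p

# electrodes surviving an uncorrected p < 0.05 threshold
significant = [ch for ch, p in pvals.items() if p < 0.05]
print(significant)
```

With 16 uncorrected tests at p < 0.05, some false positives are expected by chance, which is why the paper flags the absence of a multiple-test correction.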
Table 1: Amplitudes of the P300 and LPP components (mean/std, microvolts). * marks electrodes differing between food and nonfood (p < 0.05, uncorrected).

Channel | P300 Food    | P300 Non-Food | p      | LPP Food     | LPP Non-Food  | p
Fp1     | 1.205/1.144  | 1.623/1.119   | 0.3695 | 2.315/1.142  | 2.114/1.091   | 0.7773
Fp2     | -0.027/1.172 | 0.054/1.135   | 0.8795 | 1.020/1.171  | 0.445/1.138   | 0.3008
F3      | -6.537/1.155 | -6.286/1.14   | 0.5663 | -5.781/1.173 | -5.822/1.142  | 0.9162
Fz      | 7.298/1.008  | 7.462/1.001   | 0.7081 | 6.812/1.016  | 6.413/0.992   | 0.3246
F4      | 0.721/1.014  | 0.676/1.019   | 0.9343 | 1.368/1.026  | 1.438/1.006   | 0.879
P3      | 4.107/1.008  | 4.461/0.989   | 0.3675 | 4.955/1.030  | 5.054/1.015   | 0.8533
P4      | -15.839/1.282| -15.967/1.3   | 0.8282 | -15.452/1.3  | -14.816/1.329 | 0.2634
Pz      | -8.574/1.186 | -8.037/1.193  | 0.3823 | -8.047/1.2   | -8.468/1.197  | 0.4037
C3      | 2.556/0.960  | 3.079/0.954   | 0.2059 | 2.109/0.964  | 1.722/0.963   | 0.485
C4      | -1.077/0.946 | -1.092/0.932  | 0.9672 | -1.349/0.955 | -1.37/0.938   | 0.9524
Cz      | 7.233/0.963  | 7.405/0.949   | 0.7175 | 6.215/0.964  | 6.177/0.940   | 0.9365
O1      | 1.193/0.999  | 0.899/0.996   | 0.3825 | 0.657/1.006  | 0.739/1.035   | 0.8176
O2      | 4.099/0.982  | 3.856/0.989   | 0.5896 | 3.194/0.990  | 3.286/0.997   | 0.8204
Oz*     | 5.218/0.952  | 4.275/0.943   | 0.0122 | 4.752/0.958  | 4.681/0.953   | 0.8566
T7*     | -1.646/1.187 | -2.662/1.151  | 0.0394 | -2.08/1.19   | -1.683/1.192  | 0.5251
T8      | 0.069/1.171  | 0.254/1.123   | 0.7408 | -0.688/0.091 | 1.149/1.185   | 0.2419

Figure 6: P300 and LPP amplitudes for food and nonfood stimuli.
Table 2: Accuracy of classifiers for P300 and LPP amplitude (%).

Method | P300 | LPP
k-NN   | 76   | 76
LR     | 78   | 77
DT     | 65   | 66
LDA    | 78   | 77
NB     | 68   | 68
SVM    | 78   | 77
MLP    | 77   | 76

Table 3: Accuracy of classifiers for P300 and LPP time-frequency power in each band (%).

Method | Delta P300/LPP | Theta P300/LPP | Alpha P300/LPP | Beta P300/LPP
k-NN   | 77 / 76        | 76 / 77        | 77 / 76        | 75 / 77
LR     | 77 / 76        | 76 / 77        | 78 / 76        | 76 / 78
DT     | 62 / 62        | 63 / 63        | 66 / 64        | 63 / 68
LDA    | 77 / 76        | 76 / 77        | 78 / 76        | 76 / 78
NB     | 74 / 73        | 76 / 76        | 78 / 76        | 76 / 78
SVM    | 77 / 76        | 76 / 77        | 78 / 76        | 76 / 78
MLP    | 77 / 75        | 76 / 77        | 78 / 76        | 76 / 77

Table 4: Accuracy of classifiers for P300 and LPP coherence in each band (%).

Method | Delta P300/LPP | Theta P300/LPP | Alpha P300/LPP | Beta P300/LPP
k-NN   | 77 / 76        | 76 / 77        | 77 / 77        | 77 / 77
LR     | 77 / 77        | 77 / 77        | 77 / 77        | 77 / 77
DT     | 64 / 62        | 63 / 64        | 63 / 64        | 64 / 65
LDA    | 77 / 77        | 77 / 77        | 77 / 77        | 77 / 77
NB     | 65 / 67        | 66 / 67        | 69 / 71        | 74 / 74
SVM    | 77 / 77        | 77 / 77        | 78 / 78        | 77 / 78
MLP    | 69 / 74        | 73 / 74        | 62 / 73        | 70 / 73
Figure 7: Topographies of the amplitude values of P300 and LPP.

Figure 8: Topographies of the power values of P300 and LPP for each frequency band (μV) (δ 0.5-4 Hz, θ 4-8 Hz, α 8-13 Hz, and β 13-30 Hz).

4. Discussion

To our knowledge, the present study is the first to classify the electrophysiological responses to food and nonfood stimuli in a hunger state. The first dataset consists of the amplitudes of the P300 and LPP components from single epochs, formed by pooling the rows computed for each subject. As stated by Blankertz et al. [48], investigating ERP components from single-trial measurements is a complex problem because of trial variability and background noise. Thus, each row was normalized to remove amplitude differences across subjects and single-trial epochs. In the hunger state, P300 and LPP amplitudes have been found to differ between food and nonfood stimuli in posterior regions [49]. Similarly, Geisler and Polich reported P300 differences due to food deprivation [31]. In contradiction to these findings, P300 changes were not observed when participants ingested a hypoglycemic glucose drink [31]. In another study, the LPP increased when responses to food images and flower images were compared; in that study, the P300 amplitude increased over the occipital, temporal, and centroparietal areas [26]. In our study, the maximum classification accuracy was 78% when the amplitudes of the P300 and LPP derived from single-trial measurements were used as features, separately. The differences in the P300 or LPP components in the presence of food/nonfood stimuli varied, as reported in previous studies. In ERP studies, averaging the responses increases the signal-to-noise ratio and enhances the contrast between the cases.

Nevertheless, within the scope of our study, a remarkable accuracy value (78%) was obtained from the use of single-trial P300 and LPP amplitude components, separately. In the ERP literature, a classification study reached an average accuracy of 86% based on the N170 component; in that study, single-trial responses to pictures with positive and negative emotional content were the input data to the classifier [50]. Single-trial EEG measurements can provide valuable information in the presence of adequate contrast mechanisms. For instance, high classification accuracy is achieved when resting-state EEG data are compared with the brain dynamics measured during an increased mental workload state [51].
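The row normalization mentioned above can be sketched as a per-row z-score; the exact normalization used by the authors is not specified, so this is one plausible reading.

```python
import numpy as np


def normalize_rows(X):
    """Z-score each row (one single-trial feature vector) independently."""
    mu = X.mean(axis=1, keepdims=True)
    sd = X.std(axis=1, keepdims=True)
    return (X - mu) / np.where(sd == 0, 1.0, sd)  # guard zero-variance rows


# pooled single-trial rows: 4754 epochs x 16 electrode features
rng = np.random.default_rng(3)
X = rng.normal(5.0, 2.0, (4754, 16))
Xn = normalize_rows(X)
print(np.allclose(Xn.mean(axis=1), 0.0))  # prints True
```

After this step, every row has zero mean and unit variance, so between-subject and between-epoch amplitude offsets no longer dominate the classifiers.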
Table 5: The mean ± standard deviation of accuracy values computed from each individual subject (%).

Feature              | k-NN        | LR          | DT          | LDA         | NB          | SVM         | MLP
Dataset 1
P300                 | 73.7 ± 1.2  | 75.1 ± 1.3  | 71.0 ± 2.9  | 75.1 ± 1.6  | 71.7 ± 2.1  | 73.9 ± 5.7  | 61.6 ± 6.4
LPP                  | 74.1 ± 1.4  | 75.0 ± 1.1  | 70.6 ± 2.5  | 75.0 ± 1.3  | 71.8 ± 2.0  | 76.0 ± 3.1  | 61.5 ± 8.9
Dataset 2
P300 (Delta)         | 73.6 ± 1.6  | 75.3 ± 1.6  | 69.9 ± 2.9  | 74.9 ± 1.5  | 71.8 ± 2.0  | 77.4 ± 0.0  | 58.6 ± 13.5
P300 (Theta)         | 73.6 ± 1.8  | 74.9 ± 1.5  | 70.0 ± 2.5  | 74.9 ± 1.4  | 71.3 ± 2.6  | 77.4 ± 0.0  | 54.1 ± 13.7
P300 (Alpha)         | 73.5 ± 1.8  | 74.8 ± 1.4  | 70.7 ± 2.6  | 74.9 ± 1.3  | 70.9 ± 2.2  | 77.4 ± 0.0  | 59.2 ± 9.6
P300 (Beta)          | 73.4 ± 1.7  | 75.0 ± 1.7  | 70.9 ± 2.6  | 74.8 ± 1.5  | 72.5 ± 2.3  | 77.4 ± 0.0  | 57.7 ± 9.6
LPP (Delta)          | 73.7 ± 2.2  | 65.2 ± 2.6  | 69.2 ± 3.3  | 59.3 ± 3.2  | 62.1 ± 3.3  | 70.0 ± 3.0  | 58.1 ± 10.5
LPP (Theta)          | 73.7 ± 1.5  | 67.7 ± 2.5  | 69.1 ± 3.0  | 59.7 ± 3.4  | 61.2 ± 5.1  | 74.8 ± 1.5  | 59.9 ± 9.7
LPP (Alpha)          | 73.2 ± 2.0  | 66.7 ± 2.7  | 68.6 ± 2.7  | 59.7 ± 2.8  | 60.7 ± 5.0  | 74.0 ± 2.2  | 56.1 ± 8.9
LPP (Beta)           | 73.6 ± 1.8  | 67.3 ± 2.6  | 67.6 ± 2.8  | 59.8 ± 2.9  | 61.8 ± 5.7  | 75.6 ± 1.5  | 61.7 ± 7.1
Dataset 3
P300 (Delta) Coh     | 73.3 ± 2.0  | 65.7 ± 2.6  | 69.8 ± 2.8  | 58.4 ± 3.3  | 60.8 ± 4.2  | 69.3 ± 2.5  | 55.0 ± 9.6
P300 (Theta) Coh     | 73.6 ± 1.4  | 67.6 ± 3.6  | 68.5 ± 3.3  | 58.1 ± 4.3  | 61.8 ± 4.4  | 74.0 ± 2.2  | 58.1 ± 10.9
P300 (Alpha) Coh     | 73.4 ± 1.7  | 66.4 ± 3.0  | 68.4 ± 2.3  | 59.6 ± 3.3  | 58.9 ± 5.9  | 73.4 ± 1.7  | 58.8 ± 7.4
P300 (Beta) Coh      | 74.1 ± 2.1  | 67.2 ± 2.4  | 68.9 ± 2.5  | 60.1 ± 3.9  | 59.8 ± 5.4  | 73.3 ± 2.3  | 58.5 ± 9.6
LPP (Delta) Coh      | 73.7 ± 2.2  | 65.2 ± 2.6  | 69.2 ± 3.3  | 59.3 ± 3.2  | 62.1 ± 3.3  | 70.0 ± 3.0  | 58.1 ± 10.5
LPP (Theta) Coh      | 73.7 ± 1.5  | 67.7 ± 2.5  | 69.1 ± 3.0  | 59.7 ± 3.4  | 61.2 ± 5.1  | 74.8 ± 1.5  | 59.9 ± 9.7
LPP (Alpha) Coh      | 73.2 ± 2.0  | 66.7 ± 2.7  | 68.6 ± 2.7  | 59.7 ± 2.8  | 60.7 ± 5.0  | 74.0 ± 2.2  | 56.1 ± 8.9
LPP (Beta) Coh       | 73.6 ± 1.8  | 67.3 ± 2.6  | 67.6 ± 2.8  | 59.8 ± 2.9  | 61.8 ± 5.7  | 75.6 ± 1.5  | 61.7 ± 7.1

Figure 9: Classification accuracy for all algorithms with 10-fold cross-validation.

In our study, the similar accuracy values obtained across several techniques exhibit the limitation of the stimulus identification. DT yields the lowest classification accuracy, which might be due to the low number of levels of the tree.

For ERP data collection, one normally needs to average several responses to the same or similar stimuli; thus, conducting ERP experiments is a time-consuming process. In our study, by contrast, we concentrated only on single sweeps lasting less than a second, so the data needed for the testing phase of the classification are limited only by physiological mechanisms. Therefore, for real-time implementation, the minimum detection time can be thought of as the time needed to compute the P300 and LPP features. On the other hand, the classification procedures include a training phase in which several realizations of the labelled data are used. For the estimation of the computational complexity, the number of features (f) and the number of samples (n) play a crucial role. For instance, in the test phase of kNN the complexity is directly related to f × n, while in DT it depends only on f. Since the training complexity is on the order of the square of the sample size, the training phase is time-consuming for DT, MLP, and SVM, whereas LR is much faster. When we pool the data, our sample size grows into the thousands.
5. Conclusion

In the ERP literature, the common practice is to analyze the electrical activity in different frequency bands. Thus, within the scope of this study, the time series were decomposed into time-frequency space using the wavelet transform. Moreover, the connectivity approach was applied to the multichannel ERP measurements in the time windows of the P300 and LPP to extract coherence information. Based on our findings, we propose that the use of complex features is not necessary, since they do not outperform the basic amplitude features.

There are still many gaps in our understanding of the brain responses to visual stimuli. Responses to visual stimuli cannot yet be classified with high accuracy, whereas classification is more straightforward in mental illness detection or motor imagery studies. Thus, future studies should focus on the feature engineering side of EEG. In particular, deep learning with convolutional neural networks can be adopted to learn spatial filters on the topography images. This may allow researchers to extract valuable information from the measured ERP signals.

Data Availability

The EEG and eye tracker data used to support the findings of this study are available from the corresponding author upon request.

Ethical Approval

The study was approved by the Ethical Review Board of the Medical Faculty, Marmara University (approval number 09.2018.380).

Consent

Informed consent was obtained from all individual participants included in the study prior to measurement.

Conflicts of Interest

The authors declare no conflict of interest directly related to the submitted work.

Acknowledgments

This work was supported by the Research Fund of Marmara University (Project No. SAG-A-100713-0296). The article processing charge was funded by Hindawi.

References
[1] L. J. Karhunen, E. J. Vanninen, J. T. Kuikka, R. I. Lappalainen, J. Tiihonen, and M. I. Uusitupa, "Regional cerebral blood flow during exposure to food in obese binge eating women," Psychiatry Research: Neuroimaging, vol. 99, no. 1, pp. 29–42, 2000.
[2] W. D. Killgore, A. D. Young, L. A. Femia, P. Bogorodzki, J. Rogowska, and D. A. Yurgelun-Todd, "Cortical and limbic activation during viewing of high- versus low-calorie foods," NeuroImage, vol. 19, no. 4, pp. 1381–1394, 2003.
[3] N. Tashiro, H. Sugata, T. Ikeda et al., "Effect of individual food preferences on oscillatory brain activity," Brain and Behavior, vol. 9, no. 5, p. e01262, 2019.
[4] W. Plihal, C. Haenschel, P. Hachl, J. Born, and R. Pietrowsky, "The effect of food deprivation on ERP during identification of tachistoscopically presented food-related words," Journal of Psychophysiology, vol. 15, no. 3, pp. 163–172, 2001.
[5] J. Sänger, "Can't take my eyes off you - How task irrelevant pictures of food influence attentional selection," Appetite, vol. 133, pp. 313–323, 2019.
[6] S. A. Hillyard and L. Anllo-Vento, "Event-related brain potentials in the study of visual selective attention," Proceedings of the National Academy of Sciences, vol. 95, no. 3, pp. 781–787, 1998.
[7] P. T. Fox and M. G. Woldorff, "Integrating human brain maps," Current Opinion in Neurobiology, vol. 4, no. 2, pp. 151–156, 1994.
[8] P. Ritter and A. Villringer, "Simultaneous EEG-fMRI," Neuroscience & Biobehavioral Reviews, vol. 30, no. 6, pp. 823–838, 2006.
[9] R. Srinivasan, W. R. Winter, and P. L. Nunez, "Source analysis of EEG oscillations using high-resolution EEG and MEG," Progress in Brain Research, vol. 159, pp. 29–42, 2006.
[10] B. M. Sayers, H. A. Beagley, and W. R. Henshall, "The mechanism of auditory evoked EEG responses," Nature, vol. 247, no. 5441, pp. 481–483, 1974.
[11] K. Elf, E. Ronne-Engström, R. Semnic, E. Rostami-Berglund, J. Sundblom, and M. Zetterling, "Continuous EEG monitoring after brain tumor surgery," Acta Neurochirurgica, vol. 161, no. 9, pp. 1835–1843, 2019.
[12] S. J. Luck, G. F. Woodman, and E. K. Vogel, "Event-related potential studies of attention," Trends in Cognitive Sciences, vol. 4, no. 11, pp. 432–440, 2000.
[13] H. T. Schupp, T. Flaisch, J. Stockburger, and M. Junghöfer, "Emotion and attention: event-related brain potential studies," Progress in Brain Research, vol. 156, pp. 31–51, 2006.
[14] T. W. Picton, "The P300 wave of the human event-related potential," Journal of Clinical Neurophysiology, vol. 9, no. 4, pp. 456–479, 1992.
[15] K. McDowell, S. E. Kerick, D. L. Santa Maria, and B. D. Hatfield, "Aging, physical activity, and cognitive processing: an examination of P300," Neurobiology of Aging, vol. 24, no. 4, pp. 597–606, 2003.
[16] V. Dodin and J. L. Nandrino, "Cognitive processing of anorexic patients in recognition tasks: an event-related potentials study," International Journal of Eating Disorders, vol. 33, no. 3, pp. 299–307, 2003.
[17] Bressler, The Handbook of Brain Theory and Neural Networks, MIT Press, London, 2003.
[18] S. J. Luck and Kappenman, The Oxford Handbook of Event-Related Potential Components, Oxford University Press, 2011.
[19] M. Giraldo, G. Buodo, and M. Sarlo, "Food processing and emotion regulation in vegetarians and omnivores: An event-related potential investigation," Appetite, vol. 141, p. 104334, 2019.
[20] L. J. Karhunen, E. J. Vanninen, J. T. Kuikka, R. I. Lappalainen, J. Tiihonen, and M. I. J. Uusitupa, "Regional cerebral blood flow during exposure to food in obese binge eating women," Neuroimaging, vol. 99, no. 1, pp. 120–124, 2006.
[21] B. Kopp, F. Rist, and U. Mattler, "N200 in the flanker task as a neurobehavioral tool for investigating executive control," Psychophysiology, vol. 33, no. 3, pp. 282–294, 1996.
[22] E. K. Vogel and S. J. Luck, "The visual N1 component as an index of a discrimination process," Psychophysiology, vol. 37, no. 2, pp. 190–203, 2000.
[23] S. Iceta, J. Benoit, P. Cristini et al., "Attentional bias and response inhibition in severe obesity with food disinhibition: a study of P300 and N200 event-related potential," International Journal of Obesity, vol. 44, 2020.
[24] M. W. Geisler and J. Polich, "P300 and time of day: circadian rhythms, food intake, and body temperature," Biological Psychology, vol. 31, no. 2, pp. 117–136, 1990.
[25] P. Hachl, C. Hempel, and R. Pietrowsky, "ERPs to stimulus identification in persons with restrained eating behavior," International Journal of Psychophysiology, vol. 49, no. 2, pp. 111–121, 2003.
[26] J. Stockburger, R. Schmälzle, T. Flaisch, F. Bublatzky, and H. T. Schupp, "The impact of hunger on food cue processing: an event-related brain potential study," NeuroImage, vol. 47, no. 4, pp. 1819–1829, 2009.
[27] S. Channon and A. Hayward, "The effect of short-term fasting on processing of food cues in normal subjects," International Journal of Eating Disorders, vol. 9, no. 4, pp. 447–452, 1990.
[28] E. H. Lavy and M. A. van den Hout, "Attentional bias for appetitive cues: effects of fasting in normal subjects," Behavioural and Cognitive Psychotherapy, vol. 21, no. 4, pp. 297–310, 1993.
[29] K. S. Dobson and D. J. Dozois, "Attentional biases in eating disorders: a meta-analytic review of Stroop performance," Clinical Psychology Review, vol. 23, no. 8, pp. 1001–1022, 2004.
[30] S. Hollitt, E. Kemps, M. Tiggemann, E. Smeets, and J. S. Mills, "Components of attentional bias for food cues among restrained eaters," Appetite, vol. 54, no. 2, pp. 309–313, 2010.
[31] M. W. Geisler and J. Polich, "P300 is unaffected by glucose increase," Biological Psychology, vol. 37, no. 3, pp. 235–245, 1994.
[32] K. Kitamura, T. Yamasaki, and K. Aizawa, "Food log by analyzing food images," in Proceedings of the 16th ACM International Conference on Multimedia, pp. 999–1000, 2008.
[33] Y. Zhang, G. Zhou, J. Jin, Q. Zhao, X. Wang, and A. Cichocki, "Sparse Bayesian classification of EEG for brain-computer interface," IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 11, pp. 2256–2267, 2016.
[34] Y. Zhang, G. Zhou, Q. Zhao, J. Jin, X. Wang, and A. Cichocki, "Spatial-temporal discriminant analysis for ERP-based brain-computer interface," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 21, no. 2, pp. 233–243, 2013.
[35] D. G. Duru and A. D. Duru, "Classification of Event Related Potential Patterns using Deep Learning," in 2018 Medical Technologies National Congress (TIPTEKNO), pp. 1–4, 2018.
[36] S. A. Shaban, O. N. Ucan, and A. D. Duru, "Classification of lactate level using resting-state EEG measurements," Applied Bionics and Biomechanics, vol. 2021, Article ID 6662074, 2021.
[37] X. Zhang, L. Yao, X. Wang, J. J. M. Monaghan, D. McAlpine, and Y. Zhang, "A survey on deep learning-based non-invasive brain signals: recent advances and new frontiers," Journal of Neural Engineering, vol. 18, 2020.
[38] M. L. Mele and S. Federici, "Gaze and eye-tracking solutions for psychological research," Cognitive Processing, vol. 13, no. S1, pp. 261–265, 2012.
[39] E. Koç, O. Bayat, D. G. Duru, and A. D. Duru, "Design of Brain Computer Interface Based on Eye Movements," International Journal of Engineering Research and Development, vol. 12, no. 1, pp. 176–188.
[40] G. Lohse and E. Johnson, "A comparison of two process tracing methods for choice tasks," Organizational Behavior and Human Decision Processes, vol. 68, no. 1, pp. 28–43, 1996.
[41] H. Kagaya and K. Aizawa, "Highly accurate food/non-food image classification based on a deep convolutional neural network," in International Conference on Image Analysis and Processing, pp.
350–357, 2015. [42] L. Charbonnier, F. van Meer, L. N. van der Laan, M. A. Viergever, and P. A. Smeets, “Standardized food images: a photographing protocol and image database,” Appetite, vol. 96, pp. 166–173, 2016. [43] S. E. de Bruijn, Y. C. de Vries, C. de Graaf, S. Boesveldt, and G. Jager, “The reliability and validity of the Macronutrient and Taste Preference Ranking Task: a new method to measure food preferences,” Food Quality and Preference, vol. 57, pp. 32–40, 2017. [44] H. U. Amin, W. Mumtaz, A. R. Subhani, M. N. M. Saad, and A. S. Malik, “Classification of EEG signals based on pattern recognition approach,” Frontiers in Computational Neurosci- ence, vol. 11, 2017. [45] M. Ahmed, A. Mohamed, O. N. Uçan, O. Bayat, and A. D. Duru, “Classification of resting-state status based on sample entropy and power spectrum of electroencephalography (EEG),” Applied Bionics and Biomechanics, vol. 2020, Article ID 8853238, 2020. [46] D. Torse, V. Desai, and R. Khanai, “A review on seizure detec- tion systems with emphasis on multi-domain feature extrac- tion and classification using machine learning, BRAIN,” Broad Research in Artificial Intelligence and Neuroscience, vol. 8, no. 4, pp. 109–129, 2017. [47] S. Arslan, S. Güney, K. Tan, S. B. Yücel, H. B. Çotuk, and A. D. Duru, “Event related potential responses to food pictures in hunger,” in 2018 Electric Electronics, Computer Science, Bio- medical Engineerings' Meeting (EBBT), 2018. [48] B. Blankertz, S. Lemm, M. Treder, S. Haufe, and K. R. Müller, “Single-trial analysis and classification of ERP components – A tutorial,” NeuroIm- age, vol. 56, no. 2, pp. 814–825, 2011. [49] J. Polich and A. Kok, “Cognitive and biological determinants of P300: an integrative review,” Biological Psychology, vol. 41, no. 2, pp. 103–146, 1995. [50] T. Yin, Z. Huiling, P. Yu, and L. Jinzhao, “Classification for single-trial N170 during responding to facial picture with emotion,” Frontiers in Computational Neuroscience, vol. 12, [51] A. D. 
Duru, “Determination of increased mental workload condition from EEG by the use of classification techniques,” International Journal of Advances in Engineering and Pure Sci- ences, vol. 1, pp. 47–52, 2019. http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png Applied Bionics and Biomechanics Hindawi Publishing Corporation

Publisher
Hindawi Publishing Corporation
ISSN
1176-2322
eISSN
1754-2103
DOI
10.1155/2021/6472586

Abstract

Hindawi Applied Bionics and Biomechanics, Volume 2021, Article ID 6472586, 11 pages. https://doi.org/10.1155/2021/6472586

Research Article

Identification of Food/Nonfood Visual Stimuli from Event-Related Brain Potentials

Selen Güney (1), Sema Arslan (1), Adil Deniz Duru (2), and Dilek Göksel Duru (3)

(1) Marmara University, Institute of Health Sciences, Istanbul, Turkey
(2) Marmara University, Sports Science Faculty, Istanbul, Turkey
(3) Department of Molecular Biotechnology, Turkish-German University, Istanbul, Turkey

Correspondence should be addressed to Adil Deniz Duru; deniz.duru@marmara.edu.tr

Received 6 June 2021; Accepted 24 August 2021; Published 24 September 2021

Academic Editor: Francesca Cordella

Copyright © 2021 Selen Güney et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Although food consumption is one of the most basic human behaviors, the factors underlying nutritional preferences are not yet clear. The use of classification algorithms can clarify the understanding of these factors. This study was aimed at measuring electrophysiological responses to food/nonfood stimuli and applying classification techniques to discriminate the responses using a single-sweep dataset. Twenty-one right-handed male athletes with body mass index (BMI) levels between 18.5 and 25 kg/m² (mean age: 21.05 ± 2.5 years) participated in this study voluntarily. The participants were asked to focus on the food and nonfood images that were randomly presented on the monitor without performing any motor task, and EEG data were collected using a 16-channel amplifier with a sampling rate of 1024 Hz. The SensoMotoric Instruments (SMI) iView X™ RED eye tracking system was used simultaneously with the EEG to measure the participants' attention to the presented stimuli.
Three datasets were generated using the amplitude, time-frequency decomposition, and time-frequency connectivity metrics of the P300 and LPP components to separate food and nonfood stimuli. We implemented k-nearest neighbor (kNN), support vector machine (SVM), Linear Discriminant Analysis (LDA), Logistic Regression (LR), Bayesian classifier, decision tree (DT), and Multilayer Perceptron (MLP) classifiers on these datasets. The response to food-related stimuli in the hunger state was discriminated from the response to nonfood stimuli with an accuracy close to 78% for each dataset. These results motivate us to employ classifier algorithms with features obtained from single-trial measurements in amplitude and time-frequency space, instead of applying more complex ones such as connectivity metrics.

1. Introduction

Although food consumption is one of the most basic human behaviors, the factors underlying nutritional preferences are not yet apparent. Many factors, such as taste, texture, appearance, food deprivation, and the smell of a meal, play an essential role in the attention given to food [1–3]. Several studies point out increased attention to food-related stimuli, mainly due to food deprivation [4, 5]. To understand the neural foundations of a cognitive process such as the attention given to these types of stimuli, it is important to identify both the activated brain regions and the temporal microstructure of the information flow between these regions [6]. Even though imaging methods (Magnetic Resonance Imaging (MRI), functional Magnetic Resonance Imaging (fMRI), and Positron Emission Tomography (PET)) are very useful for showing changes in cerebral blood flow during cognitive processing, hemodynamic responses are insufficient to explain the temporal dynamics of fast electrophysiological activity in the neural network [6, 7]. The electroencephalogram (EEG) has a high temporal resolution that allows measurement of the brain's electrical activity [8–10], which varies with the presence of visual, somatosensory, and auditory stimuli [1, 11].

Event-Related Potential (ERP) recordings consist of sudden voltage fluctuations occurring as a response to a stimulus [12, 13]. Researchers have observed several ERP components according to the time delay after the occurrence of a stimulus. For instance, the P300 component, measured as a positive waveform approximately 300 ms after the stimulus, has been studied extensively in the literature due to its potential to reveal the dynamics of cognitive processes [14–19]. Moreover, Late Positive Potentials (LPP) are observed 550–700 ms after the stimulus; they might be the projection of focused attention or of detailed stimulus analysis, and they reflect the conscious stimulus recognition phase. The wavelet transform (WT) is one of the methods capable of estimating the ERP components. WT has a significant advantage over classical spectral analysis because it is suitable for the analysis of nonstationary signals in the time-frequency domain, and it can be used to analyze various transient events in biological signals through its structure of representation and feature extraction [20]. Each ERP component derived by WT can be associated with different situations and tasks [21–24].

In several studies, ERP components have been elucidated in response to food stimuli. For instance, Hachl et al. [25] conducted a study using food images as stimuli with a group of subjects who ate their last meal 3 or 6 hours before the ERP measurements. In another study, the effects of attention to food-related word stimuli in the absence of food were investigated [26]. Similarly, Channon and Hayward [27] investigated P300 and LPP responses to food and flower images in the hunger state. Furthermore, many researchers have conducted Stroop studies in which naming the color of food words is used as the task [28–31]. Moreover, Kitamura et al. [32] observed the effect of hypoglycemic glucose drink intake on the P300 response. As a result, the P300 component varied as a response to food and nonfood stimuli in the hunger state. This variation motivated us to investigate the differences that occur in the ERP components extracted from single-epoch electrical recordings.

In recent decades, the detection of mental status via EEG measurements has been performed through the implementation of machine learning algorithms [33, 34]. In most of these studies, researchers computed features from ongoing EEG time series, and those features were fed to classifiers to detect whether the subject is normal or not [35, 36]. This procedure necessitates the use of known features, while the modern approach, deep learning, makes it possible to learn the filters used to classify the labelled data. A broad review is given in [37], where brain signals were used as inputs in various problems, including seizure detection, emotion detection, motor imagery identification, and evoked potentials.

In addition, eye tracking technology is used in attention studies to verify whether the participant pays attention to the presented stimulus. Eye tracking is the name given to a set of methods and techniques used to detect and record the activity of eye movements [38]. Studies have shown that eye tracking data provide reliable measures of attention to the stimulus in complex situations [39, 40].

There are a few studies in the literature that classify food-related stimuli [32, 41]; however, none of the previous studies have examined electrophysiological responses to food-related stimuli using classification techniques. This study is aimed at measuring electrophysiological responses to food/nonfood stimuli and applying classification techniques to discriminate the responses using single-sweep time series.

2. Materials and Methods

2.1. Participants. Twenty-one right-handed male athletes with BMI levels between 18.5 and 25 kg/m² (mean age: 21.05 ± 2.5 years) participated in this study voluntarily. All participants trained a minimum of 10 hours a week and competed in karate or rowing. None of the participants had a history of insufficient food intake, head injuries, neurological or psychiatric disorders, or other illness.

2.2. Experimental Design. Participants were asked not to eat after 09.00 pm on the day before the test. We performed the EEG measurements at 09.00–10.00 am, before breakfast. Before the start of the experiment, we asked participants to focus on the food and nonfood images without large motor movements that could negatively affect the signal. We presented the stimuli randomly using in-house developed software. Standardized, contrast- and color-adjusted images were selected from the study of Charbonnier et al. [42] to minimize the adverse effects of the food images on the ERP. The images were separated according to their nutrient content [43] into five groups; since our aim was not to classify the responses to the images by calorie content, we simply grouped the images as food and nonfood. In the experiment, each image was shown for 800 ms, with a 200 ms interval between two adjacent stimuli, as shown in Figure 1. The number of neutral (nonfood) images was 28 × 5, while it was 73 × 5 for food images. The resolution of the images was adjusted to 1280 × 1024.

[Figure 1: Graphical rendition of the task: 800 ms stimulus presentations separated by 200 ms intervals.]

2.3. Data Collection. We used a 16-channel V-AMP amplifier (Brain Products™, Germany) with a sampling rate of 1024 Hz. EEG was collected from the FP1, FP2, F3, Fz, F4, P3, P4, Pz, C3, C4, Cz, O1, O2, Oz, T7, and T8 channels, with two additional electrodes serving as reference and ground, as shown in Figure 2. Impedances of the channels were kept below 5 kΩ.

[Figure 2: Distribution of the 16 electrodes on the scalp.]

The SensoMotoric Instruments (SMI) iView X™ RED eye tracking system was used simultaneously with the EEG. The 22″ LCD screen with 1920 × 1080 resolution and the eye-tracker system are shown in Figure 3. The sampling frequency of the SMI eye-tracking system is 60 Hz, and it records eye movements with a 0.5-degree error.

2.4. Data Analysis. Eye movements were analyzed to check whether the subjects focused on the visual stimuli, using the SMI BeGaze (Behavioral and Gaze Analysis) software. Next, noisy components were removed from the EEG signal, and the relevant properties of the data were extracted using signal processing techniques. If the extracted features are not appropriate, inaccurate findings can result; thus, it is necessary to find and extract suitable features from the raw signals to obtain accurate classification results [44, 45]. The last step is the use of various machine learning techniques (such as decision trees and support vector machines) to classify the EEG signal using the characteristics obtained from the feature extraction process.

Preprocessing is essential for improving the signal-to-noise ratio of the EEG. We applied a low-pass filter at 40 Hz and a high-pass filter at 0.1 Hz. Artifacts were marked on the EEG data and removed from further processing; after the preprocessing step, a total of 4754 single epochs remained. Next, the EEG data were epoched from 200 ms before to 800 ms after each stimulus marker. In the second step, for both food and nonfood images, features were extracted from the data collected from the 21 subjects. The feature vector consists of both time and frequency domain features: the amplitude, the time-frequency power, and the time-frequency connectivity metrics. The datasets were formed as follows. DataSet1: 16 attributes (16 electrodes) × 4754 rows, computed from the P300 and LPP amplitudes. DataSet2: the wavelet transform is used to compute 16 attributes (16 electrodes) × 4754 rows for each frequency band (delta, theta, alpha, beta, and gamma) for the P300 and LPP. DataSet3: wavelet coherence is applied to form 120 attributes (15 × (15 + 1)/2 electrode pairs) × 4754 rows in each frequency band (delta, theta, alpha, beta, and gamma) for the P300 and LPP.

The k-nearest neighbor (kNN), support vector machine (SVM), Linear Discriminant Analysis (LDA), Logistic Regression (LR), Bayesian classifier, decision tree (DT), and Multilayer Perceptron (MLP) classifiers were implemented on each dataset. The first classifier, kNN, is a nonparametric supervised learning algorithm: a new sample, described by the extracted features, is assigned to the most appropriate class according to its proximity to its k nearest neighbors [46]. The second classifier, SVM, uses a separating hyperplane to determine the classes; the chosen hyperplane is the one that maximizes the margin, that is, the distance to the nearest training points of each class. As a linear classifier, LDA (also known as Fisher's LDA) is an enhanced version of principal component analysis. The Bayesian classifier is a supervised statistical method that uses probability to assign the most likely class to a given example described by its feature vector. MLP is a classifier based on artificial neural networks. The logistic regression used in this study is a statistical technique for binary classification. In DT, a tree-like structure containing the classification rules is produced using the mutual information hidden in the dataset.
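The feature-extraction and classification steps described in this section can be sketched end to end as below. This is an illustrative sketch rather than the authors' code: the P300/LPP window bounds, the filter order, and the classifier hyperparameters are assumptions, and a random array stands in for the recorded 16-channel sweeps, so the printed accuracies hover around chance.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

FS = 1024              # sampling rate (Hz)
PRE, POST = 0.2, 0.8   # epoch window: 200 ms before, 800 ms after the marker


def bandpass(eeg, lo=0.1, hi=40.0, fs=FS):
    """Zero-phase 0.1-40 Hz band-pass filter (channels x samples)."""
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, eeg, axis=-1)


def epoch(eeg, markers, fs=FS):
    """Cut one [-200, +800] ms sweep per stimulus marker (sample index)."""
    pre_n, post_n = int(PRE * fs), int(POST * fs)
    return np.stack([eeg[:, m - pre_n:m + post_n] for m in markers])


def window_mean(epochs, t0, t1, fs=FS):
    """Mean amplitude per channel in a post-stimulus window [t0, t1] s."""
    i0, i1 = int((PRE + t0) * fs), int((PRE + t1) * fs)
    return epochs[:, :, i0:i1].mean(axis=2)   # -> (n_epochs, n_channels)


# Synthetic 16-channel recording with one marker per second, standing in for data.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((16, 120 * FS))
markers = np.arange(FS, 119 * FS, FS)
epochs = epoch(bandpass(eeg), markers)
y = rng.integers(0, 2, len(epochs))           # food/nonfood labels would go here

# DataSet1-style features: P300- and LPP-window mean amplitudes per electrode.
X = np.hstack([window_mean(epochs, 0.25, 0.40),    # P300 window (assumed bounds)
               window_mean(epochs, 0.55, 0.70)])   # LPP window (assumed bounds)

classifiers = {
    "k-NN": KNeighborsClassifier(),
    "SVM": SVC(),
    "LDA": LinearDiscriminantAnalysis(),
    "LR": LogisticRegression(max_iter=1000),
    "NB": GaussianNB(),
    "DT": DecisionTreeClassifier(),
    "MLP": MLPClassifier(max_iter=300),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=10).mean()   # 10-fold cross-validation
    print(f"{name}: {acc:.2f}")
```

On real recordings, the artifact-marked sweeps would be rejected before feature extraction, and the DataSet2/DataSet3 variants would swap `window_mean` for wavelet band-power and wavelet-coherence features.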
All of these classifiers were implemented in Python using the Scikit package. 3. Results As a result of the analysis, the heat map of food/nonfood images obtained from the eye-tracking technology proves that the participants focused their attention on the presented images during the study as shown in Figures 4 and 5. The grand average ERP components obtained from 21 Figure 3: SensoMotoric Instruments (SMI) Iview XTM RED and subjects in the study are summarized in terms of P300 and 22” LCD 1920 × 1080 screen. LPP amplitudes as shown in Table 1. and Figure 6. We investigated the amplitude differences that occurred as a the EEG signal using the characteristics obtained from the result of the presence of the food and nonfood stimuli using feature extraction process. Preprocessing of data is very sub- paired t-tests for each electrode. stantial for improving the noise ratio of the EEG signal. We Oz and T7 electrodes differed between food and applied a low-pass filter at 40 Hz and a high-pass filter at nonfood stimuli significantly in the absence of a multiple 0.1 Hz. Artifacts have been marked on the EEG data and test correction procedure while none of the electrodes’ 4 Applied Bionics and Biomechanics Figure 4: Heat map of food images. Figure 5: Heat map of nonfood images. LPP components differed between stimuli. Further, this Furthermore, we computed the coherence between the electrodes in each frequency band and performed t-tests to result motivated us to infer the mechanism of the mea- sured ERP by the computation of the frequency decompo- check the significance of the differences for food and non- sition. The increased occipital activity of the P300 food stimuli. In the theta band, P300 coherence between observed concerning food stimuli agrees with our previous Fp1 and Fp2 (p <0:0003) and delta band LPP coherence of studies [47]. After the frequency decomposition of the Fp2-Fz (p <0:00037) are observed to differ between stimuli. 
EEG time series, we computed the statistical tests to eluci- After the descriptive investigation of the features, we focused date the differences between food and nonfood stimuli. on the classification procedures. For the P300 component, in the delta band, Pz In this study, we achieved accuracy values close to 80% (p <0:032) and Oz (p <0:002); in the theta band, T7 for the discrimination of the electrophysiological responses (p <0:03); and in the alpha band, FP2 (p <0:014), elec- given to food-related stimuli versus nonfood stimuli in a trodes differed between food and nonfood stimuli. On hunger state, using various classification algorithms for the other hand, for LPP, differences were observed just datasets. The classification accuracy values are summarized in the alpha band for Fp2 (p <0:038), Fz (p <0:016), T7 in Tables 2–4 for the amplitudes of P300/LPP (DataSet1), (p <0:025), and T8 (p <0:041). for time-frequency-derived components of P300/LPP Applied Bionics and Biomechanics 5 Table 1: Amplitudes of P300 and LPP components are summarized (microvolt). 
Channel P300 (Food) mean/std P300 (Non-Food) mean/std p LPP (Food) mean/std LPP (Non-Food) mean/std p Fp1 1.205/1.144 1.623/1.119 0.3695 2.315/1.142 2.114/1.091 0.7773 Fp2 -0.027 / 1.172 0.054/1.135 0.8795 1.020/1.171 0.445 /1.138 0.3008 F3 -6.537/1.155 -6.286/ 1.14 0.5663 -5.781/ 1.173 -5.822 /1.142 0.9162 Fz 7.298/ 1.008 7.462 / 1.001 0.7081 6.812 / 1.016 6.413 / 0.992 0.3246 F4 0.721/1.014 0.676 /1.019 0.9343 1.368 / 1.026 1.438/ 1.006 0.879 P3 4.107/ 1.008 4.461/ 0.989 0.3675 4.955 / 1.030 5.054/ 1.015 0.8533 P4 -15.839 / 1.282 -15.967/1.3 0.8282 -15.452/ 1.3 -14.816 / 1.329 0.2634 Pz -8.574 / 1.186 -8.037/ 1.193 0.3823 -8.047/ 1.2 -8.468 / 1.197 0.4037 C3 2.556 / 0.960 3.079 / 0.954 0.2059 2.109 / 0.964 1.722 / 0.963 0.485 C4 -1.077 /0.946 -1.092/ 0.932 0.9672 -1.349 / 0.955 -1.37/0.938 0.9524 Cz 7.233 /0.963 7.405 /0.949 0.7175 6.215 / 0.964 6.177 / 0.940 0.9365 O1 1.193 / 0.999 0.899 / 0.996 0.3825 0.657 / 1.006 0.739 /1.035 0.8176 O2 4.099 / 0.982 3.856 / 0.989 0.5896 3.194 / 0.990 3.286 / 0.997 0.8204 Oz∗ 5.218 / 0.952 4.275 /0.943 0.0122 4.752 /0.958 4.681 /0.953 0.8566 T7∗ -1.646 / 1.187 -2.662 / 1.151 0.0394 -2.08 / 1.19 -1.683 / 1.192 0.5251 T8 0.069/ 1.171 0.254 / 1.123 0.7408 -0.688 / 0.091 1.149 / 1.185 0.2419 LPP Amplitudes P300 Amplitudes 10 10 5 5 0 0 −5 −5 𝜇 𝜇 −10 −10 −15 −15 Fp1 Fp2 F3 Fz F4 P3 P4 Pz C3 C4 Cz O1 O2 Oz T7 T8 Fp1 Fp2 F3 Fz F4 P3 P4 Pz C3 C4 Cz O1 O2 Oz T7 T8 −20 −20 02468 10 12 14 16 0 2 4 6 8 10121416 Food Non-Food Figure 6: P300 and LPP amplitudes for food and nonfood stimuli. (DataSet2), and for connectivity metrics of the elec- phies regarding different time-frequency components trodes in the time-frequency domain of P300/LPP are visualized in Figure 8. (DataSet3), respectively. 
A sample topography image is We repeated the classification procedures based on indi- shown in Figure 7 for P300 and LPP while topogra- vidual subjects’ data and reported the results (mean and V 6 Applied Bionics and Biomechanics Table 2: Accuracy of classifiers for P300 and LPP amplitude (%). Method/Feature P300 LPP k-NN 76 76 LR 78 77 DT 65 66 LDA 78 77 NB 68 68 SVM 78 77 MLP 77 76 Table 3: Accuracy of classifiers for P300 and LPP power (%). Method/Feature P300 (%) LPP (%) P300 (%) LPP (%) P300 (%) LPP(%) P300 (%) LPP (%) k-NN 77 76 76 77 77 76 75 77 LR 77 76 76 77 78 76 76 78 DT 62 62 63 63 66 64 63 68 LDA 77 76 76 77 78 76 76 78 NB 74 73 76 76 78 76 76 78 SVM 77 76 76 77 78 76 76 78 MLP 77 75 76 77 78 76 76 77 Table 4: Accuracy of classifiers for P300 and LPP coherence (%). Method/Feature P300 (%) LPP (%) P300 (%) LPP (%) P300 (%) LPP(%) P300 (%) LPP (%) k-NN 77 76 76 77 77 77 77 77 LR 77 77 77 77 77 77 77 77 DT 64 62 63 64 63 64 64 65 LDA 77 77 77 77 77 77 77 77 NB 65 67 66 67 69 71 74 74 SVM 77 77 77 77 78 78 77 78 MLP 69 74 73 74 62 73 70 73 nonfood stimuli in posterior regions [49]. Similar to this, standard deviation) in Table 5. In Figure 9, classification accuracy values of all algorithms are visualized. Geisler and Polich reported P300 differences due to the food deprivation [31]. In contradiction to these findings, when the participants ingest hypoglycemic glucose, P300 changes 4. Discussion were not observed [31]. In another study, LPP increased when the responses to food images and flower images were Up to our knowledge, the present study is the first one that compared. In that study, P300 amplitude increased over classifies the electrophysiological responses to food and non- the occipital, temporal, and centroparietal areas [26]. In food stimuli in a hunger state. 
For this, the first dataset con- our study, the maximum classification accuracy was 78% sists of the amplitudes of the P300 and LPP components when the amplitudes of the P300 and LPP derived from from single epochs. The dataset was formed by pooling the single-trial measurements were used as features, separately. rows computed for each subject. As stated by Blankertz The differences in P300 or LPP components in the presence et al. [48], the investigation of ERP components from of the food/nonfood stimuli varied, as reported in previous single-trial measurements is a complex problem because of studies. In ERP studies, averaging of the responses causes trial variability and background noise. Thus, each row was an increase in the signal-noise ratio of the signal and normalized to avoid the amplitude differences within sub- enhances the contrast between the cases. jects and single-trial epochs. In the hunger state, P300 and However, in the concept of our study, a remarkable LPP amplitudes were found to differ concerning food and accuracy value (78%) has been obtained from the use of Applied Bionics and Biomechanics 7 P300 Amplitude LPP −20 −40 −60 −200 −100 0 100 200 300 400 500 600 700 800 −5.99 0 5.99 Figure 7: Topographies of amplitude values of P300 and LPP. Power values P300 LPP Hz S 6 S −200 −100 0 100 200 300 400 500 600 700 ms 0.00 9.73 Figure 8: Topographies of power values of P300 and LPP for each frequency band (μV) (δ 0.5-4 Hz, θ 4-8 Hz, α 8-13 Hz, and β 13- 30 Hz). single-trial P300 and LPP amplitude components, sepa- EEG measurements can provide valuable information in rately. In the ERP literature, in a classification study, the the presence of adequate contrast mechanisms. For average accuracy value increased to 86% based on the instance, in the comparison of the resting-state EEG data N170 component. 
In that study, single-trial measurements with the brain dynamics measured during an increased as responses to pictures having positive and negative emo- mental workload state, high classification accuracy results tions were the input data to the classifier [50]. Single-trial are achieved [51]. In our study, the consistent accuracy 8 Applied Bionics and Biomechanics Table 5: The mean and standard deviation of accuracy values computed from each individual subject. Accuracy k-NN LR DT LDA NB SVM MLP Mean 73.7 75.1 71 75.1 71.7 73.9 61.6 P300 Std. Dev. 1.2 1.3 2.9 1.6 2.1 5.7 6.4 Dataset 1 Mean 74.1 75 70.6 75 71.8 76 61.5 LPP Std. Dev. 1.4 1.1 2.5 1.3 2 3.1 8.9 Mean 73.6 75.3 69.9 74.9 71.8 77.4 58.6 P300 (Delta) Std. Dev. 1.6 1.6 2.9 1.5 2 0 13.5 Mean 73.6 74.9 70 74.9 71.3 77.4 54.1 P300 (Theta) Std. Dev. 1.8 1.5 2.5 1.4 2.6 0 13.7 Mean 73.5 74.8 70.7 74.9 70.9 77.4 59.2 P300 (Alpha) Std. Dev. 1.8 1.4 2.6 1.3 2.2 0 9.6 Mean 73.4 75 70.9 74.8 72.5 77.4 57.7 P300 (Beta) Std. Dev. 1.7 1.7 2.6 1.5 2.3 0 9.6 Dataset 2 Mean 73.7 65.2 69.2 59.3 62.1 70 58.1 LPP (Delta) Std. Dev. 2.2 2.6 3.3 3.2 3.3 3 10.5 Mean 73.7 67.7 69.1 59.7 61.2 74.8 59.9 LPP (Theta) Std. Dev. 1.5 2.5 3 3.4 5.1 1.5 9.7 Mean 73.2 66.7 68.6 59.7 60.7 74 56.1 LPP (Alpha) Std. Dev. 2 2.7 2.7 2.8 5 2.2 8.9 Mean 73.6 67.3 67.6 59.8 61.8 75.6 61.7 LPP (Beta) Std. Dev. 1.8 2.6 2.8 2.9 5.7 1.5 7.1 Mean 73.3 65.7 69.8 58.4 60.8 69.3 55 P300 (Delta) Coh Std. Dev. 2 2.6 2.8 3.3 4.2 2.5 9.6 Mean 73.6 67.6 68.5 58.1 61.8 74 58.1 P300 (Theta) Coh Std. Dev. 1.4 3.6 3.3 4.3 4.4 2.2 10.9 Mean 73.4 66.4 68.4 59.6 58.9 73.4 58.8 P300 (Alpha) Coh Std. Dev. 1.7 3 2.3 3.3 5.9 1.7 7.4 Mean 74.1 67.2 68.9 60.1 59.8 73.3 58.5 P300 (Beta) Coh Std. Dev. 2.1 2.4 2.5 3.9 5.4 2.3 9.6 Dataset 3 Mean 73.7 65.2 69.2 59.3 62.1 70 58.1 LPP (Delta) Coh Std. Dev. 2.2 2.6 3.3 3.2 3.3 3 10.5 Mean 73.7 67.7 69.1 59.7 61.2 74.8 59.9 LPP (Theta) Coh Std. Dev. 1.5 2.5 3 3.4 5.1 1.5 9.7 Mean 73.2 66.7 68.6 59.7 60.7 74 56.1 LPP (Alpha) Coh Std. 
Dev. 2 2.7 2.7 2.8 5 2.2 8.9 Mean 73.6 67.3 67.6 59.8 61.8 75.6 61.7 LPP (Beta) Coh Std. Dev. 1.8 2.6 2.8 2.9 5.7 1.5 7.1 values obtained using several techniques exhibit the limita- time can be thought of as the time needed to compute P300 tion of the stimulus identification. DT outputs the lowest and LPP features. On the other hand, the classification proce- accuracy in classification, which might be due to the low dures consist of a training phase where several realizations of number of levels of the tree. the labelled data are being used. For the estimation of the com- For the ERP data collection, one needs to perform an aver- putational complexity, the number of features (f ) and the aging procedure over several responses given to the same or number of samples (n) have a crucial role. For instance, in k- similar stimulus. Thus, conducting ERP experiments is a NN, in the test phase, the complexity is directly related to f time-requiring process. On the other hand, in our study, we ∗ n, while it is just affected by f in DT. Since the complexity values are on the order of the square of sample size, the train- concentrated just on the single sweeps which last less than a second. So, the data that we need is limited by physiological ing phase is time-consuming for DT, MLP, and SVM. On the mechanisms for the testing phase of the classification. There- other hand, LR is much faster. When we pool the data, our fore, for real-time implementation, the minimum detection sample size becomes more than thousands. Applied Bionics and Biomechanics 9 78 78 78 78 78 78 78 78 77 77 77 77 77 77 77 77 77 77 77 77 77 77 77 76 76 76 76 76 76 76 76 74 74 74 74 68 68 68 67 67 66 66 65 65 64 64 64 64 63 63 62 62 62 k-N N algorithm LDA algorithm MLP algorithm NB algorithm LR algorithm SVM algorithm DT algorithm Figure 9: Classification accuracy for all algorithms with 10-fold cross-validation. 5. 
5. Conclusion

In the ERP literature, the common practice is to analyze the electrical activity in different frequency bands. Thus, within the scope of this study, the time series were decomposed into time-frequency space using the wavelet transform. Moreover, the connectivity approach was applied to the multichannel ERP measurements in the time windows of the P300 and LPP to deduce the coherence information. Based on our findings, we can propose that the use of complex features is not necessary, since they do not outperform the basic amplitude features.

There are still many gaps in our understanding of the brain responses given to visual stimuli. Visual stimuli cannot yet be classified directly with high accuracy values, whereas classification is more straightforward in mental illness detection or motor imagery studies. Thus, future studies should focus on the feature engineering side of EEG. In particular, deep learning with convolutional neural networks can be adopted to develop spatial filters on the topography images. This process may allow researchers to extract valuable information from the measured ERP signals.

Data Availability

The EEG and eye tracker data used to support the findings of this study are available from the corresponding author upon request.

Ethical Approval

The study was approved by the Ethical Review Board of the Medical Faculty, Marmara University (approval number 09.2018.380).

Consent

Informed consent was obtained from all individual participants included in the study prior to measurement.

Conflicts of Interest

The authors declare no conflict of interest directly related to the submitted work.

Acknowledgments

This work was supported by the Research Fund of Marmara University (Project No. SAG-A-100713-0296). The article processing charge was funded by Hindawi.

References

[1] L. J. Karhunen, E. J. Vanninen, J. T. Kuikka, R. I. Lappalainen, J. Tiihonen, and M. I. Uusitupa, "Regional cerebral blood flow during exposure to food in obese binge eating women," Psychiatry Research: Neuroimaging, vol. 99, no. 1, pp. 29–42, 2000.
[2] W. D. Killgore, A. D. Young, L. A. Femia, P. Bogorodzki, J. Rogowska, and D. A. Yurgelun-Todd, "Cortical and limbic activation during viewing of high- versus low-calorie foods," NeuroImage, vol. 19, no. 4, pp. 1381–1394, 2003.
[3] N. Tashiro, H. Sugata, T. Ikeda et al., "Effect of individual food preferences on oscillatory brain activity," Brain and Behavior, vol. 9, no. 5, p. e01262, 2019.
[4] W. Plihal, C. Haenschel, P. Hachl, J. Born, and R. Pietrowsky, "The effect of food deprivation on ERP during identification of tachistoscopically presented food-related words," Journal of Psychophysiology, vol. 15, no. 3, pp. 163–172, 2001.
[5] J. Sänger, "Can't take my eyes off you - How task irrelevant pictures of food influence attentional selection," Appetite, vol. 133, pp. 313–323, 2019.
[6] S. A. Hillyard and L. Anllo-Vento, "Event-related brain potentials in the study of visual selective attention," Proceedings of the National Academy of Sciences, vol. 95, no. 3, pp. 781–787, 1998.
[7] P. T. Fox and M. G. Woldorff, "Integrating human brain maps," Current Opinion in Neurobiology, vol. 4, no. 2, pp. 151–156, 1994.
[8] P. Ritter and A. Villringer, "Simultaneous EEG-fMRI," Neuroscience & Biobehavioral Reviews, vol. 30, no. 6, pp. 823–838, 2006.
[9] R. Srinivasan, W. R. Winter, and P. L. Nunez, "Source analysis of EEG oscillations using high-resolution EEG and MEG," Progress in Brain Research, vol. 159, pp. 29–42, 2006.
[10] B. M. Sayers, H. A. Beagley, and W. R. Henshall, "The mechanism of auditory evoked EEG responses," Nature, vol. 247, no. 5441, pp. 481–483, 1974.
[11] K. Elf, E. Ronne-Engström, R. Semnic, E. Rostami-Berglund, J. Sundblom, and M. Zetterling, "Continuous EEG monitoring after brain tumor surgery," Acta Neurochirurgica, vol. 161, no. 9, pp. 1835–1843, 2019.
[12] S. J. Luck, G. F. Woodman, and E. K. Vogel, "Event-related potential studies of attention," Trends in Cognitive Sciences, vol. 4, no. 11, pp. 432–440, 2000.
[13] H. T. Schupp, T. Flaisch, J. Stockburger, and M. Junghöfer, "Emotion and attention: event-related brain potential studies," Progress in Brain Research, vol. 156, pp. 31–51, 2006.
[14] T. W. Picton, "The P300 wave of the human event-related potential," Journal of Clinical Neurophysiology, vol. 9, no. 4, pp. 456–479, 1992.
[15] K. McDowell, S. E. Kerick, D. L. Santa Maria, and B. D. Hatfield, "Aging, physical activity, and cognitive processing: an examination of P300," Neurobiology of Aging, vol. 24, no. 4, pp. 597–606, 2003.
[16] V. Dodin and J. L. Nandrino, "Cognitive processing of anorexic patients in recognition tasks: an event-related potentials study," International Journal of Eating Disorders, vol. 33, no. 3, pp. 299–307, 2003.
[17] Bressler, The Handbook of Brain Theory and Neural Networks, MIT Press, London, 2003.
[18] S. J. Luck and E. S. Kappenman, The Oxford Handbook of Event-Related Potential Components, Oxford University Press, 2011.
[19] M. Giraldo, G. Buodo, and M. Sarlo, "Food processing and emotion regulation in vegetarians and omnivores: An event-related potential investigation," Appetite, vol. 141, p. 104334, 2019.
[20] L. J. Karhunen, E. J. Vanninen, J. T. Kuikka, R. I. Lappalainen, J. Tiihonen, and M. I. J. Uusitupa, "Regional cerebral blood flow during exposure to food in obese binge eating women," Neuroimaging, vol. 99, no. 1, pp. 120–124, 2006.
[21] B. Kopp, F. Rist, and U. Mattler, "N200 in the flanker task as a neurobehavioral tool for investigating executive control," Psychophysiology, vol. 33, no. 3, pp. 282–294, 1996.
[22] E. K. Vogel and S. J. Luck, "The visual N1 component as an index of a discrimination process," Psychophysiology, vol. 37, no. 2, pp. 190–203, 2000.
[23] S. Iceta, J. Benoit, P. Cristini et al., "Attentional bias and response inhibition in severe obesity with food disinhibition: a study of P300 and N200 event-related potential," International Journal of Obesity, vol. 44, 2020.
[24] M. W. Geisler and J. Polich, "P300 and time of day: circadian rhythms, food intake, and body temperature," Biological Psychology, vol. 31, no. 2, pp. 117–136, 1990.
[25] P. Hachl, C. Hempel, and R. Pietrowsky, "ERPs to stimulus identification in persons with restrained eating behavior," International Journal of Psychophysiology, vol. 49, no. 2, pp. 111–121, 2003.
[26] J. Stockburger, R. Schmälzle, T. Flaisch, F. Bublatzky, and H. T. Schupp, "The impact of hunger on food cue processing: an event-related brain potential study," NeuroImage, vol. 47, no. 4, pp. 1819–1829, 2009.
[27] S. Channon and A. Hayward, "The effect of short-term fasting on processing of food cues in normal subjects," International Journal of Eating Disorders, vol. 9, no. 4, pp. 447–452, 1990.
[28] E. H. Lavy and M. A. van den Hout, "Attentional bias for appetitive cues: effects of fasting in normal subjects," Behavioural and Cognitive Psychotherapy, vol. 21, no. 4, pp. 297–310, 1993.
[29] K. S. Dobson and D. J. Dozois, "Attentional biases in eating disorders: a meta-analytic review of Stroop performance," Clinical Psychology Review, vol. 23, no. 8, pp. 1001–1022, 2004.
[30] S. Hollitt, E. Kemps, M. Tiggemann, E. Smeets, and J. S. Mills, "Components of attentional bias for food cues among restrained eaters," Appetite, vol. 54, no. 2, pp. 309–313, 2010.
[31] M. W. Geisler and J. Polich, "P300 is unaffected by glucose increase," Biological Psychology, vol. 37, no. 3, pp. 235–245, 1994.
[32] K. Kitamura, T. Yamasaki, and K. Aizawa, "Food log by analyzing food images," in Proceedings of the 16th ACM International Conference on Multimedia, pp. 999–1000, 2008.
[33] Y. Zhang, G. Zhou, J. Jin, Q. Zhao, X. Wang, and A. Cichocki, "Sparse Bayesian classification of EEG for brain–computer interface," IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 11, pp. 2256–2267, 2016.
[34] Y. Zhang, G. Zhou, Q. Zhao, J. Jin, X. Wang, and A. Cichocki, "Spatial-temporal discriminant analysis for ERP-based brain-computer interface," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 21, no. 2, pp. 233–243, 2013.
[35] D. G. Duru and A. D. Duru, "Classification of event related potential patterns using deep learning," in 2018 Medical Technologies National Congress (TIPTEKNO), pp. 1–4, 2018.
[36] S. A. Shaban, O. N. Ucan, and A. D. Duru, "Classification of lactate level using resting-state EEG measurements," Applied Bionics and Biomechanics, vol. 2021, Article ID 6662074, 2021.
[37] X. Zhang, L. Yao, X. Wang, J. J. M. Monaghan, D. McAlpine, and Y. Zhang, "A survey on deep learning-based non-invasive brain signals: recent advances and new frontiers," Journal of Neural Engineering, vol. 18, 2020.
[38] M. L. Mele and S. Federici, "Gaze and eye-tracking solutions for psychological research," Cognitive Processing, vol. 13, no. S1, pp. 261–265, 2012.
[39] E. Koç, O. Bayat, D. G. Duru, and A. D. Duru, "Design of brain computer interface based on eye movements," International Journal of Engineering Research and Development, vol. 12, no. 1, pp. 176–188.
[40] G. Lohse and E. Johnson, "A comparison of two process tracing methods for choice tasks," Organizational Behavior and Human Decision Processes, vol. 68, no. 1, pp. 28–43, 1996.
[41] H. Kagaya and K. Aizawa, "Highly accurate food/non-food image classification based on a deep convolutional neural network," in International Conference on Image Analysis and Processing, pp. 350–357, 2015.
[42] L. Charbonnier, F. van Meer, L. N. van der Laan, M. A. Viergever, and P. A. Smeets, "Standardized food images: a photographing protocol and image database," Appetite, vol. 96, pp. 166–173, 2016.
[43] S. E. de Bruijn, Y. C. de Vries, C. de Graaf, S. Boesveldt, and G. Jager, "The reliability and validity of the Macronutrient and Taste Preference Ranking Task: a new method to measure food preferences," Food Quality and Preference, vol. 57, pp. 32–40, 2017.
[44] H. U. Amin, W. Mumtaz, A. R. Subhani, M. N. M. Saad, and A. S. Malik, "Classification of EEG signals based on pattern recognition approach," Frontiers in Computational Neuroscience, vol. 11, 2017.
[45] M. Ahmed, A. Mohamed, O. N. Uçan, O. Bayat, and A. D. Duru, "Classification of resting-state status based on sample entropy and power spectrum of electroencephalography (EEG)," Applied Bionics and Biomechanics, vol. 2020, Article ID 8853238, 2020.
[46] D. Torse, V. Desai, and R. Khanai, "A review on seizure detection systems with emphasis on multi-domain feature extraction and classification using machine learning," BRAIN. Broad Research in Artificial Intelligence and Neuroscience, vol. 8, no. 4, pp. 109–129, 2017.
[47] S. Arslan, S. Güney, K. Tan, S. B. Yücel, H. B. Çotuk, and A. D. Duru, "Event related potential responses to food pictures in hunger," in 2018 Electric Electronics, Computer Science, Biomedical Engineerings' Meeting (EBBT), 2018.
[48] B. Blankertz, S. Lemm, M. Treder, S. Haufe, and K. R. Müller, "Single-trial analysis and classification of ERP components – a tutorial," NeuroImage, vol. 56, no. 2, pp. 814–825, 2011.
[49] J. Polich and A. Kok, "Cognitive and biological determinants of P300: an integrative review," Biological Psychology, vol. 41, no. 2, pp. 103–146, 1995.
[50] T. Yin, Z. Huiling, P. Yu, and L. Jinzhao, "Classification for single-trial N170 during responding to facial picture with emotion," Frontiers in Computational Neuroscience, vol. 12, 2018.
[51] A. D. Duru, "Determination of increased mental workload condition from EEG by the use of classification techniques," International Journal of Advances in Engineering and Pure Sciences, vol. 1, pp. 47–52, 2019.