      How to include future values in a time series prediction of an RNN in Keras

        Date: 2024-08-11

                  This article explains how to include future values in a time series prediction of an RNN in Keras. It may serve as a useful reference for readers facing the same problem.

                  Problem description

                  I currently have an RNN model for time series forecasting. It uses the three input features "Value", "Temperature" and "Hour of the day" from the last 96 time steps to predict the next 96 time steps of the feature "Value".

                  Here you can see its architecture:

                  And here is the current code:

                  #Import modules
                  import pandas as pd
                  import numpy as np
                  import tensorflow as tf
                  from sklearn.preprocessing import StandardScaler
                  from sklearn.metrics import mean_squared_error
                  from tensorflow import keras
                  
                  # Define the parameters of the RNN and the training
                  epochs = 1
                  batch_size = 50
                  steps_backwards = 96
                  steps_forward = 96
                  split_fraction_trainingData = 0.70
                  split_fraction_validatinData = 0.90
                  randomSeedNumber = 50
                  
                  #Read dataset
                  df = pd.read_csv('C:/Users/Desktop/TestData.csv', sep=';', header=0, low_memory=False, infer_datetime_format=True, parse_dates={'datetime':[0]}, index_col=['datetime'])
                  
                  # standardize data
                  
                  data = df.values
                  indexWithYLabelsInData = 0
                  data_X = data[:, 0:3]
                  data_Y = data[:, indexWithYLabelsInData].reshape(-1, 1)
                  
                  
                  scaler_standardized_X = StandardScaler()
                  data_X = scaler_standardized_X.fit_transform(data_X)
                  data_X = pd.DataFrame(data_X)
                  scaler_standardized_Y = StandardScaler()
                  data_Y = scaler_standardized_Y.fit_transform(data_Y)
                  data_Y = pd.DataFrame(data_Y)
                  
                  
                  # Prepare the input data for the RNN
                  
                  series_reshaped_X =  np.array([data_X[i:i + (steps_backwards+steps_forward)].copy() for i in range(len(data) - (steps_backwards+steps_forward))])
                  series_reshaped_Y =  np.array([data_Y[i:i + (steps_backwards+steps_forward)].copy() for i in range(len(data) - (steps_backwards+steps_forward))])
                  
                  
                  timeslot_x_train_end = int(len(series_reshaped_X)* split_fraction_trainingData)
                  timeslot_x_valid_end = int(len(series_reshaped_X)* split_fraction_validatinData)
                  
                  X_train = series_reshaped_X[:timeslot_x_train_end, :steps_backwards] 
                  X_valid = series_reshaped_X[timeslot_x_train_end:timeslot_x_valid_end, :steps_backwards] 
                  X_test = series_reshaped_X[timeslot_x_valid_end:, :steps_backwards] 
                  
                     
                  Y_train = series_reshaped_Y[:timeslot_x_train_end, steps_backwards:] 
                  Y_valid = series_reshaped_Y[timeslot_x_train_end:timeslot_x_valid_end, steps_backwards:] 
                  Y_test = series_reshaped_Y[timeslot_x_valid_end:, steps_backwards:]                                
                     
                     
                  # Build the model and train it
                  
                  np.random.seed(randomSeedNumber)
                  tf.random.set_seed(randomSeedNumber)
                  
                  model = keras.models.Sequential([
                  keras.layers.SimpleRNN(10, return_sequences=True, input_shape=[None, 3]),
                  keras.layers.SimpleRNN(10, return_sequences=True),
                  keras.layers.TimeDistributed(keras.layers.Dense(1))
                  ])
                  
                  model.compile(loss="mean_squared_error", optimizer="adam", metrics=['mean_absolute_percentage_error'])
                  history = model.fit(X_train, Y_train, epochs=epochs, batch_size=batch_size, validation_data=(X_valid, Y_valid))
                  
                  #Predict the test data
                  Y_pred = model.predict(X_test)
                  
                  
                   # Invert the scaling (traInv: transformation inverted)
                   
                   data_X_traInv = scaler_standardized_X.inverse_transform(data_X)
                   data_Y_traInv = scaler_standardized_Y.inverse_transform(data_Y)
                   series_reshaped_X_notTransformed = np.array([data_X_traInv[i:i + (steps_backwards+steps_forward)].copy() for i in range(len(data) - (steps_backwards+steps_forward))])
                   X_test_notTransformed = series_reshaped_X_notTransformed[timeslot_x_valid_end:, :steps_backwards]
                   Y_pred_traInv = scaler_standardized_Y.inverse_transform(Y_pred)
                   Y_test_traInv = scaler_standardized_Y.inverse_transform(Y_test)
                  
                  
                  # Calculate errors for every time slot of the multiple predictions
                  
                   abs_diff = np.abs(Y_pred_traInv - Y_test_traInv)
                   abs_diff_perPredictedSequence = np.zeros((len(Y_test_traInv)))
                   average_LoadValue_testData_perPredictedSequence = np.zeros((len(Y_test_traInv)))
                   abs_diff_perPredictedTimeslot_ForEachSequence = np.zeros((len(Y_test_traInv)))
                   absoluteError_Load_Ratio_allPredictedSequence = np.zeros((len(Y_test_traInv)))
                   absoluteError_Load_Ratio_allPredictedTimeslots = np.zeros((len(Y_test_traInv)))
                   
                   mse_perPredictedSequence = np.zeros((len(Y_test_traInv)))
                   rmse_perPredictedSequence = np.zeros((len(Y_test_traInv)))
                   
                   for i in range(0, len(Y_test_traInv)):
                       for j in range(0, len(Y_test_traInv[0])):
                           abs_diff_perPredictedSequence[i] = abs_diff_perPredictedSequence[i] + abs_diff[i][j]
                       mse_perPredictedSequence[i] = mean_squared_error(Y_pred_traInv[i], Y_test_traInv[i])
                       rmse_perPredictedSequence[i] = np.sqrt(mse_perPredictedSequence[i])
                       abs_diff_perPredictedTimeslot_ForEachSequence[i] = abs_diff_perPredictedSequence[i] / len(Y_test_traInv[0])
                       average_LoadValue_testData_perPredictedSequence[i] = np.mean(Y_test_traInv[i])
                       absoluteError_Load_Ratio_allPredictedSequence[i] = abs_diff_perPredictedSequence[i] / average_LoadValue_testData_perPredictedSequence[i]
                       absoluteError_Load_Ratio_allPredictedTimeslots[i] = abs_diff_perPredictedTimeslot_ForEachSequence[i] / average_LoadValue_testData_perPredictedSequence[i]
                   
                   rmse_average_allPredictedSequences = np.mean(rmse_perPredictedSequence)
                   absoluteAverageError_Load_Ratio_allPredictedSequence = np.mean(absoluteError_Load_Ratio_allPredictedSequence)
                   absoluteAverageError_Load_Ratio_allPredictedTimeslots = np.mean(absoluteError_Load_Ratio_allPredictedTimeslots)
                   absoluteAverageError_allPredictedSequences = np.mean(abs_diff_perPredictedSequence)
                   absoluteAverageError_allPredictedTimeslots = np.mean(abs_diff_perPredictedTimeslot_ForEachSequence)
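As an aside (not part of the original post), the per-sequence error loop above can be expressed with vectorized NumPy reductions. The arrays below are random stand-ins for `Y_test_traInv` and `Y_pred_traInv`, just to keep the sketch self-contained:

```python
import numpy as np

rng = np.random.default_rng(0)
Y_test = rng.random((5, 96, 1)) + 1.0   # stand-in for Y_test_traInv (positive "load")
Y_pred = rng.random((5, 96, 1)) + 1.0   # stand-in for Y_pred_traInv

abs_diff = np.abs(Y_pred - Y_test)                        # (sequences, steps, 1)
abs_err_per_seq = abs_diff.sum(axis=(1, 2))               # summed abs. error per sequence
mse_per_seq = ((Y_pred - Y_test) ** 2).mean(axis=(1, 2))  # MSE per sequence
rmse_per_seq = np.sqrt(mse_per_seq)                       # RMSE per sequence
mean_load_per_seq = Y_test.mean(axis=(1, 2))              # average load per sequence
err_load_ratio_per_seq = abs_err_per_seq / mean_load_per_seq

rmse_average = rmse_per_seq.mean()                        # average RMSE over all sequences
```

Reductions over `axis=(1, 2)` collapse the time-step and feature axes at once, so no explicit Python loop over sequences is needed.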
                                              
                  

                  Here is some test data: Download Test Data

                  So now I would actually like to include not only the past values of the features in the prediction, but also the future values of the features "Temperature" and "Hour of the day". The future values of the feature "Temperature" can, for example, be retrieved from an external weather forecast service, and for the feature "Hour of the day" the future values are known in advance (in the test data I have already included a "Forecast" column that is not an actual forecast; I just changed the values randomly).

                  This way I assume that, for several applications and datasets, the forecast can be improved.

                  In the architecture it would look like this:

                  Can anyone tell me how I can do this in Keras with an RNN (or LSTM)? One way could be to include the future values as independent features in the input. But I would like the model to know that the future values of a feature are connected to its past values.

                  Reminder: Does anybody know how to do this? I would highly appreciate every comment.

                  Accepted answer

                  The standard approach is to use an encoder-decoder architecture (see e.g. 1 and 2):

                  • The encoder takes the past values of the features and of the target as inputs and returns an output representation.
                  • The decoder takes the encoder output and the future values of the features as inputs and returns the predicted values of the target.

                  You can use any architecture for the encoder and the decoder, and you can also consider different ways of passing the encoder output to the decoder (e.g. adding or concatenating it to the decoder input features, adding or concatenating it to the output of some intermediate decoder layer, or adding it to the final decoder output); the code below is just an example. (Note that the additive variant requires the two tensors to have the same feature dimension, so the commented-out Add() below would first need the inputs projected to matching widths.)

                  import pandas as pd
                  import numpy as np
                  import matplotlib.pyplot as plt
                  from sklearn.preprocessing import StandardScaler
                  from tensorflow.keras.layers import Input, Dense, LSTM, TimeDistributed, Concatenate, Add
                  from tensorflow.keras.models import Model
                  from tensorflow.keras.optimizers import Adam
                  
                  # define the inputs
                  target = ['value']
                  features = ['temperatures', 'hour of the day']
                  sequence_length = 96
                  
                  # import the data
                  df = pd.read_csv('TestData.csv', sep=';', header=0, low_memory=False, infer_datetime_format=True, parse_dates={'datetime': [0]}, index_col=['datetime'])
                  
                  # scale the data
                  target_scaler = StandardScaler().fit(df[target])
                  features_scaler = StandardScaler().fit(df[features])
                  
                  df[target] = target_scaler.transform(df[target])
                  df[features] = features_scaler.transform(df[features])
                  
                  # extract the input and output sequences
                  X_encoder = []  # past features and target values
                  X_decoder = []  # future features values
                  y = []          # future target values
                  
                  for i in range(sequence_length, df.shape[0] - sequence_length):
                      X_encoder.append(df[features + target].iloc[i - sequence_length: i])
                      X_decoder.append(df[features].iloc[i: i + sequence_length])
                      y.append(df[target].iloc[i: i + sequence_length])
                  
                  X_encoder = np.array(X_encoder)
                  X_decoder = np.array(X_decoder)
                  y = np.array(y)
                  
                  # define the encoder and decoder
                  def encoder(encoder_features):
                      y = LSTM(units=100, return_sequences=True)(encoder_features)
                      y = TimeDistributed(Dense(units=1))(y)
                      return y
                  
                  def decoder(decoder_features, encoder_outputs):
                      x = Concatenate(axis=-1)([decoder_features, encoder_outputs])
                      # x = Add()([decoder_features, encoder_outputs]) 
                      y = TimeDistributed(Dense(units=100, activation='relu'))(x)
                      y = TimeDistributed(Dense(units=1))(y)
                      return y
                  
                  # build the model
                  encoder_features = Input(shape=X_encoder.shape[1:])
                  decoder_features = Input(shape=X_decoder.shape[1:])
                  encoder_outputs = encoder(encoder_features)
                  decoder_outputs = decoder(decoder_features, encoder_outputs)
                  model = Model([encoder_features, decoder_features], decoder_outputs)
                  
                  # train the model
                  model.compile(optimizer=Adam(learning_rate=0.001), loss='mse')
                  model.fit([X_encoder, X_decoder], y, epochs=100, batch_size=128)
                  
                  # extract the last predicted sequence
                  y_true = target_scaler.inverse_transform(y[-1, :])
                  y_pred = target_scaler.inverse_transform(model.predict([X_encoder, X_decoder])[-1, :])
                  
                  # plot the last predicted sequence
                  plt.plot(y_true.flatten(), label='actual')
                  plt.plot(y_pred.flatten(), label='predicted')
                  plt.show()
                  

                  In the example above, the model takes two inputs, X_encoder and X_decoder, so when generating predictions you can use the past observed temperatures in X_encoder and the future temperature forecasts in X_decoder.
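As a sketch of inference-time usage (the column names follow the answer's code; the history and forecast values below are random stand-ins, and in practice they must be scaled with the fitted scalers first), assembling the two model inputs for a genuinely new forecast could look like:

```python
import numpy as np
import pandas as pd

seq_len = 96
rng = np.random.default_rng(1)

# Stand-in for the scaled history; column order as in the answer's code.
hist = pd.DataFrame(rng.random((500, 3)),
                    columns=['temperatures', 'hour of the day', 'value'])

# Encoder input: the last seq_len rows of features + target -> (1, 96, 3).
X_encoder_new = (hist[['temperatures', 'hour of the day', 'value']]
                 .iloc[-seq_len:].to_numpy()[None, ...])

# Decoder input: future feature values over the horizon -> (1, 96, 2),
# e.g. an external weather forecast plus the (known) future hours.
future_temps = rng.random(seq_len)   # stand-in for a scaled temperature forecast
future_hours = rng.random(seq_len)   # stand-in for scaled future hour values
X_decoder_new = np.stack([future_temps, future_hours], axis=-1)[None, ...]

# With the trained model and scaler from the answer's code:
# y_new_scaled = model.predict([X_encoder_new, X_decoder_new])
# y_new = target_scaler.inverse_transform(y_new_scaled[0])
```

The leading `[None, ...]` adds the batch axis that `model.predict` expects, so a single new window is passed as a batch of size one.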


