[1] PIPES L A. An Operational Analysis of Traffic Dynamics[J]. Journal of Applied Physics, 1953, 24(3):274-281.
doi: 10.1063/1.1721265
[2] FORBES T W. Human Factor Considerations in Traffic Flow Theory[J]. Highway Research Record, 1963(15):60-66.
[3] REN Dianbo, ZHANG Jiye. Lyapunov Function Approach to Longitudinal Following Control of Vehicles in Platoon with Delays[J]. Control and Decision, 2007, 22(8):918-921.
[4] LIN Y, MCPHEE J, AZAD N L. Comparison of Deep Reinforcement Learning and Model Predictive Control for Adaptive Cruise Control[J]. IEEE Transactions on Intelligent Vehicles, 2021, 6(2):221-231.
doi: 10.1109/TIV.2020.3012947
[5] LI Teng, CAO Shijie, YIN Siwei, et al. Optimal Method for the Generation of the Attack Path Based on the Q-Learning Decision[J]. Journal of Xidian University, 2021, 48(1):160-167.
[6] ZHANG Ying, WEI Minfeng, WANG Shihui, et al. Aircraft Reinforcement Learning Multi-Mode Control in Orbit[J]. Journal of Xidian University, 2020, 47(2):75-82.
[7] ZHU Bing, JIANG Yuande, ZHAO Jian, et al. A Car-Following Control Algorithm Based on Deep Reinforcement Learning[J]. China Journal of Highway and Transport, 2019, 32(6):53-60.
doi: 10.19721/j.cnki.1001-7372.2019.06.005
[8] LIAO Y, YU G, CHEN P, et al. Modelling Personalised Car-Following Behaviour:A Memory-Based Deep Reinforcement Learning Approach(2022)[J/OL].[2022-12-31]. https://www.tandfonline.com/doi/full/10.1080/23249935.2022.2035846.
[9] COLOMBARONI C, FUSCO G, ISAENKO N. Modeling Car Following with Feed-Forward and Long-Short Term Memory Neural Networks[J]. Transportation Research Procedia, 2021, 52:195-202.
doi: 10.1016/j.trpro.2021.01.022
[10] SHI K, WU Y, SHI H, et al. An Integrated Car-Following and Lane Changing Vehicle Trajectory Prediction Algorithm Based on a Deep Neural Network[J]. Physica A:Statistical Mechanics and Its Applications, 2022, 599:127303.
doi: 10.1016/j.physa.2022.127303
[11] ZHOU Y, FU R, WANG C. Learning the Car-Following Behavior of Drivers Using Maximum Entropy Deep Inverse Reinforcement Learning[J]. Journal of Advanced Transportation, 2020, 4752651:1-13.
[12] ZHU M, WANG X, WANG Y. Human-Like Autonomous Car-Following Model with Deep Reinforcement Learning[J]. Transportation Research Part C:Emerging Technologies, 2018, 97:348-368.
doi: 10.1016/j.trc.2018.10.024
[13] JOHNSON M A, MORADI M H. PID Control[M]. London: Springer-Verlag London Limited, 2005:1-559.
[14] SCHULMAN J, WOLSKI F, DHARIWAL P, et al. Proximal Policy Optimization Algorithms(2017)[J/OL].[2022-01-01]. https://arxiv.org/abs/1707.06347.
[15] SHAH S, DEY D, LOVETT C, et al. AirSim:High-Fidelity Visual and Physical Simulation for Autonomous Vehicles[C]// Field and Service Robotics. Berlin: Springer, 2018:621-635.
[16] QUIGLEY M, GERKEY B, CONLEY K, et al. ROS:An Open-Source Robot Operating System[C]// ICRA Workshop on Open Source Software. Piscataway: IEEE, 2009:1-6.
[17] PASZKE A, GROSS S, MASSA F, et al. PyTorch:An Imperative Style,High-Performance Deep Learning Library[C]// Advances in Neural Information Processing Systems. San Diego: NIPS, 2019:1-12.
[18] KESTING A, TREIBER M, HELBING D. Enhanced Intelligent Driver Model to Access the Impact of Driving Strategies on Traffic Capacity[J]. Philosophical Transactions of the Royal Society A:Mathematical,Physical and Engineering Sciences, 2010, 368(1928):4585-4605.
doi: 10.1098/rsta.2010.0084