In recent years, a wave of intelligent transformation has swept the globe, and AI technology centered on deep learning has achieved major breakthroughs. In tasks such as robotics [1], speech recognition [2-3], image recognition [4-7], and natural language processing [8-9], the recognition and decision-making capabilities of AI have matched or even surpassed those of humans: AlphaGo, a representative AI system, has defeated professional human Go champions, and autonomous vehicles from industry players such as Google and Baidu have begun on-road testing.
Professor Nils Nilsson of Stanford University defined artificial intelligence as "the discipline concerned with knowledge—the science of how to represent knowledge and how to acquire and use it" [10]. Professor Patrick Winston of MIT held that "artificial intelligence is the study of how to make computers do intelligent work that in the past only humans could do." These statements [10-11], together with many other interpretations of AI [12-14], reflect the basic ideas and content of the discipline: by studying the laws of human intelligent activity, AI seeks to construct artificial systems possessing a degree of intelligence, and to develop the basic theories, methods, and techniques for using computer hardware and software to simulate or replace certain human intelligent behaviors (such as learning, reasoning, thinking, planning, and control).
The development of AI technology has attracted wide attention and has been broadly applied in robotics and control systems, offering unprecedented opportunities to traditional manufacturing. The automotive industry, as one of the leading traditional manufacturing sectors, has combined intelligent technologies with its own development to launch the transformation of conventional vehicles toward intelligence. Intelligent vehicles take the automobile as a carrier and apply a series of advanced information and intelligence technologies (sensor-based perception, V2X networked communication, driving decision-making, etc.); they represent both an important direction for the industrialization of automotive technology and the mainstream trend of automotive innovation. China's Ministry of Industry and Information Technology defines the intelligent vehicle as a new generation of automobile that is equipped with advanced on-board sensors, controllers, actuators, and other devices; integrates modern communication and network technologies to exchange and share intelligent information between the vehicle and other vehicles, people, roads, and the cloud; possesses capabilities such as complex environment perception, intelligent decision-making, and cooperative control; achieves safe, efficient, comfortable, and energy-saving driving; and can ultimately replace human operation.
The intelligent technologies in an intelligent vehicle can be divided into three modules: the environment perception layer, the decision and planning layer, and the motion control layer. The environment perception layer uses environment sensors (cameras, LiDAR, millimeter-wave radar, ultrasonic radar, odometry, GPS, etc.) to perceive the driving environment, and uses vehicle state sensors (such as wheel-speed sensors) to perceive the vehicle's own state. After processing by intelligent models, it produces a description of the vehicle's surroundings (absolute position, lane lines, relative positions of nearby vehicles, pedestrian positions, types and positions of dynamic and static obstacles, behavior predictions, etc.). The decision and planning layer interprets this spatially and temporally independent, complementary, and redundant information according to the driving decision algorithm and, based on the environment information perceived in real time, decides executable driving commands and plans the driving trajectory. The motion control layer receives the driving commands from the decision and planning layer and controls the vehicle to run stably while guaranteeing control precision.
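The three-layer division above can be sketched as a minimal processing loop. All class and method names below are illustrative assumptions, not an actual autonomous-driving API; each stub stands in for the learned models and controllers a real stack would use:

```python
# Minimal sketch of the perception -> planning -> control loop described above.
# All names and thresholds are illustrative, not a real autonomous-driving API.
from dataclasses import dataclass
from typing import List


@dataclass
class Perception:
    """Environment perception layer: fuse raw sensor readings into a world model."""

    def sense(self, camera: List[float], radar: List[float]) -> dict:
        # Placeholder fusion: a real system runs detection/tracking networks here.
        return {
            "obstacles": [r for r in radar if r < 50.0],      # ranges within 50 m
            "lane_offset": sum(camera) / len(camera),          # crude lane estimate
        }


@dataclass
class Planner:
    """Decision and planning layer: turn the world model into a driving command."""

    def decide(self, world: dict) -> dict:
        # Stop if any obstacle is close; otherwise cruise and correct lane offset.
        target = 0.0 if world["obstacles"] else 20.0
        return {"target_speed": target, "steer": -world["lane_offset"]}


@dataclass
class Controller:
    """Motion control layer: track the command while keeping the vehicle stable."""

    speed: float = 0.0

    def actuate(self, cmd: dict, kp: float = 0.5) -> float:
        # Simple proportional step toward the target speed.
        self.speed += kp * (cmd["target_speed"] - self.speed)
        return self.speed


perception, planner, controller = Perception(), Planner(), Controller()
world = perception.sense(camera=[0.1, -0.1, 0.0], radar=[120.0, 80.0])
cmd = planner.decide(world)
print(controller.actuate(cmd))  # first control step toward 20 m/s: 10.0
```

The design point is the one made in the text: each layer exposes only its output interface (world model, command, actuation) to the next, so any layer can be replaced independently.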
Uncertainty arising from randomness and fuzziness is one of the most basic characteristics of human thinking. AI technologies, which simulate and study human thinking, therefore also exhibit uncertainty. As science and technology develop, researchers must handle more and more variables, the relationships among those variables grow ever more complex, and the required precision of system identification and inference keeps rising. Practice shows that complex systems are often difficult to treat precisely, which creates a contradiction between the demand for system precision and the intrinsic complexity of the problem: the higher the complexity, the lower the achievable meaningful precision. Because complexity means many interacting factors, solving such problems forces one to grasp the main parts and ignore the secondary ones, which often blurs concepts that were originally clear and thus introduces uncertainty.
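One standard, concrete way to quantify the uncertainty discussed above is the Shannon entropy of a classifier's output distribution. The probability vectors below are made up for illustration; they are not from any system described in this paper:

```python
# Sketch: measuring a classifier's predictive uncertainty with Shannon entropy.
# The probability vectors are invented for illustration only.
import math


def softmax_entropy(probs):
    """Entropy (in bits) of a categorical prediction; higher means more uncertain."""
    return -sum(p * math.log2(p) for p in probs if p > 0.0)


confident = [0.97, 0.01, 0.01, 0.01]   # e.g. clearly a pedestrian
ambiguous = [0.40, 0.35, 0.15, 0.10]   # e.g. pedestrian vs. cyclist in heavy rain

print(softmax_entropy(confident))  # low entropy: prediction is nearly certain
print(softmax_entropy(ambiguous))  # high entropy: downstream planning should hedge
```

A uniform distribution over four classes gives the maximum of 2 bits; a downstream planner can use such a score as a trigger for more conservative behavior.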
Safety problems caused by subjective human factors are usually addressed by governments through laws and regulations that guide and manage the healthy development of AI; this paper focuses on safety problems caused by objective technical issues. The safety of intelligent vehicles is attracting increasing public attention. The New Generation Artificial Intelligence Development Plan issued by China's State Council in 2017 states explicitly: "While vigorously developing artificial intelligence, we must attach great importance to the potential safety risks and challenges, strengthen forward-looking prevention and guidance, minimize risks, and ensure the safe, reliable, and controllable development of artificial intelligence" [15]. Against this background, research on the safety of the intended functionality (SOTIF) has emerged. SOTIF was first defined in ISO/PAS 21448 [16]; it concerns the hazards and risks caused by functional insufficiency or by reasonably foreseeable misuse. For example, in heavy rain or snow the sensors themselves may not fail, yet the question remains whether the intelligent vehicle can still drive as intended.
This paper surveys four aspects of intelligent vehicle research—environment perception algorithms, intelligent decision algorithms, the uncertainty of intelligent algorithms, and the safety problems that this uncertainty brings—in the hope of drawing the attention of relevant researchers and providing guidance.
A Survey: Artificial Intelligence and Its Security in Intelligent Vehicles
Abstract: With the development of artificial intelligence (AI) technology, intelligent systems such as intelligent driving vehicles and intelligent robots are gradually replacing or assisting humans in simple and complex work across various scenarios. Starting from the intelligent algorithms used in intelligent vehicles, this paper summarizes the research progress of AI perception algorithms and decision algorithms in intelligent vehicles, and then discusses the uncertainty of intelligent algorithms. Finally, from the perspective of the safety problems caused by this uncertainty, it discusses the significance and development of the safety of the intended functionality (SOTIF) and the necessity of human-machine co-driving for current intelligent driving vehicles to address SOTIF.
Key words:
- AI /
- human-computer co-driving /
- intelligent driving /
- SOTIF /
- statistical pattern recognition
[1] 常周林, 袁婷. 人工智能在智能机器人系统中的应用研究[J]. 科技创新导报, 2016, 13(23): 10. CHANG Zhou-lin, YUAN Ting. Application of artificial intelligence in intelligent robot system[J]. Science and Technology Innovation Herald, 2016, 13(23): 10.
[2] ZHANG Y, CHAN W, JAITLY N. Very deep convolutional networks for end-to-end speech recognition[C]//IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). [S.l.]: IEEE, 2017: 4845-4849.
[3] SAINATH T N, MOHAMED A R, KINGSBURY B, et al. Deep convolutional neural networks for LVCSR[C]//IEEE International Conference on Acoustics, Speech and Signal Processing. [S.l.]: IEEE, 2013: 8614-8618.
[4] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[C]//Advances in Neural Information Processing Systems. [S.l.]: [s.n.], 2012: 1097-1105.
[5] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[C]//The 3rd International Conference on Learning Representations. [S.l.]: [s.n.], 2015.
[6] HE Kai-ming, ZHANG Xiang-yu, REN Shao-qing, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. [S.l.]: IEEE, 2016: 770-778.
[7] SZEGEDY C, LIU Wei, JIA Yang-qing, et al. Going deeper with convolutions[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. [S.l.]: IEEE, 2015: 1-9.
[8] ZHANG Wei-nan, ZHU Qing-fu, WANG Yi-fa, et al. Neural personalized response generation as domain adaptation[J]. World Wide Web, 2019(4): 1427-1446.
[9] TAO Chong-yang, MOU Li-li, ZHAO Dong-yan, et al. RUBER: An unsupervised method for automatic evaluation of open-domain dialog systems[EB/OL]. [2018-11-12]. https://arxiv.org/abs/1701.03079.
[10] 志刚. 什么是人工智能[J]. 大众科学, 2018(1): 44-45. ZHI Gang. What is AI[J]. China Public Science, 2018(1): 44-45.
[11] 王家祺, 王赛. 人工智能技术的发展趋势探讨[J]. 通讯世界, 2017(16): 1006-4222. WANG Jia-qi, WANG Sai. Discussion on the development trend of artificial intelligence technology[J]. Telecom World, 2017(16): 1006-4222.
[12] 崔雍浩, 商聪, 陈锶奇. 人工智能综述: AI的发展[J]. 无线电通信技术, 2019, 45(3): 5-11. CUI Yong-hao, SHANG Cong, CHEN Si-qi. A review of artificial intelligence: The development of AI[J]. Radio Communication Technology, 2019, 45(3): 5-11.
[13] 韩晔彤. 人工智能技术发展及应用研究综述[J]. 电子制作, 2016, DOI: 10.3969/j.issn.1006-5059.2016.12.082. HAN Ye-tong. Development and application of artificial intelligence technology[J]. Electronic Production, 2016, DOI: 10.3969/j.issn.1006-5059.2016.12.082.
[14] 李开复, 王咏刚. 到底什么是人工智能[J]. 科学大观园, 2018(2): 48-49. LI Kai-fu, WANG Yong-gang. What is artificial intelligence[J]. Science Grand View Park, 2018(2): 48-49.
[15] 中国国务院. 新一代人工智能发展规划[J]. 科技导报, 2017(17): 113. State Council of China. New generation artificial intelligence development plan[J]. Science & Technology Review, 2017(17): 113.
[16] LI Bo. Road vehicles: Safety of the intended functionality[J]. China Auto, 2019(4): 20-22.
[17] DALAL N, TRIGGS B. Histograms of oriented gradients for human detection[C]//2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR). [S.l.]: IEEE, 2005: 886-893.
[18] LOWE D G. Distinctive image features from scale-invariant keypoints[J]. International Journal of Computer Vision, 2004(2): 91-110.
[19] BAY H, TUYTELAARS T, VAN GOOL L. SURF: Speeded up robust features[C]//European Conference on Computer Vision. [S.l.]: Springer, 2006: 404-417.
[20] RUBLEE E, RABAUD V, KONOLIGE K, et al. ORB: An efficient alternative to SIFT or SURF[C]//International Conference on Computer Vision. [S.l.]: IEEE, 2011: 2564-2571.
[21] CALONDER M, LEPETIT V, STRECHA C, et al. BRIEF: Binary robust independent elementary features[C]//European Conference on Computer Vision. [S.l.]: Springer, 2010: 778-792.
[22] ZOU Zheng-xia, SHI Zhen-wei, GUO Yu-hong, et al. Object detection in 20 years: A survey[EB/OL]. [2019-10-25]. https://arxiv.org/abs/1905.05055v2.
[23] GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. [S.l.]: IEEE, 2014: 580-587.
[24] SUYKENS J A K, VANDEWALLE J. Least squares support vector machine classifiers[J]. Neural Processing Letters, 1999(3): 293-300.
[25] GIRSHICK R. Fast R-CNN[C]//Proceedings of the IEEE International Conference on Computer Vision. [S.l.]: IEEE, 2015: 1440-1448.
[26] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: Unified, real-time object detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. [S.l.]: IEEE, 2016: 779-788.
[27] REDMON J, FARHADI A. YOLO9000: Better, faster, stronger[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. [S.l.]: IEEE, 2017: 7263-7271.
[28] REDMON J, FARHADI A. YOLOv3: An incremental improvement[R]. Washington: University of Washington, 2018.
[29] LIU W, ANGUELOV D, ERHAN D, et al. SSD: Single shot multibox detector[C]//European Conference on Computer Vision. [S.l.]: Springer, 2016: 21-37.
[30] KIM J, LEE M. Robust lane detection based on convolutional neural network and random sample consensus[C]//International Conference on Neural Information Processing. [S.l.]: Springer, 2014: 454-461.
[31] HUVAL B, WANG T, TANDON S, et al. An empirical evaluation of deep learning on highway driving[EB/OL]. [2019-11-10]. https://arxiv.org/abs/1504.01716.
[32] 方睿. 基于视觉的车道线检测技术综述[J]. 内江科技, 2018(7): 41-42. FANG Rui. Overview of vision based lane detection technology[J]. Neijiang Technology, 2018(7): 41-42.
[33] CHOUGULE S, KOZNEK N, ISMAIL A, et al. Reliable multilane detection and classification by utilizing CNN as a regression network[C]//Proceedings of the European Conference on Computer Vision. [S.l.]: Springer, 2018: 740-752.
[34] LEE S, KIM J, YOON J S, et al. VPGNet: Vanishing point guided network for lane and road marking detection and recognition[C]//Proceedings of the IEEE International Conference on Computer Vision. [S.l.]: IEEE, 2017: 1947-1955.
[35] PAN Xin-gang, SHI Jian-ping, LUO Ping, et al. Spatial as deep: Spatial CNN for traffic scene understanding[EB/OL]. [2019-11-27]. https://arxiv.org/abs/1712.06080.
[36] MA Chao, HUANG Jia-bin, YANG Xiao-kang, et al. Hierarchical convolutional features for visual tracking[C]//Proceedings of the IEEE International Conference on Computer Vision. [S.l.]: IEEE, 2015: 3074-3082.
[37] QI Yuan-kai, ZHANG Sheng-ping, QIN Lei, et al. Hedged deep tracking[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. [S.l.]: IEEE, 2016: 4303-4311.
[38] GLADH S, DANELLJAN M, KHAN F S, et al. Deep motion features for visual tracking[C]//The 23rd International Conference on Pattern Recognition. [S.l.]: IEEE, 2016: 1243-1248.
[39] CHOI J, CHANG H J, YUN S, et al. Attentional correlation filter network for adaptive visual tracking[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. [S.l.]: IEEE, 2017: 4807-4816.
[40] DAI S, LI L, LI Z. Modeling vehicle interactions via modified LSTM models for trajectory prediction[J]. IEEE Access, 2019, 7: 38287-38296.
[41] ZYNER A, WORRALL S, NEBOT E. A recurrent neural network solution for predicting driver intention at unsignalized intersections[J]. IEEE Robotics and Automation Letters, 2018(3): 1759-1764.
[42] ZYNER A, WORRALL S, WARD J, et al. Long short term memory for driver intent prediction[C]//2017 IEEE Intelligent Vehicles Symposium (IV). [S.l.]: IEEE, 2017: 1484-1489.
[43] PHILLIPS D J, WHEELER T A, KOCHENDERFER M J. Generalizable intention prediction of human drivers at intersections[C]//2017 IEEE Intelligent Vehicles Symposium (IV). [S.l.]: IEEE, 2017: 1665-1670.
[44] DING W C, SHEN S J. Online vehicle trajectory prediction using policy anticipation network and optimization-based context reasoning[C]//2019 International Conference on Robotics and Automation (ICRA). [S.l.]: IEEE, 2019: 9610-9616.
[45] DAI S Z, LI L, LI Z H. Modeling vehicle interactions via modified LSTM models for trajectory prediction[J]. IEEE Access, 2019(7): 38287-38296.
[46] LEE D, KWON Y P, MCMAINS S, et al. Convolution neural network-based lane change intention prediction of surrounding vehicles for ACC[C]//2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC). [S.l.]: IEEE, 2017: 1-6.
[47] CUI H G, RADOSAVLJEVIC V, CHOU F C, et al. Multimodal trajectory predictions for autonomous driving using deep convolutional networks[C]//2019 International Conference on Robotics and Automation (ICRA). [S.l.]: IEEE, 2019: 2090-2096.
[48] DJURIC N, RADOSAVLJEVIC V, CUI H G, et al. Motion prediction of traffic actors for autonomous driving using deep convolutional networks[EB/OL]. [2019-10-20]. https://arxiv.org/abs/1808.05819v1.
[49] SANDLER M, HOWARD A, ZHU M L, et al. MobileNetV2: Inverted residuals and linear bottlenecks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. [S.l.]: IEEE, 2018: 4510-4520.
[50] CASAS S, LUO W J, URTASUN R. IntentNet: Learning to predict intention from raw sensor data[C]//Conference on Robot Learning. [S.l.]: [s.n.], 2018: 947-956.
[51] SCHREIBER M, HOERMANN S, DIETMAYER K. Long-term occupancy grid prediction using recurrent neural networks[C]//2019 International Conference on Robotics and Automation (ICRA). [S.l.]: IEEE, 2019: 9299-9305.
[52] HUANG S D, DISSANAYAKE G. Convergence and consistency analysis for extended Kalman filter based SLAM[J]. IEEE Transactions on Robotics, 2007, 23(5): 1036-1049. doi: 10.1109/TRO.2007.903811.
[53] LI Jian, LI Qing, CHENG Nong. A combined visual-inertial navigation system of MSCKF and EKF-SLAM[C]//2018 IEEE CSAA Guidance, Navigation and Control Conference (CGNCC). [S.l.]: IEEE, 2018: 1-6.
[54] KLEIN G, MURRAY D. Parallel tracking and mapping for small AR workspaces[C]//2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality. [S.l.]: IEEE, 2007: 225-234.
[55] MUR-ARTAL R, MONTIEL J M M, TARDOS J D. ORB-SLAM: A versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics, 2015, 31(5): 1147-1163. doi: 10.1109/TRO.2015.2463671.
[56] ENGEL J, KOLTUN V, CREMERS D. Direct sparse odometry[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40(3): 611-625.
[57] ENGEL J, STURM J, CREMERS D. Semi-dense visual odometry for a monocular camera[C]//Proceedings of the IEEE International Conference on Computer Vision. [S.l.]: IEEE, 2013: 1449-1456.
[58] QIN T, LI P L, SHEN S J. VINS-Mono: A robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 2018, 34(4): 1004-1020. doi: 10.1109/TRO.2018.2853729.
[59] MUR-ARTAL R. ORB_SLAM open source[EB/OL]. [2019-11-10]. http://webdiis.unizar.es/~raulmur/orbslam/.
[60] LI T, MEI T, KWEON I S, et al. Contextual bag-of-words for visual categorization[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2010, 21(4): 381-392.
[61] CHEN Z T, LAM O, JACOBSON A, et al. Convolutional neural network-based place recognition[EB/OL]. [2019-11-22]. https://arxiv.org/abs/1411.1509.
[62] ARANDJELOVIC R, GRONAT P, TORII A, et al. NetVLAD: CNN architecture for weakly supervised place recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. [S.l.]: IEEE, 2016: 5297-5307.
[63] KIM H J, DUNN E, FRAHM J M. Learned contextual feature reweighting for image geo-localization[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). [S.l.]: IEEE, 2017: 3251-3260.
[64] ZHU Y Y, WANG J, XIE L X, et al. Attention-based pyramid aggregation network for visual place recognition[C]//Proceedings of the 26th ACM International Conference on Multimedia. [S.l.]: ACM, 2018: 99-107.
[65] LIU L, LI H D, DAI Y C. Deep stochastic attraction and repulsion embedding for image based localization[EB/OL]. [2019-11-12]. https://arxiv.org/abs/1808.08779v2.
[66] LOWRY S, SUNDERHAUF N, NEWMAN P, et al. Visual place recognition: A survey[J]. IEEE Transactions on Robotics, 2015, 32(1): 1-19.
[67] PROKHOROV D. Computational intelligence in automotive applications[M]. [S.l.]: Springer, 2008.
[68] BALAKIRSKY S B. A framework for planning with incrementally created graphs in attributed problem spaces[M]. [S.l.]: IOS Press, 2003.
[69] KUMAR P, PERROLLAZ M, LEFEVRE S, et al. Learning-based approach for online lane change intention prediction[C]//2013 IEEE Intelligent Vehicles Symposium (IV). [S.l.]: IEEE, 2013: 797-802.
[70] YAMADA K, HIROSHI M, KAZUHIRO U. A method for analyzing interaction of driver intention through vehicle behavior when merging[C]//2014 IEEE Intelligent Vehicles Symposium Proceedings. [S.l.]: IEEE, 2014: 158-163.
[71] NISHIWAKI Y, MIYAJIMA C, KITAOKA N, et al. Generating lane-change trajectories of individual drivers[C]//2008 IEEE International Conference on Vehicular Electronics and Safety. [S.l.]: IEEE, 2008: 271-275.
[72] 袁盛玥. 自动驾驶车辆城区道路环境换道行为决策方法研究[D]. 北京: 北京理工大学, 2016. YUAN Sheng-yue. Research on lane-change decision-making of autonomous vehicles in urban road environments[D]. Beijing: Beijing Institute of Technology, 2016.
[73] HAWKING S. A brief history of time: From big bang to black holes[M]. [S.l.]: Random House, 2009.
[74] WANG Yan-peng, HAN Tao, WANG Xue-zhao. The development trend of artificial intelligence in the group 20[J]. Science Focus, 2019, 14(1): 20-32.
[75] 张昕. 人工智能中的不确定性问题研究[D]. 湖南: 国防科学技术大学, 2012. ZHANG Xin. Research on uncertainty in artificial intelligence[D]. Hunan: National University of Defense Technology, 2012.
[76] 骆清铭. 脑空间信息学—连接脑科学与类脑人工智能的桥梁[J]. 中国科学: 生命科学, 2017(47): 1015-1024. LUO Qing-ming. Brain spatial informatics: A bridge between brain science and brain-like artificial intelligence[J]. Scientia Sinica Vitae, 2017(47): 1015-1024.
[77] VERDU S. Fifty years of Shannon theory[J]. IEEE Transactions on Information Theory, 1998, 44(6): 2057-2078. doi: 10.1109/18.720531.
[78] WOLD S, ESBENSEN K, GELADI P. Principal component analysis[J]. Chemometrics and Intelligent Laboratory Systems, 1987(2): 37-52.
[79] MAURER M. EMS-vision: Knowledge representation for flexible automation of land vehicles[C]//Proceedings of the IEEE Intelligent Vehicles Symposium. [S.l.]: IEEE, 2000: 575-580.
[80] GEYER S, MARCEL B, BENJAMIN F, et al. Concept and development of a unified ontology for generating test and use-case catalogues for assisted and automated vehicle guidance[J]. IET Intelligent Transport Systems, 2013, 8(3): 183-189.
[81] THOMASON M G, GONZALEZ R C. Data structures and databases in digital scene analysis[C]//Advances in Information Systems Science. Boston: Springer, 1985: 1-47.
[82] ULBRICH S, MENZEL T, RESCHKA A, et al. Defining and substantiating the terms scene, situation, and scenario for automated driving[C]//2015 IEEE 18th International Conference on Intelligent Transportation Systems. [S.l.]: IEEE, 2015: 982-988.
[83] 毛向阳, 尚世亮, 崔海峰. 自动驾驶汽车安全影响因素分析与应对措施研究[J]. 上海汽车, 2018(1): 33-37. doi: 10.3969/j.issn.1007-4554.2018.01.08. MAO Xiang-yang, SHANG Shi-liang, CUI Hai-feng. Auto driving vehicle safety impact factors analysis and countermeasures[J]. Shanghai Motor, 2018(1): 33-37. doi: 10.3969/j.issn.1007-4554.2018.01.08.
[84] MIRKO C. Automated driving: Challenges in the interplay between functional safety and safety of the intended functionality[M]. Berlin: [s.n.], 2018.
[85] BIRCH J. Safety argument framework for highly automated vehicles[EB/OL]. [2019-09-12]. http://safety.addalot.se/upload/2017/2-6-2%20JohnBirch.pdf.
[86] SUSANNE E. Bosch case study: Application of SOTIF for ADAS[EB/OL]. [2019-09-15]. https://www.automotive-iq.com/events-sotif-conference-usa/downloads/partner-content-robert-bosch-a-case-study-application-of-sotif-for-adas.
[87] ELROFAI H, PAARDEKOOPER J P, ERWIN D G, et al. Scenario-based safety validation of connected and automated driving[EB/OL]. [2019-11-10]. https://publications.tno.nl/publication/34626550/AyT8Zc/TNO-2018-streetwise.pdf.
[88] LITTLEWOOD B, WRIGHT D. Some conservative stopping rules for the operational testing of safety-critical software[J]. IEEE Transactions on Software Engineering, 1997, 23(11): 673-683. doi: 10.1109/32.637384.
[89] 车天伟, 马建峰, 王超, 等. 基于安全熵的多级访问控制模型量化分析方法[J]. 华东师范大学学报 (自然科学版), 2015(1): 172. CHE Tian-wei, MA Jian-feng, WANG Chao, et al. Quantitative analysis method of multilevel access control model based on security entropy[J]. Journal of East China Normal University (Natural Science Edition), 2015(1): 172.
[90] FRIDMAN L. Human-centered autonomous vehicle systems: Principles of effective shared autonomy[EB/OL]. [2019-12-10]. https://arxiv.org/abs/1810.01835.
[91] 胡云峰, 曲婷, 刘俊, 等. 智能汽车人机协同控制的研究现状与展望[J]. 自动化学报, 2019(7): 1261-1280. HU Yun-feng, QU Ting, LIU Jun, et al. Research status and prospect of human-machine cooperative control of intelligent vehicles[J]. Acta Automatica Sinica, 2019(7): 1261-1280.