Title: The Criminal Legal Evaluation of Autonomous Artificial Intelligence Accidents

Chi-Hsuan Wang

Abstract (Chinese)

Autonomous artificial intelligence machines can receive information or data on their own, make decisions through algorithms, and control the machine's behavior without relying on human assistance; this is a technological advance. However, humans cannot predict the results of such computation, and it is difficult to ascertain, after the fact, the reasoning behind it, so the potential risks increase accordingly. When the operation of autonomous AI involves a criminal wrong, the discussion usually focuses on the conduct of the developers rather than the users, because how the AI judges or acts follows the developers' design, especially that of the programmers. The current majority view appears to adopt the perspective of allowed risk: weighing AI's contribution to the overall interests of human society, it holds that the risks of autonomous AI should be tolerated, and that even if an accident occurs, causation in the normative sense may be absent. However, at the present stage, when AI is still in its infancy, a more appropriate solution for autonomous AI accidents involving criminal wrongs may be to negate unlawfulness through "acts performed in accordance with law." The concept of allowed risk is highly uncertain, and the vast majority of people, including judicial practitioners, likely have only a limited understanding of AI; frankly, a standard for balancing interests grounded in accumulated social experience has probably not yet formed. Instead, we should demand that legislators construct legal norms for AI as soon as possible, so that people have a legal basis for their conduct in developing or using AI. If an act of development or use complies with AI-related legislation, then, based on the unity of the legal order, it is not unlawful even though it fulfills the elements of an offense.


The Criminal Legal Evaluation of Autonomous Artificial Intelligence Accidents

Chi-Hsuan Wang

Abstract (English)

Autonomous artificial intelligence (Autonomous AI) machines are a great advancement in human technology. Autonomous AI can receive messages or data on its own, make decisions based on algorithms, and control the behavior of the machine without any human assistance. However, humans currently cannot grasp the computational process and results of Autonomous AI, so the related risks increase. When Autonomous AI raises criminal law issues, the focus of the discussion should mainly be the behavior of R&D (research and development) personnel, not the user, because the judgment or action of Autonomous AI is based on the design of the R&D personnel, especially the programmer. To solve this problem, the majority opinion applies the legal concept of "allowed risk": we must weigh the development of Autonomous AI against the overall interests of human society. In other words, if Autonomous AI that promotes the progress of human society causes an accident, this risk can be considered tolerated, so the relevant behavior will not constitute a crime.