AI Is Dangerous, but Not for the Reasons You Think
Sasha Luccioni
Link: https://www.youtube.com/watch?v=eXdVDhOGqoE
Analysis:
This talk was given by an AI researcher who has spent over a decade working on AI projects. The speaker's goal is to shift the focus away from hypothetical future risks of AI and toward its current, tangible impacts on society and the environment.
The speaker opens with a personal anecdote about receiving an email that blamed their work in AI for humanity's impending demise. They acknowledge that AI has attracted significant attention recently, but argue that this attention is often misdirected toward hypothetical existential risks rather than AI's actual, present-day impacts.
The speaker highlights several areas where AI has significant impacts on society and the environment:
1. **Sustainability**: AI models consume vast amounts of energy and contribute to climate change.
2. **Consent and copyright**: The use of artwork, music, or text in AI training data without consent is a growing concern.
3. **Bias**: AI models can perpetuate stereotypes and biases against marginalized groups.
The speaker shares their own work on tools for measuring the environmental impact of AI, such as CodeCarbon, which estimates energy consumption and carbon emissions during AI model training. They also highlight “Have I Been Trained?”, a tool created by Spawning.ai that lets artists and authors search large training datasets for their own work.
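At its core, an emissions estimate of the kind CodeCarbon produces is simple arithmetic: measured hardware power draw, multiplied by runtime, multiplied by the carbon intensity of the local electricity grid. A minimal, self-contained sketch of that calculation (the power and grid-intensity figures below are illustrative assumptions, not CodeCarbon's actual measurements):

```python
def estimate_emissions_kg(power_watts: float, hours: float,
                          grid_kg_co2_per_kwh: float) -> float:
    """Rough training-emissions estimate: energy (kWh) x grid carbon intensity."""
    energy_kwh = power_watts * hours / 1000.0  # watts * hours -> kilowatt-hours
    return energy_kwh * grid_kg_co2_per_kwh

# Illustrative numbers only: a single 300 W GPU running for 24 hours
# on a grid averaging 0.4 kg CO2 per kWh (both figures are assumptions).
print(round(estimate_emissions_kg(300, 24, 0.4), 2))  # 2.88 kg CO2
```

Real tools refine each factor, e.g. sampling actual GPU/CPU power over time and looking up regional grid intensity, but the structure of the estimate is the same.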
The speaker emphasizes that while it’s impossible to know exactly what the future holds for AI, there are steps we can take today to mitigate its current impacts:
1. **Measuring AI’s impact**: Create tools like CodeCarbon to understand energy consumption and emissions.
2. **Transparency and consent**: Develop opt-in and opt-out mechanisms for using artwork, music, or text in AI training data.
3. **Addressing bias**: Use tools like the Stable Bias Explorer to better understand and address biases in image generation models.
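The transparency-and-consent point above amounts to making training datasets searchable so creators can find, and opt out of, uses of their work. A toy sketch of the idea, using an exact content-hash lookup (a hypothetical simplification; services like “Have I Been Trained?” index real datasets and use far more robust matching than this):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Content hash used here as a stand-in for a perceptual fingerprint."""
    return hashlib.sha256(data).hexdigest()

# Toy "training dataset" index: fingerprints of works already scraped.
dataset_index = {
    fingerprint(b"painting-by-artist-a"),
    fingerprint(b"novel-excerpt-by-author-b"),
}

def was_my_work_used(my_work: bytes) -> bool:
    """Exact-match lookup; real systems need fuzzy/perceptual matching."""
    return fingerprint(my_work) in dataset_index

print(was_my_work_used(b"painting-by-artist-a"))  # True
print(was_my_work_used(b"unpublished-sketch"))    # False
```

An opt-out mechanism then reduces to removing matched fingerprints from the index before the next training run; the hard part in practice is matching works that have been resized, cropped, or re-encoded, which exact hashing cannot do.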
The speaker concludes by emphasizing that collective action is needed to shape the direction of AI development and deployment. By focusing on current impacts rather than hypothetical future risks, we can create a more responsible and sustainable path forward for AI research and innovation.
Overall, this speech highlights the importance of considering the actual consequences of AI in its current state, rather than solely focusing on speculative existential risks. The speaker emphasizes that by working together to address these tangible impacts, we can build a more trustworthy and beneficial relationship between humans and AI.
