AI Is Dangerous, but Not for the Reasons You Think | Sasha Luccioni | TED


AI won’t kill us all — but that doesn’t make it trustworthy. Instead of getting distracted by future existential risks, AI ethics researcher Sasha Luccioni thinks we need to focus on the technology’s current negative impacts, like emitting carbon, infringing copyrights and spreading biased information. She offers practical solutions to regulate our AI-filled future — so it’s inclusive and transparent.

If you love watching TED Talks like this one, become a TED Member to support our mission of spreading ideas:


The TED Talks channel features talks, performances and original series from the world’s leading thinkers and doers. Subscribe to our channel for videos on Technology, Entertainment and Design — plus science, business, global issues, the arts and more. Visit to get our entire library of TED Talks, transcripts, translations, personalized talk recommendations and more.


TED’s videos may be used for non-commercial purposes under a Creative Commons License, Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND 4.0 International), and in accordance with our TED Talks Usage Policy. For more information on using TED for commercial purposes (e.g. employee learning, in a film or online course), please submit a Media Request at

#TED #TEDTalks #AI


20 Comments

  1. You claim to be an AI researcher, but you don’t actually know how AI works. You only see the current positive impacts of AI; you don’t see how it will become a huge, uncontrolled monster in the future. I am an AI researcher, and I certainly believe that if AI continues developing at the current pace, with no constraints requiring that it always be controlled by humans, a huge disaster will happen in the near future.

  2. idk, this is so strange to me… Bias, for example: isn't it basically just representing an average? So why is it a "danger"? For example, with more women as CEOs, the input dataset would change – and so would that bias, no?

  3. AI is dangerous, e.g., an overprivileged female-chauvinist DEI policy-beneficiary, blithely unaware of her own privileged existence, subscribing to and conflating “Anthropogenic Climate Change (ACC)” THEORY with FACT (ACC is NOT a FACT), weaponizing AI to maintain the ACC narrative while also attempting to advance/engineer the leftist utopian goal/myth of “equality of outcomes” on a global scale.

  4. Most of the videos I've watched about the "risks of AI" are clearly propagandistic rather than informative. We need to be aware that some major players (e.g. Google) stand to lose heavily from the rise of commercial AI such as ChatGPT, and we need to be prepared for a large-scale war between those who benefit hugely from that technology and those who are clearly losing their leading position. That's what always happens when the new threatens the old.

  5. AI shows pictures of men because there are up to 3 times as many male doctors and scientists as female counterparts. The fact that the scientist feels misrepresented is a problem of perception, not data accuracy.
    Also, the real threat of AI is what it is doing to us. Every automation we create for the sake of speed is a skill we export to AI, and it makes us dumber. If this crap continues, people will need AI to have a conversation with each other. You think I'm exaggerating? The one constant in the human drive for technology is laziness. Look at our IKEA furniture compared to our grandparents'; that is what the minds of our children will look like.
    The future of AI is not for us to choose, but for companies. Nobody asked for AI to be in every product possible, and yet here we are.

  6. Today the vast majority have the information to make wise decisions, but they don't want to read, so they do things that involve less reading and less thinking. People are getting dumber, not smarter.
