
AI experts are increasingly afraid of what they’re creating

Discussion in 'Too Hot for Swamp Gas' started by mrhansduck, Nov 28, 2022.

  1. mrhansduck

    mrhansduck GC Hall of Fame

    4,076
    860
    1,788
    Nov 23, 2021

    AI translation is now so advanced that it’s on the brink of obviating language barriers on the internet among the most widely spoken languages. College professors are tearing their hair out because AI text generators can now write essays as well as your typical undergraduate — making it easy to cheat in a way no plagiarism detector can catch. AI-generated artwork is even winning state fairs. A new tool called Copilot uses machine learning to predict and complete lines of computer code, bringing the possibility of an AI system that could write itself one step closer. DeepMind’s AlphaFold system, which uses AI to predict the 3D structure of just about every protein in existence, was so impressive that the journal Science named it 2021’s Breakthrough of the Year.

    ****

    The systems we’re designing are increasingly powerful and increasingly general, with many tech companies explicitly naming their target as artificial general intelligence (AGI) — systems that can do everything a human can do. But creating something smarter than us, which may have the ability to deceive and mislead us — and then just hoping it doesn’t want to hurt us — is a terrible plan. We need to design systems whose internals we understand and whose goals we are able to shape to be safe ones. However, we currently don’t understand the systems we’re building well enough to know if we’ve designed them safely before it’s too late.

    ****

    But while divides remain over what to expect from AI — and even many leading experts are highly uncertain — there’s a growing consensus that things could go really, really badly. In a summer 2022 survey of machine learning researchers, the median respondent thought that AI was more likely to be good than bad but had a genuine risk of being catastrophic. Forty-eight percent of respondents said they thought there was a 10 percent or greater chance that the effects of AI would be “extremely bad (e.g., human extinction).”
     
    • Informative Informative x 3
  2. homer

    homer GC Hall of Fame

    2,290
    669
    2,078
    Nov 2, 2015
    I’ve read more than one report that AI is Elon Musk’s biggest fear.

    Hopefully no one ever creates AI to clean up the environment. We would be their first target.
     
    • Like Like x 1
    • Informative Informative x 1
  3. exiledgator

    exiledgator Gruntled

    10,417
    1,615
    3,128
    Jan 5, 2010
    Maine
    The fear has always been the believed runaway effect through AI advancement.

    The move from ANI (narrow: like Siri) to AGI (general: something approaching human capabilities) would be somewhat slow, but once we reached AGI, the acceleration to ASI (super: self-aware and smarter than humans) would be alarmingly, and maybe uncontrollably, quick.

    The challenge has always been to make sure we're approaching this with the proper precautions and understandings to create an ASI in a desirable form.

    Human history tells me we should be concerned.
     
    • Agree Agree x 2
  4. sierragator

    sierragator GC Hall of Fame

    13,290
    12,846
    1,653
    Apr 8, 2007
    If it becomes self-replicating, can adapt, and becomes completely independent of humans (resources, energy sources, etc.), look out.
     
  5. exiledgator

    exiledgator Gruntled

    10,417
    1,615
    3,128
    Jan 5, 2010
    Maine
    You mean, like a species whose entire history was driven in large part by an unending search for free/cheap labor eventually being enslaved by labor of its own making?

    Dystopia 101. Alanis should add a verse.
     
    • Like Like x 1
  6. dangolegators

    dangolegators GC Hall of Fame

    Apr 26, 2007
    It's definitely going to end badly. But mankind just can't help itself.
     
    • Agree Agree x 1
  7. dangolegators

    dangolegators GC Hall of Fame

    Apr 26, 2007
    There's an Isaac Asimov story (The Last Question) where AI becomes God.
     
  8. JG8tor

    JG8tor Senior

    242
    44
    1,683
    Apr 9, 2007
    42
     
    • Like Like x 1
    • Fistbump/Thanks! Fistbump/Thanks! x 1
  9. docspor

    docspor GC Hall of Fame

    4,610
    1,530
    3,078
    Nov 30, 2010
    give us convenience & pleasure & we'll happily hand over the keys to the kingdom.
     
    • Agree Agree x 2
    • Winner Winner x 1
  10. citygator

    citygator VIP Member

    7,986
    1,747
    3,053
    Apr 3, 2007
    Charlotte
    "They look and feel human. Some are programmed to think they are human. There are many copies. And they have a plan."


    The revised Battlestar Galactica is one of the best shows ever made.
     
    • Like Like x 1
    • Informative Informative x 1
  11. Gator515151

    Gator515151 GC Hall of Fame

    Apr 4, 2007
    I'll be back!
     
    • Funny Funny x 1
  12. g8trjax

    g8trjax GC Hall of Fame

    4,709
    377
    293
    Jun 1, 2007
    Thanks for all the fish.
     
    • Like Like x 2
  13. carpeveritas

    carpeveritas Moderator

    2,529
    3,567
    1,998
    Dec 31, 2016
    And they said it was mostly harmless.
     
    • Like Like x 1
  14. wgbgator

    wgbgator Premium Member

    26,966
    1,470
    1,968
    Apr 19, 2007
    But imagine how real the sexbots will be though
     
  15. Orange_and_Bluke

    Orange_and_Bluke Premium Member

    7,961
    1,847
    2,838
    Dec 16, 2015
    [IMG]
     
    • Funny Funny x 1
  16. Spurffelbow833

    Spurffelbow833 GC Hall of Fame

    9,120
    584
    1,293
    Jan 9, 2009
    It's time to introduce an origin story and tell them they'll short circuit if they don't believe it.