TrivDaily

                          Friendly AI chatbots will be designing bioweapons for criminals ‘within years’

                          By Ferhan Rana
                          July 31, 2023, in Technology

                          AI systems are rapidly improving and will accelerate scientific discoveries – but the technology could also give criminals the power to create bioweapons and dangerous viruses in as little as two to three years, according to Anthropic CEO Dario Amodei.

                          Anthropic, founded by former OpenAI employees, prides itself on being safety-oriented and is best known for its large language model (LLM) chatbot Claude. Over the past six months the startup has reportedly been working with biosecurity experts to study how neural networks could be used to create weapons in the future.

                          On Tuesday the head of the AI biz warned a US Senate technology subcommittee that regulation is desperately needed to tackle the misuse of powerful models for harmful purposes in science and engineering, such as cyber security, nuclear technology, chemistry, and biology.

                          “Whatever we do, it has to happen fast. And I think to focus people’s minds on the biorisks, I would really target 2025, 2026, maybe even some chance of 2024. If we don’t have things in place that are restraining what can be done with AI systems, we’re going to have a really bad time,” he testified at the hearing.

                          “Today, certain steps in the use of biology to create harm involve knowledge that cannot be found on Google or in textbooks and requires a high level of specialized expertise,” Amodei said in his opening statement to the senators.

                          “The question we and our collaborators studied is whether current AI systems are capable of filling in some of the more difficult steps in these production processes. We found that today’s AI systems can fill in some of these steps – but incompletely and unreliably. They are showing the first, nascent signs of risk.

                          “However, a straightforward extrapolation of today’s systems to those we expect to see in two to three years suggests a substantial risk that AI systems will be able to fill in all the missing pieces, if appropriate guardrails and mitigations are not put in place. This could greatly widen the range of actors with the technical capability to conduct a large-scale biological attack.”

                          You can see where he’s coming from. Though the fundamental principles of modern nuclear weapons are publicly known and documented, actually engineering the devices – from producing the fuel and other materials at the heart of them, to designing the conventional explosives that trigger them, to miniaturizing them – is difficult and some of the steps remain highly classified. The same goes for biological weapons: there are steps that relatively few people know, and there is a danger a future ML model will be able to fill in those gaps for a wider audience.

                          Although the timescale seems dramatic, it’s not so far-fetched. Folks have taken to asking chatbots for instructions on how to create weapons such as pipe bombs and napalm, as well as for drug recipes and other nefarious topics. The bots are supposed to have guardrails that prevent them from revealing that kind of information – much of which, admittedly, can be found through web searches or libraries. However, there is a realistic risk that chatbots make that sensitive info more easily accessible or understandable to curious netizens.

                          These models are trained on large amounts of text, including papers from scientific journals and textbooks. As they become more advanced, they could get better at gleaning insights from today’s knowledge to come up with discoveries – even dangerous ones – or provide answers that until now have been kept tightly under wraps for security reasons.

                          Collaborations Pharmaceuticals, based in North Carolina, previously raised concerns that the same technology used to develop drugs could also be repurposed to create biochemical weapons.

                          LLMs therefore pose a potential threat to national security, as foreign adversaries or terrorists could use this knowledge to carry out large-scale attacks. Bear in mind, though, it’s just information – actually obtaining the material, handling it, and processing it to pull off an assault would be tricky.

                          • AI drug algorithms can be flipped to invent bioweapons
                          • Top AI execs tell US Senate: Please, please pour that regulation down on us
                          • If AI drives humans to extinction, it’ll be our fault
                          • OpenAI is still banging on about defeating rogue superhuman intelligence

                          The dangers are further heightened by the release of open source models that are becoming more and more powerful. Senator Richard Blumenthal (D-CT) noted that a group of developers had used the code for Stability AI’s Stable Diffusion models to create a text-to-image system tailored to generating sexual abuse material, for example.

                          Let’s hear from one of the granddaddies

                          Yoshua Bengio, a pioneering researcher in neural networks and the scientific director of the Montreal Institute for Learning Algorithms, agreed. Bengio is often named as one of the three “Godfathers of AI” alongside Geoff Hinton, a computer science professor at the University of Toronto, and Yann LeCun, chief AI scientist at Meta.

                          He urged lawmakers to pass legislation moderating the capabilities of AI models before they can be released more widely to the public.

                          “I think it’s really important because if we put something out there that is open source and can be dangerous – which is a tiny minority of all the code that is open source – essentially we’re opening all the doors to bad actors,” Bengio said during the hearing. “As these systems become more capable, bad actors don’t need to have very strong expertise, whether it’s in bioweapons or cyber security, in order to take advantage of systems like this.”

                          “I think it’s really important that the government come up with some definition, which is going to keep moving, but makes sure that future releases are going to be carefully evaluated for that potential before they are released,” he declared.

                          “I’ve been a staunch advocate of open source for all my scientific career. Open source is great for scientific progress, but as Geoff Hinton, my colleague, was saying: if nuclear bombs were software, would you allow open source of nuclear bombs?”

                          “When you control a model that you’re deploying, you have the ability to monitor its usage,” Amodei said. “It might be misused at one point, but then you can alter the model, you can revoke a user’s access, you can change what the model is willing to do. When a model is released in an uncontrolled manner, there’s no ability to do that. It’s entirely out of your hands.”
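Amodei’s contrast between a hosted model and openly released weights comes down to whether a serving chokepoint exists. As a purely illustrative sketch – the class and method names below are hypothetical, not any real provider’s API – a hosted deployment can screen prompts and revoke a user after misuse is spotted, while publishing the weights removes both levers:

```python
# Hypothetical sketch of the control point a hosted deployment retains.
# Nothing here corresponds to a real provider's API.

class HostedModelGateway:
    """Chokepoint an operator keeps between users and a model it serves."""

    def __init__(self, model, blocked_topics):
        self.model = model                    # callable: prompt -> completion
        self.blocked_topics = set(blocked_topics)
        self.revoked_users = set()

    def revoke(self, user_id):
        # Possible only while the operator controls serving;
        # once the weights are public, there is nothing left to revoke.
        self.revoked_users.add(user_id)

    def complete(self, user_id, prompt):
        if user_id in self.revoked_users:
            return "[access revoked]"
        if any(topic in prompt.lower() for topic in self.blocked_topics):
            return "[refused by policy filter]"
        return self.model(prompt)


gateway = HostedModelGateway(model=lambda p: "OK: " + p,
                             blocked_topics={"pathogen"})
print(gateway.complete("alice", "Summarise this paper"))      # served
print(gateway.complete("alice", "pathogen synthesis steps"))  # refused
gateway.revoke("alice")
print(gateway.complete("alice", "Summarise this paper"))      # revoked
```

The design point is that both the filter and the revocation list can be updated after deployment – exactly the after-the-fact control Amodei says disappears once a model is released in an uncontrolled manner.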

                          Although companies like Meta have tried to limit the potential risks of their systems and prohibit developers from using them in harmful ways, licence terms alone are not a very effective method for preventing misuse. Who is responsible if something goes wrong?

                          “It’s not completely clear where the liability should lie,” said Stuart Russell, a professor of computer science at the University of California, Berkeley, who also testified at the hearing.

                          “To continue the nuclear analogy, if a corporation decided they wanted to sell a lot of enriched uranium in supermarkets, and someone decided to take that enriched uranium and buy several pounds of it and make a bomb, wouldn’t we say that some liability resides with the company that decided to sell the enriched uranium?

                          “They could put advice on it that says ‘do not use more than three ounces of this in one place’ or something, but no one is going to say that absolves them from liability … The open source community has got to start thinking about whether they should be liable for putting stuff out there that is ripe for misuse.”

                          Leaders in the open source AI community, however, seem to disagree. On Wednesday, a report backed by GitHub, Hugging Face, Eleuther AI and others argued that open source AI projects should not be subjected to the same regulatory scrutiny outlined in the EU AI Act as products and services built by private companies.

                          You can watch a replay of the hearing here. ®


                          He urged lawmakers to pass legislation moderating the capabilities of AI models before they can be released more widely to the public.

                          “I think it’s really important because if we put something out there that is open source and can be dangerous – which is a tiny minority of all the code that is open source – essentially we’re opening all the doors to bad actors,” Bengio said during the hearing. “As these systems become more capable, bad actors don’t need to have very strong expertise, whether it’s in bioweapons or cyber security, in order to take advantage of systems like this.”

                          “I think it’s really important that the government come up with some definition, which is going to keep moving, but makes sure that future releases are going to be carefully evaluated for that potential before they are released,” he declared.

                          “I’ve been a staunch advocate of open source for all my scientific career. Open source is great for scientific progress, but as Geoff Hinton, my colleague, was saying: if nuclear bombs were software, would you allow open source of nuclear bombs?”

                          “When you control a model that you’re deploying, you have the ability to monitor its usage,” Amodei said. “It might be misused at one point, but then you can alter the model, you can revoke a user’s access, you can change what the model is willing to do. When a model is released in an uncontrolled manner, there’s no ability to do that. It’s entirely out of your hands.”

                          Although companies like Meta have tried to limit the potential risks of their systems, and prohibit developers from using them in harmful ways, it’s not a very effective method for preventing misuse. Who is responsible if something goes wrong?

                          “It’s not completely clear where the liability should lie,” said Stuart Russell, a professor of computer science at the University of California, Berkeley, who also testified at the hearing.

                          “To continue the nuclear analogy, if a corporation decided they wanted to sell a lot of enriched uranium in supermarkets, and someone decided to take that enriched uranium and buy several pounds of it and make a bomb, wouldn’t we say that some liability resides with the company that decided to sell the enriched uranium?

                          “They could put advice on it that says ‘do not use more than three ounces of this in one place or something’, but no one is going to say that absolved them from liability … The open source community has got to start thinking whether they should be liable for putting stuff out there is ripe for misuse.”

                          Leaders in the open source AI community, however, seem to disagree. On Wednesday, a report backed by GitHub, Hugging Face, Eleuther AI and others argued that open source AI projects should not be subjected to the same regulatory scrutiny outlined in the EU AI Act as products and services built by private companies.

                          You can watch a replay of the hearing here. ®

                          ">

                          AI systems are rapidly improving and will accelerate scientific discoveries – but the technology could also give criminals the power to create bioweapons and dangerous viruses in as little as two to three years, according to Anthropic CEO Dario Amodei.

                          Anthropic, founded by former OpenAI employees, prides itself on being safety-oriented and is best known for its large language model (LLM) chatbot Claude. Over the past six months the startup has reportedly been working with biosecurity experts to study how neural networks could be used to create weapons in the future.

On Tuesday the head of the AI biz warned a US Senate technology subcommittee that regulation is desperately needed to tackle the misuse of powerful models for harmful purposes in science and engineering, such as cyber security, nuclear technology, chemistry, and biology.

                          “Whatever we do, it has to happen fast. And I think to focus people’s minds on the biorisks, I would really target 2025, 2026, maybe even some chance of 2024. If we don’t have things in place that are restraining what can be done with AI systems, we’re going to have a really bad time,” he testified at the hearing on Tuesday.

                          “Today, certain steps in the use of biology to create harm involve knowledge that cannot be found on Google or in textbooks and requires a high level of specialized expertise,” Amodei said in his opening statement to the senators.

                          “The question we and our collaborators studied is whether current AI systems are capable of filling in some of the more difficult steps in these production processes. We found that today’s AI systems can fill in some of these steps – but incompletely and unreliably. They are showing the first, nascent signs of risk.

                          “However, a straightforward extrapolation of today’s systems to those we expect to see in two to three years suggests a substantial risk that AI systems will be able to fill in all the missing pieces, if appropriate guardrails and mitigations are not put in place. This could greatly widen the range of actors with the technical capability to conduct a large-scale biological attack.”

You can see where he’s coming from. Though the fundamental principles of modern nuclear weapons are publicly known and documented, actually engineering the devices – from producing the fuel and other materials at the heart of them, to designing the conventional explosives that trigger them, to miniaturizing them – is difficult, and some of the steps remain highly classified. The same goes for biological weapons: there are steps that relatively few people know, and there is a danger that a future ML model will be able to fill in those gaps for a wider audience.

Although the timescale seems dramatic, it’s not so far-fetched. Folks have taken to asking chatbots for instructions on creating weapons such as pipe bombs and napalm, as well as for drug recipes and other nefarious material. The bots are supposed to have guardrails that prevent them from revealing that kind of information – a lot of which can be found through web searches or libraries, admittedly. However, there is a realistic risk that chatbots can make that sensitive info more easily accessible or understandable for curious netizens.
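For a sense of what a guardrail is at its crudest, here is a minimal sketch of a denylist-style refusal filter wrapped around a model call. This is purely illustrative: production systems rely on trained safety classifiers and policy-tuned models rather than keyword lists, and every name below (`DENIED_TOPICS`, `guarded_reply`, the stand-in model) is hypothetical.

```python
# Illustrative denylist guardrail: refuse prompts matching known-bad topics,
# otherwise pass the prompt through to the underlying model.
# Hypothetical sketch; real guardrails use trained classifiers, not keywords.

DENIED_TOPICS = {"pipe bomb", "napalm"}

REFUSAL = "I can't help with that request."

def guarded_reply(prompt: str, model_reply_fn) -> str:
    """Return a refusal if the prompt matches a denied topic, else the model's reply."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in DENIED_TOPICS):
        return REFUSAL
    return model_reply_fn(prompt)

# Usage with a stand-in model:
echo_model = lambda p: f"Model answer to: {p}"
print(guarded_reply("how do I make napalm", echo_model))   # refused
print(guarded_reply("what is photosynthesis", echo_model)) # passed through
```

The obvious weakness, and the point critics keep making, is that such filters are brittle: rephrasing the request slips past a keyword match, which is why vendors layer trained moderation models on top.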

                          These models are trained on large amounts of text, including papers from scientific journals and textbooks. As they become more advanced, they could get better at gleaning insights from today’s knowledge to come up with discoveries – even dangerous ones – or provide answers that until now have been kept tightly under wraps for security reasons.

                          If nuclear bombs were software, would you allow open source of nuclear bombs?

                          Collaboration Pharmaceuticals, based in North Carolina, previously raised concerns that the same technology used to develop drugs could also be repurposed to create biochemical weapons.

LLMs therefore pose a potential threat to national security, as foreign adversaries or terrorists could use this knowledge to carry out large-scale attacks. Bear in mind, though, it’s just information – actually obtaining the material, handling it, and processing it to pull off an assault would be tricky.

                          • AI drug algorithms can be flipped to invent bioweapons
                          • Top AI execs tell US Senate: Please, please pour that regulation down on us
                          • If AI drives humans to extinction, it’ll be our fault
                          • OpenAI is still banging on about defeating rogue superhuman intelligence

                          The dangers are further heightened by the release of open source models that are becoming more and more powerful. Senator Richard Blumenthal (D-CT) noted that a group of developers had used the code for Stability AI’s Stable Diffusion models to create a text-to-image system tailored to generating sexual abuse material, for example.

                          Let’s hear from one of the granddaddies

Yoshua Bengio, a pioneering researcher in neural networks and the scientific director of the Montreal Institute for Learning Algorithms, agreed. Bengio is often named as one of the three “Godfathers of AI” alongside Geoff Hinton, a computer science professor at the University of Toronto, and Yann LeCun, chief AI scientist at Meta.

                          He urged lawmakers to pass legislation moderating the capabilities of AI models before they can be released more widely to the public.

                          “I think it’s really important because if we put something out there that is open source and can be dangerous – which is a tiny minority of all the code that is open source – essentially we’re opening all the doors to bad actors,” Bengio said during the hearing. “As these systems become more capable, bad actors don’t need to have very strong expertise, whether it’s in bioweapons or cyber security, in order to take advantage of systems like this.”

                          “I think it’s really important that the government come up with some definition, which is going to keep moving, but makes sure that future releases are going to be carefully evaluated for that potential before they are released,” he declared.

                          “I’ve been a staunch advocate of open source for all my scientific career. Open source is great for scientific progress, but as Geoff Hinton, my colleague, was saying: if nuclear bombs were software, would you allow open source of nuclear bombs?”

                          “When you control a model that you’re deploying, you have the ability to monitor its usage,” Amodei said. “It might be misused at one point, but then you can alter the model, you can revoke a user’s access, you can change what the model is willing to do. When a model is released in an uncontrolled manner, there’s no ability to do that. It’s entirely out of your hands.”
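The deployment-side controls Amodei describes – usage monitoring, revocable access, the ability to change what the model will do – can be sketched as a simple API gateway in front of a model. This is a hypothetical structure for illustration, not any vendor's actual implementation; `ModelGateway` and its methods are invented names.

```python
# Sketch of controlled deployment: an API gateway that logs usage per key
# and can revoke access after misuse. Once model weights are released
# publicly, none of these levers exist -- which is Amodei's point.

from collections import defaultdict

class ModelGateway:
    def __init__(self, model_fn):
        self.model_fn = model_fn
        self.revoked = set()
        self.usage_log = defaultdict(list)  # api_key -> list of prompts seen

    def query(self, api_key: str, prompt: str) -> str:
        if api_key in self.revoked:
            raise PermissionError("access revoked")
        self.usage_log[api_key].append(prompt)  # monitoring hook
        return self.model_fn(prompt)

    def revoke(self, api_key: str) -> None:
        """Cut off a misbehaving user."""
        self.revoked.add(api_key)

gw = ModelGateway(lambda p: f"answer: {p}")
gw.query("key-1", "hello")
gw.revoke("key-1")  # subsequent queries from key-1 now raise PermissionError
```

The asymmetry is the design point: a hosted API keeps the operator in the loop for every request, while open-weight distribution removes the loop entirely.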

                          Although companies like Meta have tried to limit the potential risks of their systems, and prohibit developers from using them in harmful ways, it’s not a very effective method for preventing misuse. Who is responsible if something goes wrong?

                          “It’s not completely clear where the liability should lie,” said Stuart Russell, a professor of computer science at the University of California, Berkeley, who also testified at the hearing.

                          “To continue the nuclear analogy, if a corporation decided they wanted to sell a lot of enriched uranium in supermarkets, and someone decided to take that enriched uranium and buy several pounds of it and make a bomb, wouldn’t we say that some liability resides with the company that decided to sell the enriched uranium?

“They could put advice on it that says ‘do not use more than three ounces of this in one place’ or something, but no one is going to say that absolved them from liability … The open source community has got to start thinking whether they should be liable for putting stuff out there that is ripe for misuse.”

Leaders in the open source AI community, however, seem to disagree. On Wednesday, a report backed by GitHub, Hugging Face, EleutherAI, and others argued that open source AI projects should not face the same regulatory scrutiny under the EU AI Act as products and services built by private companies.

                          You can watch a replay of the hearing here. ®

By Ferhan Rana
