• THE PENTAGON WANTS TO USE AI TO CREATE DEEPFAKE INTERNET USERS

    From D. Ray@d@ray to alt.fan.rush-limbaugh,talk.politics.misc,alt.censorship,comp.misc,comp.ai.philosophy on Fri Oct 18 20:01:28 2024
    From Newsgroup: comp.misc

    THE UNITED STATES’ secretive Special Operations Command is looking for companies to help create deepfake internet users so convincing that neither humans nor computers will be able to detect they are fake, according to a procurement document reviewed by The Intercept.

    The plan, mentioned in a new 76-page wish list by the Department of
    Defense’s Joint Special Operations Command, or JSOC, outlines advanced technologies desired for the country’s most elite, clandestine military
    efforts. “Special Operations Forces (SOF) are interested in technologies
    that can generate convincing online personas for use on social media
    platforms, social networking sites, and other online content,” the entry reads.

    The document specifies that JSOC wants the ability to create online user profiles that “appear to be a unique individual that is recognizable as
    human but does not exist in the real world,” with each featuring “multiple expressions” and “Government Identification quality photos.”

    In addition to still images of faked people, the document notes that “the solution should include facial & background imagery, facial & background
    video, and audio layers,” and JSOC hopes to be able to generate “selfie video” from these fabricated humans. These videos will feature more than
    fake people: Each deepfake selfie will come with a matching faked
    background, “to create a virtual environment undetectable by social media algorithms.”

    The Pentagon has already been caught using phony social media users to
    further its interests in recent years. In 2022, Meta and Twitter removed a propaganda network using faked accounts operated by U.S. Central Command, including some with profile pictures generated with methods similar to
    those outlined by JSOC. A 2024 Reuters investigation revealed a Special Operations Command campaign using fake social media users aimed at
    undermining foreign confidence in China’s Covid vaccine.

    Last year, Special Operations Command, or SOCOM, expressed interest in
    using video “deepfakes,” a general term for synthesized audiovisual data meant to be indistinguishable from a genuine recording, for “influence operations, digital deception, communication disruption, and disinformation campaigns.” Such imagery is generated using a variety of machine learning techniques, generally using software that has been “trained” to recognize and recreate human features by analyzing a massive database of faces and bodies. This year’s SOCOM wish list specifies an interest in software
    similar to StyleGAN, a tool released by Nvidia in 2019 that powered the globally popular website “This Person Does Not Exist.” Within a year of StyleGAN’s launch, Facebook said it had taken down a network of accounts
    that used the technology to create false profile pictures. Since then,
    academic and private sector researchers have been engaged in a race between
    new ways to create undetectable deepfakes, and new ways to detect them.
    Many government services now require so-called liveness detection to thwart deepfaked identity photos, asking human applicants to upload a selfie video
    to demonstrate they are a real person — an obstacle that SOCOM may be interested in circumventing.
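    The generation pipeline described above can be illustrated with a toy sketch, assuming nothing beyond NumPy. A randomly initialized stand-in "generator" maps a random latent vector to an image-shaped array, the same contract a trained StyleGAN generator fulfills; because the weights here are random rather than trained on millions of face photos, the output is noise, not a face.

```python
import numpy as np

# Toy sketch of the GAN-style pipeline the article describes: a generator
# network maps a random latent "seed" vector to an image. StyleGAN learns
# its weights from a massive database of faces; these weights are random.
rng = np.random.default_rng(0)

LATENT_DIM = 64   # size of the random latent vector
IMG_SIDE = 16     # toy resolution (StyleGAN generates up to 1024x1024)

# Randomly initialized two-layer generator (stand-in for trained weights).
W1 = rng.standard_normal((LATENT_DIM, 256)) * 0.1
W2 = rng.standard_normal((256, IMG_SIDE * IMG_SIDE)) * 0.1

def generate(z: np.ndarray) -> np.ndarray:
    """Map a latent vector z to a grayscale image with pixels in [0, 1]."""
    h = np.tanh(z @ W1)                 # hidden layer with nonlinearity
    img = 1 / (1 + np.exp(-(h @ W2)))   # sigmoid squashes to pixel range
    return img.reshape(IMG_SIDE, IMG_SIDE)

# Each fresh latent vector yields a different "person who does not exist".
z = rng.standard_normal(LATENT_DIM)
face = generate(z)
```

    The point of the sketch is the interface, not the output: every distinct latent vector deterministically produces a distinct image, which is why a single trained generator can mint an unlimited supply of unique profile photos.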

    The listing notes that special operations troops “will use this capability
    to gather information from public online forums,” with no further
    explanation of how these artificial internet users will be used.

    This more detailed procurement listing shows that the United States pursues
    the exact same technologies and techniques it condemns in the hands of geopolitical foes. National security officials have long described the state-backed use of deepfakes as an urgent threat — that is, when they are deployed by another country.

    Last September, a joint statement by the NSA, FBI, and CISA warned that
    “synthetic media, such as deepfakes, present a growing challenge for all users of modern technology and communications.” It described the global proliferation of deepfake technology as a “top risk” for 2023. In a background briefing to reporters this year, U.S. intelligence officials cautioned that the ability of foreign adversaries to disseminate “AI-generated content” without being detected — exactly the capability the
    Pentagon now seeks — represents a “malign influence accelerant” from the likes of Russia, China, and Iran. Earlier this year, the Pentagon’s Defense Innovation Unit sought private sector help in combating deepfakes with an
    air of alarm: “This technology is increasingly common and credible, posing
    a significant threat to the Department of Defense, especially as U.S. adversaries use deepfakes for deception, fraud, disinformation, and other malicious activities.” An April paper by the U.S. Army’s Strategic Studies Institute was similarly concerned: “Experts expect the malicious use of AI, including the creation of deepfake videos to sow disinformation to polarize societies and deepen grievances, to grow over the next decade.”

    The offensive use of this technology by the U.S. would, naturally, spur its proliferation and normalize it as a tool for all governments. “What’s notable about this technology is that it is purely of a deceptive nature,” said Heidy Khlaaf, chief AI scientist at the AI Now Institute. “There are
    no legitimate use cases besides deception, and it is concerning to see the
    U.S. military lean into a use of a technology they have themselves warned against. This will only embolden other militaries or adversaries to do the same, leading to a society where it is increasingly difficult to ascertain truth from fiction and muddling the geopolitical sphere.”

    Both Russia and China have been caught using deepfaked video and user
    avatars in their online propaganda efforts, prompting the State Department
    to announce an international “Framework to Counter Foreign State
    Information Manipulation” in January. “Foreign information manipulation and interference is a national security threat to the United States as well as
    to its allies and partners,” a State Department press release said. “Authoritarian governments use information manipulation to shred the fabric of free and democratic societies.”

    SOCOM’s interest in deepfakes is part of a fundamental tension within the U.S. government, said Daniel Byman, a professor of security studies at Georgetown University and a member of the State Department’s International Security Advisory Board. “Much of the U.S. government has a strong interest in the public believing that the government consistently puts out truthful
    (to the best of knowledge) information and is not deliberately deceiving people,” he explained, while other branches are tasked with deception. “So there is a legitimate concern that the U.S. will be seen as hypocritical,” Byman added. “I’m also concerned about the impact on domestic trust in government — will segments of the U.S. people, in general, become more suspicious of information from the government?”

    <https://theintercept.com/2024/10/17/pentagon-ai-deepfake-internet-users/>
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Pierre Delecto Romney@robberbaron@invalid.ut to alt.fan.rush-limbaugh,talk.politics.misc,alt.censorship,comp.misc,comp.ai.philosophy on Fri Oct 18 15:18:35 2024
    From Newsgroup: comp.misc

    D. Ray wrote:
    THE UNITED STATES’ secretive Special Operations Command is looking for companies to help create deepfake internet users so convincing that neither humans nor computers will be able to detect they are fake, according to a procurement document reviewed by The Intercept.

    [remainder of quoted article snipped]


    TY for sharing the full article on this.

    Now we know for sure why Jonathan D Ball/Rudey remains active here.
    --
    ⛨ 🥐🥖🗼🤪
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From 186282ud0s3@186283@ud0s4.net to alt.fan.rush-limbaugh,talk.politics.misc,alt.censorship,comp.misc,comp.ai.philosophy on Fri Oct 18 19:41:25 2024
    From Newsgroup: comp.misc

    On 10/18/24 5:18 PM, Pierre Delecto Romney wrote:
    D. Ray wrote:
    THE UNITED STATES’ secretive Special Operations Command is looking for
    companies to help create deepfake internet users so convincing that neither
    humans nor computers will be able to detect they are fake, according to a
    procurement document reviewed by The Intercept.

    The plan, mentioned in a new 76-page wish list by the Department of
    Defense’s Joint Special Operations Command, or JSOC, outlines advanced
    technologies desired for the country’s most elite, clandestine military
    efforts. “Special Operations Forces (SOF) are interested in technologies
    that can generate convincing online personas for use on social media
    platforms, social networking sites, and other online content,” the entry
    reads.


    <https://theintercept.com/2024/10/17/pentagon-ai-deepfake-internet-users/>

    Using 'fake people' is nothing new. You can employ
    them for various propaganda, financial, military
    and intel purposes. This just sounds like the latest
    upgrade.

    Chat and friends CAN fake a human quite well now.
    Pairing that with a video avatar should not be an
    issue these days either. The tech is there, the
    desire is there.

    Fortunately, a convincing android BODY to go with
    the persona ... that's a long way off. Fritz Lang
    envisioned it back in the 1920s, but the actual
    tech required to pull it off ... 50+ years yet fer
    sure.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Yuri@X@Y.com to alt.fan.rush-limbaugh,talk.politics.misc,alt.censorship,comp.misc,comp.ai.philosophy on Sun Oct 20 03:14:58 2024
    From Newsgroup: comp.misc

    Fortunately, a convincing android BODY to go with
    the persona ... that's a long way off. Fritz Lang
    envisioned it back in the 1920s, but the actual
    tech required to pull it off ... 50+ years yet fer
    sure.


    I am a DeepFake Inter net user and I suck Putin's cock.

    --- Synchronet 3.20a-Linux NewsLink 1.114