eurleif 14 hours ago

Somewhat related, on YouTube, there's a channel filled with fake police bodycam videos. The most-viewed of these are racially inflammatory, e.g.: https://www.youtube.com/watch?v=5AkXOkXNd8w

The description of the channel on YouTube claims: "In our channel, we bring you real, unfiltered bodycam footage, offering insight into real-world situations." But then if you go to their site, https://bodycamdeclassified.com/, which is focused on threatening people who steal their IP, they say: "While actual government-produced bodycam footage may have different copyright considerations and may be subject to broader fair use provisions in some contexts, our content is NOT actual bodycam footage. Our videos represent original creative works that we script, film, edit, and produce ourselves." Pretty gross.

  • BLKNSLVR 13 hours ago

    I've seen fewer than a handful of videos, usually Shorts, on YT purporting to be bodycam footage, but they all seem too well-framed and fairly obviously scripted/staged/fake to me, because I'm actually paying attention to the 'environment', not just the 'action'.

    But I doubt most doomscrollers would notice that in their half-comatose state.

    It IS real, unfiltered bodycam footage. From an actor, following a script, in front of one or many other actors, also following scripts. I think that's how they get away with it, they don't specify it's bodycam footage from actual law enforcement. Yes, gross.

  • RockRobotRock 14 hours ago

    These videos are insane, and the lack of "this is fake" comments is disheartening.

    • tgsovlerkhgsel 13 hours ago

      I assume those just get deleted?

      • Lockal 5 hours ago

        I scrolled down and don't see a single comment with the word "fake" (but a lot of comments like "not real") - the channel owner probably automatically shadowbans any user who writes "fake".

  • hombre_fatal 12 hours ago

    Dang, police bodycam videos are my guilty pleasure when I'm working out and just want dumb stimulation to pass the grind.

    Definitely have watched enough videos from this channel to recognize its name. :(

  • moritzwarhier 4 hours ago

    The website you link to (disgusting people) has apparently changed.

    > For Content Thieves (Warning)

    > If you are currently using Body Cam Declassified content without [...]

    > You are in violation of copyright law and will be subject to legal action

    [...]

    > We aggressively pursue legal remedies against content theft, including statutory damages of up to $150,000 per infringement under U.S. [...]

    > An additional administrative fee of $2,500 per infringing video will be assessed

    > We demand all revenue generated from the unauthorized use of our content

    > We maintain relationships with copyright attorneys who specialize in digital media infringement

    > We recommend removing the infringing content immediately and contacting us regarding settlement options

    A paragraph about the videos being fake is still there.

    > While actual government-produced bodycam footage may have different copyright considerations and may be subject to broader fair use provisions in some contexts, our content is NOT actual bodycam footage.

    > Our videos represent original creative works that we script, film, edit, and produce ourselves.

    > As privately created content (not government-produced public records), our videos are fully protected by copyright law and are NOT subject to the same fair use allowances that might apply to actual police bodycam

    > The distinction means our content receives full copyright protection as creative works, similar to any other professionally produced video content.

    This reminds me of a non-AI content-mill business strategy that has been metastasizing for years: people who film homeless people and drug addicts and build whole Insta and YouTube channels monetizing it, either framed as "REAL rough footage from city XY" or even openly mocking helpless people. The latter seems to be more common on TikTok, and I'm not watching "original" videos of such shite.

    There is a special place in hell for people who do such things, and in my opinion there should be laws with very harsh punishments for the people who "create" this trash and make money from it. When it comes to filming real people without their consent, we really need laws that make it possible to punish the people who do this, because the victims are unlikely to be able to defend themselves.

    And all told, the whole strategy is to worsen societal division and tensions, and to feed bad human instincts (voyeurism, superiority complexes), in order to funnel money into the pockets of parasites without ethics.

drdaeman 15 hours ago

> nothing drives engagement on social media like anger and drama

There. It isn’t even a “real” racism, it’s more of a flamebait, where the more outrageous and deranged a take is, the more likely it is to captivate attention and possibly even provoke a reaction. Most likely they primarily wanted to make a buck from viewer engagement, and didn’t care about the ethics of it. Maybe they also had racist agendas, maybe not - but that’s just not the core of it.

And in the same spirit, the issue is not really racism or AI videos, but perversely incentivized attention economics. It just happened to manifest this way, but it could’ve been anything else - this is merely what happened to hit some journalists’ mental filters (suggesting that “racism” headlines attract attention these days, and so does “AI”).

And the only low-harm way - that I can think of - how to put this genie back in the bottle is to make sure everyone is well aware about how their attention is the new currency in the modern age, and spend it wisely, being aware about the addictive and self-reinforcing nature of some systems.

  • CharlesW 15 hours ago

    > It isn’t even a “real” racism…

    Generating and distributing racist materials is racist regardless of the intent, even if the person "doesn't mean it".

    Simple thought experiment: If the content was CSAM, would you still excuse the perpetrators as victims of perversely incentivized attention economics?

    • drdaeman 14 hours ago

      I agree, but I believe the intent matters if we’re trying to identify why this happens.

      Racism is just less legally dangerous. People would be posting snuff or CSAM videos if that would “sell”. Make social networks tough on racism and it’ll be sexism the next day. Or extremist politics. Or animal abuse. Or, really, anything, as long as people react strongly to it.

      But, yeah, to avoid any misunderstanding - I didn’t mean to say racism isn’t an issue. It is racist, it’s bad, and I don’t argue otherwise. All I want to stress is that racism is not the real issue here, merely a particular manifestation of it.

      • jrflowers 13 hours ago

        >it’s not the real issue here

        I like this reasoning. “Trolling” is when people post things to irritate or offend people, so if you see something that’s both racist and offensive then it’s not really racist. If you see somebody posting intentionally offensive racist stuff, and you have no other information about them, you should assume that the offensiveness of their post is an indicator of how not racist they are.

        Really if you think about it, it’s like a graph where as offensiveness goes up the racism goes down becau

        • drdaeman 11 hours ago

          That’s not what I meant, though. When I wrote “not really racist” I meant “the primary cause for posting this is not racism[, but engagement solicitation]”, rather than “not racist”. And it’s not an implication, but only an observation paired with my (and article authors’) guess about the actual intent. I’m sorry for the confusion, I guess I worded that poorly.

          But, yeah, as weird as it may sound, you don’t have to be racist (as in believing in racist ideas) to be a racist troll (propagating racist ideas). Publishing something and agreeing with it are different things, and they don’t always overlap (even if they frequently do). Let he who has never said or written some BS they didn’t believe an iota of, just for effect, cast the first stone.

          And I’m not sure how sarcastic you were, but nothing I’ve said could possibly mean that something being offensive somehow makes it less racist.

          • jrflowers 11 hours ago

            > you don’t have to be racist (as in believing in racist ideas) to be a racist troll (propagate racist ideas)

            Exactly. Racism has nothing to do with what people say or do, it’s a sort of vibe, so really there is no way of telling if anything or anyone is Real Racist versus fake racist. It is important to point this out b

            • drdaeman 9 hours ago

              I’m a bit confused - is it possible you think racism is binary? I recognize you jest, but I’m not sure I get the idea, and I sincerely hope you aren’t doing it pointlessly.

              If you refuse to distinguish between someone who genuinely believes in the concept of race, or postulates an inherent supremacy of some particular set of biological and/or sociocultural traits, and someone who merely talks edgy shit they heard somewhere and hasn’t given it much thought - then I’m not entirely sure how I can persuade you to see the distinction I do.

              But I believe this difference exists and is important, because different causes require different approaches. Online trolls, engagement farmers, and bonehead racists are (somewhat overlapping but generally) different kinds of people. And any of them can post racist content.

    • fluidcruft 15 hours ago

      I don't follow your CSAM bit but I have no outrage about Blazing Saddles existing, for example.

      • jazzyjackson 14 hours ago

        It would indeed be an impressive feat to produce a film satirizing child porn

        • defrost 14 hours ago

          Off the cuff the closest example to mind is the Paedogeddon!! special episode of the Brass Eye series created by Chris Morris.

          Admittedly that didn't satirize CSAM material; rather, it cut hard into the reflexive reaction people have at the very thought of CSAM and paedophiles.

          https://en.wikipedia.org/wiki/Paedogeddon

          Moreover, that took a human to thread that needle, it'll be a while before AI generation can pass through that strange valley.

          • RockRobotRock 13 hours ago

            This is the one thing we didn't want to happen

            • accoil 13 hours ago

              AI creating satire about media spreading hysteria? I don't think it's at that point.

    • whamlastxmas 14 hours ago

      I think maybe the nuance they’re trying to capture is that yes the content is absolutely freaking racist but the reason it’s being spread isn’t racists laughing at it and liking it, it’s people being angry about it

    • Dig1t 11 hours ago

      The creation of CSAM is a crime because an underage person must be harmed in its creation by definition. Making an AI video of an offensive stereotype does not harm anyone in its creation. It is textbook free speech.

      Clutch your pearls as much as you want about the videos, but forcibly censoring them is going to cause you to continue to lose elections.

      • plaguuuuuu 8 hours ago

        Nobody said anything about governments banning it. We're pointing it out as something harmful. I'll also happily exercise my free speech (I'm not from the US, so it's free, as in: you can't stop me).

  • jazzyjackson 14 hours ago

    > make sure everyone is well aware about how their attention is the new currency in the modern age, and spend it wisely, being aware about the addictive and self-reinforcing nature of some systems.

    i.e. delete your Facebook, your TikTok, your YouTube, and return to calling people on your flip phone and writing letters (or at least emails). I say this without irony (the Sonim XP3+ is a decent device). All the social networking on smartphones has not been a net positive in most people's lives, and I don't really know why we sleepwalked into it. I'm open to ideas on how to make living "IRL" more palatable than cyberspace. It's like telling people to stop smoking cigarettes. I guess we just have to reach a critical mass of people who can do without it and lobby public spaces to ban it. Concert venues and schools are already playing with this by forcing everyone to put their phones in those Faraday baggies, so maybe it's not outlandish.

    • prmoustache 8 hours ago

      I didn't need to buy a flip phone to delete all my social media accounts.

    • atentaten 13 hours ago

      Have you thought about what we're currently sleep walking into?

  • agnishom 6 hours ago

    > It isn’t even a “real” racism, it’s more of a flamebait

    I think the harm done by circulating racist media is "real" racism regardless of whether someone is doing it because they have a hateful ideology, are profiting from it, or are just having a good time.

  • GaggiX 14 hours ago

    I don't even think it's flamebait, people just like being edgy on the internet so they enjoy these memes, reading the comments under these posts would probably confirm what I'm saying.

  • corimaith 13 hours ago

    >And the only low-harm way - that I can think of - how to put this genie back in the bottle is to make sure everyone is well aware about how their attention is the new currency in the modern age, and spend it wisely, being aware about the addictive and self-reinforcing nature of some systems.

    Gonna be hard to admit, but mandatory identity verification like in Korea, i.e. attaching real consequences to what happens on the internet, is the more realistic way this is going to be solved. We've had "critical thinking" programs for decades; they're completely pointless on an aggregate scale, primarily because the majority aren't interested in the truth. Save for their specific areas of expertise, it's quite common for even academics to easily fall into misinformation bubbles.

ilaksh 15 hours ago

This isn't really a problem with video generation or AI in general. Sure, there is an aspect of ragebait to it, but the reality is that racism is extremely widespread. If it were not, this kind of content would not be so popular. The people at the very top of the US government right now are white supremacists. I'm sorry, that is not an exaggeration. There is another term that encompasses more of their worldviews which is not politically correct but is accurate.

Stop trying to blame technology for longstanding social problems. That's a cop out.

  • jazzyjackson 14 hours ago

    Granted that racism is not new, the infinite production of automated content drowning out any genuine human opinion is a harbinger of the internet to come.

    • trhway 13 hours ago

      it also allows automated production of positive content. The main issue here: given a sea of good and a sea of bad content, where would the typical person go for a swim? Why do calls for empathy fall flat while inciting rage and hatred is so successful?

      • favflam 36 minutes ago

        This situation is like southern China when the British decided to even up their trade deficit with Opium.

prvc 14 hours ago

None of the examples shown in the video are passable hoaxes. They are all obvious burlesque-style parodies, albeit made in bad taste. They all also have clear and prominent hallmarks of AI generation. Anyone fooled by these has got bigger, prior problems than any potential belief instilled by these videos.

  • ghushn3 12 hours ago

    The problem is not that they are fooling anyone. No one thinks a woman is marrying a chimpanzee. The problem is that the videos are obviously and openly racist and being spread quite brazenly.

    If I have to encounter a constant barrage of shitty racist (or sexist, or homophobic, or whatever) material just to exist online, I'm going to pretty quickly feel like garbage. (If not feel unsafe.) Especially if I'm someone who has other stressors in their life. Someone who is doing well, their life otherwise together, might encounter these and go, "Fucking idiots made a racist video, block."

    But empathize with someone who is struggling? Who just worked 18 hours to make ends meet to come home and feed their kids and pay rent for a shitty apartment that doesn't fit everyone, and their kid comes up to them asking what this video means, and it just... gets past all their barriers. It wedges open so many doubts.

    This isn't harmless.

    • Dig1t 11 hours ago

      It’s not illegal, it’s free speech. You feeling “unsafe” because you saw an offensive stereotype is nobody’s problem but your own.

      If you want to change this situation you should try campaigning for the abolishment of the first amendment or try moving to Europe.

      • pcbro141 10 hours ago

        Your country is currently deporting people for speaking ill of a certain special country in the Middle East (and soon stripping citizenship).

      • tomhow 9 hours ago

        Once again you're using HN for ideological battle, which is against the guidelines. We've asked you to refrain from doing this before. Please remember to avoid this.

        https://news.ycombinator.com/newsguidelines.html

        • ChrisNorstrom 6 hours ago

          No one follows this. HN is very left-leaning; there are 3 front-page articles about immigrants in the last 24 hours. This is why the independent thinkers left HN around 2015-2016.

          • RandomBacon 5 hours ago

            I missed that wave, do you know where they went?

            I'm getting tired of people letting politics seep into the discussions on this website.

      • ChrisNorstrom 6 hours ago

        It's HN. They get offended by mean memes, but strangely not by rappers pimping (sex trafficking women), calling them bitches and hoes and n-words, etc... I used to argue, but over the years I just gave up. Now I just warn people like you: you're not going to "wake up" the leftist HNers; they outnumber normal users. They've been pushing liberal Guardian, BBC, CNN crap on HN for over a decade, on a tech forum. You prove them wrong in the comments, they don't care; the mindless zombie just stares at the screen and downvotes. Check my account, been here since 2011. It's been a sad decline.

aaviator42 15 hours ago

I really miss the time before generative images and video were a thing. We opened such a can of worms. Really seems like a "the scientists were so occupied with if they could they didn't stop to think if they should" situation. What is the actual utility of these tools again beyond putting artists out of work?

  • Waterluvian 15 hours ago

    I’m old enough to remember when video killed the radio star.

  • SchemaLoad 14 hours ago

    The scientists were absolutely occupied with if we should. But the CEOs steamrolled them and had it built anyway.

  • currymj 15 hours ago

    from an information theory perspective, predicting and efficiently representing data is so closely tied to generation that it is unavoidable.

    if you want to use ML to do anything at all with image and video, you will usually wind up creating the capability to generate image and video one way or another.

    however building a polished consumer product is a choice, and probably a mistake. every technology has good and bad uses, but there seem to be few and trivial good uses for image/video generation, with many severe bad uses.
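    (A toy illustration of that duality, as a hypothetical character-level bigram model; the corpus and all names here are made up. Train it purely as a next-character predictor, and generation falls out for free by sampling from it repeatedly.)

```python
import random
from collections import defaultdict

# Train bigram counts on a tiny corpus: this is the "predictor".
corpus = "the cat sat on the mat and the cat ate"

counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def predict(ch):
    """Predictive distribution over the next character given the current one."""
    nxt = counts[ch]
    total = sum(nxt.values())
    return {c: n / total for c, n in nxt.items()}

def generate(start="t", length=20, seed=0):
    """Generation is nothing more than iterated sampling from the predictor."""
    rng = random.Random(seed)
    out = start
    for _ in range(length):
        dist = predict(out[-1])
        out += rng.choices(list(dist), weights=list(dist.values()))[0]
    return out

print(generate())  # babbles corpus-like text
```

    (The same relationship holds at scale: an LLM's training objective is pure next-token prediction, and "generation" is just running that predictor in a loop.)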

  • deadbabe 15 hours ago

    Every generation has their “I miss the time before ‘thing I don’t like’ became a thing”.

    In our case, it’s just generative AI.

    • mopenstein 15 hours ago

      I miss the time before everybody was on the Internet, when it was mostly like-minded techie types. This modern internet kinda sucks with all its AI-generated racism.

    • bashinator 14 hours ago

      Has every generation also seen the rise of massively-multiuser automated personalized propaganda engines?

      • deadbabe 13 hours ago

        TV was one hell of a propaganda engine.

        • HaZeust 10 hours ago

          was?

          • thefz 6 hours ago

            It cannot compete with millions of people glued to their pocket devices for hours every day, seeing only what reinforces - rather than challenges - their world view.

  • 123yawaworht456 15 hours ago

    [flagged]

    • const_cast 9 hours ago

      This comment has the wombo combo of annoying shit nobody likes with "my brother in Christ" and "bless your heart".

      In all seriousness, nobody is concerned about capability. Everything was always capable if you're rich enough and you have enough time. But scale matters. That's why the printing press literally created new religions.

  • mslansn 15 hours ago

    I’ve seen lots of them which I found very very amusing. That seems good enough for me. Think about it: there are channels on YouTube and on the telly that are there just to amuse you. So a system that creates amusing videos is a net positive for the world.

runjake 15 hours ago

I was skeptical about this, but a quick search for “the usual suspects” pulls up many, many examples for me.

linotype 13 hours ago

People have to stop watching this trash. And I mean all of TikTok.

  • ghushn3 12 hours ago

    Some of TikTok is great. I mean, most of it is just dopamine hits, and it's potentially quite bad from a health perspective. But also, plenty of TikTok is news, or political theory, or thoughtful commentary, or explanations of how things work.

    It's a bowl of fun size candy bars, with a few razors, a few drugs, a few rotten apples, etc. mixed in. You can, by and large, get the algorithm to serve you nothing but the candy, but you are still eating only candy bars at that point.

    Some people can say no to infinite candy. Other people, like myself, cannot and it's a real problem.

thefz 6 hours ago

Luckily sane US senators just rejected a 10 year ban on state level AI regulation.

AstroJetson 14 hours ago

Just watched Mountainhead about this very topic. The AI videos were good enough to start wars, topple banking systems and countries.

It is very scary because the "tech-bros" in the movie pretty much mimic the actions of the real life ones.

Apocryphon 15 hours ago

The Tayification of everything

  • turbofreak 11 hours ago

    Nice callback. That was a golden era.

jrflowers 13 hours ago

The interesting thing about this is that it is the use case for these video generators. If the point of these tools is to churn out stuff to drive engagement, and the best way to do that is through content that is inflammatory, offensive, or misinformation, then that’s the ideal use case for them. That’s what the tool is for.

ynab10 13 hours ago

[flagged]

ivape 15 hours ago

I think it's fine to fingerprint AI generated images/videos. It's a massive privacy violation but I just can't see any other way. Too many people have always been and will always be unethical.

  • WillPostForFood 13 hours ago

    To what end? You want to fingerprint all AI images and video to catch people who make racist videos in order to do what? It isn't illegal. If TikTok doesn't like the content, they can delete the video and the account. If Google or OpenAI doesn't want the content being created, they can figure out a way to block it, and delete the users' accounts in the meantime.

    If I told you many 14 year olds were making very similar offensive jokes at lunch in high school, would you support adding microphones throughout schools to track and catch them?

    • ivape 10 hours ago

      > If I told you many 14 year olds were making very similar offensive jokes at lunch in high school

      A picture is worth a thousand words. Me saying "your mom is so fat that _______" in the lunchroom is different from me saying it in a cinematic video format that can go locally viral (your whole school). For the first time in my life I'm going to say: this is not a history-echoing situation. We have entirely gone to the next level; forget what you think you know.

  • SchemaLoad 14 hours ago

    I've been wondering if ChatGPT makes such excessive use of EM dash just so people can easily identify AI generated content.

    Google wouldn't even need a fingerprint, they could just look up from their logs who generated the video.

    • oceanplexian 14 hours ago

      Google already admitted they are fingerprinting generative video and have a safety obsession so I guarantee they do it to their LLMs. Another reason is to pollute the output that folks like Deepseek are using to train derivative models.

    • IAmGraydon 13 hours ago

      The em-dash is one marker, but I’ve read that most LLMs create small but statistically detectable biases in their output to help them avoid reingesting their own content.
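      (For the curious, here is a toy sketch of how such a statistically detectable bias could work, loosely modeled on published "green list" watermarking ideas. The vocabulary, bias level, and every other detail here are invented for illustration and are not any vendor's actual scheme.)

```python
import hashlib
import random

VOCAB = [f"w{i}" for i in range(1000)]  # made-up token vocabulary

def green_list(prev_token):
    # Pseudorandomly mark half the vocabulary "green", seeded by the
    # previous token, so a detector can recompute the same list later.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    return set(random.Random(seed).sample(VOCAB, len(VOCAB) // 2))

def generate(n=200, bias=0.9, seed=1):
    # Stand-in for an LLM: with probability `bias`, prefer a green token.
    rng = random.Random(seed)
    out = ["w0"]
    for _ in range(n):
        pool = sorted(green_list(out[-1])) if rng.random() < bias else VOCAB
        out.append(rng.choice(pool))
    return out

def green_fraction(tokens):
    # Detector: fraction of tokens falling in their predecessor's green
    # list; roughly 0.5 for unbiased text, much higher if watermarked.
    hits = sum(t in green_list(p) for p, t in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

print(green_fraction(generate(bias=0.9)))  # well above 0.5
print(green_fraction(generate(bias=0.0)))  # close to 0.5
```

      (The obvious catch: anyone who paraphrases the text, or trains a derivative model on it, dilutes the statistical signal.)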

  • partiallypro 14 hours ago

    Eventually, as models become cheaper, the big companies that would do this won't have control over newly generated content, so it's fairly pointless.

no_time 9 hours ago

The spirit airlines video with the smoke alarm chirp is perfect though.

nvch 14 hours ago

The question is, who is acting in a racist manner here: the LLM that does what it can, or the humans sharing those videos?

  • unsnap_biceps 13 hours ago

    Until we get an LLM that actually "thinks", it's just a tool like Photoshop. Photoshop isn't racist if someone uses it to create racist material, so an LLM wouldn't be racist either.

    • ghushn3 12 hours ago

      I saw (on HN, actually) an academic definition for prejudice, discrimination, and racism that stuck with me. I might be butchering this a bit, but prejudice is basically thinking another group is less than purely because of their race. Discrimination is acting on that belief. Racism is discrimination based on race, particularly when the person discriminated against is a minority/less powerful person.

      LLMs don't think, and also have no race. So I have a hard time saying they can be racist, per se. But they can absolutely produce racist and discriminatory material, especially if their training corpus contains racist and discriminatory material (which it absolutely does.)

      I do think it's important to distinguish between photoshop, which is largely built from feature implementation ("The paint bucket behaves like this", etc.), and LLMs which are predictive engines that try to predict the right set of words to say based on their understanding of human media. The input is not some thoughtful set of PMs and engineers, it's "read all this, figure out the patterns". If "all this" contains racist material, the LLM will sometimes repeat it.

    • stuaxo 7 hours ago

      An LLM is a reflection of the biases in the data it's trained on, so it's not as simple as that.

    • redundantly 13 hours ago

      LLMs can and do have biases. One wouldn't be far off calling an LLM racist.