You Can't Opt-Out of A.I. Online
91 points by fortran77 a day ago | 88 comments
  • ishtanbul a day ago |
    The outcome is that spaces which are prone to AI slop and spam will wither and lose real users, eventually collapsing on themselves. For the social media platforms, good riddance
    • shombaboor a day ago |
      Discord and heavily moderated forums are the future.
      • altruios a day ago |
        IRC still is around. Pretty sure most of the user base is human still.
        • rogerthis a day ago |
          Lots of bots. Also body-only-soul-less users (as I call those that never look at messages).
      • doesnt_know a day ago |
        Yeah, small, invite-only "walled garden" style communities are definitely the future of the web.

        There has always been those of course, but I think they will just end up becoming the default, rather then the fringes. And the thing with these smaller communities is that they are far less tolerant to their hosting platforms pulling dumb shit (like stating they will train AI's on the comunities content).

    • dartos a day ago |
      We can only hope
    • add-sub-mul-div a day ago |
      If that was true Twitter would be collapsing. The vast majority have been too passive to leave. I no longer think there's a threshold of worsening that will disrupt Twitter, Reddit, etc.
    • jsheard a day ago |
      What I'd give to see the user retention stats at DeviantArt ever since they allowed it to be filled absolutely wall-to-wall with the most boring AI art you've ever seen. Not just allowed but encouraged by jumping on the bandwagon and integrating their own generator. Anecdotally it feels like there's not just an insane glut of AI slop drowning out everything else, but also less "everything else" over time due to user attrition.

      It really can't be overstated just how much it's come to dominate the entire site, skimming the frontpage now I immediately spotted a generic Midjourney-core image and the creator has posted 4100 pieces to date after joining... two months ago. 68 uploads every day on average.

      • vunderba a day ago |
        100%. It's intensely frustrating that they didn't at least firewall that vast amount of AI noise using a subdomain or something (e.g. ai.deviantart.com).
        • jsheard a day ago |
          They do have an AI tag that creators are supposed to set where applicable, and there's an option to hide everything with that tag, but the enforcement is weak so there's a lot of obvious AI spam which isn't tagged. Many of the AI spammers are using DAs monetization features or promoting a Patreon so they're incentivized to not set the AI tag to maximise their reach.
        • yieldcrv a day ago |
          Deviantart was already noise of unappreciated forms of expressions

          People just use it as a hosting platform

          Sounds more like a UX issue that you can see galleries

      • Mistletoe a day ago |
    • doctorpangloss a day ago |
      Hard to make any forecast about "spaces" "prone to AI slop," whatever that means, besides social networks that you personally do not like, in aggregate or individually.

      TikTok and Meta apps are definitely growing faster, in relative or absolute terms, than the New Yorker and the New York Times audiences are, paid or unpaid. If you compare aggregate social media to aggregate news and magazines, the latter is shrinking, if you exclude their presence on social media itself.

      If anything we already live in a post-scarcity world with regards to engaging content, starting a few years before the advent of generative AI. Why would AI generated content reverse that trend? Isn't TikTok's feed algorithm agnostic to whether a video is AI slop or user created? Isn't Instagram's? Are you really going to play No True Scotsman with "spaces" "prone to AI slop?" Shouldn't Kyle Chayka, who supposedly wrote a book on this, know that? Aren't there already too many good books, movies, TV shows, video games, operas, plays, etc. to consume?

      The toughest thing about this article is it is longing for a world that hasn't existed for a long time. If anything, the New Yorker and the New York Times, by doing a bad job at being media companies, have reduced the amount of new narrative creative projects that can thrive, not increased it. They never look in the mirror. The fickle and sometimes vindictive personalities that work there are not allied with narrative creators.

      The idea that discovering one or two diamonds in the rough offsets the incumbent cultural trends the New Yorker reinforces has long been dead - there is just way too much new stuff for any traditional media company to accurately review, report on and amplify. Everyone thriving on YouTube, Instagram, Steam, TikTok, hawking their shit, figured that out.

      It's even crazier to me that Kyle Chayka, who wrote a book on this, misses the mark here - I mean he should know about non-negative matrix factorization, which basically was the beginning of the end of traditional media, he should be able to make the leap that invention that enabled accurate collaborative filtering at scale killed The New Yorker, not AI, or anything in between 2000 and now. He should know there's absolutely no reason that AI generated content would be treated any differently than any other bad content, by NNMF or whatever feed algorithm.

      There's a possibility that the reason the NYTimes and Conde Nast have to reinvent themselves is because they do a bad job. To them, "A.I." is just another effigy, when the reality is that fewer and fewer people care what important New Yorkers think. Listen guys, it's not looking good for the writers, better to get your head out of the sand.

      • kibwen a day ago |
        > TikTok and Meta apps are definitely growing faster, in relative or absolute terms, than the New Yorker and the New York Times audiences are, paid or unpaid.

        Imagine the doctor walking into the room and saying "Congratulations! The tumor is growing faster than your legacy cells."

        • doctorpangloss a day ago |
          Yeah, I don't think this is a good thing. It's just to say that Kyle Chayka and this HN commenter don't offer any remedies. They go and complain and mock people's superstitious Instagram shitposts.

          If you care about narrative creative media, the best thing you can do is pay for it. Whatever that means to you. That is my remedy. The New Yorker isn't going to go out and promote Substack, and the deeper you think about why, the more you realize it is the New Yorker who are the assholes.

          • ishtanbul a day ago |
            I don’t have a remedy to offer for the problem at large, which is why I didnt suggest one, but on a personal level I would read the New Yorker, which is high quality content, imo
      • Barrin92 a day ago |
        > If you compare aggregate social media to aggregate news and magazines, the latter is shrinking

        This isn't even factually correct, the New York Times has done well, and grown over the last decade. In particular their move in the early 2010s towards paywalled premium subscription content, away from advertising, which was lambasted at the time, was in hindsight a very smart move.

        Yes, in the aggregate premium offerings don't come close to the size of the market of slop, it's always been a numbers game, but if you're talking about cultural trends, the people who run those slop factories read the Times and the Journal, they don't watch Youtube shorts themselves and probably keep their kids a mile away from it.

        Cultural capital and literal capital aren't the same thing. Danielle Steel has made 800 million selling 200 smut novels, but that hasn't given her a lot of cultural status or influence. The people who make decisions and set trends don't read her books.

    • jabroni_salad a day ago |
      When I pop open facebook and see some page has somehow gotten onto my feed with an AI generated children's science fair project I always notice that it has 60k+ likes. To me it says that the stuff is indistinguishable from real for enough people that discerning users are just helplessly along for the ride on every platform.

      The article purports that the reason slop is here to stay is because people like it. The next iteration, IMO, will be AI generated children's science fair projects that also have a pepsi logo in them, still with 60k likes.

      • kibwen a day ago |
        And how many of those likes are AI-generated?
      • munk-a a day ago |
        We need to move above that before it kills the web though - if the end state is that you want to look at any old random adorable puppy pictures so you just ask your machine to generate them for you then we might have a way out of this internet death cycle. I'd hope that if you can trivially produce your own better tailored slop on your local machine then the AI slop online will lose any value - I'm concerned that we're going to lose a lot of good content created by artists before we reach that level though.

        I suspect the actual outcome will be a rise of manual curation and providence where a feed of adorable puppies or discussions on technology will rise or fall by how diligent the moderator is at keeping slop out of the feed.

      • Animats a day ago |
        > When I pop open facebook and see some page has somehow gotten onto my feed...

        There are still people watching Facebook with unfiltered feeds? There are filters for that.

        From the article: "The main people benefitting from the launch of A.I. tools so far are not everyday Internet users trying to communicate with one another but those who are producing the cheap, attention-grabbing A.I.-generated content that is monetizable on social platforms."

        Yes. The main use case for LLMs remains blithering.

        • Terr_ a day ago |
          > The main use case for LLMs remains blithering.

          And in a close second-place, to cheaply/easily counterfeit signals that we are/were treating as indicators of human things like "intelligence", "time-investment", "emotional involvement", "distinct identity", etc.

          I wonder how many traditional cover-letters / essays will vanish because the format has been so debased it doesn't mean anything, and people will say: "Just give me the core bullet-points."

      • dspillett a day ago |
        > notice that it has 60k+ likes. To me it says that the stuff is indistinguishable from real for enough people

        Or that buying likes from bot farms is incredibly cheap.

    • bakugo a day ago |
      I wish this was actually true. The unfortunate reality is that the average internet user can't tell that they're looking at AI generated content with an AI generated caption and AI generated people posting AI generated comments, as we've already seen on Facebook.
    • r0m4n0 a day ago |
      Or humanity will devolve into confusion over what is real or AI generated for the next decade until we learn to live alongside the monster we have created.
    • orev 2 hours ago |
      Unlikely. What’s more likely to happen is that the AI posts that don’t get good engagement provides a feedback loop into the AI, training it to what works and what doesn’t.

      I believe we’re already seeing this on Reddit where upon the initial OpenAI deal, there was an influx of obviously poorly generated AI posts, people flagged them, and they fell off pretty quickly. The posts that make it through now are the ones people can’t tell are AI, based on that feedback.

      We’re training it in real time how to fool us.

  • desumeku a day ago |
  • ryandv a day ago |
    This is just the practice of invasive data harvesting taken to its natural conclusion, which any tech savvy user or computer geek brought up in the pre-postmodern social media era of the Internet (before Facebook) could have seen coming decades in advance. The only winning move here is not to play, accepting the unfortunate consequence that you will need to "limit the reach of your profile just to avoid participating in the new technology" - because nothing on the Internet is ever forgotten, and can (and will) be used for any purpose.

    Adding a few magic words to bewitch the AI into not scraping your profile is the new superstition of the digital era, a cousin to the pseudolegal "no copyright intended" incantation often seen on pirated YouTube videos of yesteryear. You cannot have your cake and eat it too, for there is a fundamental tradeoff between privacy and convenience as popularized by Schneier, 11 years ago. [0] You must stop using the platform and do something other than continue to consume vapid social media nonsense; yet no one ever listens or cares, for the revealed preference of the masses is to continue to not be users of the system but to be used in exchange for "free" access to these platforms.

    [0] https://www.schneier.com/blog/archives/2013/06/trading_priva...

    • joe_the_user a day ago |
      I'm not sure why having an LLM use my output in particular is problematic - I'm pretty sure OpenAI trained GPT-3+ on every single bit of data they could find, so they probably included a bunch of stuff I've written here already (though it is minuscule fraction of their vast corpus of course). Losing anonymity is something I'd be much more worried about overall (not that it's entirely separate I don't think it's the same).
      • rpgwaiter a day ago |
        There’s many good reasons, but for me it’s that I don’t want companies profiting off of my posts that have no intention of profit sharing.

        When I make a Youtube video and companies run ads on it, I get a piece of that pie (assuming I meet the requirements, etc.).

        That same video fed into Gemini so google can charge for AI video generation? I get nothing, Google makes bank. As a user I can pay for YouTube premium and not see ads, but as a creator there’s no amount I can pay to not feed Gemini.

        • gruez a day ago |
          >That same video fed into Gemini so google can charge for AI video generation? I get nothing, Google makes bank.

          How do you feel about commenting on this site for free, which probably provides some benefit to ycombinator?

          • rpgwaiter a day ago |
            I would happily pay a reasonable monthly subscription to this site or similar. No problem paying for a service that treats users with respect. Also using this site has taught me all sorts of things that likely made me some money indirectly. It seems mutually beneficial without selling my data or paying for HN.

            That said, if YC made a deal with some tech company to give them the firehose of data to train AI, I’d probably stop using HN. I stopped using reddit for a similar reason despite being a very frequent redditor with like 60k karma. I know it’s all pretty open and getting fed into many different LLMs anyways, but thats not necessarily YC’s fault.

            My ideal would be strong government regulations regarding AI training, requiring explicit opt-in that isn’t buried in a ToS or EULA. Ideally companies would require a “non-AI feeding” version of their website to legally run in my country.

            I can’t imagine a scenario where this happens in the current system, but I sure can fantasize.

            • gruez a day ago |
              Right, but are you objecting to AI training because companies are benefiting while you're being uncompensated, or think AI training is fundamentally bad? Your previous comment suggests it's the former, but by the same logic you shouldn't comment online either, because that also benefits the company and is uncompensated.
              • afiori 15 hours ago |
                They already said that they believe commenting here has been mutually beneficial but anyway this is a false dichothomy, one could be neutral on AI in general but feel negative towards training proprietary privacy-invasive AI models that will for sure be used to make their content creation less relevant.
              • shprd 10 hours ago |
                Users are here because they want to, they chose to participate in this community. You can stop using HN at any minute and Ycombinator will not chase you.

                But AI companies on the other hand took the internet hostage. They stole any creative work, code, art, and literally any data they could their hands on with no regard to license or consent from users. No one actively opted-in to let AI companies have their personal data, they just silently grapped everything they could. Maybe there's some obscure website where you shared something private and lost access to the account or the website even went down? Congrats, it's now revived in OpenAI dataset where you've absolutely no control or details about how it's being used, not even a way to request and pursue legal action because the training data is a "secret".

                It's not about compensation, it's that fact you have no option or say even if you don't use their services.

                You can't escape or opt-out, unless you go off the grid, and even then they still retain your old data and use it as they see fit.

          • 01HNNWZ0MV43FF a day ago |
            I get the satisfaction of telling off strangers
        • recursive a day ago |
          > for me it’s that I don’t want companies profiting off of my posts that have no intention of profit sharing

          I won't argue with your position, but most people would be hurting themselves more than they would hurt the companies, even in absolute terms.

        • csallen a day ago |
          > I don’t want companies profiting off of my posts

          I despise this attitude. It's so entitled.

          Our history of forever extending copyrights and protecting "intellectual property" has run amok, to the point where the average person thinks their scribbles, utterings, and ideas are valuable enough on their own to be worthy of a pay day. It's the culture of "My cut, my cut, my cut!"

          Someone else profiting is not a tragedy to get up in arms about. The fact that you were somehow, tangentially, kinda sorta in the vicinity of that profit does not and should not mean you are owed money.

          If you want to talk about privacy, sure, that's an issue worth bringing up. But, "I'm only mad because someone else made money and I didn't get paid," has nothing to do with privacy. It's pure greed, entitlement, and envy.

          You know what? If you want to profit, do something to create value. Write a book. Start a paid newsletter. Create a startup. Put on a show and charge admission. Nobody is stopping you.

          But if someone else figured out how to use a snippet of a comment you made 10 years ago as one-quadrillionth of the training data for a powerful LLM… if someone else figured out how to use your publicly-shared social media posts to attract advertisers to a platform they built… if someone else used 6 notes from a song you once sang to create a smash hit… kudos to them. They created something of value. You should've and could've done it yourself. Hell, you still can.

          But nobody should owe you money. We should not have a society where people who actually create stuff are subject to endless friction and threats from do-nothings and patent trolls demanding "my cut" if the metadata from their words or actions contributes to 0.0001% of someone else's idea that they turned into profit with hard work.

          • _heimdall 20 hours ago |
            > You know what? If you want to profit, do something to create value. Write a book. Start a paid newsletter. Create a startup. Put on a show and charge admission. Nobody is stopping you.

            I believe that the GP's complaint is that their content online is actually being scraped and turned into value for companies, they would want compensation for it.

            I'm personally of two minds on this, posting public content online includes no guard rails for how its used. I also disagree strongly with LLM companies throwing mountains of resources at scraping the web though, if nothing else it feels very much like a monopolistic play leveraging massive power in those resources to create a competitive edge that other players couldn't compete with.

            • Dylan16807 15 hours ago |
              > I believe that the GP's complaint is that their content online is actually being scraped and turned into value for companies, they would want compensation for it.

              And the comment directly addresses that. If someone creates a valuable thing and it has a minuscule pinch of your content inside it, you shouldn't be complaining or demanding payment. That's how participating in culture is supposed to work. When someone copies you orders of magnitude more directly, that's when you should be compensated or have control over it.

              • _heimdall 8 hours ago |
                That's a totally reasonable take, though it is just one opinion. I wouldn't tell someone they can't complain or feel entitled to payment for the value they created, though I bet we both agree that posting publicly online offers no expectation of payment by anyone coming across your content.
              • JohnFen 7 hours ago |
                Since the web was widely scraped to train LLMs, I have to assume that the entirety of what I had up on the web was included. That's more than a "miniscule pinch". I consider it to be wholesale abuse. For me, money doesn't enter into it at all.

                However, there's literally nothing I can do about it aside from withdrawing from the public web -- which is what I've done, aside from writing comments here. Until/unless there is some sort of effective way of defending against the crawlers, the open web is no longer a suitable place to publish anything.

                • Dylan16807 6 hours ago |
                  The complaints I see are almost always aimed at the output of an LLM, and that only contains a significant amount of a work when it breaks.

                  Going after the LLM itself, not the output, is a lot trickier. Anyone can make a big database of public website contents. And if they use it to make a search engine for example, that gets classified as entirely legitimate. If we're excluding the output of the LLM, what's the difference?

                  Also if you scrunch down into a small model, it mathematically can't contain very much of the input text.

                • _heimdall 5 hours ago |
                  There's never going to be a way to defend against crawlers and still have an open web. Good actors may respect conventions like a robots.txt file but that's ultimately just a polite request.

                  You could get further trying to block by user agent headers, known crawler IPs, etc but then you're just taking up the same fight advertisers have with ad blockers.

          • johngladtj 16 hours ago |
            I couldn't have put it better myself.

            If you don't want others to use what you say to make money... Shut up.

          • OvbiousError 10 hours ago |
            OP is not saying they want money. They say they don't want companies profiting from their work. The two are unrelated here.
        • seanhunter 7 hours ago |
          I'm pretty sure for antitrust reasons Google is (ironically) about the only AI company who is not training generative AI on youtube content. So when you make a youtube video and people train AI on your video and generate stuff using it you only get 1 click of benefit out of it from their first download.
      • quantified a day ago |
        I was served a youtube ad for a service that will let you pick a topic to write an ebook on, write the book for you, give it a layout design and a cover, then you can market it. A bunch of topics available. I find this to be gross. Look for an explosion of crap on Amazon bookshelves.

        Do you want me to publish 12-15 different ebooks containing the content you actually worked to create, and it just found permutations for?

      • JohnFen 9 hours ago |
        > I'm not sure why having an LLM use my output in particular is problematic

        For me, the problem is almost entirely that doing so requires me to have a great deal of trust in entities that I consider untrustworthy.

    • moron4hire a day ago |
      It's not possible to not play. Even if I chose to avoid social media, my kids' school insists everyone interest with their teachers through some crappy branded app.

      Read the TOS. Didn't like what I saw. Told them I didn't want to install an app on my phone. Asked nicely if there was some way I could participate without it. They acted like I beat my kids.

      • _heimdall a day ago |
        How do schools actually manage requiring this? Is it required that all students have access to smartphones, tablets, the internet in general, etc? If a student doesn't have a device to install the crappy branded app on, what does the school do?

        I don't have kids yet though that time is likely coming, this goes firmly on the growing list of reasons why we'd choose to homeschool.

        • theGnuMe a day ago |
          When you homeschool you will likely buy said crappy app (or another one) to help you actually school your kids...
        • SoftTalker a day ago |
          Schools that do this typically provide a Chromebook or iPad to each student.
          • hyggetrold an hour ago |
            Notice how there's a very consistent winner in all this :)
      • pessimizer a day ago |
        > Asked nicely if there was some way I could participate without it.

        You've done your duty with them. Now write a letter to the school board. Give them a little while to respond, and if they don't, start handing copies of the letter to other parents outside of the school gates.

        If you can't get other parents to care, then you're in trouble. Try to claim a religious exception.

      • masfuerte a day ago |
        Organisations insist that I interact with them using an app. I show them my Nokia feature phone and they find an alternative.

        Really, this is more damning. They could accommodate you but they don't because you could submit to their bullshit and choose not to.

        Maybe the answer is to carry a dead feature phone to show to them. Like a talisman to ward off their evil.

      • SirMaster a day ago |
        Sure it is, you can homeschool for example.
        • moron4hire 21 hours ago |
          No thanks, don't want to put my kids through my childhood
          • _heimdall 20 hours ago |
            Assuming you were homeschooled and it went poorly for you, was your takeaway that it could never be done well? What did you find that makes homeschooling always worse than trusting the local, likely poorly paid and appreciated, state educators?
            • em-bee 12 hours ago |
              What did you find that makes homeschooling always worse

              we all live by the experience of our childhood. it is pervasive, and influences us in ways that we can't escape. i don't know what the parent commented experienced. but most likely they have not experienced bad regular schools, nor seen well working homeschooling. so their own homeschooling experience probably is all they have to go on.

              i struggle as a parent because i missed a lot of important experiences from my childhood. as a result i am unable to replicate them. even if i get told by others what i should be doing, it feels unnatural and uncomfortable because i have not experienced that myself. for all i can tell, my kids childhood is better than my own, but i am repeating many of the mistakes of my parents because i simply don't know any better.

              homeschooling may go the same way. if the parent had a bad experience homeschooling, they may just be unable to translate that into a better experience for their kids even if they believe or know that a better experience is possible.

              trusting the local, likely poorly paid and appreciated, state educators?

              my experience in a US highschool led me to conclude that the poor pay self selects for more motivated teachers. all the teachers i had there were excellent and i had the best time there out of all my schooling. that's an anecdote of course, and there are many counter examples, but equally you could ask the reverse question. what did you find that you believe that government schooling is necessarily a bad experience?

              homeschooling is not for everyone. during covid, my ability to engage the kids into learning activities was an abysmal failure. so despite believing that great homeschooling is possible and not having experienced bad homeschooling myself, i'll never be doing it with my kids.

              • _heimdall 8 hours ago |
                Thanks for sharing here, its always interesting to hear a different person's experience/story. With regards to public schooling I'll share my anecdotal experience, as you mentioned I can't compare directly to homeschooling since I only ever went to public school.

                I grew up in a pretty well off area, by no means was it a rich area but solidly middle to upper middle class. My public schools were pretty well funded and maintained, the teachers I had were very hit or miss though.

                I had a handful of really good teachers in high school, maybe 3 out of the 25 or so different teachers I had. I had at least as many that were down right awful and had no business teaching. The rest were somewhere in the middle, they did seem to care about their job but weren't very good teachers and were mostly just teaching to the test.

                That is my biggest issue with how public schools are run. Public education is made into such a specific process of how students are taught, what they are taught, and how they are tested and evaluated that the education seems better suited to developing robots rather than adults. The remnants of an educational system designed to produce factory workers are still very noticeable in my opinion.

                I saw plenty of my peers struggle in school because they weren't interested in the topics they were forced to learn and weren't offered the chance to engage with what they actually enjoyed. Others struggled because the teachers we did have didn't understand the materials very well and weren't able to teach lessons in different ways for different students (I always noticed this most in math classes).

                Homeschooling is interesting to me, if and when we have kids, because (a) I experienced way too many bad teachers in what was comparatively a pretty good school system and (b) I don't prefer the idea of the state taking such direct control over what, how, and when kids are taught. It definitely isn't for everyone though, totally agree there and I've seen friends and relatives bounce between public schools and homeschooling as they struggled with making homeschooling work for them.

                • em-bee an hour ago |
                  i agree that traditional schooling leaves a lot to be desired. it's not just public schools but private schools as well. it's the whole structure of lectures and boring homework and the inflexibility of the teaching material. but most of all, and i believe studies confirmed that (i vaguely remember there was a discussion about this here on HN) the primary factor for a good school experience is the quality of the teachers. that goes for homeschooling as well as public or private schools. and the solution here is not the funding for schools, but the funding of education of teachers. no amount of school funding is going to make schools better if we don't give teachers better training.

                  besides that though, i think schooling really needs to be revamped. and we already have proven alternative options, if only our governments had the courage to try them.

                  i am mainly thinking of the montessori education model. it has proven itself. and teacher training is not expensive. it only takes one year if done fulltime.

                  i really do not know what it takes to change that though. there is so much resistance to change in the education sector that it is really painful to watch.

                  btw, it is interesting to note that in germany where i am from, homeschooling is strictly illegal. the reason today is that homeschooling allows families to avoid integration into the wider society. it enables insular thinking and allows families to avoid contact with others who think differently. the goal of public schooling in germany is to let children of different backgrounds, cultures, opinions and worldviews interact and learn from each other, to accept and tolerate them and to create an integrated society. school is the only place where different cultures are forced to interact with each other.

        • 7bit 15 hours ago |
          Americans and their home schooling
      • em-bee 11 hours ago |
        exactly this. i have written about this before: https://news.ycombinator.com/item?id=41529783

        some people just do not understand that there are interactions that are forced on us and that refusing to participate is only to our own disadvantage.

        even if they allow you to participate without installing that app, you will always be second class. people will forget to copy you on messages, or not see messages you sent. and they will have a grudge because you make them do extra work. they will leave you out on things that are optional, or not vote for you when electing a parent representative. they will reject you for not being a team player.

        the irony is that as a software developer i would not hesitate to create such an app. surely my app is better than the alternatives. and the TOS of my app is fine. oh wait, my boss changed the TOS without asking me. ooops.

  • vouaobrasil a day ago |
    > For the time being, though, avoiding A.I. is up to you. If it were as easy as posting a message of objection on Instagram, many of us would already be seeing a lot less of it.

    It's true that it is quite hard, but there are ways to reduce it for sure. Here is what I have done:

    1. I've deleted accounts for websites that promote AI. I have already deleted LinkedIn, Github, Medium, and a few others.

    2. I have stopped supporting businesses that use AI/support ones that are against AI. For example, the company behind the Procreate iPad app is 100% against AI so I support them. Also, in my professional life, I have already refused to collaborate with three separate companies due to their promotion and use of AI.

    3. I've deactivated any tools that could be AI based like assisted writing tools in Gmail.

    4. I do not click or read any articles with AI-generated images or text. Nor do I watch any videos any more.

    5. I am reconnecting with friends and share with them through email and other means.

    In my opinion, the internet has gotten WAY worse with the introduction of generative AI. Generative AI itself is not the root cause of course: the aggressive capitalistic takeover of the internet is, but AI is the apex tool for that and it makes the internet a rather horrible place.

    • chrisjj a day ago |
      > 4. I do not click or read any articles with AI-generated images or text.

      How can you possibly know the articles you read have no AI-generated images or text?

      • debugnik a day ago |
        So what if his criteria has false negatives? AI or not, ignoring noticeable slop will be a net positive.
        • chrisjj 12 hours ago |
          > So what if his criteria has false negatives?

          Then he unjustly undermines the livelyhood of human authors.

          The AIs will applaud him.

      • rpgwaiter a day ago |
        Not OP, but it’s extremely easy to tell if an article uses AI images.

        Are there vague images padding the article for SEO purposes?

        If so, is there attribution to an artist? If they contain images and don’t attribute, I avoid the website. Either they use AI to make SEO bait, or they steal artwork. I’ll avoid in either case.

        AI text is much harder to detect, not sure of a good way to avoid it at the moment.

        • rafram a day ago |
          Lack of attribution doesn’t tell you anything. If they’re licensing their images from a stock photo site, attribution likely isn’t required.
          • rpgwaiter a day ago |
            Hadn’t thought about that, you make a good point. However in that case the author didn’t feel it necessary to explicitly say the image is not AI generated, which means they likely share very different views about AI artwork and I’ll probably avoid.

            Even without that caveat, if they feel it beneficial to pay for stock photos for an article I’m probably good giving it a pass. Most major stock media companies are buying fully into AI generation anyways, and I can’t think of too many cases where stock images really add anything to an article aside from adding to the page size by an order of magnitude.

            • rafram 9 hours ago |
              Generally speaking, the human brain likes pretty pictures, and we're more likely to be engaged by an article if it has a pretty picture. Text-only sites don't draw as many eyeballs. Sad but true!
        • gruez a day ago |
          >If so, is there attribution to an artist? If they contain images and don’t attribute, I avoid the website. Either they use AI to make SEO bait, or they steal artwork. I’ll avoid in either case.

          I'm not sure why you think every use of AI generated images is "SEO bait". I'm sure some (most?) are, but it's perfectly plausible a well written article uses AI art in place of a generic image off unsplash or whatever.

          • SoftTalker a day ago |
            > I'm not sure why you think every use of AI generated images is "SEO bait"

            Most of the internet is SEO bait. It’s the safe default assumption.

        • chrisjj 14 hours ago |
          > If they contain images and don’t attribute, I avoid the website.

          And if the images have AI-generated attributions?

      • vouaobrasil 13 hours ago |
        > How can you possibly know the articles you read have no AI-generated images or text?

        I use a web of trust. I have a network of known writers who don't use AI. I avoid others...I mean, most articles on the internet are relatively useless anyway and just for entertainment so I don't really "need" those in my day to day life.

        The ones in my web of trust are enough for me,

    • zamadatix a day ago |
      A commendable effort (those are some serious sacrifices/actions - no videos!?) but it's a bit at odds with posting about it here which puts into question how much reduction in AI content specifically is occurring vs how much reduction in general consumption (which isn't necessarily a bad thing though). I.e. Y Combinator invests in hundreds of AI companies, >25% of the front page posts are regularly about AI, it was an original investor in OpenAI, users here regular post and discuss about how the latest AI models are even harder to catch. Even if you try to use the site in the most human way possible (filtering explicitly AI content, not engaging with comments referencing it, etc) it still seems like there would be a ton of AI content at a place that does more with AI than most of the companies listed.

      All that isn't to scare you from Y Combinator or anything, it's just to get your thoughts on how much you think these actions have really changed your consumption ratio vs your overall consumption? And how have you come to any certainty in the measurement of e.g. whether an article you're about to click on has AI images or text (that would seem near impossible to do much of the time even after reading?)

      • vouaobrasil 13 hours ago |
        > your thoughts on how much you think these actions have really changed your consumption ratio vs your overall consumption?

        Well, in terms of overall consumption, I also write and talk extensively about the dangers of AI using these examples so I think they might help my audience

        > And how have you come to any certainty in the measurement of e.g. whether an article you're about to click on has AI images or text

        The idea is to build a web of trust. Often, I ask. If I am uncertain, I don't read. Also, I help run a small magazine and we ask all authors to submit a statement that says they did not use AI. Of course, we have to trust people but what are you going to do?

        I also work for a magazine where all of us dislike AI and we don't use generative AI for any of our articles. I know my coworkers well so I trust them too.

    • makeitdouble a day ago |
      > I have already deleted [...] Github

      Given that Gitlab and GitBucket are also integrating AI, is getting out of these two platforms even a realistic choice for 99% of devs working for a regular company ?

      On the other hand, convincing your CTO to get your organization out of any major code management platform feels like a pretty exciting chalenge to tackle.

      • jazzyjackson a day ago |
        Eh. Gitea is a thing. I wonder how the creator of Fossil feels about AI. Clearly the juggernauts will keep juggernauting but theyre not the only options to collaborate on a codebase. My dream is that more people learn to Just Use Git without the megacorp baggage
        • makeitdouble a day ago |
          Yes, and it's pretty important there's commercial alternatives left on the market.

          A few decades ago it would be a different story, but as of now the appeal of going for Github/Gitlab is usually not the git integration and more the rest of it: the issue/MR/PR management, the whole integrated workflow, the CI ecosystem, user and right access management, etc.

          I think there must a decent number of organizations keeping a private/local code repository but sync it to github/gitlab to get all the other benefits.

      • vouaobrasil 13 hours ago |
        It's not really a realistic choice I guess but since Git is very distributed it's not an impossible one. Luckily for me, I'm really the only one who codes in my own small organization so I don't have to use Github.
  • joe_the_user a day ago |
    I thought it was going to be about escape generative AI output online.

    Personally, I run a large-ish niche FB group and I haven't seen a threat of bots taking over there or other online spaces. Is there anywhere that people have seen the bots crowding out the people? My guess is Twitter but that place would already/always a sewer.

  • 23B1 a day ago |
    "People just submitted it. I don't know why. They 'trust me'. Dumb fucks."

    -Mark Zuckerberg.

  • yazzku 21 hours ago |
    > hundreds of thousands of Instagram users posted a block of text to their accounts hoping to avoid the plague of artificial intelligence online. [...] “I do not give Meta or anyone else permission to use any of my personal data, profile information or photos.”

    Actually, you did. What you didn't do is read the fucking terms of service.

    But sarcasm aside, it is a general failure of the system that things have gotten to this point. The amount of naivete like is seen above is tremendous, yet somehow we and everyone who understands this crap have failed to deliver the memo.

  • p0w3n3d 8 hours ago |
    I think one could hypothetical create a preamble invisible to human which would overload the LLM reading one's article.

    Like Rick and Morty did in simulation.

    At least it would cost a lot more to harvest one's article for the harvesting company