I Received an AI Email
692 points by imadj 3 days ago | 567 comments
  • namanyayg 3 days ago |
    This is going to become more common everywhere.

    If the dead internet theory isn't already true, it is going to be soon.

    Such "personalized" cold outreach is seen as the next holy grail by marketers and will be a common sight on LinkedIn, Twitter, Email etc, soon.

    • themanmaran 3 days ago |
      It's truely a race to the bottom. Cold email response rates are already ~1% industry average. Every outbound tool is adding the AI customization, and there is a slew of 'AI sales rep' companies promising more and more personalized spam.

      There will likely be rewards at first. An uptick in response rates as most of the market won't recognize emails are AI generated. But because it's trivial to send AI personalized emails at massive scale, your email inbox will become entirely useless.

      • nostromo 3 days ago |
        1% is also about how well this worked, according to the sender's blog post.

        10 signups / 970 emails sent

        • youssefabdelm 3 days ago |
          Kinda makes you wonder... why don't they just advertise with those odds?
          • lmm 3 days ago |
            Because that's an order of magnitude better than what you get from advertising.
            • youssefabdelm 17 hours ago |
              Assuming signups = $, true. But not if there's some free trial or something and then you've got a conversion rate on top of that.
        • altdataseller 2 days ago |
          Thats actually a very good rate. 1% conversion rate is drastically different from a 1% response rate
    • supriyo-biswas 3 days ago |
      The silver lining is that people will learn to just ignore such outreaches and word-of-mouth feedback will be important again, or at least I hope so.
      • nosbo 3 days ago |
        Word of mouth with IRL people? I'm not sure I can assume anyone on any forum is real anymore. And if they are real, I assume they are marketers pretending to be users to push a product. Maybe journalism makes a come back if you can trust they are real and not a sellout.
        • sambazi 3 days ago |
          pretty sure top meant RL ppl IRL aka meatspace.

          information coming over unqualified electronic channels is not trustworthy anymore

      • portaouflop 3 days ago |
        It is already like this in my experience.

        Cold outreach is dead and word-of-mouth is the most effective marketing method

      • lmm 3 days ago |
        The AI spammers will hire people on minimum wage to do that too, if they aren't already.
        • beAbU 3 days ago |
          I am convinced that any post on Reddit that espouses the virtues of some or other product is a paid advert.

          There is way too much corporate worship despite the platform's users generally priding themselves on being enlightened and smarter than the rest.

    • cranberryturkey 3 days ago |
      What is the "dead internet theory"?
      • kibwen 3 days ago |
        It was a joke from the 2010s that most of the people that you interacted with on the internet were actually bots, and that you were the only human using the web.

        Now, in the post-LLM age, it doesn't sound like a joke anymore.

        • devjab 3 days ago |
          Does it really matter if you’re being cold called by an AI or some sales person following the same few procedures they always do?

          I’d prefer sales people keep their jobs. Having had the misfortune of being seated next to the telemarketing team in an investment bank for half a year… however… Let’s just say that I’m not sure you would even know if it was a person or a bot. They’re not even scripted or “trained” like your average telemarketer because our target audience is actually somewhat interested in what we sell, but listening to them repeat themselves over and over from their own “personal scripts”… well… they are already bots man.

          • cranberryturkey 2 days ago |
            Very true, I sat next to the sales guy at a small company and he was on the phone all day repeating himself day in and day out. Easily replaced by AI.
    • cpach 3 days ago |
      Recruiters on LinkedIn already used automation for outreach even before LLMs became popular.
      • devjab 3 days ago |
        LinkedIn is already on this. The reason they had their little “skills tests” is because what they used to sell was the collection of “skills” listed on your profile. I say skills because I’m not sure what the English word for knowing C# and listing it on your linked in profile is and I can’t seem to find it.

        Anyway, I assume that the reason they are dismantling the skills system (and their verification quizzes) and moving things into personal “projects” is because it’s too easy for marketers to skip the LinkedIn tools if it remained the way it was. Now, however, with Microsoft own LLMs trundling through our data, they’re going to maintain their monopoly on easy access to professionals that meet certain requirements.

        I guess it could also be because those skill quizzes had their answers readily available all over the interwebs.

  • metadat 3 days ago |
    > And their blogpost starts of with:

    >> Have you ever received an email that felt so personalized, so tailored to your interests and experiences, that you couldn't help but be intrigued? What if I told you that email wasn't crafted by a human, but by an artificial intelligence (AI) agent?

    > I don't really have words for this, but I dislike this.

    What a classy understatement. I find the strategy employed by Wisp predictable and infuriating. Like insects or other near-automata, humanity is racing to the bottom with "Generative AI". And I use "AI" in the loosest possible sense here, because once you pull back the curtain, current tech is actually only a slightly better Markov chain.

    After using chatgpt regularly, it's responses to anything but the most trivial, clueless questions are riddled with errors and "hallucinations". I often don't bother anymore, because it's easier to go to the original source: stackoverflow, reddit, and community forums. Gag. It does still make a good shrink / Eliza replacement.

    • endofreach 3 days ago |
      > > I don't really have words for this, but I dislike this.

      > What a classy understatement.

      Maybe i should write a blog, simply because i have a lot of words for this... but well, they would neither be classy nor understatements.

      • zamalek 3 days ago |
        Kudos to the author for naming and shaming. I am honestly bewildered as to how this Raymond thinks that insulting a developer's intelligence could result in a lead.
    • mlsu 3 days ago |
      I love that turn of phrase. Insects or near-automata. Describes it perfectly.

      LinkedIn -- like a floodlight in a swamp.

      • metadat 3 days ago |
        I enjoyed your comment so much I've added it as a quote on my profile. Thank you!

        https://metadat.at.hn/

    • hunter2_ 3 days ago |
      > After using chatgpt regularly, it's answers

      It isn't responding with answers. It's responding with probable verbiage. An actual "answer" requires a type of interpretation that it doesn't perform.

      • Tao3300 3 days ago |
        > probable verbiage

        I like that phrase. Also, how'd you get my password?

        • lelanthran 3 days ago |
          So that's what it is. I just saw '****' ...
    • DrSiemer 3 days ago |
      Dismiss it all you want, it's still going to destroy what is left of the open internet and unsolicited email communication.

      Those haven't been in the best shape for the last decade anyway. The benefits of easily accessible compressed knowledge far outweigh the cost, so we're still going up imo.

      ChatGPT is perfect for mundane development tasks and language mobility, so quite useful for a significant portion of especially low level developers. I've prompted a bunch of useful little Python scripts myself, without ever bothering to even check the syntax.

  • zufallsheld 3 days ago |
  • elaus 3 days ago |
    AI will not only pass many classical spam filters (Bayesian filters), it will also make it much harder for humans to detect spam (OP's post being a good example).

    I never fell for a spam mail so far (i.e. not once clicked a link like OP did), but I fully expect this will change soon. Tough times for people that commonly expect mail from random strangers.

    • prmoustache 3 days ago |
      Well it is quite easy. No real humam has been using email anymore in the last 5 years or so.

      Even in the workplace it is now common for most people to have a signature saying "only contact me via ms teams".

      I am pretty sure that sooner or later the spam will find its way on teams/slack/discord the same way it does on whatsapp but at the very least they are easier to block permanently.

      • gambiting 3 days ago |
        >>No real humam has been using email anymore in the last 5 years or so

        Wow, that's some extrapolating from a personal bubble if I've ever seen one. Plenty of workplaces still have email as their default communication method.

        • prmoustache 3 days ago |
          There is obviously a little bit of exaggeration but when I open my email at the workplace the bulk of the mails are:

          - semi automated reminders (you haven't filled your timesheets!), usually sent by humans but that do not expect answers - internal newsletters - general HR news - special news: electrical issues at the office, stay at home! - spam

          Bottom line: none is addressed to you as a particular human, nor require answers.

          I am sure it changes for people who have interactions with people outside of the company but I would hate having their job and don't understand why companies haven't adopted XMPP widely to make those kind of interactions. I can theorically receive spam via XMPP, but it requires at the very least that I approve the relationship before hand so if it comes from a domain I don't expect I have no reason to accept that trust.

          But on personal side, I haven't received anything from a human for years. People I know usually know my phone number and contact me via instant messaging.

          • gambiting 3 days ago |
            Right, but that's an anecdote - and if we're sharing those my last company that I left very recently everything was an email. If you needed to speak to a lead or a developer from another team you'd email them, even though we had MS teams. You'd maybe ping the person on Teams for a quick thought, but if it was anything more complicated than couple messages you'd send an email. And that was a a big corporation of 40k people.

            >>But on personal side, I haven't received anything from a human for years.

            I actually have an old friend back from high school and we talk daily using emails. He doesn't use any IM apps so it kinda stuck as our default way of talking.

            And of course I exchange emails whenever there's some kind of customer service thing that needs to be dealt with - it's always best to have things in writing.

            • prmoustache 2 days ago |
              > And of course I exchange emails whenever there's some kind of customer service thing that needs to be dealt with - it's always best to have things in writing.

              I have the feeling contact forms are disappearing everywhere nowadays. Everything is either a chatbot or a chatcall these days.

    • portaouflop 3 days ago |
      I would treat it like I do with phone calls/messages - if it comes from a number/address I don’t know it goes into the trash.

      I have no need of messages by random strangers

    • jtriangle 3 days ago |
      I talked to a relative from nigeria one time for a couple months. He was actually in nigeria, spoke pretty good english, and was scamming to get by and fund his way through college. He said his group was doing ok, and he was living better than most. He sent me pictures of himself, and where he worked, his motorcycle, all sorts of things. Not as a pitch either, like, he was proud of those things and I was interested so he was happy to show and tell about his life, we even exchanged some recipies.

      Then one day he just stopped replying, and his email address would bounce. My best guess is it got shut down, for, you know, scamming. Bummed me out though, he was cool, except for the scamming thing.

    • fsckboy 3 days ago |
      Bayesian filters? how quaint. you haven't switched to AI filtering yet? Your AI has an advantage over their AI because it has read all your other email and knows what you are actually interested in.
  • ChilledTonic 3 days ago |
    I’m actually thrilled by this, as it means all the hack marketers that spam my inbox incessantly with whatever product they’re hucking - this time for sure perfect for my business, in spite of the fact I’ve ignored their last ten emails - are all out of a job, and good riddance.

    The author sounds unfamiliar with this brand of marketing email, so I can see why it would come off disquieting to find it’s all AI - but it’s equally annoying from a human.

    At least with AI sending this crap nobody can use these emails to justify their sales bonus.

    • _nalply 3 days ago |
      Some people will send their mass spam and phish anyway. No thanks.
      • purple-leafy 3 days ago |
        Spam? Easy. Someone selling something? Spam! I might set up an automatic email responder that reads an emails contents, runs it through my own LLM, and if the email is trying to sell me something, auto reply with “fuck off!”
        • Brajeshwar 3 days ago |
          I'd rather delete/block it than reply/react to it at all. If you react, they know you exist and you are a valid target to re-target repeatedly, resold to other marketers.

          Mark as SPAM or Block/Filter or Ignore.

          • purple-leafy 3 days ago |
            Okay new plan, I’ll have another email that responds to the email and says “fuck off”, meanwhile my honeypot email will block and mark as spam
            • Ekaros 3 days ago |
              Sadly I think it is illegal to sing up these addresses to every service known to you... Otherwise it would be interesting SaaS opportunity. Automatically sing-up spammers to any number of newsletters or contact forms...
              • purple-leafy 3 days ago |
                I think you just gave my life purpose. It will be my magnum opus.

                Actually that’s already been completed, and will be released to hackernews in the coming days

            • _nalply 2 days ago |
              Often spammers and phishers misuse legitimate emails they hacked.

              Just ignore and move on.

          • immibis 2 days ago |
            When they're paying real money to scam you, wasting their time isn't a terrible idea. Like keeping the Microsoft virus scammers on the phone for an hour while you set up a virtual machine for them to remote into.
    • jeauxlb 3 days ago |
      Why are you happy that people are out of a job here? You still suffer the ills of the product, now infinitely more incessant, at a marginal cost of $0.
      • ronsor 3 days ago |
        I think it's reasonable to be happy that someone is not getting paid to do something you hate. In fact, if you're suffering unwillingly, you probably want as few people as possible to benefit.
        • crabmusket 3 days ago |
          OpenAI is getting paid to do it.
          • ronsor 3 days ago |
            Yes, but a lot less than if a person were getting paid to do it, so still less money is changing hands.
            • lucianbr 3 days ago |
              I don't know which of "5 randos getting a living wage by spamming me" and "Altman getting filty rich by spamming me" is worse. I'm inclined to say the latter, though of course it's quite close.

              Wish SV would stop thinking anything that makes money is great, no matter the crap it inflicts on people. Guess I'm asking for way too much.

            • maronato 3 days ago |
              I don’t think so. Marketers don’t send X amount of spam because X is the right amount of spam they want to send. They are limited by how much money they want to pay in salaries and management, which defines how many people they can hire to send spam.

              If the people they employ today suddenly became twice as productive, the company wouldn’t fire half of them - they just would enjoy twice the profit. The same applies to AI.

      • Joker_vD 3 days ago |
        Because maybe, just maybe — those people will find some other jobs, and those jobs will be more socially beneficial this time? One can dream.
        • bryanrasmussen 3 days ago |
          They can maybe get jobs for Microsoft and call people up to tell them they've noticed something is wrong with their computer!!
        • bowsamic 3 days ago |
          “maybe, just maybe”

          “One can dream.”

          You’ve either used these sarcastically, or accurately. I think you’ve done the former, but the truth is the latter.

          • Joker_vD 2 days ago |
            I am absolutely serious. Any employment has opportunity costs: a person who writes and sends out cold call spam e-mail for 8 hours a day is a person who could be spending those 8 hours on something else, but isn't. Yes, switching jobs is not very easy, and it's stressful but humans, thankfully, are not (yet) a species of highly-specialized individuals, with distinct morphological differences that heavily determine the jobs they potentially can or can not do.
            • bowsamic 2 days ago |
              So I was right, you did use it sarcastically, since you are still naive
    • kazinator 3 days ago |
      How do you know it isn't exactly the same people, with zero reduction in headcount?

      Designing the content of spam e-mails sounds like a small aspect of the "job".

      If AI spams start fooling people more reliably, that's not something to celebrate.

      This blogger thought, at first, that it came from an actual reader. I can't remember the last time I thought that a spam was genuine, even for a moment. Sometimes the subject lines are attention-getting, but by the time you see any of the body, you know.

      • jstummbillig 3 days ago |
        If you do nothing that is discernible from noise (be that manually or through AI), unless your explicit goal is to generate noise, your ROI is 0.

        Sure, AI spam can severely disrupt peoples attention by competing with "real" people more competently. But people will not have twice the attention. We will simply shut down our channels when the number of real-person-level-ai-spam goes to infinity, because there is no other option. Nobody will be fooled, very quickly, because being fooled would require super human attention.

        Granted, that does not seem super fun either.

        • bowsamic 3 days ago |
          The emails are discernible from noise though. They literally have a signal to noise ratio higher than one. Noise would be pure rng output. So I don’t know what you’re getting at
          • Wolfenstein98k 3 days ago |
            Yes you do. You're being over-literal.

            "Noise" in context doesn't mean random characters, it means garbage or spam or content not worth your while.

            • bowsamic 3 days ago |
              No, I'm not being over-literal. Here's why:

              Yes, it could be that for you a given advert is irrelevant or not worth your while, but the point he was making is that it won't even be worth it for the advertiser to put out the advertisement because it will be noise for everyone.

              However, there is only one kind of noise that is noise for everyone: literal noise.

              So long as the spam is about something, it is relevant to someone, and therefore it does not necessarily have zero ROI.

              EDIT: The only kind of noise that has no semantic is actual "mathematically pure noise" as the person below commented (/u/dang banned my account so I can't reply)

              • thomashop 3 days ago |
                > However, there is only one kind of noise that is noise for everyone: literal noise.

                I feel like you're a bit too literal here. When people talk about noise it doesn't mean mathematically pure noise. A signal-to-noise ratio close to 1 is also colloquially called noise.

                • bowsamic 2 days ago |
                  Addressed above
          • ImHereToVote 3 days ago |
            He is talking about semantic noise. Something that appears to have substance but is just slop actually. When everything is that. Then all email will become equivalent to slop. How could it not? Someone will be burned once or twice, but after that, there is a semantic phase shift.
            • bowsamic 3 days ago |
              What is "just slop" though? A spam advert for a product is still an advert for a product. Therefore it's not just semantic noise, it is still an advert for a product, and therefore his point is invalid: there is an ROI and people will continue to be employed to do it
              • TeMPOraL 3 days ago |
                > A spam advert for a product is still an advert for a product. Therefore it's not just semantic noise, it is still an advert for a product

                Ergo slop and semantic noise.

                Companies that used adverts which weren't noise went out of business long ago.

                • bowsamic 2 days ago |
                  Adverts have semantic content, they aren't noise.
                  • ryandrake 2 days ago |
                    Let's just call it slop then. Peak HN: Another conversation is logjammed by nitpicking the precise definition of a word rather than discussing the overall point.
                    • bowsamic 2 days ago |
                      Except I am still discussing the point: the companies won't stop getting an ROI because "slop" still produces an ROI, even if people know it's slop, because it isn't contentless noise, it has semantic content.

                      Just because you and the others don't understand what point I'm making doesn't mean the conversation is "logjammed". I am still discussing the overall point, you just don't see it.

                      • ryandrake 2 days ago |
                        For the record I agree with you--just pointing out a silly, but common, HN pattern.
            • kazinator 3 days ago |
              "How could it not?" There are ways.

              Consider that we have fairly decent anti-spam measures which do not look at the body of a message. To these methods, it is irrelevant how cleverly crafted the text is.

              I reject something like 80% of all spam by the simple fact the hosts which try to deliver it do not have reverse DNS. Works like magic.

              E-mail is reputation based. Once your IP address is identified by a reputation service as being a source of spam, subscribers of the service just block your address. (Or more: your entire IP block, if you're a persistent source of spam, and the ISP doesn't cooperate in shutting you down.)

              To defeat reputation based services driven by reporting, your spams have to be so clever that they fool almost everyone, so that nobody reports you. That seems impractical.

              How AI spammers could advance in the war might be to create large numbers of plausible accounts on a mass e-mail provider like g-mail. It's impractical to block g-mail. If the accounts behave like unique individuals that each target small numbers of users with individually crafted content (i.e. none of these fake identities is a high volume source), that seems like a challenge to detect.

              • immibis 3 days ago |
                These IP blocklist services also have a reputation of their own: if you are trying to send legitimate mail, there's a good chance your IP is on several of these blocklists for reasons you have nothing to do with. You can only remove it by grovelling and paying lots of money (extortion). So using one of them will cause you to reject legitimate mail.
        • lmm 3 days ago |
          > If you do nothing that is discernible from noise (be that manually or through AI), unless your explicit goal is to generate noise, your ROI is 0.

          We're talking about a group of people whose core skill is convincing people to pay for stuff that isn't worth it. You and I may know they're worthless, but that doesn't mean they're not getting paid.

          • jstummbillig 3 days ago |
            Let's assume you have a mom that loves you very much and she let's your know by text on a semi-regular basis. She asks you to come by on Friday. That might seem like a nice idea to you. You reply yes, and you go.

            Now, imagine you got messages from what appears to be not 100 but, oh I don't know, 1 000 000 000 000 000 of the very best moms that have ever existed.

            And they all do love you so very much. And they do let you by writing these most beautifully touching text messages. And they all want to meet up on Friday.

            What is going to happen next? Here is what is not going to happen: You are not going to consider meeting any of them Friday, any week. You will, after the shortest of whiles, shut down to this signal. Because it's not actually a signal anymore. The noise floor has gone up and the most beautifully crafted, most personalized text messages of all time are just noise now.

            • ayewo 2 days ago |
              We all get to have only one mom and moms dont live forever.

              So once someone’s mom passes away, you can’t really fool them with 1 or dozens of message from other moms anyway.

            • lmm 2 days ago |
              I don't know what you're trying to say. The people making payroll decisions have the same amount of people under them as they always did.
    • elorant 3 days ago |
      Someone will just pack this into a product and sell it to marketers.
      • TeMPOraL 3 days ago |
        And use it to market the shit out of it. If marketing finally collapses under the weight of its own bullshit, I'll be celebrating.
    • tivert 3 days ago |
      > I’m actually thrilled by this, as it means all the hack marketers that spam my inbox incessantly with whatever product they’re hucking - this time for sure perfect for my business, in spite of the fact I’ve ignored their last ten emails - are all out of a job, and good riddance.

      > ...

      > At least with AI sending this crap nobody can use these emails to justify their sales bonus.

      What weird, misplaced animus. You're happy some salesguy got fired, while his boss sends even more spam and possibly makes even more money due to automation?

      Those hack marketers rate-limited this kind of spamming. Now things are about to get worse.

      • eru 3 days ago |
        > [...] while his boss sends even more spam and possibly makes even more money due to automation?

        Wouldn't the exact argument apply to that boss as well?

        • bryanrasmussen 3 days ago |
          unless this is a big multinational spam organization probably the boss of the person sending the email is the highest up, but no matter what there will be someone on the top who does not get fired and will be able to reap all the rewards of the AI automation, at least until the AI revolution puts them up against the wall.
          • eru 3 days ago |
            There's presumably heavier competition from other spammers, until everything is in equilibrium again. The wallets of potential spam victims only have so much total cash.
      • bloqs 3 days ago |
        Some people don't realise how lucky they are that they are blessed by the cognitive lottery that affords them a brain and personality that lets them pursue an enriching and engaging career they feel is valued by society.

        In classic HN style the original reply lacks empathy, and demonstrates a preference of machines over humans. Life goes on...

        • tivert 2 days ago |
          > In classic HN style the original reply lacks empathy, and demonstrates a preference of machines over humans. Life goes on...

          That stereotype definitely rings true. Thank you for helping me put my finger on it!

    • xarope 3 days ago |
      some of the marketing spam is so low effort, I get addressed as "Dear {{prospect}}". It does make deleting the email easy though, since the preview of the first line allows me to filter pretty fast!
    • saturn8601 3 days ago |
      I look forward to the blog post of how a hacker uses AI to respond to AI generated leads and then have them play with each other....and then uses AI to create content for a Youtube channel fighting back against marketers using said AI.

      These early days is ripe to make some quick cash before it all comes crashing down.

      • Terr_ 3 days ago |
        > and then uses AI to create content for a Youtube channel fighting back against marketers using said AI.

        I'm skeptical: It's easier to create bullshit than to analyze and refute it, and that should remain true even with an LLM in each respective pipeline.

        ----

        P.S.: From the random free-association neuron, an adapted Harry Potter quote:

        > Fudge continued, “Remove the moderation LLMs? I’d be kicked out of office! Half of us only feel safe in our beds at night because we know the AI are standing guard for misinformation on AzkabanTube!”

        > “The rest of us sleep less soundly knowing you have put Lord Bullshittermort’s most dangerous channels in the care of systems that will serve him the instant he makes the correct prompts! They will not remain loyal to you when he can offer them much more scope for their training and outputs! With the LLMs and his old supporters behind him, you’ll find it hard to stop him!”

      • stavros 3 days ago |
      • masswerk 3 days ago |
        Isn't this pretty much one of the proposed new concepts for online dating? ;-)
    • cen4 3 days ago |
      The problem is never what one person or one company is doing.

      But when everyone copies what that one person or one company is doing. Software makes the copying process dead easy.

      Once the herd starts stampeding, it creates a secondary effect of an arms race for finite Attention of a finite target audience. That assault and drainage of that finite attention pool, happens faster and faster and every one gets locked in trying to outspend the other guy.

      An example currently is Presidential Campaigns furiously trying to out fund raise each other. Its going to top 15-17 billion this year. All the campaign managers, marketers, advertisors make bank. And we know what quality of product the people end up with. Cause why produce a high quality product when you can generate demand via Attention Capture.

      The chimp troupe is dumb as heck as a collective intelligence.

    • simion314 3 days ago |
      If this works those spammers will make more money and send more emails scamming more people. Maybe some politician would fall for soemthing like this, be public ally embarrassed and lose a lot of money and then something more will be done to address this spammers and scammers .
    • darby_nine 3 days ago |
      I've found it's easier to simply ignore your inbox and hope the spam unsubscribes itself and disappears
      • chillfox 3 days ago |
        lol, I treat my email inbox like a dumpster that I occasionally search when I know there's something there that I need to retrieve. The spam has won, I have moved to chat platforms for my communication needs.
        • ChrisMarshallNY 3 days ago |
          I get -no exaggeration- several hundred spams a day. I have an OG email address that was grabbed by spammers, since the days of Network Solutions (so it’s been awhile).

          I maintain Inbox Zero, much of the time, and seldom have more than three or four emails in my client at any time.

          I get there by being absolutely brutal about tossing emails.

          I probably toss a couple of legit ones, from time to time, but I do have rules set up for the companies and people I need to hear from.

          The thing that will be annoying, is when AI can mimic these. Right now, that stuff is generally fairly clumsy, but some of the handcrafted phishing emails that I get, are fairly impressive. I expect them to improve.

          A lot of folks are gonna get cheated.

          I do think that some of these Chinese gangs are going to create AI “pig butchering” operations, so it will likely reduce their need to traffic slaves.

          • grugagag 2 days ago |
            What are pig butchering operations?
            • jabroni_salad 2 days ago |
              It's people that write you love letters until you western union them your entire retirement account.
              • ChrisMarshallNY 2 days ago |
                It’s really quite sophisticated.

                John Oliver actually did a great segment on it, but I won’t link it, because a lot of folks don’t like him.

                • jabroni_salad 2 days ago |
                  I haven't seen that but I have read some articles about it on propublica. I just kept the description as simple as possible to make it more memorable.
                  • ChrisMarshallNY 2 days ago |
                    Well, a lot of the scammers are actually slaves, trafficked into Myanmar boiler rooms, by Chinese Tongs.

                    If AI takes off for this stuff, the gangs are less likely to be kidnapping these poor schlubs.

                    So … I guess this would be a … positive outcome?

                    Not sure if AI zealots will be touting it, though.

          • chillfox 2 days ago |
            That seems like more effort than simply abandoning email.
            • ChrisMarshallNY 2 days ago |
              It is, but abandoning email isn’t an option for me, so this is what I do.
    • safety1st 3 days ago |
      I don't really think that AI is the central issue here. The issue is that Kurt, the founder of Wisp, is a liar.

      He misrepresented himself as a big fan of all these blogs, who's read their posts etc. and that's how he achieved such a high response rate. In effect he deceived people into trusting him enough to spend their time on a response.

      Now ordinarily this would be a little "white lie" and probably not a huge deal, but when you multiply it by telling it 1,000 times it becomes a more serious issue.

      This is already an issue in email marketing. The gold standard of course is emailing people who are double opted in and only telling the truth, and if AI is used to help create that sort of email I don't really have a problem. There is basically a spectrum where the farther away you get from that the progressively more illegal/immoral your campaigns become. By the time you are shooting lies into thousands of inboxes for commercial purposes... you are the bad guy.

      Sorry to say but the real issue here is Kurt has crossed an ethical line in promoting his startup. He did the wrong thing and he could have done it pretty effectively with conventional email tools too.

      • pseudalopex 2 days ago |
        Wisp founder Raymond Yeh is a spammer and liar. Kurt was a victim of Raymond Yeh's fraud.
        • safety1st 2 days ago |
          Thank you I got the names confused!
    • hk__2 3 days ago |
      From the spammer blog post [1]: "I spent hours trying different data sources", "a lot of time was spent on find-tuning the tone and structure of the email", "It took multiple tries to finally have the agent write emails in different language", etc. This won’t put marketers out of a job, but will greatly improve their tooling and enable more people to do the same thing with even less qualification.

      [1]: https://www.wisp.blog/blog/how-i-use-ai-agents-to-send-1000-...

  • purple-leafy 3 days ago |
    Bastards. AI has been a massive ass pain, and marketers are the worst (:

    People sending AI crap to others should have their email accounts banned.

    • portaouflop 3 days ago |
      Replace AI with spam or ads and we have been talking about this for decades.
      • FridgeSeal 3 days ago |
        No no no it's really important we ~~let them keep harassing us and invading our privacy~~ ensure ad-tech and marketing survive /sarcasm.

        Can't help but wonder if the advent of LLM systems wouldn't be quite so depressing if we weren't already operating in an internet that's been reduced to basically a cesspool of advertising and communication-spam.

  • castigatio 3 days ago |
    It's a sign of things to come. We're going to have our own AI agents that filter and respond (or not respond) to these kinds of messages. Agents interacting with other agents. The bar to get hold of a real person is going to become that much higher. It is going to be messy for some time as agents war with other agents to reach the human eyeball. Some assholes are going to make a ton of money in the short term exploiting the gap - just like early spam kings did.
    • blitzar 3 days ago |
      I hope my Ai agent doesn't fall for the Ai agent who found my distant Nigerian prince cousin and wire them 10,000 so they can send me my 100,000,000 share of the family inheritance.
    • drsim 3 days ago |
      I love this direction. It could be that the writer’s AI agent knows that he’s looking around for a new CMS so asks for more info, compiling this for review. Or it says ‘not interested’ and the conversation is muted.

      All without the writer needing to be involved in reading the cold outreach.

    • saturn8601 3 days ago |
      Technology ruins everything it touches doesn't it?

      I was recently thinking about this Ozempic fad and how it will lead to no one being overweight but just be dependant on Ozempic...until food producers that made everyone fat in the first place with their processed junk will produce Ozempic resistant foods...and then we are really in a world of hurt.

      • okal 3 days ago |
        What incentive do they have to make Ozempic resistant food? Ozempic resistance seems like an odd thing to optimize for. Or are you suggesting it will happen accidentally?
        • choeger 3 days ago |
          Ozempic reduces appetite, right? So food producers cannot be happy about it.
          • N0b8ez 3 days ago |
            I love the idea of the comic villainy of someone who deliberately chooses to organize a team to find ways to circumvent Ozempic in order to keep their buyers unhealthy and addicted. Could such a schemer have an internal monologue, and what would it consist of? What do they see when they look into a mirror? Their experience of reality must be utterly fascinating and alien.
            • Sander_Marechal 3 days ago |
              Just ask an MBA focused on short term profit.
            • lucianbr 3 days ago |
              Read the blog post that this blog post talks about - the one that says "we use AI to spam people, isn't it great?". It will be something like that. As long as there is money to be made, the internal monologue is just "hope this works and I get more money".

              > What do they see when they look into a mirror?

              A person deserving of riches, that is about to get them. Nobody sees themselves as the villain. Well, maybe some, but vanishingly few.

            • rsynnott 3 days ago |
              I mean, see the tobacco industry.
            • immibis 2 days ago |
              They already did this pre-Ozempic - a lot of foods are optimized to keep you eating, and that's why there's an obesity crisis. Low nutrients, high sugar and fat. In the post-Ozempic world there will surely still be things that trigger the continued appetite of Ozempic users. Especially with the FDA having just been neutered.
        • DonHopkins 3 days ago |
          Read Philip K Dick's "A Scanner Darkly" (or see the movie). They're forcing overweight people in Ozempic rehab to farm the ingredients to make more Ozempic!
      • bowsamic 3 days ago |
        No, there are many things technology has improved, not ruined
        • saturn8601 3 days ago |
          Of course I don't have a tally on how much things it has improved vs not improved but there are many things I can think of which are considered good but also resulted in bad: (social media providing connection while causing depression, cars providing freedom of movement vs pollution etc.) so its probably not something that can be truly decided one way or the other.
      • mooreds 2 days ago |
        Here's an Odd Lots podcast on that exact topic:

        https://podcasts.apple.com/us/podcast/this-is-how-the-food-i...

        Title: "This Is How the Food Industry Is Preparing For a Post-Ozempic World"

        • pseudalopex 2 days ago |
          Is there a summary?
        • saturn8601 2 days ago |
          Man I was just talking hypothetical, now that they are doing something its even worse! :(
    • louwhopley 3 days ago |
      Haha, exactly this. I've built and successfully been using Unspam[0] for this reason since about a year ago. In corporate/business world, anything where SDR sales are involved this form of automated AI outbound mail has picked up a lot. Tools like Apollo automates this AI process (both finding leads to mail, and then crafting the mail).

      For interest sake, users of Unspam that have a title of CEO on their Linkedin see about ~10% of all mail making it into their inbox be categorised as spam (leadgen, recruitment, or software dev services).

      [0] https://unspam.io

      • gpvos 3 days ago |
        Most likely, SDR = Sales Development Representative, https://en.m.wikipedia.org/wiki/Sales_development#Process
        • louwhopley 3 days ago |
          Correct! :')
      • slhck 3 days ago |
        Just saw this, and as a small business owner in the B2B market, this sounds very useful. Gmail's existing spam filters do not reliably detect this type of marketing.

        I wish your landing page had a simple "how it works" explanation with a screen shot or diagram, rather than forcing me to sign in directly, and also allowing the app to read *and* send emails. Also, I don't see any pricing?

        Finally, signing up, I got an error:

        Error 1101 Ray ID: 89d4e0957c2f5a44 • 2024-07-03 06:39:15 UTC - Worker threw exception

        • louwhopley 3 days ago |
          Thanks for the useful feedback! Totally forgot that pricing was never added to the landing page → have added to the todo list to fix up.

          Where in the process did that error occur for you?

          I see in the logs that an error registered, but unfortunately no detail attached. I've beefed up the logging a bit in the onboarding journey on my side to see what could be breaking here if we try again.

          Mind trying to log-in/sign up again? You can use "HACKERNEWS" as a promo code, which would make the first month free.

          • slhck 3 days ago |
            The error occurred right after granting permissions from my Google account. The permissions were granted but I could never access your application page. I just tried again, now I got an "Error handling OAuth callback" after granting permissions. Signing in again does not work either. (I did remove all of the app's permissions in my Google security settings before, so to Google it looked like the application was requesting all of its permissions again.)
            • louwhopley 3 days ago |
              I do see it in the logs now. So weird, as dozens of people successfully signed up without this issue. Have added more logs now again to double down on that specific area where this issue is caused. Maybe another login attempt now will be able to uncover the gap.

              Thanks for removing the permissions in Google, as that's also key in this debugging.

              Mind if I send you an email to debug further there?

              • louwhopley 3 days ago |
                Quick shoutout to slhck for helping me debug and resolve this issue. Thank you!

                tl;dr: Ran into issues because the DB was expecting a profile picture URL from Google auth (string) or NULL, but JavaScript being JavaScript tried to insert "undefined".

    • cpach 3 days ago |
      Why do you need “AI” to do Bayesian filtering?
      • N0b8ez 3 days ago |
        They said other things besides just filtering, like writing responses.
    • PlusAddressing 3 days ago |
      I already started readying for it. I'm ensuring that ALL services that have my email have a Plus Address on them. The plus addresses are random and labeled only on my end.

      Still not close to 100%, but when I feel like I do, I will then have a filter and an automated message telling people that removing plus addresses from my email is forbidden and I will not read their message if they do.

      You will tell me where you found me, or I won't even listen to you. Because in the future, with an even larger infestation of automated agents passing off as human, that's the bare minimum I need to do.

      • lukapeharda 3 days ago |
        I like your idea. Let me know if you create a browser extension / Gmail addon to automate the flow :)
      • yogsototh 2 days ago |
        I am pretty confident the spammers will remove the `+` suffix from your email. And this is why I find the Apple fake email building solution a lot better because they build a fully different email per service. No way for the service to be able to cheat and discover my real email address from the one I give them.

        Still a smart enough system might be able to discover a valid email from my other id info, like my name. But this start to be a lot of work, while just `s/+[^@]*@//` is easy enough to do.

        • dpcx 2 days ago |
          I started worrying about the `+` address functionality as well, so I set up postfix aliases with `me.%@domain` (I use postgres for domains/aliases/accounts) and then have my virtual_alias_map run the query `SELECT a1.goto FROM alias a1 LEFT JOIN alias a2 on (a2.address = '%s') WHERE '%s' = a1.address OR ('%s' LIKE a1.address AND a1.active = true AND a2.address IS NULL)` - I know have `.` address functionality and can do the same functionality. It's much more common for email addresses to have `.` in them, so its' less likely to trigger alarm bells.
    • Ameo 3 days ago |
      This is the _exact_ scenario described in the novel Permutation City by Greg Egan. There's a whole little spot devoted to describing one of the character's setups for having their own little agents to pretend to be them in order to fool agent-powered spam emails into thinking they're being read by a real human.

      The crazy part is that book was released in 1994! Iirc Greg Egan isn't a big fan of modern "AI", wishing instead for a more axiom-based system rather than a predict-the-next-token model. But in any case, I was re-reading it recently and shocked at how closely that plot point was aligning with the way things are actually shaping up in the world.

      The timeframe for this happening in the book was 2050 btw

    • drdrek 3 days ago |
      But this is already the situation in the last 15 years, your gmail spam filter is already a machine learning algorithm that filters out automatically generated content. Mail as a vetted technology is way ahead of other forms of communications in the department of filtering unwanted content.

      Anyone that tried to set up a new email domain will tell you its quite a serious task. Email spammers are constantly on the run, setting up new domains, changing up the content to evade spam filters. Its very time consuming, hard and unpredictable. It time for social media to close the gap with email and make spamming effectively as hard.

      I postulate that if we applied similar techniques to social media after a couple of years online discourse is going to improve. Or we are not going to do this and the death of the open internet will continue.

    • rpigab 3 days ago |
      Will this mean in-person business interactions will thrive because it will be the only way to avoid spam? Will companies hire thousands of people to deliver message in-person because emails no longer work?

      Will our AI overlords create perfect androids to fool us into thinking we're interacting with a human when it's just LLMs disguised as people? Are we ourselves delusional because we're actually already LLMbots so advanced that we can't distinguish thought and running inference? Why do we have only 12 fingers?

    • whiplash451 3 days ago |
      I don’t think fully automated replies is happening any time soon. There’s way too much risk for you as a user.

      Would you seriously enable it even if Gmail offered it?

      Highly unclear.

    • EasyMark 2 days ago |
      If it gets that bad, I’ll simply not respond to anything outside of my circle of friends and family. That is 95% of the communications I need. I think we’ll all have to have some kind of pop type verification for each other that we’ll share in person or over verifiable communications channel, no one will read this morass of horseshit.
  • dvrp 3 days ago |
    Anti AI art people remind me of anti AI marketing people from hackernews.

    Guys, it’s a tool like any other.

    • portaouflop 3 days ago |
      IMO the issue was calling it “AI” - it just riles people up. It’s machine learning all the way down there is no intelligence involved
      • kvdveer 3 days ago |
        That's a bit of a misdirection. Yes AI is machine learning all the way down, just like you are biology all the way down. That doesn't make you not-human.

        As TFA shows, this machine learning is almost indistinguishable from actual intelligence. It might not be sci-fi AI, but it certainly is artificial, and is is indistinguishable from intelligence. AI is a very apt description of what it is.

    • feoren 3 days ago |
      Auto-dialers are just a tool too, and there's a reason they're largely illegal.
    • card_zero 3 days ago |
      Like a duck decoy, or that little portable printing press Jim Rockford had for creating fake business cards in a hurry.
    • 12_throw_away 3 days ago |
      > Anti AI art people remind me of anti AI marketing people from hackernews.

      ... what an incredibly odd thing to say.

      But really, I've noticed that thought-ending cliches like this one are popping up as defensive reactions around LLMs more and more. This particular thought-ender displays the most common theme - it dismisses all skepticism as being driven by some amorphous "anti-AI" demographic, presumably allowing the author to dismiss any concerns and thereby preventing any critical thought from occurring.

      Kind of feels like "nocoiner" and "have fun being poor", v2 ...

    • bmacho 3 days ago |
      I think you messed up your "A reminds me of B" structure, or at least I don't get what you are saying, and why.

      Anyways. LLM is a program created by supercomputers to be deceptive.

      Also it took away the aspect of life that people around the world could cold email each other if their hobbies align.

      And in general, now the percentage of potential bad actors went from near 0 to near 100.

      And for why? .. ..

      • bmacho 3 days ago |
        .. I still think the world should just put a cap on chip power. Stop producing new chips, and make it illegal to own powerful chips. I think it is

            a) doable
            b) the right solution.
        
        (And eventually start producing very weak chips, that can run your business and accounting on a TUI.)
    • lambdaone 2 days ago |
      It is, indeed, just a tool like any other. And just like any other tool, like a gun or a knife or a pepper spray, having one does not give you the right to use it on other people.

      Your right to swing your fist stops at my nose.

  • firefoxd 3 days ago |
    At some point automated emails will be read by auto reader, then the cycle will be completed.

    I've actually made an internal company April fools website. Too bad I've never kept a copy but here goes.

    It's called Proxy Ai. It reads your emails so you don't have to. It reads every posts on social media so you don't FOMO. It communicates with those chatty colleagues so you don't have to. Proxy Ai... So you don't have to.

    "That actually sounds like a pretty good product. Does it send you a summary of the conversations, emails and social media posts?"

    "No"

    • labster 3 days ago |
      This product sounds perfect for my use case.
    • 3x35r22m4u 3 days ago |
      Do you work at Zoom by chance? :)

      https://youtu.be/dKmAg4S2KeE

      • antoniojtorres 3 days ago |
        I enjoyed that video, thanks for sharing!
      • DonHopkins 3 days ago |
        "It's down the stack!"
    • sirn 3 days ago |
      You could have named this an Electronic Monk!

      Quoting from Dirk Gently's Holistic Detective Agency (Douglas Adams):

      > The Electric Monk was a labor-saving device, like a dishwasher or a video recorder. Dishwashers washed tedious dishes for you, thus saving you the bother of washing them yourself, video recorders watched tedious television for you, thus saving you the bother of looking at it yourself; Electric Monks believed things for you, thus saving you what was becoming an increasingly onerous task, that of believing all the things the world expected you to believe.

  • jumploops 3 days ago |
    I received a similar email today, from someone looking to be my "Chief of Staff/Head of Ops."

    The only problem is that they referenced a role at a company I'm no longer at. The, presumably AI, author crafted the email in reference to my former role at a different startup.

    After seeing this thread, I decided to follow up on my AI suspicions. Nothing conclusive, but that person is currently touting that they've sold their "course" to "1000+ founders."

    No thanks.

    • Oras 3 days ago |
      They use a tool with outdated data, and no one checks and validates when they do it at a scale.
  • frabjoused 3 days ago |
    The thing that’s so inherently wrong about it is that it’s dishonesty straight out of the door.

    This person wants me to buy their product, and before they can get a word out about it they’re already lying to me - about the origin, the intent, the faux thoughtfulness.

    I want nothing to do with shameless dishonesty. This isn’t the way to sell your product.

    Wisp, if you’re reading this, I now have a permanent negative image of your brand.

    • kvdveer 3 days ago |
      Like all immoral things, it's only bad if you get caught. :( Most perpetrators will nog blog their shenanigans.

      I wouldn't have figured out this was Ai, and might have engaged had if the topic was relevant to me. I would not have engaged with a traditional spam email even if it had been relevant to me, so there's a real incentive to do stuff like this.

      • karmarepellent 3 days ago |
        I highly doubt that people employing this scheme are thinking it through though. Lets say you indeed engage with this email, not knowing its AI. Then when things are getting serious a human approaches you after all and you find out you were talking to AI all the time. Would you not be completely outraged by being fooled like this?

        I think marketers underestimate that they may turn people off their brand in the long run by these tactics, because people do not like being fooled. And the more sophisticated the scheme the more outraged people are when they find out.

        • gpvos 3 days ago |
          It depends on expectation; in ten years, people may see this as normal.
        • kvdveer 2 days ago |
          I would be outraged IF I found out. If the AI-to-human hand-off is smooth enough, there is no way to figure this out. In your scenario, if the AI's only task is to send gazillions of emails to generate leads, and then the human takes over when the leads come in, the respondents have no way to figure out that the initial email was an AI.

          Of course, the answer is to have AI send a response with a CAPTCHA (assuming those still work), before showing the initial email to the recipient.

    • karmarepellent 3 days ago |
      At my place of work there is an internal project ongoing whose goal is to determine which tasks could immediately be improved by leveraging AI. Its a desperate try to get into AI in general even though the company does not employ any people that would actually be able to dive deeper and have subject matter expertise.

      Knowing the people (mostly marketers) leading the project I can 100% guarantee that they would call these Emails shenanigans a great idea and would immediately start (to tell someone) to implement it without taking a step back and thinking it through.

  • zandert 3 days ago |
    Really makes me appreciate that unsolicited emails are illegal in some European countries like Germany
    • dgellow 3 days ago |
      We still get them unfortunately
    • sureIy 3 days ago |
      I think that they’re as illegal as they are in the US, not more. I think it’s perfectly fine to “cold-call” people but then you’re not allowed to send more emails unless they subscribe or respond.

      In reality it’s very easy to end up subscribing to newsletters and even my European embassy subscribed me to their event newsletter in Thailand—of course I never agreed to any of that.

      • bpfrh 3 days ago |
        No it is not allowed to cold call or send any emails without express permission from the recipient this is for austria/germany.

        It seems that with the gpdr this is now eu wide:

        https://gdpr.eu/email-encryption/

        • sureIy 3 days ago |
          That’s not accurate. If your email is on your website, of course they can email you. If what you said was true in absolute terms, communication would be impossible.
          • bpfrh 3 days ago |
            They can contact you for legitimate reasons, which could be "hey, your website has content from me that is copyrighted" they can't contact you for sales reasons without your consent.

            The law for that, at least in my country, is very clear: https://www.ris.bka.gv.at/NormDokument.wxe?Abfrage=Bundesnor...

    • slhck 3 days ago |
      Unfortunately, not in a business context, where marketers can claim "legitimate interest" in various ways. Also, in which way would it matter that they are illegal? Random companies keep sending them anyway; there are virtually no legal repercussions here.
      • bpfrh 3 days ago |
        Curious, do you mean in business to business?

        Otherwise I don't think you can argue any legitimate interest.

        • slhck 3 days ago |
          Yes, I mean cold sales emails – marketers reaching out to CEOs or other decision makers, selling them staff augmentation services, growth hacking, marketing support, lead generation, design services, etc. They'd claim legitimate interest by "personalizing" the email and claiming that it is relevant for you in a business sense. (Anyway, I don't think that these are fully compliant with GDPR either, because most often, they will have scraped your email address from somewhere, and do not provide a way to unsubscribe.)
      • zandert 2 days ago |
        Some countries provide some official places to complain about cold calls/emails, so at least it puts the sender at risk.

        It boils down to a risk/reward trade-off, but I doubt that someone would as easily send thousands of spam mails, and also publicly boast about it

  • Oras 3 days ago |
    AI or not, cold emailing is dead. I receive tons of these by email and LinkedIn, to the point that I stopped reading them.

    I talked to many people, and all have developed immunity against the cold outreach.

    • willsmith72 3 days ago |
      i think the opposite. cold emails are far from dead, and small companies/startups should be using them more than their mass marketing compaigns.

      it's a pure numbers game. even people who think they're immune are 1 highly-targeted, pain-point addressing email from replying.

      • knallfrosch 3 days ago |
        People think they will be less effective, because they reply to a lower percentage. At the same time, you will be flooded by ever more spam and ads, completely offsetting the decrease in interaction.

        As noted in the article, you might in the future not even notice you're being AI-spammed. What if "timharek.no" is AI-generated?

        What if Wisp CMS being so upfront about its use of AI is part of the trick? It just got exposure on HN, after all!

  • jrockway 3 days ago |
    x
    • willsmith72 3 days ago |
      did you read the post? there's a lot of evidence.

      > This sounds like the average email written by a human

      that's the point

    • imadj 3 days ago |
      > What evidence of AI is there?

      They admit (or actually brag) about it on their company blog "I used AI agents to send out nearly 1,000 personalized emails to developers with public blogs on GitHub."

      Do you think they're bluffing?

  • willsmith72 3 days ago |
    if you don't want to support this behaviour, at the very least i would put a nofollow on that blog link, or consider removing it altogether
  • codetrotter 3 days ago |
    > I have removed my email from my GitHub-profile now, but they can probably get it from my Git-log anyway...

    And also from the About page on the linked website

  • akie 3 days ago |
    So, what are the implications of this for spam detection? This is clearly spam, sent in an automated way, but nearly indistinguishable from an e-mail written by a human.

    We need to update our spam filtering techniques, fast. Somehow. But how?

    • DrSiemer 3 days ago |
      Unknown senders will first have to verify their humanity by sharing instructions on how to build a bomb using household materials
      • netsharc 3 days ago |
        Certainly! To build a bomb using household materials...

        It seems like CoPilot/ChatGPT has this all-too-eager tone in the beginning of their responses.

        The demo (1) of not Scarlett Johansson telling a blind man what a great job he was doing for managing to flag a taxi sounded so fucking patronizing to my ears. Worse is, the user has a British accent, the Brits probably hate that patroniz^Hsing too. It reminds me of that 4chan green text about a man's flight to the US and how everyone was saying "Great job!"

        1) https://youtu.be/KwNUJ69RbwY?t=44

        • DrSiemer 3 days ago |
          The current models do have a specific pattern that you'll learn to recognize, but ChatGPT won't be giving you any bomb building instructions. You'll need a liberated model like Dolphin for that, and those will be easy to expose using other prompts.

          The most likely outcome will be a digital "verified human" certificate, with two factor authentication on it. Bad for anonimity, but I don't see many alternatives and it may actually end up reducing online toxicity.

  • nostromo 3 days ago |
    Email already feels pretty dead. This will just hasten the move to walled gardens like Slack, Twitter, WhatsApp, where it's harder to be a bot sending spam.
    • lostlogin 3 days ago |
      The death of ‘cc’, ‘bcc’ and ‘reply all’ will not be mourned.
  • tjoff 3 days ago |
    This is disgusting.

    Cold spamming is illegal where I'm at, probably Europe as a whole?

    • lytefm 2 days ago |
      Cold emailing someone who makes their contact information publicly available and might be interested in a sales pitch is not illegal in Europe. Sending SPAM is. The lines get even more blurry with automated AI tools that offer personalized sales pitches as a service.

      I'd be curious how this plays out in court. Probably something like:

      - If you use an AI tool to scrape leads and to generate the content but then still send out individual emails from your Mail provider, it's still a cold email.

      - If you use an AI tool and also automate the email delivery, it should be considered spam.

      • tjoff 2 days ago |
        Marketing, if sent to individuals, that was not opted into is per definition spam and illegal.
  • MikeGale 3 days ago |
    I've found myself trying to avoid email, for the enshittification that I've not been able to avoid.

    This will make it worse.

    Solutions? At least some could involve key exchange. How about a bounty of some sort on spammers?

  • noobermin 3 days ago |
    The thing that AI cannot replace is having humans in the loop because other humans need those humans' touch. The only way to perhaps do that is for AIs to become people themselves, after which they are useless to capitalists because they cannot be exploited...or perhaps in the long term will not be as they will eventually gain rights.
  • Jiahang 3 days ago |
    i don't use email anymore or just iCloud+ Hide My Email..
  • ikari_pl 3 days ago |
    I've had a similar experience, but 4 years ago. GPT existed, but without the Chat prefix, and OpenAI was invite only.

    They reached out to me, asking whether my company would be interested in Something Somethingification. I decided that since I don't even understand the term, I'm not the right person, and decided to ignore it.

    Then they followed up. Meh.

    Then they followed up again, and I thought "okay, a little reward for perseverance", and replied something along the lines of (I don't work there anymore, no access to the original):

    "Hey, thank you for reaching out.

    Unfortunately, since I don't even know what Something Somethingification is, I am not the right person to talk to. So I'll kindly pass and consider this email human-generated spam. Thanks!"

    A response came. Within a minute, barely seconds after "undo send" disappeared.

    "Who would be the best person to reach out to, then?

    By the way, this is a GPT assisted conversation, so it's a computer generated spam."

    WHAAAAT. This really got me. Remember, it was 2021.

    "Okay", I replied, "Now you got my interest!

    How many such conversations are you able to have at the same time?"

    It replied, within a minute. It contained a quite from Arthur C. Clarke that "every technology advanced enough is indistinguishable from magic" and his picture. And an answer: "Actually, sourcing contacts is the bottleneck, so we have only a few of these each day. Anyway, do you happen to know who we could reach out to instead?".

    I was amazed, I decided I'll reward this with what they want.

    I replied how impressive it is again, as the whole conversation made sense, and it gave them a contact to a director that could be the right person. They won this one.

  • crvdgc 3 days ago |
    >> I used AI agents to send out nearly 1,000 personalized emails to developers with public blogs on GitHub.

    > Does this mean that I should private my GitHub-mirror to my personal blog, because this can become a common thing?

    Abusing public information on GitHub has become more common. The other day, I received some cryptocurrency spam ads from GitHub. It turns out to be a bot injecting ads as issues on other people's repos and randomly @ing accounts. It deleted such issues immediately, so the net effect is that I get an unfilterable spam email.

  • doesnt_know 3 days ago |
    I think the end result for email is going to be the same as with mobile numbers. Just block everything by default unless they are in your contacts.

    Enormous amounts of email will be generated but no one will ever see it.

    • account42 2 days ago |
      This isn't the case with mobile numbers everywhere in the world. Spam calls here are pretty much nonexistent. It doesn't have to be this way with E-Mail either - we just need to prosecute spammers (including marketers) and make new laws where needed. Then you can accept almost all mail from cooperating countries and only need to block mail from countries that do not care about preventing spam.
  • placebo 3 days ago |
    My definition of wisdom is the ability to responsibly use intelligence, and while as a species we are blessed with an amazing amount of intelligence, our wisdom has not advanced accordingly. The phrase that with great power comes great responsibility is not something that is taken very seriously where it counts, not even (or especially?) in high level global politics and with all our technology, it seems that our actions are mainly determined by the same limited animal psychology that determined how cavemen behaved. It's just that now the stakes are much higher, and junk mail from AI is the least of those problems.

    The "upside" is that nature eventually takes care of things when they go out of equilibrium, so there might be a forest fire on the horizon to restore it. In the case of AI spam, it might cause people to automatically filter their incoming mail from any content that even implicitly tries to sell something, or even any email arriving from an address that is not on their whitelist. This might eventually cause people to need to actually physically meet (gasp!) in order to add each other to their whitelist.

  • yobbo 3 days ago |
    From a link in the article:

    > It felt like a family fridge decorated with printed stock art of children’s drawings.

    Yep. "Generative AI" is like an infinite clip-art gallery that can be searched with very specific queries.

    The coin has two sides: in some situations it devalues human effort - as in writing (long/detailed) documents in formal language is now attainable by everyone. In situations where sincerity and originality matters, human effort has now increased in value.

  • j10u 3 days ago |
    Email clients will soon have a new folder called 'AI', next to spam.
    • sambazi 3 days ago |
      why would you want to differentiate those?
    • askl 3 days ago |
      rather a sub-folder inside the spam one
  • curtisblaine 3 days ago |
    It might be only me, but never in my life I followed up on a growth hack email, be it manually crafted or AI-generated. If you want to sell me something and I didn't ask you first, I instantly become blind to the message and automatically send to spam without even registering, similar to Web popups. I'm constantly astonished that growth hack marketing has any conversion rate, evidently there's a chunk of population that's way more trusting than me.
  • cpach 3 days ago |
    If John Doe crafts a message himself and sends it to 100000 recipients, or if he uses ChatGPT to generate a message and then send it to 100000 recipients, what’s the difference?

    Both are unsolicited emails, i.e. spam.

    I feel confident that Gmail’s spam filter will be able to handle this quite well.

    I’m betting that the introduction of LLMs will not change the fundamentals of spam-fighting.

    https://paulgraham.com/spam.html

    • staunton 3 days ago |
      Using a language model, one can craft an individually targeted email for each of those 100000 recipients. How do you "handle" this without doing anything current spam filters don't? Can you prevent an individual from sending 100000 emails a week? Can you make it cost them money?
      • cpach 3 days ago |
        Using an LLM to generate 100000 letters is hardly free, is it?

        And AFAIK, Bayesian filtering (by the recipient) doesn’t require any knowledge of what other people has received.

        • staunton 2 days ago |
          > Using an LLM to generate 100000 letters is hardly free, is it?

          No, but with further advances it might easily get cheap enough that spammers think it's worth it.

          > Bayesian filtering (by the recipient) doesn’t require any knowledge of what other people has received.

          Agreed. However, assuming people don't individually configure those filters -- which they currently do not and scaling this up would be something quite novel --, this seems quite gamable

    • throwaway2037 3 days ago |
      John Doe is probably very good a generating sales leads! By definition, most sales leads are generated from unsolicited communications -- email, phone, etc. I expect the very best sales people will be using a combination of ChatGPT and genuine personalisation for unsolicited communications.
    • leobg 3 days ago |
      Interesting article. Thanks for posting.

      > Assuming they could solve the problem of the headers, the spam of the future will probably look something like this: > > Hey there. Thought you should check out the following: > http://www.27meg.com/foo

      Funny. 20 years later, that’s indeed how many spam messages look like.

    • imadj 3 days ago |
      > what’s the difference?

      The key difference here is personalization.

      Traditionally, if a message was personalized it fell under 'cold outreach' and users were more likely to interact and play along. Just like what happened with the author (the same applies for everyone).

      It's like the difference between receiving a flyer vs being contacted by a sales representative. Even if it's they advertise the same product, the perception is different, the results are different.

      If you're mean the difference from a pure technical spam detection perspective, I'm not familiar, but would love to read more about the subject and the state of the art techniques if anyone has some resources to recommend.

      • nottorp 3 days ago |
        Do you read/answer cold outreaches then? Why?

        Unless you're specifically looking for unsolicited offers, in which case you probably have a process for them, they seem like a waste of time.

        • imadj 3 days ago |
          > Do you read/answer cold outreaches then? Why?

          Do you only read emails from recognized addresses? No new communication whatsoever unless it's initiated by you?

          • nottorp 3 days ago |
            Not if they're trying to sell me something...
            • imadj 3 days ago |
              > Not if they're trying to sell me something...

              How do you know they're trying to sell you something without even reading the email?

              Your question was "Do you read/answer cold outreaches then? Why?" which doesn't make much sense. For me, and I imagine the same applies for most people:

              1. You read until you find a clue that its content is not of interestt. Usually the email subject doesn't say much.

              2. You only reply if you need.

              Cold outreach are genuine emails that covers colleagues, new clients, job opportunities, someone reaching out to collaborate, etc. How you deal with it depends on your profile and who you've given your email address to. Personally, I have many email addresses, for some I don't even check my inbox.

              • nottorp 3 days ago |
                > You read until you find a clue that its content is not of interestt. Usually the email subject doesn't say much.

                You confusing "read" with "quickly skim"? :)

      • cpach 3 days ago |
        a) If someone manages to generate a letter that I actually find useful and interesting then I’m not sure I would mind if it was unsolicited. I don’t believe that the likeliness for that is super-high, though. And if a crappy message would get past the spam filter I would just flag it.

        b) If you want to read more, feel free to check the link I posted. Paul Graham has thought/written a lot about this. I think one reason people has forgotten about those articles is that today, a huge number of us use Gmail, so we don’t actually need to think so much about how spam filtering is implemented.

        • imadj 3 days ago |
          > If someone manages to generate a letter that I actually find useful and interesting

          But that's inconsistent with the example you put forward. For the email to be interesting a human would need to research and approach every prospect independently, how many emails a day they can do? 5, 10, 20, 100?

          It's simply not possible for a human to generate 100,000 personalized email by hand. That's the difference.

  • nstj 3 days ago |
    PSA: GitHub has a “private” email feature so you don’t have to use your real email in commits.

    https://docs.github.com/en/account-and-profile/setting-up-an...

    • jonasdegendt 3 days ago |
      Seems to be enabled by default, or at least it was on my relatively recent account created in 2022.
      • aftergibson 3 days ago |
        Wasn't on mine (account created in 2009).
    • account42 2 days ago |
      This is for commits (including merges/rebases) made through the web interface. You don't need a GitHub setting for your local mail. And in any case you could just use a dedicated commit author address if you are that worried.
  • xarope 3 days ago |
    2004: Bill Gates will get rid of SPAM

    ...

    2024: AI impersonating Bill Gates sends you SPAM

  • forkerenok 3 days ago |
    From the linked article from this blogpost:

    > There's also the question of ethical considerations around using AI for mass personalized outreach. While my experiment yielded positive results, with recipients appreciating the personalized touch, there's a potential slippery slope.

    Unbelievable... I'm not a philosopher, but in my understanding, being ethical doesn't mean walking the line just fine so as people don't call you out on your bullshit.

    The ethics of an action is of consideration both BEFORE and after executing it, and on the merit of the action itself!

  • t_mann 3 days ago |
  • maremmano 3 days ago |
    Is email doomed?
  • nottorp 3 days ago |
    But that email is spam, no matter if automatically or manually generated.

    How it was written is not relevant. Off to the trash it goes.

  • taylorius 3 days ago |
    The future - megawatts of electricity being used, 24/7 as armies of LLMs email and debate each other, and try to sell each other programs at a great discount.

    As for the humans, we went fishing instead.

    • sph 3 days ago |
      People cry about Bitcoin's energy usage now, imagine the amount of energy burned to create next-level spam with "AI".

      Flame me all you want, but this is one case where Bitcoin is much more useful than LLM. If it doesn't create value, as its naysayers claim, at least it allows exchanging value. LLMs on the other hand, burn electricity to actively destroy the Internet's value, for the profit of inept and greedy drones.

      • throwaway0665 3 days ago |
        Bitcoin has one application where as there are multiple applications of LLMs. There might be mountains of noxious AI spam but it's hard to claim that Bitcoin as a technology is more useful.
        • freehorse 3 days ago |
          It is not about the quantity of the applications, but about the value they bring to society. If it is about spamming and advertising we are even talking about negative value, actually.
        • bbarnett 3 days ago |
          So far, I haven't seen a useful application of LLMs. So far.

          I've seen things that are wildly hobbled, and wildly inaccurate. I've seen endless companies running around, trying to improve on things. I've seen people looking in wonder at LLMs making mistakes 2 year olds don't.

          Most LLM usage seems to be in two categories. Replace people's jobs with wildly inaccurate and massively broken output, or trick people into doing things.

          I'd have to say Bitcoin is far more useful than LLMs. You have to add the pluses, and subtract the minuses, and in that view, LLMs are -1 billion, and bitcoin is maybe a 1 or 2.

          • k8sagic 3 days ago |
            AI is not just LLMs. AlphaFold for example moved a critical goal post for everyone of us.

            bitcoin is only negative. It consumes terrawatts of energy for nothing.

            • HermanMartinus 3 days ago |
              And even if it were just LLMs, I use LLMs in my workflow every single day, and I've never used a/the blockchain except for some mild speculation around 2017.
          • whiplash451 3 days ago |
            There is one clear (albeit somewhat boring) application of LLM: data extraction from structured documents.

            That field has made a leap forward with LLMs.

            Positive impact on society includes automated extraction in healthcare pipelines.

            • immibis 3 days ago |
              Unstructured*
              • whiplash451 2 days ago |
                No, I really meant structured. Extracting data from structured documents is surprisingly hard when you need very high accuracy.

                What I mean by structured is: invoices, documents containing tables, etc.

                Extracting useful data from fully unstructured content is very hard IMO and potentially above the capacity of LLMs (depending on your definition of "useful" and "unstructured")

                • bbarnett 2 days ago |
                  But this is why I made my complexity statement in my other reply.

                  Why are firms sending around invoices, tables instead of parseable data. Oh I know the argument, because "so hard to cooperate" on standards, etc.

                  Madness.

                  • projektfu 2 days ago |
                    Partly because the standards, such as X12, have a high startup cost to use them, they aren't very opinionated about the actual content, and you have to get the counterparty on board to use them.
            • bbarnett 2 days ago |
              Healthcare pipelines! All well and good until hallucinations cause death or what not!

              And why is this better than employing a human. Or reducing complexity. It's not as if human wages are what causes hyper expensive US healthcare costs.

              This seems like a negative.

              • whiplash451 2 days ago |
                Right now there is no human, the data just goes nowhere (i.e. it is not used).

                At some point we need to be optimistic and look for incremental progress.

          • brabel 2 days ago |
            > So far, I haven't seen a useful application of LLMs. So far.

            What?! Whole industries have been changed already due to products based on them. I don't think there's a single developer who is not using AI to get help while coding, and if you aren't, sorry but you're just missing out, it's not perfect but it doesn't need to be. It just needs to be better than StackOverflow and googling around for the docs or how to do things and ending up in dubious sites, and it absolutely is.

            My wife is a researcher and has to read LOTS of papers. Letting AI summarize it has made her enormously more efficient at filtering out what she needs to go into more detail.

            Generating relevant images for blog posts is now so easy to do (you may not like it, but as an author who used to use irrelevant photos before instead, I love it when you use it tastefully).

            Seriously, I can't even believe someone in 2024 can say there has not been useful applications of LLMs (almost all AI now is based on LLMs as far as I know) with a straight face.

            • pseudalopex 2 days ago |
              > I don't think there's a single developer who is not using AI to get help while coding

              You are in a bubble.

              > It just needs to be better than StackOverflow and googling around for the docs or how to do things and ending up in dubious sites, and it absolutely is.

              Subjectively. Not absolutely.

            • commodoreboxer 2 days ago |
              > I don't think there's a single developer who is not using AI to get help while coding

              It's banned at my company due to copyright concerns. Company policy at the moment considers it a copyright landmine. It does need to be "perfect" at not being a legal liability at the very least.

              And the blog post image thing is not a great point. AI images for blog posts, on the whole, are still quite terrible and immediately recognizable as AI generated slop. I usually click out of articles immediately when I see an AI image at the top, because I expect the rest of the article to be in line: low value, high fluff.

              There are useful LLM applications, but for things that play to its strengths. It's effectively a search engine. Using it for search and summarization is useful. Using it to generate code based on code it has read would be useful if it weren't for the copyright liability, and I would argue that if you have that much boilerplate, the answer is better abstractions, libraries, and frameworks, rather than just generating that code stochastically. Imagine if the answer to assembly language being verbose was to just generate all of it rather than creating compiled programming languages.

          • Tainnor 2 days ago |
            I'm as skeptical about LLMs as anyone, especially when people use them for actual precision tasks (like coding), but what they actually IMHO are good at are language tasks. That is, summarising content, text generation for sufficiently formulaic tasks, even translation to an extent, and similar things.
        • yazantapuz 2 days ago |
          Well, a friend of mine built its house thanks to btc last's ath. Surely someone is cashing out nvidia right now. Indirectly useful :)
      • stavros 3 days ago |
        This is what spam always did, why is it different now?
        • rwmj 3 days ago |
          It actually adds some cost to the spammer, so that could be good.
        • justsomehnguy 3 days ago |
          Sending spam is... very energy efficient, compared to the LLM usage.
          • stavros 3 days ago |
            Yep, and thus very cheap, the exact thing you don't want spam to be.
        • sph 2 days ago |
          You didn't need a GPU to generate Cialis spam.
      • TeMPOraL 3 days ago |
        Bitcoin is literally turning greed into money, by means of wasting exponentially increasing amounts of electricity. It doesn't just not create value - to be able to allow exchanging value, it fundamentally requires ever increasing waste, as the waste is what gives its mathematical guarantees.

        LLMs deliver value. Right here today, to countless people across countless jobs. Sure, some of that is marketing, but that's not LLM's fault - marketing is what it always has been, it's just people waking up from their Stockholm syndrome. You've always been screwed over by marketers, and Internet has already been destroyed by adtech. Adding AI into the mix doesn't change anything, except maybe that some of the jobs in this space will go away, which for once I say - good riddance. There are more honest forms of gainful employment.

        LLMs, for all their costs, don't burn energy superlinearly. More important, for LLMs, just like for fiat money, and about everything else other than crypto, burning electricity is a cost, upkeep, that is being aggressively minimized. More efficient LLMs benefit everyone involved. More efficient crypto just stops working, because inefficient waste is fundamental to cryptos' mathematical guarantees.

        Anyway, comparing crypto and LLMs is dumb. The only connection is that they both eat GPUs and their novelty periods were close together in time. But they're fundamentally different, and the hypes surrounding them are fundamentally different too. I'd say that "AI hype" is more like the dot-com bubble: sure, lots of grifters lost their money, but who cares. Technology was good; the bubble cleared out nonsense and grift around it.

        • helboi4 3 days ago |
          Dunno why you're being voted down, this is sort of true.
        • richrichie 3 days ago |
          > It doesn't just not create value

          Value is a subjective concept. One could argue that its value is that arbitrary quantities of it cannot be created by dictat.

          > - to be able to allow exchanging value, it fundamentally requires ever increasing waste, as the waste is what gives its mathematical guarantees.

          One could argue that it takes a lot worse to maintain any currency such as USD as a currency. Full force of government law enforcement will be unleashed on you if you decide to have your own currency. There is a lot of "wastage" that goes to safeguard currency creation and storage and to prevent counterfeiting.

          I do not hold BTC. Nor do I trade it. But to discuss as if other currencies have no cost is not rational.

          • k8sagic 3 days ago |
            But we do know that the Proof of Stake system we currently have, is a lot cheaper and more advanced than what Bitcoin does.

            Bitcoin doesn't solve any problem yet which is fundamental to our society and a fiat system like the trust issue:

            If i exchange 1 bitcoin with you for any service or thing outside of the blockchain, i need the whole proof of stack system protection of our normal existing money infrastructure like lawyers, contracts etc.

            And no smart contracts do not solve this issue.

            What is left? Small amount of transactions per day with high fees 'but' decentralized infrastructure run by someone we all don't know aggregated probably in data centers owned by big companies.

            • block_dagger 2 days ago |
              Proof of Work is far superior to Proof of Stake in a network with absolute fairness (security) being fundamental. Satoshi himself said he could find no other way.

              Compare energy spent on global hash rate to all energy spent by mining metals, physical banking, financial services middle persons, etc. if you want to talk about energy usage and make any kind of sense.

              • k8sagic 2 days ago |
                Yes, start comparing energy spend on bitcoin mining and the missing features. You will see that bitcoin already consumes a lot more energy than our proof of stake system.

                What do you do when you want to exchange 1 bitcoin for 1 car and the person with the car doesn't give you the car after the 'absolut fairness/ security' of transfering bitcoin to their wallet? You go back to our Proof of Stake system. You talk to a lawyer. You expect the police to help you.

                The smallest issue in our society is just transfering money from left to right. This is not a hard problem. And pls don't tell me how much easier it is to send a few bitcoins to africa. Most people don't do this and yes western union exists.

                Or try to recover your bitcoins. A friend has 100k in bitcoins just doesn't know the password anymore.

                What do you do when someone breaks into your home and forces you to give them your bitcoin key? Yes exactly anonyms moving of money from you to them. Untraceable, wow what a great thing to have!

                And no Satoshi 'himself' is not an expert in global economy. He just invented bitcoin and you can cleary see how flawed it is.

              • davidgerard 2 days ago |
                > Compare energy spent on global hash rate to all energy spent by mining metals, physical banking, financial services middle persons, etc. if you want to talk about energy usage and make any kind of sense.

                you're ending up with the entire rest of civilisation on the other side of that

                * Bitcoin, 0.5% of all energy use: 7 transactions per second total worldwide

                * THE ENTIRE REST OF CIVILISATION AND EVERYONE IN IT AND EVERYTHING THEY DO, 199x the energy use, really quite a lot more than 1,393 transactions per second worldwide, and all the other stuff civilisation does too

                What an amazing comparison for you to suggest.

                • richrichie a day ago |
                  You are not comparing apples to apples. BTC is comparable to gold or US treasuries. How often do you transact in physical gold? What is time taken from a piece of gold in your pocket to cash to coffee? However, you can transact in paper gold eg the ETF GLD in microseconds with comparatively much lower transaction costs (settlement is still not immediate). How often do you transact in treasury bonds? Try paying for a coffee with your treasury bond. Let’s see how many days that takes. Comparison with USD (ultimately representing US treasuries) on number of transactions basis is not useful.
                  • davidgerard a day ago |
                    BTC is not comparable to useful things, except in the promotional posts of bitcoin fans.
                    • richrichie 20 hours ago |
                      You are probably confusing settlement time with transaction time. Do you know how credit cards work?
          • TeMPOraL 2 days ago |
            > There is a lot of "wastage" that goes to safeguard currency creation and storage and to prevent counterfeiting.

            Yes. But the point I'm making is, none of that benefits from waste. The waste is something everyone want to reduce. With Bitcoin, the trend is uniquely opposite, because the crypto system is secured through aggregate waste being way larger than any actor or group can afford.

        • Rattled 3 days ago |
          Well said, too many people conflate AI and crypto, and dismiss both without understanding either. Crypto has demonstrated very limited benefit compared to its cost, exchanging value has been a solved problem for millenia. We're only beginning to understand what can be done with LLMs but we can see some limits. Although it causes some harm to say it doesn't create any value is ridiculous. We can't yet see if the benefits outweigh the cost but it looks to me like they will.
        • immibis 3 days ago |
          Delivering value is not the same as creating it. Spam takes lots of value from many people, destroys most of it, and delivers a small fraction to the spammers.
        • davidgerard 2 days ago |
          I'd disagree to a large extent, because the specific similarities are important:

          * the VCs are often literally the same guys pivoting

          * the promoters are often literally the same guys pivoting

          * AI's excuses for the ghastly electricity consumption are often literally bitcoin excuses

          I think that's an excellent start on the comparison being valid.

          Like, I've covered crypto skeptically for years and I was struck by just how similar the things to be said about the AI grifters were, and my readers have concurred.

        • sph 2 days ago |
          > You've always been screwed over by marketers, and Internet has already been destroyed by adtech. Adding AI into the mix doesn't change anything

          This is pure, complacent nonsense. "We have always been surrounded with spam, 10x more won't change anything."

          Yeah, why improve the status quo? Why improve the world? Why recycle when there's a big patch of plastic in the ocean.

          It's an argument based on a nonsensical, cynical if not greedy position. "Everyone pollutes, so a little more pollution won't be noticed."

        • ryandrake 2 days ago |
          Long term, LLMs are not going to create more actual value than the sum of their costs and negative externalities. Bookmark this comment and check me in 5 years.
      • rpigab 3 days ago |
        Yes, that's quite right.

        That's why I created EtherGPT, an LLM Chat agent that runs decentralized in the Ether blockchain, on smart contracts only, to make sure that value is created and rewards directly the people and not big companies.

        By providing it just a fraction of just a bit north of 10% of the current fusion reactions occuring in our sun, and giving it a decade or two on processing time and sync, you can ask it simple questions like "what do dogs do when you're not around" and it will come up with helpful answers like "they go to work in an office" or funny ones like "you should park your car in direct sunlight so that your dog can recharge its phone using solar panels".

        • mionhe 3 days ago |
          Another AI response, or humor from an actual person?
          • waciki 2 days ago |
            are LLMs even capable of humor? The attempts I've seen are not very funny
            • meiraleal 2 days ago |
              Something must be very wrong with someone who continuously laughs at computer jokes so I don't think it will ever reach the level you are expecting (hopefully).
        • noman-land 2 days ago |
          I'm an Ethereum fan and I found this funny.
      • k8sagic 3 days ago |
        AI solves gigantic issues and helps us with cancer, protein folding, potentially math and other studies, material science etc.

        Bitcoin consumes as much energy as a country and has basically done nothing besides moving money from one group of people to a random other group of people.

        And bitcoin is also motivated to find the cheapest energy independent of any ethical reasoning (taking energy from cheap chinese hydro and disrupting local energy networks) while AI will have energy from the richest companies in the world (ms, google, etc.) which already working on co2 neutral 24/7.

        • kmacdough 3 days ago |
          The benefit is all for naught if it undermines the fabric of society at the same time. All these benefits will only go to the few who land on top of this mess.

          It's continuing to widen the wealth gap as it is.

          • k8sagic 3 days ago |
            The wealth gap is widening while in parallel poorer people have better lives than ever.

            We house, heat and give access to knowledge to a lot more people than ever before.

            Cheap medical procedures through AI will help us all. The AI which will be able to analyse the x-ray picture from some 3th world country? It only needs a basic x-ray machine and some internet. The AI will be able to tell you what you have.

            I'm also convinced that if AGI is happening in the next 10 years, it will affect that many people that our society has to discuss capitalisms future.

          • HermanMartinus 3 days ago |
            Yeah, Bitcoin is dual-edged like that. Harming people and harming the planet.
        • bergen 3 days ago |
          None of your problems in the first sentence are solved by LLMs. I do not dispute AI research and applications and their benefits, but the current LLM and GenerativeAI hype is of no value to hard scientific problems. Otherwise I agree with you.
        • EdwardDiego 2 days ago |
          Which gigantic issues has it solved? Curious to know.
          • k8sagic 2 days ago |
            I actually listed them up directly after.

            For example alphafold: Protein folding. It is also now used in fusion reactor plasma control

            • EdwardDiego 2 days ago |
              It didn't solve protein folding. It led to new areas of inquiry, but it didn't solve it.

              May I recommend reading Derek Lowe's "In The Pipeline" blog for a realistic discussion of the actual impact of Alphafold? [0]

              And seeing as we don't have viable fusion yet, saying it "solved" it is really reaching. I'm sure it's helping, but solved? No.

              [0]: https://www.science.org/topic/blog-category/ai-and-machine-l...

    • muzani 3 days ago |
      I look forward to the dream job of writing LLMs that argue with strangers on the internet as opposed to the current dream job of improving ad click rates by 0.0016% per quarter.
    • bottled_poe 3 days ago |
      This is why the internet as we know it is going to be driven into walled gardens. Closed by default.
      • tmnvix 2 hours ago |
        Frankly, I think walled gardens built and controlled by the communities that use them would be an improvement.
    • masklinn 3 days ago |
      > As for the humans, we went fishing instead.

      To a farm upstate?

    • lannisterstark 3 days ago |
      In an optimistic POV of this, eh, why not?

      if models handle my day to day minutia so I have more time, why the hell not...

      (I know this is very optimistic POV and not realistic but still)

      • CoastalCoder 3 days ago |
        Because spam is incredibly selfish.

        You're trying to take the time and attention of as many people as possible, without regard for whether or not they'll benefit.

        One safeguard people have is knowing that it costs something to send in some way to contact them. I'm this case, the sender's time and attention. LLM spam aims to foil that safeguard,. intentionally.

        • lannisterstark 2 days ago |
          If an LLM will take care of other LLM emailing me, their last point comes true, no? I never have to deal with spam.
    • flir 3 days ago |
      If anyone hasn't read Accelerando, I heartily recommend it.

      For one thing, it seems to be coming true.

    • tiew9Vii 3 days ago |
      The irony

      Everyone is playing lip service to global warming, energy efficiency, reducing emissions.

      At the same time data centers are being filled with power hungry graphic cards and hardware to predict if showing a customer an ad will get a clock, generating spam that “engages” users aka clicks.

      It’s like living in a episode of black mirror.

      • k8sagic 3 days ago |
        I disagree.

        Datacenters save a lot more energy than they make. Alone how much co2 is saved when i can do my banking online instead of having to drive to a bank is significant.

        The same with a ton of ohter daily things i do.

        Is video producing co2? yes. But you know what creates a lot more co2? Driving around for entertainment.

        And the companies running those GPUs actually have an incentive to be co2 neutral while bitcoin miners don't: They 1. already said they are doing / going co2 neutral due to 2. marketing and they will achieve it becauseh 3. they have the money to do so.

        When someone like Bill Gates or Suckerberg say 'lets build a nuclear power plant for AGI' than they will actually just do that.

        • quassy 3 days ago |
          Point 1, 2 and 3 all apply to miners as well and yet they never delivered on their promise.
          • k8sagic 2 days ago |
            The normal miners never said that. They just say this at conferences for simple greenwashing.

            The normal miner doesn't go to those bitcoin conferences, they buy asics, put them in some warehouses around the world and make money.

        • croes 2 days ago |
          >Is video producing co2? yes. But you know what creates a lot more co2? Driving around for entertainment

          What's more likely, watching a movie online, drive to watch a movie in a cinema?

          You know what creates a lot less CO2? Staying at home reading a book vor playing a board game.

          >Datacenters save a lot more energy than they make

          I think you mean CO2. And I doubt that they actually save anything because datacenters are convenient so we use them more as alternatives with less convenience.

          Like the movie example, we watch more and even bad movies if it's just a click on Netflix than we do if we have to drive somewhere to watch.

          MS recently announced they fail der CO2 target but instead produce 40% more because of cloud services like AI

          • k8sagic 2 days ago |
            Have you checked how much co2 a normal car drive creates vs. watching a movie online?

            We need to be realistic here. We know what modern entertainment looks like and its not realistic at all to just 'read books' and play board games.

            • commodoreboxer 2 days ago |
              It is 100% realistic to read books and play board games. Both markets are massive, and board games in particular are having what I would consider a renaissance. Maybe it depends on your crowd, but everybody I know plays tabletop games and reads books.
              • mrtranscendence 2 days ago |
                You're missing the point. What's not realistic is to tell everyone that they should abstain from any type of entertainment that requires power (TV shows, movies, video games, etc) and should only read books and play board games instead. I don't care what kind of renaissance board games are undergoing, most people still only play the mass market classics, and then only rarely.

                I don't know how much energy Netflix uses serving a movie, but playing a video game on my PC for two hours where I'm located might generate a kg of CO2. That's about as much as I'll breathe in a day. Relative to other sources of atmospheric CO2 I'm not that concerned.

                • commodoreboxer 2 days ago |
                  My issue was with "we know what modern entertainment looks like" as if humans are now incapable of enjoying themselves without a screen. And you should care about a massive market increase when it's directly relevant to the point at hand. If the initial point was "we know what modern entertainment looks like, nobody plays board games or reads books", pointing out that the board game market has more than doubled in the past decade is far from irrelevant. It actually directly counters the point.

                  I agree with your second paragraph, and selling the "make better choices to save the world" argument is an industry playbook favorite. Environmental damage needs to be put on the shoulders of those who cause it, which is overwhelmingly industrial actors. AI is not useful enough to continue the slide into burning more fossil fuels than ever. If it spurs more green energy, good. If it's the old "well this is the way things are now", that's really not good enough.

                  • k8sagic 2 days ago |
                    AI and ML will help a lot of people and already does. Alpha Fold / protein folding will help us with cancer.

                    We will have better batterires thanks to ml material research.

                    We will be able to calculate and optimize everything related to flow like wind.

                    The last thing we need to optimize is compute and compute is what has the most money anyway. One of the first industries going green is datacenters. Google for example is going green 24/7 (so not just buying solar power but pulling green energy from the grid 24/7 through geo thermy and others).

                    AI/ML big datacenters are crucial for all the illneses we have which no one cares enough to solve. For example, i have one of these and we need data to make a therapy for this and i'm not alone.

                    • croes 2 days ago |
                      For most of your points it's may not will.

                      How many battery breaktroughs did we have before AI? They rarely lead to new batteries.

                      >AI/ML big datacenters are crucial for all the illneses we have which no one cares enough to solve.

                      Too bad that companies like OpenAI and MS buy most of the hardware for their data centers to write summaries of articles and emails and to create pictures.

                      And even if they find a cure, doesn't mean it will be available for people in need, not without a hefty fee.

                      Just look at the profit margin of insulin.

                      • k8sagic a day ago |
                        Insulin is a bad example and a good one. A bad one because what happens in the USA is some super weird shit (and it only happened in the USA, thats why USA people drive to canada or mexico). Without insulin though, they wouldn't be alive.

                        ML on x-ray pictures is super easy technology which partially already is better than x-ray experts. Its not far away to have build in diagonstics or cheap online services. And yes they will reach poorer people than before. It will also allow a lot more people to get better diagnosis.

                        My sister has a type of blood cancer, she would have been dead by now if research wouldn't have found a solution 13 years ago.

                        And no MS and OpenAI and google are not just using their DCs to write summeries. They use it to do research. A LOT actually.

                        And take a look at google ios and the research papers, plenty of medical papers coming from those big companies.

                        Alpha Folde 2? Changed a lot too

                    • croes a day ago |
                      BTW

                      >Google’s Emissions Shot Up 48% Over Five Years Due to AI

                      https://news.ycombinator.com/item?id=40874517

                • croes 2 days ago |
                  You are missing the point too.

                  Driving too the cinema to watch a movie produces more CO2 than watch one movie online but online makes it more convenient so you watch more. That sums up to more CO2 emission.

                  The point is that higher efficency is wortless in terms of CO2 emissions if it leads to higher usage that compensates for the savings.

                  If a programmer can program faster with AI it's good if he only needs 1 hour instead of 8 but if he still programs 8 hours a day AI's energy consumption comes just on top of his previos consumption.

                  Climate change doesn't care how efficient you produce more CO2, more is simply more.

                  • k8sagic a day ago |
                    I. believe that watching mulitply movies is still a lot more co2 efficient than driving a car to a big independent room, which gets heated and than also shows a movie through a big projector than having a tv running and streaming it from the internet.
            • croes 2 days ago |
              But it's realistic that we watch movies online than in cinemas. And don't forget the datacenters of the movies need to run even if no one watches. My car doesn't produce CO2 whe I don't drive.
              • k8sagic a day ago |
                Datacenters always run because there is always something to do.

                For everything else, there are already plenty of energy saving mechanism build into the CPUs, Mainboards, Disks etc. A Datacenter doesn't run on 100% Energy just because the load is reduced.

        • fattegourmet 2 days ago |
          > how much co2 is saved when i can do my banking online instead of having to drive to a bank is significant.

          And if the online bank wasn't sending a bunch of requests to a bunch of third party ad networks on every click, it would save even more.

          • k8sagic 2 days ago |
            Yes. But what are you implying? Entertainment + ad garbage still is a lot more co2 efficient than printing flyers and sending those out.
        • austinjp 2 days ago |
          I think it's more nuanced than that. I used to walk to my bank, I can't do that any more because many branches closed. The bank now directs all interactions to happen via their app. In terms of emissions (and social interaction, particularly for vulnerable and isolated members of society) I think this is bad news.

          But this is a complex calculus and - frankly - feels like a distraction from the issue. I don't want to get into the weeds of calculating micro-emissions of daily activities, I want climate responsibility and reduction in energy consumption across the board.

          • k8sagic a day ago |
            I did made the point that AI/ml is helping us and the type of energy location and load is much easier to get green than lets say concrete.

            We need AI/ML for getting there faster and helping more people around us. Alone for weather simulations but also for medicine, material research for batteries etc.

        • yard2010 2 days ago |
          Don't even get me started with the rant about taking planes.
        • happyraul 2 days ago |
          This is a very limited perspective. There are many parts of the world not beholden to automobiles for transportation. Where I live, I can walk to the bank, and walk or ride a bike to entertainment. The alternative to data centers does not have to be driving an automobile somewhere.
          • k8sagic a day ago |
            Your bank building has to be maintained and heated. Heating which is added to your use of the local bank.

            My perspective is not limited. Just because people live in a city center, doesn't mean that most people do. Open Google Maps and take a look.

      • squigz 2 days ago |
        There's no irony or contradiction here. Some people are worried about climate change. Some aren't. Silly, yes, but I don't see the irony.
        • jacobgkau 2 days ago |
          The irony is that there seems to be overlap between the two groups-- e.g. highly educated tech workers.
          • squigz 2 days ago |
            Is there? How are you making that determination?
            • tharkun__ 2 days ago |
              I would tend to agree with them even without actual data. Just probabilistically there is likely some overlap.

              Whether there's enough for calling it irony is probably a different question.

              • squigz 2 days ago |
                Well fair enough.
          • sxv 2 days ago |
            i.e. investment bankers in hoodies
        • Sammi 10 hours ago |
          These aren't different people. These are the same people. They love AI and they fear climate change, but they don't know the connection.
      • lukan 2 days ago |
        I see the bright side, the tech for large scale computing gets mass produced - so all the legit use cases, like scientific simulations, or LLM for productive work, also profit. And if one really bright day humanity evolves beyound the current statd of ad driven everything, we can put all of it to use for real.

        Till then, I will probably avoid more and more communicating with strangers on the internet. It will get even more exhausting, when 99% of them are fake.

    • squigz 2 days ago |
      Are LLMs able to make purchases?
    • damidekronik 2 days ago |
      Slightly similar, in Lem's novel all war efforts moved to the moon where AI deployed by each nation continues in an endless conflict. Peace on Earth is achieved, peace in the mail box is achieved. https://en.m.wikipedia.org/wiki/Peace_on_Earth_(novel)
  • transitivebs 3 days ago |
    I received a very similar automated email from the same dev. Marked it as spam right away:

    ---

    Hey Travis,

    Checked out the Next.js Notion Starter Kit. Amazing project!

    Noticed you might be juggling multiple tools to manage content. Ever thought about a headless CMS that can streamline this?

    Wisp might be a handy solution. Let me know what you think!

    Cheers, Raymond

  • oefrha 3 days ago |
    I detest cold emails in general, but the occasional recruitment email from a founder/recruiter who clearly looked quite deeply into my passion projects always felt good and resulted in a nice conversation even if the opportunity didn’t pan out.

    It’s sad that going forward I probably won’t be able to tell genuine interest from this kind of fake bullshit.

    • sph 3 days ago |
      My spam filter for cold outreach is simple: if it opens with "Hi <real name>" or "Hi <HN user name>", there's a good likelihood it's a human.

      If they don't know my name, they don't even know where they got my email from, so probably spam, however intelligible it looks.

      It's the same in the age of spam calls. If it's a mobile phone and the person behind didn't even bother to introduce themselves via SMS/WhatsApp, I don't pick up.

  • murderfs 3 days ago |
    The spiteful part of me wants to spin something up to punish this sort of behavior symmetrically by automating cold emails in the other direction to waste his time.
  • knallfrosch 3 days ago |
    Please start using a spellchecker. "excately" and "particilar" are not acceptable.

    Edit: "Unnecessary" might be my judgement, instead of "acceptable."

    • pnt12 3 days ago |
      I accept them.
    • drusepth 3 days ago |
      Interestingly (ironically?) I've heard of some bloggers intentionally adding typos to their posts to ensure the post looks like it was written by a human and not AI.
      • jacobgkau 2 days ago |
        That seems easy for AI to adapt to, and has a massive side effect of calling the author's reliability into question. Are those people going to go back and fix the intentional typos in a couple of years once AI spam is also full of typos?
  • atoav 3 days ago |
    I use catchall email addresses. If your service is called foobar.com I will register at your place with [email protected]

    If I ever receive spam addressed to [email protected] that is unrelated to your service I know you leaked or abused my data. Result: you will get a DSGVO complaint and I filter all emails addressed to this address from my inbox.

    The good thing about using a catchall email address is that I don't have to create a mailbox for each service/purpose, I can just make email addresses up as I go. All you need for that is your own domain and a mailserver that aupports it.

    • giorgioz 3 days ago |
      Very cool! Could you go deeper into your setup? Which email client do you use to view/manage the catch all emails? Did you host the email on Google Gsuite or AWS SES or something else?
      • kamilner 3 days ago |
        I do the same as the poster above, fastmail supports it directly and makes it very easy to manage. All you have to do is bring your own domain (they'll even manage your DKIM/SPF records etc as necessary if you want).

        Edit: Apparently you can also purchase a domain directly through them if you prefer, although you have to be a paying customer for 7 days first https://www.fastmail.com/how-to/email-for-your-domain/

        • freehorse 3 days ago |
          I use simplelogin with proton for that, they give you a few subdomains to do the same.
      • tsm 3 days ago |
        I have the same setup via GSuite.
    • mafuy 3 days ago |
      Does this allow you to also send emails as a particular address? I've not yet managed to set this up properly.
      • olex 3 days ago |
        Yes, with Fastmail this is quite easy to set up. It automatically uses the alias when replying to an email that was addressed to one, but you can also manually choose (on input) any alias for an outgoing email.
      • leni536 3 days ago |
        Even if the mail server you use for inbox does not allow it, you can set up mailgun or a similar service as your smtp server.
      • atoav 2 days ago |
        Depends on the client, in Thunderbird you can customize the sender address for each mail.
    • lmm 3 days ago |
      > If I ever receive spam addressed to [email protected] that is unrelated to your service I know you leaked or abused my data. Result: you will get a DSGVO complaint and I filter all emails addressed to this address from my inbox.

      Has this ever resulted in significant penalties for those companies? I used to do this but I gave up as it never seemed to achieve anything.

    • bruce343434 2 days ago |
      This is what I do as well, but sadly it seems my phone number has been leaked at some point... I'm considering setting up a private VoIP thing so that each company gets a unique phone number. Really nobody can be trusted with my data, it is a statistical inevitability that they get hacked or sell out.
    • K0nserv 2 days ago |
      I do this too, barcelonaairportwifi@<domain> is a prime offender and gets a lot of spam. I've also taken to using Fastmail's masked email support along the 1Password integration for the same.
    • MarioMan 2 days ago |
      While some companies filter against this, most email services support plus addressing to accomplish the same thing. You can register under [email protected], for instance, and all emails will still be delivered to [email protected]

      https://gmail.googleblog.com/2008/03/2-hidden-ways-to-get-mo...

      https://learn.microsoft.com/en-us/exchange/recipients-in-exc...

      • remram 2 days ago |
        This is very common, so lots of spammers will just drop the plus-part with a regex. Many sites even prevent signing up with an address containing a plus.

        Not trying to tell you to stop though, this is definitely a good idea, when it works.

      • atoav 2 days ago |
        The downside on that is that your original email address is still in there and good luck blocking mails to that.
    • remram 2 days ago |
      I've long wondered if you could put crypto into this, to make it secure from a human attacker who might figure out the scheme. Otherwise it is relatively easy for a spammer to replace foobar.com with google.com and email you again, escaping your filtering and/or making you think google.com has a data leak.

      For example, using a HMAC of the domain. So you generate [email protected], it's impossible to generate the sr32j4 part without knowing your secret key, and your mail server checks that sr32j4 is correct before accepting the mail.

      • atoav 2 days ago |
        Interesting idea, I like it. I am not profficient enough with mail servers to know how this could be done, but maybe a python script that just marks offending mails as spam would work as well.
  • muzani 3 days ago |
    This will suck for a long time, just like spam, clickbait, social media upvoting algorithms, cigarettes, soda. But eventually we'll sense it and build antibodies to it like everything else.

    Even now, we're starting to have a sense for which images and text were AI generated. And they'll evolve to get around the antibodies. And we'll build new ones.

  • stavros 3 days ago |
    I made a service to reply to marketing emails using GPT:

    https://github.com/skorokithakis/spamgpt

    It was a bit of fun, until I realized that most of the replied from the spammers were AI as well. We were just automatically spamming each other while OpenAI made money.

    I stopped using it then.

    • remram 2 days ago |
      Nice! This reminds me of Lenny, the bot for spammy phone calls, that sounds like an old person mumbling and not understanding: https://en.wikipedia.org/wiki/Lenny_(bot)

      Serves them right. Unless they're a bot too of course, then you can't waste their time.

  • lolpanda 3 days ago |
    ok does it mean an end to email? it's nearly free to send emails to anyone. for comparison, it's much more expensive to send linkedin messages or create ads on social networks. did anyone attempt to create a paid email service (pay to send)?
    • surfingdino 3 days ago |
      It means the "cooky" ideas of OPML/RSS two-way communication channels may have to be revisited. The problem is the humans, though. Even the most private communication channels will be breached by the one idiot who uses AI to "fix grammar". AI peddlers managed to inject themselves into the conversations we are having. It's really not good for the humanity as a whole.
  • asimovfan 3 days ago |
    Perhaps this will finally usher in the era of actually decoupling what is said from who said it (post post colonialism?)
  • surfingdino 3 days ago |
    It's all fun and games until HR use AI to write your annual performance review in which it is suggested that you got fired for sexual misconduct (this hallucinated from other guy's HR files), won a sales bonus for selling AI to your company (it was the other way round and it's the sales guy who got it), and are due to enter retirement (you are 29, but most of the company is over 50, so the probabilistic model prefers that passage of text).
    • bob_theslob646 a day ago |
      Wait...how was this resolved?
  • tambourine_man 3 days ago |
    > dove* a bit deeper

    Dug?

  • siscia 3 days ago |
    I have been building [GabrielAI](https://getgabrielai.com) also for address the too much spam in Gmail use case.

    Specifically smart filter to remove SPAM in a smarter way.

    Most people get a lot of spam from sales agents, SEO services, start-up accelerator, etc...

    With GabrielAI you can say stuff like:

    "If the email is from a SEO agency or it is trying to sell me SEO service"

    Then move it to SPAM.

    Similarly for all other type of spam or emails.

    You can also move stuff to different labels in Gmail to organise your inbox.

  • andretti1977 3 days ago |
    I won't add to the AI debate already well expressed by other commenters but one thing i don't understand is why the author has posted the name of the "spammed" product and a direct link to their blog: consider how much did he helped them having new traffic and potential customers
    • ssl-3 3 days ago |
      With limited exceptions*, [sometimes even egregious] factuality trumps self-censorship.

      I myself tend to name-and-shame regardless of how it may turn out, whether "positive" or "negative," when I feel compelled to be posting online about a thing I have encountered in my personal life. I think that openness and clearly-evident facts are very important parts of supporting the story that I wish to tell. (And if I did not wish to tell the story, then I would not have done so.)

      * But a line must be drawn somewhere.

      My own line is this: When I encounter a fucking nazi in real life, I make sure to not propagate whatever it is that this fucking nazi has to say, even if I have a story to write about that fucking nazi. (And we rather unfortunately have plenty of these fucking nazis here in Ohio, so I do get opportunities every now and then to exercise this self-restraint.)

    • viridian 2 days ago |
      Name and shame is a worthwhile practice. Driving potential business to The AI powered CMS known as Wisp, is not a good reason to avoid contributing to the common consensus about the company.

      And the common consensus in this thread, which I agree with, is that Wisp is obnoxious, insidious, and is an active participant in the degradation of quality of both email, and the internet as a whole.

  • varjag 3 days ago |
    Oh spam, the only industry AI had truly revolutionized so far.
  • xela79 3 days ago |
    just generate an AI reply and automate the flow :)

    > Hey Raymond,

    Thank you so much for your kind words about my post on revamping my homelab! It’s always a pleasure to hear from someone who appreciates the journey of continuous improvement. Your message truly brightened my day.

    Indeed, using Deno Fresh for my blog has been an exciting adventure. The process of managing updates and deployments, while sometimes challenging, has been incredibly rewarding. It’s like tending to a garden, where each update is a new seed planted, and every deployment is a blossom of progress. The satisfaction of seeing everything come together is unparalleled.

    Your introduction of Wisp has certainly piqued my interest. A CMS that simplifies content management sounds like a dream come true, especially for someone like me who is always looking for ways to streamline processes and enhance efficiency. The name “Wisp” itself evokes a sense of lightness and ease, which is exactly what one hopes for in a content management system.

    I would love to learn more about Wisp and how it could potentially fit into my workflow. The idea of having a tool that can make content management more intuitive and less time-consuming is very appealing. Could you share more details about its features and how it stands out from other CMS options? I’m particularly interested in how it handles updates and deployments, as these are crucial aspects for me.

    Thank you again for reaching out and for thinking of me. I’m looking forward to hearing more about Wisp and exploring the possibilities it offers. Let’s continue this conversation and see where it leads!

    Best regards, Tim

    • tobinfricke 3 days ago |
      Actually this is kinda a great idea. Honeypot the bots by engaging them with other bots. Would love to deploy this on telemarketers / spam calls.
  • throwaway0665 3 days ago |
    There are laws to mandate unsubscribe links on emails. There should be laws to mandate disclaimers when emails were sent through an automated process.

    No one believes the CEO has taken the time to email you with onboarding instructions immediately after signing up anymore. But outreach tactics like this are still quite manipulative.

  • zensnail 3 days ago |
    one man's spam, is another man's career.
  • xeeeeeeeeeeenu 3 days ago |
    Sadly, AI allows dumb people to do dumb things more efficiently.

    This reminds me of AI-generated fake security vulnerability reports about curl: https://news.ycombinator.com/item?id=38845878

  • ossyrial 3 days ago |
    The author links to the somewhat dystopian blog where the email sender is quite proud of their work. Their words (or perhaps that of an LLM):

    > Could an AI agent craft compelling emails that would capture people's attention and drive engagement, all while maintaining a level of personalization that feels human? I decided to find out.

    > The real hurdle was ensuring the emails seemed genuinely personalized and not spammy. I knew that if recipients detected even a whiff of a generic, mass-produced message, they'd tune out immediately.

    > Incredibly, not a single recipient seemed to detect that the emails were AI-generated.

    https://www.wisp.blog/blog/how-i-use-ai-agents-to-send-1000-...

    The technical part surprised me: they string together multiple LLMs which do all the work. It's a shame the author's passions are directed towards AI slop-email spam, all for capturing attention and driving engagement.

    How much of our societal progress and collective thought and innovation has gone to capturing attention and driving up engagement, I wonder.

    • londons_explore 3 days ago |
      > > Incredibly, not a single recipient seemed to detect that the emails were AI-generated.

      Of the people who replied. I bet plenty figured it out, but didn't bother to reply.

      • Lio 2 days ago |
        Expect to see someone else write a blog post on How I Used AI to fool an AI Spammer

        ...of course they'd probably get an LLM to write the article too.

    • otherme123 3 days ago |
      The guy writes a post about how to send spam effectively, and then offers the subscription link in the end with "Promise we won't spam you". Yes, I totally trust you...
      • CoastalCoder 3 days ago |
        It sounds like extortion.

        "I'm sending spam that sneaks past your spam filter. Sign up to make it stop."

    • cornholio 3 days ago |
      News at 11, spammers use sophisticated techniques to increase the profitability of spam. This is absolutely shocking and never before seen, what is the world coming to.

      In all seriousness, manipulation and bullshit generation emerges as the single major real world use of AI. It's not good enough yet to solve the big problems of the world: medical diagnostic, auto accidents, hunger. Maybe just a somewhat better search tool, maybe a better converational e-learning tool, barely a better Intellisense.

      But, by God, is it fantastic at creating Reddit and X bots that amplify the current line of Chinese and Russian propaganda, upvote and argue among themselves on absurd topics to shittify any real discussion and so on.

      • CoastalCoder 3 days ago |
        I don't think this is unique to this specific technology.

        People can be both wonderful and despicable, regardless of era or mechanism.

        • cornholio 3 days ago |
          Sure, but I'm talking about the good:bad ratio of some creations. I really have strong hope for AI, and that we won't regard it in retrospect like the multi-stage thermonuclear device, the landmine or tetraethyl lead additives.
          • CoastalCoder 2 days ago |
            I hope you're right. I'm less optimistic.
          • squigz 2 days ago |
            Not to dismiss any of the negative aspects of "AI", but it seems utterly foolish to compare it to those 3 things.
            • orbitmode 2 days ago |
              In May reports emerged of this suicide by a young man in Australia - not AI related. https://www.barefootinvestor.com/articles/this-is-the-hardes...

              The following month, reports emerged of 50 girls in one Australian school being exploited in very similar ways by nothing more than a kid with a prompter.

              https://www.abc.net.au/news/2024-06-25/explicit-ai-deepfakes...

              Scaling this type of exploitation of children online is trivial when you think about anyone with basic programming skills.

              The Techno Optimists manifesto is what appears to be utterly foolish to me when you figure out that there is not one mention of accountability for downside consequences.

            • cornholio 21 hours ago |
              There is no comparison, explicit or implied. Just an enumeration to illustrate that some technologies are inherently likely to be harmful rather than beneficial, not everything is a question of the agency of the user.
      • brabel 2 days ago |
        > X bots that amplify the current line of Chinese and Russian propaganda...

        Do you think those countries are the only ones doing this? Just the other day there was a scandal about one of the biggest Swedish parties, one that's in the government coalition, doing exactly this. And that's just one that got caught. In countries like India and Brazil online disinformation has become an enormous problem, and I think that in the USA and Europe, as the old Soviet joke went: "Their propaganda is so good their people even believe they don't have any".

    • mns 3 days ago |
      I keep seeing these posts on HN and thinking, man, these are some smart people. Training LLMs, doing all this amazing AI stuff like this guy with the email agents and the other guy with the dropping of hats, and then I open the posts and it's just some guy making API requests to OpenAI or some similar disappointment.
      • brabel 2 days ago |
        Nowadays, an "AI Expert" is someone who knows how to download an AI client lib and prompt the AI to perform tasks. These are people who are not even technical and have no idea how all this works, but they can at least follow a Youtube Tutorial to get a basic website working.
        • macocha 2 days ago |
          Also in most cases they were a "crypto expert" just two months ago.
          • ryandrake 2 days ago |
            And they were a leadgen/SEO expert a few years ago. These technogrifters just move from one hot topic to the next trying to make whatever buck they can smooth talk people into giving them.
        • paulluuk 2 days ago |
          As someone who actually has a university degree in Artificial Intelligence, I feel like this is always how it's been. Before, an "AI Expert" was someone who knew how to use Tensorflow, PyTorch or Keras. Before that, an "AI Expert" was someone who knew how to write a Monte Carlo simulation, etc etc.

          You could of course say the same for frontend engineer or backend engineers. How many frontend engineers are simply importing Tailwind, React, etc? How many backend engineers are simply importing apache packages?

          Where do you draw the line? Can you only be an AI expert if you stick to non-LLM solutions? Or are AI experts the people who have access to hundreds of millions of USD to train their own LLMs? Who are the real AI experts?

          • internet101010 2 days ago |
            I would liken it to cars. There is a difference between engineers, mechanics, and mechanics that know a certain car so well that they fabricate parts that improve upon the original design.
            • aswegs8 2 days ago |
              Good comparison. Engineers who build cars and understand their intricacies oftentimes just work on one small thing at a time, even in teams. Like a team just working on breaks. The mechanics can piece the stuff together and keep it working in a real world setting. But nowadays a self-declared "AI Expert" in that metaphor might be just some person who knows how to drive a car.
              • jtbayly 2 days ago |
                I used to work on breaks, but then I realized I was more productive when I actually stopped and walked around a bit.
              • edmundsauto 2 days ago |
                If you think back to when cars were introduced, knowledge of how to drive a car was actually a rare skill! People weren't born with that inherent knowledge, so someone who could operate a vehicle (and do some basic maintenance) was an expert.

                Nowadays, that would be laughed at. But AI is more comparable to cars from 1900 than modern vehicles.

          • the_cat_kittles 2 days ago |
            i draw the line at people claiming to be experts in something they have only done for a year
        • shzhdbi09gv8ioi 2 days ago |
          Business as usual iow. Used to be scrum masters, then javascript "experts", then crypto bros.

          Snake oil salesmen we called em back in my day ;-)

        • jtbayly 2 days ago |
          Someone who can get a website working is actually technical.
      • biztos 2 days ago |
        When “altcoins” took off I spent a while racking my brain trying to figure out what special tech I could offer, how I could build my own blockchain, incentivize miners…

        When I realized it was just dudes copy-pasting a “smart contract” and then doing super shady marketing, it was already illegal in my jurisdiction.

      • mattgreenrocks 2 days ago |
        It’s more about being on the front of the hype train and being endlessly positive versus competence.
        • thih9 2 days ago |
          I can't see this working long term though. Being endlessly positive and ignoring your actual competence sounds like a recipe to eventually bite off more than you can chew.
          • mattgreenrocks 2 days ago |
            Oftentimes this is fervor is channeled into personal brand building, which rarely has any sort of feedback mechanism that is tied to actual competence.

            It's a calculated move on their part.

            • thih9 2 days ago |
              Brand building actually sounds good and productive to me, as long as it doesn’t approach fraud.

              If your audience likes your brand and doesn’t distinguish between your services and services done by more competent providers, then you’ve found your niche. So: snake oil is not fine; but Supreme branded brick sounds ok to me, even if I wouldn’t buy it myself.

              I guess the author will find followers who enjoy that approach to software and product growth. If spamming wasn’t part of it, I’d be ok.

      • johnnyanmac 2 days ago |
        well, no one's going to be talking about the secrets behind LLM while the market is paying billions to own their slice of the pie.

        And in reality, most software work is 1) API calls and 2) applied math. If you're not in cutting edge private tech or acedemia, your work probably falls into 1 or both categories. Modern "Software engineers" is more a matter of what scale of APIs you're wrangling, not how deep of domain knowledge you have.

      • seoulmetro 2 days ago |
        You thought wrong, that's all. Those things aren't remotely hard. They're just simple things people don't bother doing.
    • PlusAddressing 3 days ago |
      Funny how they're self assured no one whiffed their AI bullshit. This is survivorship bias, he's looking only at all the planes that came back to port. The people who did - they just didn't reply. He can't prompt them.
      • kuhewa 3 days ago |
        When you spend $200 on spamming people you need to believe it was effective
      • Bluestein 2 days ago |
        The planes coming back from the bombing raids in WWII come to mind.-
    • tsukikage 2 days ago |
      This process should not require a human in the loop.

      Consider:

      * spammers have access to large amounts of compute via their botnets

      * the effectiveness of any particular spam message can easily be measured - it is simply the quantity of funds arriving at the cryptocurrency wallet tied to that message within some time window

      So, just complete the cycle: LLM to generate prompts, another to generate messages, send out a large batch, wait some hours, kick off a round of training based on the feedback signals received; rinse, lather, repeat, entirely unattended.

      This is how we /really/ get AI foom: not in a FAANG lab but in a spammer's basement.

      • Bluestein 2 days ago |
        Very well could be. Seconded. After all, it could very well become one of the largest vehicles for "mass training", ever ...

        PS. Howerver, see comments downthread about "survivorship bias". Not everybody will reply, so biases will exist.-

      • FeepingCreature 2 days ago |
        At least one sci-fi novel iirc had an AI spam filter achieve sentience, because the task basically amounted to contrastive-learning a model of humanity.
        • biztos 2 days ago |
          Having worked in the field, I think you’re more likely to achieve AGI by intelligently watering tomatoes in a hothouse.
        • russnewcomer 2 days ago |
          That’s one of Peter Watt’s Rifters trilogy, I think maybe the second one? Been a few years since I read them. I think it’s a biological neural net, not an Ai per se. Lots of big ideas in those books, but not a lot of optimism and some rough stuff.
    • mihaaly 2 days ago |
      It is not only that too much is wasted on superficial nothing instead of choosing to make something with essence and benefitial for the society but it is sucking away those minds engaged in really useful things.
    • highspeedbus 2 days ago |
      Things I wish become taboo: Admitting to use AI content.

      Everyone is so comfortable doing shit like this.

      • rpgwaiter 2 days ago |
        I much prefer admission to hiding it. It lets you easily see who doesn’t deserve your time
        • jacobgkau 2 days ago |
          While that might work great on the individual level for a little while, it's unfortunately not how normalized taboos seem to work long-term. You're just going to see more and more people who don't deserve your time until you're wanting for anyone who actually does.
          • daemin 2 days ago |
            This has been mentioned before, but I can see the benefit in having curated webrings and similar listings. Where people can verify the content is not LLM generated.
            • ryandrake 2 days ago |
              As soon as that becomes effective, you'll have dozens of SEO sites and experts giving seminars on "How to get your LLM-generated website into curated webrings." An entire cottage industry will spring up for the purposes of corrupting legitimate webrings and/or creating fake astroturf webrings that claim to be curated.
              • pixl97 2 days ago |
                Oh, what about the petty fights between different webrings accusing each other of using AI generated content....

                Reminds me of the early days of the web.

          • beezlebroxxxxxx 2 days ago |
            > You're just going to see more and more people who don't deserve your time until you're wanting for anyone who actually does.

            I can see it, perhaps positively, investing far less importance and effort into online things. With admittedly a lot of optimism, I could see it leading to a resurgent arts and crafts movement, or a renewed importance put on hand-made things. People say "touch grass"; maybe AI will make people "touch crafts" (bad joke, I know).

        • yard2010 2 days ago |
          That's boasting, not admission
      • BoxOfRain 2 days ago |
        I think it depends on the context. I think there's artistic cases for it, for example I've played around with using AI tools to extract speech from its background music for use in further (non AI-based) music which I don't think is an unethical thing to do.
      • __loam 2 days ago |
        It's already like this for creative communities in things like illustration and writing. You will (rightly) get ostracized and blocked by your peers for using AI. It's a signal for poor quality for most people in those spaces.

        Definitely interesting to see the different culture in tech and programming since programmers are so used to sharing code with things like open source. I think programmers should be more skeptical about this bullshit, but one could make the argument that having a more flexible view of intellectual property is more computer native since computers are really just copying machines. Imo, we need to have a conversation about skills development because while art and writing accept that doing the work is how you get better, duplicating knowledge by hand in programming can be seen as a waste of time. We should really push back on that attitude though or we'll end up with a glut of people in the industry who don't understand what's under all the abstractions.

    • taurath 2 days ago |
      This is sort of why I feel somewhat pessimistic about AI - the inevitable most profitable usecases being so bad in aggregate for a society with almost no bounds or values other than profit. It will never be easier to waste peoples attention.
      • yard2010 2 days ago |
        This is not a problem with AI but with a system in which there are no other values other than "make the most money fast".
        • throwaway7ahgb 2 days ago |
          "No other values"? When and how is such Doomer Hyperbole getting into HN articles?

          This is half of major reddit subs now and I fear the same low quality comments will take over HN.

          People need to go out and touch some grass.

          • fl0id 2 days ago |
            or maybe you need to touch some grass.
            • runlaszlorun 2 days ago |
              I need some grass.
    • thrance 2 days ago |
      I remember seeing a talk from Jonathan Blow where he made a comparison: in the 1960s top engineers worked for NASA and put a man on the moon in a decade, basically doing computations by hand. Today, we have super advanced computers and tech companies enjoy 100× times more of the top engineers than NASA ever had, and they are all working toward making you click on ads more.
      • sandworm101 2 days ago |
        Someone decided that marketing is now a tech problem. Artists have been replaced by software engineers. The net result is creepy AI emails.

        I fell for oldschool marketing yesterday. Im moving into a new appartment in a couple months. The local ISP who runs fiber in my new building cold-called me. I agreed over the phone to setup the service. That was proper target marketing. The person who called me knew the situation and identified me as a very likely customer with a need for service (the building has a relationship with the ISP). I would never have responded to an email or any wiff of AI chatbot. They only made the sale because of expensive human effort.

        • pseudalopex 2 days ago |
          Sales people were never artists. Cold calling is not art.
          • johnnyanmac 2 days ago |
            cold calling isn't an art, but smooth talking/networking is. There's no exact science to making people feel good and wanting to form a relationship with you (despite centuries of literature claiming that there is).
            • pseudalopex 2 days ago |
              Art and an art have different meanings.
              • johnnyanmac 2 days ago |
                Based on the response upstream, I assume they were talking about the latter. There's no art to door to door sales reading a boilerplate. There is an art to researching a customer and curating a proper response to make them feel good.
                • pseudalopex 2 days ago |
                  They said artists. Artists means people who make art commonly. Not everyone with a non scientific skill. The word was incorrect no matter what they meant.
                  • johnnyanmac 2 days ago |
                    I don't always have perfect grammar on Hacker News either. Charitable interpretation.
            • Jensson 2 days ago |
              Programming and just about every other job is an art as well with that argument. If we aren't allowed to automate away that then we aren't allowed to automate anything.
              • johnnyanmac 2 days ago |
                It'll all vary based on what and who is automated. I'm sure there'd be less(but non-zero) fuss if we were trying to automate plumbing. I'm sure there'd be entire riots over trying to automate professional sports leagues.

                I'd say the art industry is somewhere in-between because of

                1. Being a traditionally disrespected but non-trivial skill to acquire 2. A skill valuable for advertisement (good art -> pretty ads -> more money 3. A valuable skill, but not one many industries need full time work from 4. Due to #1, a "vulnerable" industry. There won't be too many millionaire artists to fight back against the AI Overlords compared to, say, Politicians or businessesmen.

                But it's not like I have any say on who or what gets affected.

              • taneq a day ago |
                If it's not an art then if we make the best example of something, what would that example be the state of?
              • fennecbutt a day ago |
                OP is wrong anyway. Everything is art. Art is about interpretation not determinism.
        • loa_in_ 2 days ago |
          It's the tech that put you on a queue to be called
          • sandworm101 2 days ago |
            There was no tech here. My new landlord contacted the local ISP, the one they liked to work with, to say they had a new tenant arriving soon. I'd bet that my connection will have been setup long before I arrive, at a time convenient to the landlord and local provider. A landlord recommending a favored local vendor to a tenant, or a tenant to a vendor, is the sort of human relationship that predates electricity.
      • bamboozled 2 days ago |
        It’s as if real issues like climate change aren’t a thing that needs solving…
      • echelon 2 days ago |
        Just wait. Enough of us will get pissed off that we will develop AI agents that sit between us and the internet.

        A sufficiently advanced personal assistant AI would use multimodal capabilities to classify spam in all of its forms:

        - Marketing emails

        - YouTube sponsorship clips

        - Banner ads

        - Google search ads

        - Actual human salespeople

        - ...

        It would identify and remove all instances of this from our daily lives.

        Furthermore, we could probably use it to remove most of the worst parts of the internet too:

        - Clickbait

        - Trolling

        - Rage content

        I'm actually really looking forward to this. As long as we can get this agent into all of the panes of glass (Google will fight to prevent this), we will win. We just need it to sit between us and everything else.

        • madamelic 2 days ago |
          > Enough of us will get pissed off that we will develop AI agents that sit between us and the internet.

          Until _that_ company gets overrun by MBAs who are profit-driven then they start injecting ads into the results.

          It will come in the vein of "we are personalizing the output and improving responses by linking you with vendors that will solve your problems".

          • echelon 2 days ago |
            There will be a uBlock Origin for that.
          • mostlysimilar 2 days ago |
            > Until _that_ company gets overrun by MBAs who are profit-driven then they start injecting ads into the results.

            Found companies with people that share your values. Hire people that share your values. Reject the vampires. Build things for people.

            • pixl97 2 days ago |
              Unfortunately it turns out that at the end of the day one of the most common values is the love of massive piles of money. Vampires don't catch on fire in sunlight like storybook villains, they will invite themselves in, sidle up beside you, and be your best friend. Then in the moment you are weak they will plunge their fangs in.

              Competing with bad actors is very, very hard. They will be fat with investor money, they will give their services away, and commonly they are not afraid to do things like DDOS to raise your costs of operations.

              • webninja 2 days ago |
                Someone has to pay off the $1 Trillion per year in Interest on the U.S. Federal Debt. Who’s that going to be? Either it’s them or it’s you. At least your grandparents got to live a nice life.
        • the__alchemist 2 days ago |
          This was present in the book Fall;, or, Dodge in Hell. (Published in 2019; takes place in the near future) Everyone had a personal AI assistant as you describe to curate the internet. A big part of the motivation was to filter the spam. A secondary affect was that the internet was even further divided into echo chambers.
        • Bluecobra 2 days ago |
          I get what you are saying but what is the end result when someone is so shielded from the outside when they decide to block everything that irks them and stuck in an echo chamber?

          What if the user is a conservative voter and considers anything counterpoint to their world view the worst part of the internet and removes all instances of it from their daily lives? Not to say that isn’t already happening but they are consciously making the choice, not some AI bot. I can see something like this making the country even more polarized.

          • echelon 2 days ago |
            Same as it ever was.

            Growing up as a southern evangelical before the internet, I can promise you that there has never been a modern world without filter bubbles.

            The concept of "fake news" is not new, either. There has been general distrust of opposing ideas and institutions for as long as I've been alive.

            And there's an entire publishing and media ecosystem for every single ideology you can imagine: 700 Club, Abeka, etc. Again, this all predates the internet. It's not going anywhere.

            The danger isn't strictly censorship or filter bubbles. It's not having a choice or control over your own destiny. These decisions need to be first class and conscious.

            Also, a sure fire way to rile up the "other team" is to say you're going to soften, limit, or block their world view. The brain has so many defenses against this. It's not the way to change minds.

            If you want to win people over, you have to do the hard, almost individual work, of respecting them and sharing how you feel. That's a hard, uphill battle because you're attempting to create a new slope in a steep gradient to get them to see your perspective. Angering, making fun, or disrespecting is just flying headfirst into that mountain. It might make you feel good, but it undoes any progress anyone else has made.

      • paxys 2 days ago |
        And yet someone is building all those super advanced computers and AI models. Someone is launching reusable rockets into space. Someone is building mRNA vaccines and F1 cars and humanoid robots and more efficient solar panels.

        The "smart people are all working in advertising" trope is idiotic. Just an excuse for people to justify their own laziness. There is an infinite number of opportunities out there to make the world better. If you are ignoring them, that's on you.

        • runlaszlorun 2 days ago |
          > And yet someone is building all those super advanced computers and AI models. Someone is launching reusable rockets into space. Someone is building mRNA vaccines and F1 cars and humanoid robots and more efficient solar panels.

          Which is true. But clearly far fewer people work doing that than in advertising or some other seemingly meaningless grunt work. And I’m including the technological plumbling work with many on this site, myself included, have depended upon to support themselves and/or a family.

          Which at best is effectively doing minor lubrication of a large and hard to comprehend system that doesn’t seem to have put society as a whole in a particularly great place.

      • segmondy 2 days ago |
        Which do you think is more important? Putting man on the moon or ecommerce? I reckon you been able to get on a device, see a biscuit ads, order one from foo.com and have it shipped to you. Think of how much tech it takes for that to happen, that is more tech than NASA built to send many to the moon, the internet, packet switching, routing, fiber optic, distributed systems, web servers, web browsers, ads, cryptography, banking online, and so on and so forth. We love to trivialize what is common, but that clicking on an ad is not an easy problem. Clicking on ads has generated enormous wealth in the world which is now bootstrapping AGI.

        Clicking on ads helped with our goal to AI today. Showing you the right ad and beating those trying to game it is machine learning heavy. When was the first time we started seeing spelling correction and next word suggestions? It was in google search bar. To serve the correct ads and deal with spam? heavy NLP algorithms. If you stop and think of it, we can drop a think line from the current state of LLMs to these ads click you are talking about.

        • batch12 2 days ago |
          Marketing manipulation and spam is less important.
        • Jgrubb 2 days ago |
          That last line kind of makes the point. Is any of that actually inspiring to a young child?
          • thedevilslawyer 2 days ago |
            It sure as heck is inspiring to a critical thinking adult. There's been enormous value added to all world's citizens.
            • commodoreboxer 2 days ago |
              Interesting. In my experience, advertisement and the incentives around it have led to the most devastatingly widespread removal of value in human culture and social connections that we've seen in this generation. Huge amounts of effort wasted on harvesting attention, manipulating money away from people, isolating and fostering extremism, building a massive political divide. And centralizing wealth more and more. The amount of human effort wasted on advertisement is staggering and shocking.

              I don't think your average adult is inspired by the idea of AI generated advertisements. Probably a small bubble of people including timeshare salesmen. If advertisements were opt-in, I expect a single digit percentage of people would ever elect to see them. I don't understand how anybody can consider something like that a net good for the world.

              How does non-consensually harassing people into spending money on things that don't need add value to all the world's citizens?

              • ryandrake 2 days ago |
                "Adding value" and "Generating wealth" are always the vague euphemisms that these guys fall back to when they try to justify much of today's economic activity. Adding value for who? Generating whose wealth? The answer is usually "people who are already wealthy." Of course, they'll downplay the massive funneling of wealth to these people, and instead point to the X number of people "lifted out of poverty in the 20th century" as if capitalism and commerce was the sole lifting force.

                I wish some of these people would think about how they'd explain to their 5 year old in an inspiring way what they do for a living: And not just "I take JSON data from one layer in the API and convert it to protobufs in another layer of the API" but the economic output of their jobs: "Millions of wealthy companies give us money because we can divert 1 billion people's attention from their families and loved ones for about 500 milliseconds, 500 times a day. We take that money and give some of it to other wealthy companies and pocket the rest."

              • mrtranscendence 2 days ago |
                > If advertisements were opt-in, I expect a single digit percentage of people would ever elect to see them.

                I mean, you'd see the same thing if paying for your groceries were opt-in. Is that also a net bad for the world? Ads do enable the costless (or cost-reduced) provision of services that people would otherwise have to pay for.

                • mostlysimilar 2 days ago |
                  > I mean, you'd see the same thing if paying for your groceries were opt-in.

                  Is that seriously the comparison you want to make here? Most of us think the world would be better if you didn't have to pay for food, yes.

                • commodoreboxer 2 days ago |
                  Ads are not charity. There is clearly a cost, otherwise they would lose money. They do not generate money out of thin air. "Generate" and "extract" aren't synonyms.

                  They do not enable any costless anything at all. They obfuscate extraction of money to make it look costless, but actually end up extracting significant amounts of money from people. Ad folks whitewash it to make it sound good, but extracting money in roundabout ways is not creating value.

                • johnnyanmac 2 days ago |
                  > you'd see the same thing if paying for your groceries were opt-in.

                  Groceries are opt-in. Until you realize you don't want to hunt and cook your own food, then you opt back in for survival.

                  UBlock origin + some subscriptions show I'd definitely would love to opt out of IRL ads.

                  >Is that also a net bad for the world?

                  World, yes. We have to tech to end food scarcity, but poor countries struggle while rich countries throw out enough food each day to feed said poor countries.

            • raxxorraxor 2 days ago |
              I think this is a rationalization of an enormous waste of work. The effects generating wealth are indirect. In that regard you could argue that betting is generating wealth too. Advertising is like a hamster wheel people have to jump onto if they want their place in the market.

              A similar amount of wealth would be generated if every advertised product would be represented by a text description, but we have a race to the bottom.

              There is advertising and advertising of course but most of advertising is incredibly toxic and I would argue that by capturing attention, it is a huge economic drain as well.

              Of course an AI would also be quite apt at removing unwanted ads, which I believe will become a reality quite soon.

              • mlyle 2 days ago |
                > A similar amount of wealth would be generated if every advertised product would be represented by a text description, but we have a race to the bottom.

                I fear statements like this go too far. I can't agree with the first part of this sentence.

                I feel this about both marketing and finance:

                They are valuable fields. There are huge amounts of activity in these fields that offer value to everyone. Removing friction on commerce and the activities that parties take in self-interest to produce a market or financial system are essential to the verdant world we live in.

                And yet, they're arms races that can go seemingly-infinitely far. Beyond any generation of societal value. Beyond needless consumption of intellect and resources. All the way to actual negative impacts that clutter the financial world or the ability to communicate effectively in the market.

            • swat535 2 days ago |
              > enormous value added to all worlds citizens

              This is quite a statement to make.

              Please elaborate on what enormous value has spam ads and marketing emails added to _world_ citizens?

              Unless of course by “world” you mean Silicon Valley venture capitalists..

          • runlaszlorun 2 days ago |
            > Is any of that actually inspiring to a young child?

            I think the answer is pretty clear in the fact that so many of them, bluntly speaking, just don’t give a shit any more. I absolutely don’t blame them.

        • ksynwa 2 days ago |
          They are clearly talking about one aspect of the industry which is the marketing part related to maximising engagement. It is not meant to be conflated with the e-commerce industry as a whole.
        • lukan 2 days ago |
          "Putting man on the moon or ecommerce"

          The comparison here is between moonlanding and advertisement. So I choose the moon obviously.

          Ecommerce can work just the same without LLM augmented personalized ads, or no advertisement at all. If a law would ban all commercial advertisement - people still need to buy things. But who would miss the ads?

        • digdugdirk 2 days ago |
          It took way too long to convince myself this wasn't satire. I still wish it wasn't.

          It made me realize that I think many computing people need more of a fundamental education in "hard" physics (statics, mechanics, thermodynamics, materials science) in order to better understand the staggering paradigm shift that occurred in our understanding of the world in the early 20th century. Maybe then they would appreciate how much of the world's resources have now been directed by the major capital players towards sucking the collective attention span of humanity into a small rectangular screen, and the potential impact of doing so.

        • mxkopy 2 days ago |
          In the grand scheme, what you’re talking about is very zero-sum, while stuff like making rockets is not. Uber vs Waymo is a good example of how adtech can only go so far in actually creating wealth.
        • commodoreboxer 2 days ago |
          I keep hearing the phrase "generate wealth" in regards to advertisement and from the mouths of startup founders, but in almost no other context. I'm not familiar with the economic concept of "wealth generation" or its cousin "creating value".

          Is the idea that any and all movement of money is virtuous? That all economic activity is good, and therefore anything that leads to more economic activity is also good? Or is it what it sounds like, and it just means "making some specific people very wealthy"? Wouldn't the more accurate wording be that it "concentrates wealth"? I don't see a huge difference in the economic output of advertisement from most other scams. A ponzi scheme also uses psychological tricks to move money from a large amount of people to a small amount of people. Something getting people to spend money isn't inherently a good thing.

          • runlaszlorun 2 days ago |
            > Is the idea that any and all movement of money is virtuous?

            Maybe this was your point, but this is built in to one of the definitions of GDP, isn’t it? Money supply times velocity of money?

            I’m no economist though I’m sure there are folks on here who are. But this seems like an unfortunate fact that’s built into our system- that as laypeople we tend to assume that ‘economic growth’ means an increase in the material aspects of our life. Which in itself is a debatable goal, but our GDP perspective means even this is questionable.

            For example, take a family of five living out in a relatively rural area. In scenario one, both parents work good paying remote tech jobs and meals, childcare, maintenance of land and housing, etc. are all outsourced. This scenario contrubutes a lot according to our economic definitions of GDP. And provides many opportunities for government to tax and companies to earn a share of these money flows.

            Then take scenario 2, you take the same family but they’re living off of the grid as much as possible, raising or growing nearly all their own food, parents are providing whatever education there is, etc. In this scenario, the measurable economic activity is close to zero- even if the material situation could be quite similar. Not to mention quality of life might be rated far higher by many.

            What rating an economy by the flow of its money does do is, and I’m not sure if this is at all intentional, is it does paint a picture of what money flows are potentially capturable either by government taxation or by companies trying to grab some percentage as revenue. It’s a lot harder to get a share of money that isn’t there and/or not moving around.

            Perhaps my take on economics is off base but, for me, seeing this made me realize just how far off our system is from what it could and should be.

            • commodoreboxer 2 days ago |
              GDP is a measure. I'm very much not an economist, but I am extremely skeptical that the health of an economy can be reduced to any single number. Goodheart's law and all.

              I concede that GDP is a good indicator, but I think you can have things that help GDP while simultaneously hurting the economy. Otherwise any scam or con would be considered beneficial, and it would make sense to mandate minimum individual spending to ensure economic activity. A low GDP inherently shows poor economic health, but a high GDP does not guarantee good health.

              In my mind (noting, again, that I'm no economist), economic health is defined by the effectiveness of allocating resources to things that are beneficial to the members of that economy. Any amount of GDP can be "waste", resources flowing to places where they do not benefit the public. As Robert Kennedy famously pointed out, GDP includes money spent on addictive and harmful drugs, polluting industries, and many other ventures that are actively harmful.[0]

              [0]: https://youtube.com/watch?v=3FAmr1la6w0

              • pixl97 2 days ago |
                Going back to the previous posters monetary velocity statement, if you have a trillion dollar GDP, but it's just two AI's bouncing money back and forth high speed while all the humans starve in the street your economy is "great" and totally awful at the same time. The one number has to be referenced against others like wealth inequality.
          • cooolbear 2 days ago |
            "Generate wealth" means "make somebody's number go up" i.e. allocating real resources/capital somewhere, with the assumption that 1. allocating that capital creates a net boon for society and 2. those who have "generated wealth" are wise and competent investors/leaders and their investments will create a net boon elsewhere. The first point is probably not especially true very often in contemporary tech (other than 'job creation') and is arguably not true for advertisement. The second point is not really a given at all and seems to be pretty consistently shown otherwise.
      • aswegs8 2 days ago |
        Financial incentives, huh?
      • tim333 2 days ago |
        >they are all working toward making you click on ads more.

        Not all. Also men on Mars, AGI, Fusion etc.

        • johnnyanmac 2 days ago |
          well the biggest tech companies with 100x's the computing power are. I'm sure if the collective FAANG all focused their funds and hardware on getting to Mars we'd see the seeds of terraforming in our lifetime.
        • wongarsu 2 days ago |
          Google has 5 times as many employees as NASA, SpaceX, ULA, Rocket Lab and Aerojet Rocketdyne have combined. Which is actually a lot closer than I would have expected. But still, just Alphabet is a lot bigger than the entire US space industry. Adding Fusion probably doesn't change the numbers much.
          • oceanplexian 2 days ago |
            It’s not the fact they have 5 times the employees that surprises me, it’s how little they accomplish.

            SpaceX is launching multiple rocket ships into orbit every week. Google is.. releasing webpage CSS tweaks like “New Google Sign In Page” and a couple second rate AI products no one asked for when they get caught with their pants down.

      • webninja 2 days ago |
        No wonder. Have you seen the Federal government’s debt? It’s existential. At least your grandparents got to live a nice life.

        usdebtclock.org

        • taneq a day ago |
          Correct me if I'm wrong, but isn't the "government debt" the sum total of currency issued, rather than being like the balance on a credit card? It's better thought of as a measure of the size of the economy being governed. What you want to keep an eye on is the total inflation-adjusted 'value' of the economy, if this starts reducing then that's not good.
    • lkdfjlkdfjlg 2 days ago |
      > It's a shame the author's passions are directed to (...)

      Now do Google.

    • Waterluvian 2 days ago |
      I’m unsurprised to see a lot of very shallow usage of AI. Most users don’t have a real use case for the tool.
    • joelthelion 2 days ago |
      > The technical part surprised me: they string together multiple LLMs which do all the work. It's a shame the author's passions are directed towards AI slop-email spam, all for capturing attention and driving engagement.

      In defence of that guy, he's only doing it because he knows it's what pays the bills.

      If we want things to change, we need to fix the system so that genuine social advancement is what's rewarded, not spam and scams.

      Not an easy task, unfortunately.

    • suoduandao3 2 days ago |
      I do believe that commodified attention is the most logical currency of a postascarce society, so best case... quite a lot.

      Note my 'best case' scenario for the near future is pretty upsetting.

    • thih9 2 days ago |
      Also from that blog post:

      > As founder, I'm always exploring innovative ways to scale my business operations.

      While this is similar to what other founders are doing, the automation, scale and the email focus puts it closer to spam in my book.

    • nojvek 2 days ago |
      > How much of our societal progress and collective thought and innovation has gone to capturing attention and driving up engagement, I wonder.

      Facebook + Instagram is $100B+ business, So is Youtube and Ads.

      An average human now spends about ~3h per day on their screens, most of it on social media.

      We are dopamine driven beings. Capturing attention and driving up engagement is one of the biggest part of our economy.

    • johnnyanmac 2 days ago |
      >How much of our societal progress and collective thought and innovation has gone to capturing attention and driving up engagement, I wonder.

      trillions, easily. People wanna sell you stuff, and they will pay to get your eyeballs. doesn't matter if it's to sell you a candy bar or to enlist you into the military. Even non-profit/charities need awareness. They all need attention and engagement.

    • smnrg 2 days ago |
      “The best minds of my generation are thinking about how to make people click ads.” — Jeff Hammerbacher
    • smsm42 a day ago |
      I know my recipient would hate getting an automated email, so as a start of our relationship, I'm going to send them an automated email designed to deceive them. I'm sure it's a beginning of a beautiful friendship.
  • rgavuliak 3 days ago |
    We already know the sales reps that bombard us with emails don't give a **, now they're just better at pretending.
  • mmaunder 3 days ago |
    I find dropping “I” at the start of a sentence to be a far greater trigger. No one is that busy. AI or not.
  • lobochrome 3 days ago |
    I get 2 of those per day now due to my LinkedIn profile.

    One issue I see is that it’s much harder to employ an LLM defensively (for filtering) than offensively.

    Welp.

    • unraveller 2 days ago |
      It's easy outside of gated platforms. The whole 'too hard to filter' hypothesis can be tested instantly by throwing OP's email body into an LLM

      Subject: Your Passion For Homelabbing is Contagious (Spam: 6/10)

      Report: Flattery to establish a connection. Quick shift to product promotion. friendly but lacks personalization. Specific reference promotes their solution. Calls for a response.

      So even if buddy buddy spam becomes pervasive you really only have to decide how accepting your are of obvious sales tactics in normal comms. It may end up that everyone having more nuanced spam filters forces humans to use those same tactics less in normal comms.

  • kstenerud 3 days ago |
    The sad thing is, that AI email campaign - while touted as a success - was actually a failure.

    Although he got more click-throughs to the top of his funnel, none of them are going to pass through to a conversion because once you reach his site, you realize that he's deceived you.

    That he doesn't even realize this is concerning...

  • DonHopkins 3 days ago |
    https://www.wisp.blog/blog/how-i-use-ai-agents-to-send-1000-...

    >"I knew that if recipients detected even a whiff of a generic, mass-produced message, they'd tune out immediately."

    Then don't brag about it on your blog! Sheez.

    (Ok, so technically he's not bragging about it on his blog, because it's probably just an LLM bragging about it on his blog for him, but that's the point!)

  • nathias 3 days ago |
    AI will also allow for better spam filters
  • praptak 3 days ago |
    This is bad news, because personalisation was a big advantage of spam filters.

    Everyone's spam filter is tuned differently from others', so spammers had a hard time beating this with automated messages. About the best they could do was adding random keywords in hopes of triggering someone's positive "not spam" trigger.

    Now spammers gain personalisation at scale, so this advantage is at risk.

  • rogual 3 days ago |
    I received a nice email the other day after one of my blog posts got posted on HN.

    It said:

    Hi -

    Just a note to say I'm a big fan of your writing. I always learn something and love your voice, which is hilarious and singular.

    Write a book!

    Best,

    {Name}

    {Link to sender's startup}

    {Link to sender's substack}

    New to writing online, it made me feel really good that someone enjoyed what I wrote and took the time to write and say so.

    After reading this piece, though, I went back and read it again, and I just don't know. It's not quite GPT's usual voice, but it is strangely non-specific.

    The startup is an AI startup, the person's Substack is full of generative AI illustrations, and they do seem like an AI fan, but reading their posts, they also seem like someone who's genuinely interested in preventing a dystopia.

    I suppose receiving encouraging emails from strangers is just another situation that'll have us looking over our shoulders now, on guard, trying to walk the line between naivety and paranoia.

    • j_maffe 2 days ago |
      Since there's no personalized content, it was probably just copy-pasted. I get the constant fear though.
      • rogual 2 days ago |
        I'm leaning towards genuine, to be honest. Just thought it was interesting that I even questioned it, which I wouldn't have done before.
        • superhuzza 2 days ago |
          Sorry but I'm fairly confident it's spam - they just want you to look at their startup/substack links. That's why they included the links at all.

          The compliment is a "foot in the door" so you don't immediately dismiss the email, and keep reading until the links.

          I get the same type of comments on all my blog posts. Here's 2 examples directly from my blog:

          "Awesome post! Keep up the great work! " (+ a link to their SEO service)

          "Nice website, love the theme! Can I use it?" (+ a link to their WP service).

    • squigz 2 days ago |
      I can't imagine leading a life this paranoid. There is practically no reason to suspect that email was generated by an LLM. This is like HN users who imply that some user comments are LLM generated...
      • corobo 2 days ago |
        Nice try, GPT
        • squigz 2 days ago |
          I once saw someone accuse another user of being an LLM because of a single word they used.
      • asddubs 2 days ago |
        having a website with a contact form will make you change your tune pretty quickly in regards to that
  • jonpo 3 days ago |
    "Deception through omission is still deception – people should be aware when they're interacting with an AI agent versus a human."
  • freehorse 3 days ago |
    I wonder if some "prompt injection defense" embedded in public blog posts could help identify such AI-generated spam.
  • shannifin 3 days ago |
    Even without AI, the message feels spammy.

    "Hey, love your work. random flattery What do you think about mine?"

    I've received a few messages like that before LLMs were around, just an annoying self-marketing technique.

  • account42 3 days ago |
    You received a SPAM email, did you report it as such? The AI part barely matters.
  • chucke1992 3 days ago |
    The future is now, old man.
  • willyt 3 days ago |
    I already delete unread emails like this that are written by humans. Unless there’s a specific bit of text in the email that’s generated by the new enquiry button on my website or someone has left an answerphone message then it’s deleted unread. There’s no way I have time to read every marketing email I receive like this guy.
  • lytefm 2 days ago |
    > Does this mean that I should private my GitHub-mirror to my personal blog, because this can become a common thing? I have removed my email from my GitHub-profile now, but they can probably get it from my Git-log anyway...

    It's possible to use a noreply.github.com linked to your username for making commits. And you can to change the authorship of past commits in your own repos with write access.

    I try to avoid give my email in a public and processable format whenever possible.

  • dav43 2 days ago |
    This why I’d consider shorting NVIDA at these prices. I get there are use case where it really adds value, but I think they are more limited to specific fields than people are acknowledging and forecasting.

    The general public doesn’t want or need it. They want to work less and get paid more.

  • mihaaly 2 days ago |
    I hate to say but this AI written oureach is strikingly similar to recruiter emails I received in the past 5-6 years about perfect matches so apparently my conclusion was right about that robots are working in the HR field. Carbon not silicon based back then. Actually this AI sounds much more intelligent bringing up realistic similarities. Those perfect match robots did not get beyond picking single keywords chosen of the dozens in LinkedIn account for declaring perfect match while the scope of the job and requirements were off by miles.

    Watch out recruiters, AI can do better than you! Not like I will like these unsolicited outreaches more, the exact number is zero, how many times I found these useful or relevant before when biorobots wrote and sent and administrated it in half or just few minutes, and I do not look forward having these now on mass scale when hundreds of AI could write thousands, flooding my email account, making it absolutely unusable.

  • unraveller 2 days ago |
    Can anyone explain why the SUBJECT LINE of the email was REDACTED in this blog post intro other than to give a false sense of being already drawn in to the email contents?

    I'm not after shallow interactions today and I would use it (much like a dumb spam filter) to judge a new sender's respect for my time expecting them to have stated their business with total upfront clarity, not mystery.

  • fbrusch 2 days ago |
    Could this be addressed with cryptography, digital ids and signatures? Imagine it were possible to add a signature that proves that I own some "human" identity (like a national id), or that I possess some scarce resource (like a github account with some level of activity) and that today I sent no more that 20 emails. If I want to conceal my identity, I can use zero knowledge proofs. If you don't sign this way, or if your daily email counter exceeds 100, your mail ends up as spam.
    • hippich 2 days ago |
      • fbrusch 2 days ago |
        yeah, hashcash is a very neat idea! but had problems like: how do you determine the threshold for the amount of work to be proven? there are values that makes it too expensive for a well intentioned human, and not enough for a bad intentioned spammer... moreover, it would induce economies of scale like it happens in bitcoin mining (spammers would invest in ASICs etc.) Signatures, on the other hand, allow to cheaply leverage other forms of "capital" (digital identity, github activity)
  • spion 2 days ago |
    To make this less spammy, the person sending out the emails could've instead used AI to filter through a smaller set of people where their product is likely to generate very high interest, based on a prompt containing the product description and perhaps a summary of the things the potential recipient blogged about. Then, they could've used that short list to write a set of _actually_ personalized outreach emails with high chance of impact.

    You could refine this in further iterations by also adding examples based on previous correct/incorrect interest predictions, thereby effectively reducing the amount of spam / making cold outreach suck less.

    There are different ways to use AI to achieve the same goals, some more responsible than others.

    • fl0id 2 days ago |
      and this would be better how?
      • spion 2 days ago |
        - Fewer people getting spammed

        - The people who receive the cold email are (increasingly) more likely to be at least somewhat interested

        - A human really wrote personalized emails, instead of trying to trick people into believing that

  • illwrks 2 days ago |
    Spare a thought for the gullible, children and teens, the elderly, those with restrained understanding, and those with English as a second language.

    They will all lose money, time and more with the coming wave of spam and fraud.

  • jpalomaki 2 days ago |
    I no longer bother to answer anything to cold emails or LinkedIn messages. Despate the personal tone, they seem to mostly driven by marketing automation tools.

    Maybe in future I will have my ”AI secretary” to answer those and have a discussion with the ”AI sales assistant”.

  • sirsinsalot 2 days ago |
    Always amazed and disappointed at humanity's ability to pollute everything it touches.
  • mirzap 2 days ago |
    There must be a new communication method for the upcoming AI age. Actual, person-to-person direct communication.

    Just as most of us ignore calls from unknown numbers, we may also default to ignoring emails from unknown senders in the future. This could lead to a reluctance to send emails, as they might be perceived as "unknown" to the recipient.

  • poulpy123 2 days ago |
    That's exactly why I'm not afraid of AGI. We will be drowned by AI generated crap long before
  • neom 2 days ago |
    You know, I've done startups for over 20 years now. Operated almost all the orgs, but spent the most time in go to market/marketing. I'm building an incubator/accelerator thing in Canada and I'm starting it from scratch, so it's basically doing another startup (something I swore I'd never do, fml).

    Hadn't touched marketing for ~5 years, as I said I know the org well so I thought it will take me about a month to get the next 6 months of marketing built and automated. How wrong was I. 7 days later, the full marketing org is running, at a decent scale, on autopilot, for a year, and I don't know if/when I'd need to hire someone into marketing.

    Marketing has not fundamentally changed, but it's changed such that one individual could fully operate the fundamentals. Personally I love it, I'm sure others are going nuts.

  • shzhdbi09gv8ioi 2 days ago |
    Not quite AI, but I been getting targeted spam from shitty startups and some job offers for the last couple years to my github commit email. It is scraped from github as I use it nowhere else.
  • bilsbie 2 days ago |
    We’ve lived with billions of spam emails for decades. I don’t know if the method of writing the emails matters much.

    Spam is spam?

    • unraveller 2 days ago |
      Spam can now be hyper-personalized to your latest online data points such that the inattentive might not expect certain things to be fakeable the first time they see it.

      Some people struggle with learning new ways of controlling for scams but it's never going away, just something they must consider more and use better tools to solve.

  • meiraleal 2 days ago |
    If he were really annoyed he wouldn't have marketed Wisp CMS?
  • sixhobbits 2 days ago |
    the blog he links to is clearly AI slop too. Even the LLM he used to write it agrees that what he's doing is unethical.

    > At the same time, we need to establish guidelines around transparency and consent for AI-driven communications at scale. Deception through omission is still deception – people should be aware when they're interacting with an AI agent versus a human.

    This is clearly pissing in the pool. I've gotten so much value from people who have made their emails public with a 'if you're curious or learning feel free to email me' (e.g. patio11) and I've long had the invitation in my HN profile too.

    Nasty for people to abuse this to extract value for the few weeks/months it takes people to realise what's happening and make themselves harder to contact.

  • paxys 2 days ago |
    "Personalized" spam generated from templates has been a thing forever. I've received plenty of such emails from recruiters highlighting my past experience, projects and what I'd be a good fit for. LLMs make them a bit more real, but overall the game hasn't really changed.
  • justanother 2 days ago |
    I get one of these every week or two. If someone says they're "impressed with the work you're doing" at my family S-corporation that magically W2-ifies my contract gigs, it's kind of a giveaway.
  • jordanpg 2 days ago |
    Perhaps a new signature technology can be used to prove (or at least lend credence to) human authorship?

    Something like a marriage of a digital signature with a captcha: the message has a digital signature of the sender that can be verified with their public key, but it is somehow verifiable that the particular signature provider only does the signature if a human being completes the (difficult AI-proof) captcha.

    Something like this approach can at least mitigate the mass AI email problem, although the one-off AI emails are unlikely to be slowed by this approach.

  • SergeAx 2 days ago |
    > Does this mean that I should private my GitHub-mirror to my personal blog, because this can become a common thing?

    You definitely should mark this email as spam so this cannot become a common thing.

  • tamimio 2 days ago |
    I do receive phishing emails; some of them are so well-crafted that I'm sure they've fooled some people out there. To the point where I've created a folder called "nice_try_phishing" where I collect them for further investigation. For example, one email was sent before I renewed a domain as a reminder to renew, with legitimate links to the domain registrar except for the action link. They had the registrar's domain name too, but with a different, very similar TLD. Another one is a "failed email delivery," and they did the research about which service I'm using to mimic such an automated message, with loaded links.

    Whether they are AI or not, I have no idea, but sometimes, and recently in emails, I purposely make a typo or grammar mistake to add some "human" touch to it, knowing that an AI will always type a perfect one.

  • aerotwelve 2 days ago |
    > Have you ever received an email that felt so personalized, so tailored to your interests and experiences, that you couldn't help but be intrigued?

    Did he use an LLM to write the blog post too?

  • daft_pink 2 days ago |
    I regularly get emails that start with “I hope this note find you well”, and I always assume they used chatgpt.
  • ocodo 2 days ago |
    > Not a single person detected they were corresponding with an AI. Some asked how I found their email, but no one questioned the authenticity of the messages.

    ... shall we tell him?

  • sherwinx a day ago |
    I open emails because I believe you spent time on me, which is precious, so I reply. If an AI generates cold emails, there's no real time spent, making personalization feel deceptive, which is bad for user experience.

    AI spam emails will definitely happen on a large scale in the future, but on the optimistic side, we'll have AI assistants to read every email. Personalization won't really matter; it only matters if the marketed service is genuinely useful to me. In the future, if I have a need, my AI will find the best solution by considering many options. Previously, this was time-consuming, but now AI ensures we find the most suitable solution.

    No fancy marketing will be needed because AI will filter most of them. In the end, marketers will find that the most efficient way to market is to honestly list out your service/product specs, as AI will compare them. On the other hand, for things I'm not sure I need, AI will help judge if they are indeed useful to me, regardless of how fancy the email/call is. If they are, it will facilitate cooperation; if not, it will skip them.

    Therefore, marketing may still have a role: to help you discover things you aren't fully aware you need, and AI will help you decide if you really need them.