• tunesmith 10 hours ago |
    What a wonderful story. I've also had the experience of someone in a writing community being able to name a story I read in my youth. It is really a unique feeling to have a distant, hazy memory made real through the wisdom of another, similar to finding the perfect word for an uncertain feeling you've never been able to give voice to.
    • WillAdams 9 hours ago |
      Same for me: I couldn't recall the title or author of _Hit the Bike Trail!_, but someone on the /r/cycling subreddit had the thought of looking through _Publishers Weekly_ archives to identify it.

      Still much appreciated; I gifted it to some cousins of mine who are the age I was when I read it.

  • rossdavidh 9 hours ago |
    "You will even notice how it neatly covers everything I could remember – giving equal weight to each data point and deftly joining them all together.

    And again, what ChatGPT here had to offer was utterly – absolutely – false.

    Like a fluent and practised (but unwise) liar it had contrived an account that fitted only the available information."

    A fundamental flaw in modern "AI" is that it has no model of reality against which it checks its (or anyone else's) words. It isn't even lying; it has no concept of the difference between truth and lies, and so it cannot be made to stop lying, because it isn't lying at all: it's just spewing language that sounds good.

    This makes it a passable tool for sales, but an extremely poor tool for anything that requires accuracy of any kind. It's not that it is trying to be accurate and failing (in which case further work might be expected to improve it); it is not attempting to be accurate at all. Indeed, the concept of accuracy is nowhere in its architecture. Vocabulary, grammar, syntax, yes; accuracy or truth, no. It will not get better at being truthful as it is worked on; it will only get better at being persuasive.

    • ben_w 7 hours ago |
      > A fundamental flaw in modern "AI" is that it has no model of reality against which it is checking its (or anyone else's) words. It isn't even lying; it has no concept of the difference between truth and lies, therefore it cannot be made to stop lying, because it isn't even lying, it's just spewing language that sounds good.

      For the early ones, the reality against which they were checking their words was their training corpus. Now they've also got RAG and search.

      In the context of "find a story that fits this description", even the original training corpus was pretty effective. Not perfect, but pretty good… for stuff that was well represented.

      If all Transformer models could do was vocabulary, grammar, and syntax, they wouldn't have ever been usable as more than a very light form of autocomplete.

      Even word2vec got analogies (man is to woman what king is to queen, etc.).
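
      For illustration, the classic analogy falls out of simple vector arithmetic. A minimal sketch using gensim and its downloadable Google News vectors (my choice of library and model, and the download is large, roughly 1.6 GB):

          import gensim.downloader as api

          # Pretrained word2vec vectors from the gensim-data repository
          vectors = api.load("word2vec-google-news-300")

          # king - man + woman ~= queen
          print(vectors.most_similar(positive=["king", "woman"],
                                     negative=["man"], topn=1))
          # the top result is typically 'queen'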

      > it will only get better at being persuasive

      I kinda agree, unfortunately: it will *also* keep getting better at being persuasive, and this is indeed a bad thing. "LGTM" is easier to test for than "correct" — but to me, that's an example of humans having limits on how well we can model reality, how well we can differentiate between truth and lies, etc.

  • cschmidt 8 hours ago |
    I also had a science fiction book from my childhood that I kept trying to find. Eventually I did find the title and author through a chat with ChatGPT, unlike in this case. (It was Midworld by Alan Dean Foster, if anyone is curious. I'm not sure why that particular book stuck in my head.)
  • CobrastanJorji 8 hours ago |
    It's funny. As soon as he described his problem, I suspected ChatGPT would enter the picture. It's often significantly better than search engines for finding the name of even an obscure work from a description, so of course folks on book-finding subreddits would use it a lot.

    But the author's absolutely right to warn that it also regularly fails us, and the author's also right to celebrate the folks who are trained specifically in finding this sort of information in ways that the automated tools can't replicate yet.

    • sumtechguy 8 hours ago |
      They were also pointing out an interesting thing that ChatGPT does: it treats everything as relevant. Whereas the librarian who found the book systematically discarded possible 'facts' and substituted others (goblins->demons) to work out what was going on. Not sure any AI does this currently.
      • ben_w 7 hours ago |
        ChatGPT does do that for me, when I'm using it for tasks like David Allen Green's book hunt.

        This has yet to help. If it can find it, it (so far) has not needed to change the details I provided; if it doesn't know, it will change them to something thematically similar but still not find what I wanted (and if I insist on requiring certain story elements that it had changed, it will say something along the lines of "no results found" but with more flowery language).

      • jcutrell 6 hours ago |
        I suspect that, given a reasonable prompt, it would absolutely discard certain phrases or concepts for others. I think it may find it difficult to cross-check and synthesize, but "term families" are sort of a core idea of high-dimensional embeddings: related terms sit close together in embedding space. I'm not super well versed on LLMs, but I do believe this would be represented in the models.
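
        A minimal sketch of what I mean, assuming the sentence-transformers package and the all-MiniLM-L6-v2 model (my choices for illustration, not anything ChatGPT is known to use):

            import numpy as np
            from sentence_transformers import SentenceTransformer

            model = SentenceTransformer("all-MiniLM-L6-v2")
            words = ["goblin", "demon", "bicycle"]
            # Normalised vectors make a dot product equal cosine similarity
            vecs = model.encode(words, normalize_embeddings=True)

            print(np.dot(vecs[0], vecs[1]))  # goblin vs demon: relatively high
            print(np.dot(vecs[0], vecs[2]))  # goblin vs bicycle: noticeably lower
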
    • mrob 7 hours ago |
      I think a more robust approach would be to restrict the generative AI to generating summaries of book texts. First summarize every book (this only has to be done once), and then use vector search to find the most similar summaries to the provided summary. Small mistakes will make little difference, e.g. "goblin" will have a similar embedding to "demon", and even entirely wrong information will only increase the number of books that have to be manually checked. Or better yet, develop an embedding model that can handle whole books at once and compare the vectors directly.

      Perhaps somebody with more compute than they know what to do with could try this with Project Gutenberg as a proof of concept.
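
      A rough sketch of the retrieval half of that pipeline, assuming the summaries already exist and using sentence-transformers as a stand-in embedding model (the titles and summaries below are made up):

          import numpy as np
          from sentence_transformers import SentenceTransformer

          model = SentenceTransformer("all-MiniLM-L6-v2")

          # Hypothetical pre-generated summaries, one per book
          summaries = {
              "Book A": "A boy outwits a demon to save his village's harvest.",
              "Book B": "A detective pursues a jewel thief across 1920s Europe.",
          }
          titles = list(summaries)
          index = model.encode([summaries[t] for t in titles],
                               normalize_embeddings=True)

          def closest_books(description, k=5):
              # Normalised vectors mean the dot product is cosine similarity
              q = model.encode([description], normalize_embeddings=True)[0]
              scores = index @ q
              return sorted(zip(titles, scores), key=lambda p: -p[1])[:k]

          # A half-remembered "goblin" should still land near the demon book
          print(closest_books("a story about a boy and a goblin"))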

    • metalliqaz 7 hours ago |
      It's also interesting that years of trying on Twitter and Reddit failed, but asking on Bluesky succeeded. I'm certainly not claiming that Bluesky is some kind of great leap forward compared to Twitter. But it could be that, being a new service, it just isn't as crowded with bots, spam, and BS -- thus allowing the signal to come through.
    • jimnotgym 7 hours ago |
      A friend sent me a photo of a page from a book with a great piece of writing. He didn't know the book. I OCR'd the page and pasted it into ChatGPT. It led me on a merry dance where it stated unequivocally that it was a book it couldn't have been. It then started making up books from similar authors. Every time I said, 'there is no such book', it apologised and then made up a new book. It was like talking to a salesman trying to bullshit its way through a pitch.

      I put a short piece of it into Google Books and it found it! I asked ChatGPT about the actual book and it claimed to know it!

      It was a book called Blood Knots by Luke Jennings. I bought it, and before I read it I saw the friend who had sent me the excerpt, and gave it to him. A year later I saw the same book, shelf-soiled, in an independent store. It was worth the wait; it was a great read.

      I also saw David Allen Green (author of the above) ask his question on Bluesky on my first day using it. Somehow I feel part of this story.

  • jjulius 8 hours ago |
    Reminds me of an old radio broadcast or some kind of audio recording that I've been trying to find for ~25 years. My mom had listened to it when she was younger, and had somehow managed to get it onto cassette tape for us to listen to when we were kids. It was some kind of Christmas story we'd listen to while decorating cookies, a kind of crazy tale that you never heard anywhere else, involving the towns of "Twinkle Twankle" and "Twonkle Twonkle" and other crazy wordplay like that. Unfortunately, that's the only unique bit that I remember, save for recalling a melody or two here and there and the timbre of the narrator's voice, neither of which help in tracking it down.

    I'd love the satisfaction of tracking it down some day just like this person did.

    • Reubachi 8 hours ago |
      You'd be happy to know that googling "Twankle Twonkle Twonkle" will yield the result you're looking for :) I just found a few that look to be exactly what you describe.
      • teruakohatu 7 hours ago |
        I am not the OP, but nothing Google served up to me resembled an old radio show, even after instructing Google to search for the exact phrase.
      • jjulius 7 hours ago |
        No it doesn't[1]. Dropping the quotes just yields a bunch of results with those words, but nothing resembling what I'm looking for. I was very confident when I posted my initial comment that this was the case - after all, those four words are the only thing I recall, and therefore are what I have frequently Googled for. :)

        [1]https://imgur.com/a/BYM3d1r

  • inanutshellus 7 hours ago |
    I have one of these "40 year quests" too, but it's a cartoon.

    Maybe you clever folk will be more ingenious than I've been.

    The story goes like this:

    An old king's life is upended one day when a beautiful, mysterious woman appears and says she'll grant youth and her hand in marriage to the man who completes some challenges. The only challenge I remember is that she sets up three cauldrons: one of boiling oil, one of milk, and one of ice-cold water.

    The king wants to see it work first, so he points to a random little boy and orders him to jump into the cauldrons or be put to death.

    The boy leaps into each cauldron and there's a terrible delay on the last one. The cold water cauldron even freezes over.

    The boy breaks out of the last cauldron and has been transformed into a strapping young man.

    The king, seeing proof that it works, decides to jump into the cauldrons. However, when he hops out, he's still an old man.

    The woman announces that the magic only works once, and she and the stable boy walk away together, arm-in-arm.

    ...

    I've searched for it online a fair bit but I've never found it.

    Some details from my memory:

      * The cartoon was very short (less than 30 minutes, probably closer to 10 or 15)
      * It had no dialog, only sound effects and music.
      * A woman's voice narrated it. I can still hear her.
      * Now that I'm grown, I see it having a Slavic or Russian aesthetic.
      * The woman had black hair and a long white dress.
      * The king was very short with a big white beard.
      * The boy, when he turns into a man, has pointy boots and shoulder pads. :)
      * Probably made between 1975 and 1985
      * Part of an anthology (many cartoons on one VHS tape... ours had been run so much that it started to skew and stretch the image)
    
    ...

    In my mind, it's aesthetically very similar to an heirloom that my grandmother made and I assume that's why I've always wanted to find it.

    ChatGPT and the intertubes in general haven't been very useful.

    • brazzy 7 hours ago |
      > Now that I'm grown, I see it having a Slavic or Russian aesthetic.

      Maybe it was in fact produced in Russia or one of the former Warsaw Pact countries? They had their own animation tradition, and some of it made its way to the West in translation (like the Czech series The Little Mole), but I can easily see how such works could be very obscure to English-language searches.

    • niccl 7 hours ago |
      This sounds slightly familiar. Were you in the UK at the time? There was a series on BBC during the children's watching time (pre-6:00 pm, I'd guess, about the same time that Belle and Sebastian [0] showed) that had Slavic fairy/folk tales. Not quite cartoons, but definitely a cartoonish vibe. A little like The Story Teller [1], but much earlier.

      Sadly, I can't recall any more about it than that, but maybe it'll help that you're not alone. And of course this could be nothing at all related to what you're after.

      [0] https://en.wikipedia.org/wiki/Belle_and_Sebastian_(1965_TV_s... [1] https://en.wikipedia.org/wiki/The_Storyteller_(TV_series)

    • romanhn 7 hours ago |
      Oh hey, that totally sounded familiar :) Pretty sure this is from Konyok Gorbunok (The Little Humpbacked Horse), a 1975 Soviet cartoon based on a famous Russian fairy tale. The bit you're describing is at 1:07:30 of https://youtu.be/fKc22eSL1gA.

      This doesn't quite fit several of the points you remember (very much in line with the post!), so perhaps it was some other edition of that same story.

      EDIT: so, I just plugged your description into ChatGPT and it gave the exact same answer, including an identical timestamp! Weird.

  • jimnotgym 6 hours ago |
    Sidenote: David Allen Green, the author of this blog, is a brilliant writer on constitutional law in the UK. It is quite a subject since Britain doesn't have a written constitution. He was a wonderful guide through the Brexit chicanery.
    • freedomben 5 hours ago |
      For people like me wondering how it's possible not to have a written constitution: https://en.wikipedia.org/wiki/Constitution_of_the_United_Kin...

      tldr: it is actually (literally) "written", but not all in one place; it's scattered throughout various documents.

      • jfengel 2 hours ago |
        The US Constitution is "written" but is essentially incomprehensible without a stack of Supreme Court decisions the size of Mount Everest. (Perhaps literally.) None of the words mean what you would think they mean.
  • chrisguilbeau 6 hours ago |
    I'll add my experience to the mix. I was in Thailand in the early 2000s, and while we were eating at a night market I heard a song that sounded like something by the Beatles or another 60s band. I started looking for what it might have been when I got back to the States a year later; did it say "Doctor Jones"? Friends and Google were no help. Anyway, 20 years later I asked ChatGPT and it came back with "New York Mining Disaster 1941" by the Bee Gees... Simply incredible. I suppose there will be fewer of these decades-long searches now!
  • empath75 4 hours ago |
    Somewhat hilariously, I just asked ChatGPT 4o with web access, and it found this blog post, and then authoritatively repeated verbatim the same wrong answer he'd been given by ChatGPT on Reddit.

    https://chatgpt.com/share/678061d1-7920-8000-9b72-0b2d7ea60e...

    I gave up trying to get it to fix its answer.

  • ThinkingGuy 3 hours ago |
    For anyone who has a similar quest (trying to find semi-remembered media from one’s youth, based on vague details), r/tipofmytongue on Reddit is a great resource. People there often succeed where AIs fail.
  • jnsie 3 hours ago |
    Lovely and well-told story, though I'm not sure what the following excerpt has to do with anything:

    > The second is that nowadays the real problem perhaps is not with Christmas decorations staying up too late, but with them going up too early, and with shops selling Christmas wares and playing Christmas music well before Advent, let alone Christmas