Hi HN! I was mesmerized by the Claude Computer Use reveal last week and was specifically impressed by how well it navigated websites. This motivated me to create Cerebellum, a library that lets an LLM take control of a browser.

Here is a demo of Cerebellum in action, performing the goal “Find a USB C to C cable that is 10 feet long and add it to cart” on amazon.com:

https://youtu.be/xaZbuaWtVkA?si=Tq9lE6BXv9wjZ-qC

Currently, it uses Claude 3.5 Sonnet’s newly released computer use ability, but the ultimate goal is to crowdsource a high quality set of browser sessions to train an open source local model.

Check out the MIT-licensed repo on GitHub (https://github.com/theredsix/cerebellum) or install the library from npm (https://www.npmjs.com/package/cerebellum-ai).
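
A minimal session looks roughly like this (treat the exact class names and signatures as a sketch; the repo README has the canonical API):

    import { Builder } from 'selenium-webdriver';
    import { AnthropicPlanner, BrowserAgent } from 'cerebellum-ai';

    (async () => {
      const driver = await new Builder().forBrowser('chrome').build();
      try {
        await driver.get('https://www.amazon.com');
        const planner = new AnthropicPlanner({ apiKey: process.env.ANTHROPIC_API_KEY as string });
        const goal = 'Find a USB C to C cable that is 10 feet long and add it to cart';
        const agent = new BrowserAgent(driver, planner, goal);
        // Cerebellum takes over navigation until the goal is reached
        await agent.start();
      } finally {
        await driver.quit();
      }
    })();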

Looking for feedback from the HN community, especially on: What browser tasks would you use an LLM to complete? Thanks again for taking a look!

  • theredsix 7 days ago |
    OP here, happy to answer any questions you may have!
    • hugs 7 days ago |
      Thanks for using Selenium!
    • philonoist 6 days ago |
      What do you think about this tool changing the landscape of software testing?

      I think it could change the role of SDETs and other quality assurance jobs currently dominated by Selenium and Playwright. I mean, think about it: it could halve the number of testers needed to do the same work.

      • theredsix 6 days ago |
        I think if you added additional function calls to detect visual bugs or broken flows, tools like this could automate much of QA, in addition to flagging non-intuitive UI design patterns.
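
        For example, a rough sketch of an extra tool definition using Anthropic's tool-use schema (the name and fields here are hypothetical):

          const reportVisualBugTool = {
            name: 'report_visual_bug',
            description: 'Report a visual defect or broken flow seen in the current screenshot.',
            input_schema: {
              type: 'object',
              properties: {
                severity: { type: 'string', enum: ['low', 'medium', 'high'] },
                description: { type: 'string' },
              },
              required: ['severity', 'description'],
            },
          };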
    • david_shi 5 days ago |
      Any plans for a python version?
      • theredsix 5 days ago |
        It's on the roadmap! A few other priorities are higher at the moment, but we'd be excited to see a PR for it in the meantime.
      • theredsix 3 days ago |
        Update: We had a contributor start a Python port; stay tuned!
  • Jayakumark 7 days ago |
    Can this work with local models?
    • theredsix 7 days ago |
      Not at the moment, since you need a local model with strong segmentation capabilities (returning (x, y) coordinates), and none exist ATM. We hope to train one in the future; one of Cerebellum's roadmap items is the ability to save your sessions as a training dataset.
      • digdugdirk 7 days ago |
        Do you not think it could work with a shim layer that handled the browser interaction via code and Selenium?
        • theredsix 6 days ago |
          Selenium works on WebDriver v4, and the screenshot is transferred as an image by the WebDriver protocol. Perhaps modifying the DOM before triggering the screenshot and then reverting the changes could work. PRs are welcome!
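
          A rough sketch of that approach with selenium-webdriver (the helper and the marking scheme are just an illustration, not part of Cerebellum):

            import { WebDriver } from 'selenium-webdriver';

            // Outline interactive elements, take the screenshot, then revert the DOM.
            async function annotatedScreenshot(driver: WebDriver): Promise<string> {
              await driver.executeScript(`
                document.querySelectorAll('a, button, input, select, textarea')
                  .forEach((el, i) => {
                    el.dataset.mark = String(i);
                    el.style.outline = '2px solid red';
                  });
              `);
              const png = await driver.takeScreenshot(); // base64 PNG via the WebDriver protocol
              await driver.executeScript(`
                document.querySelectorAll('[data-mark]').forEach((el) => {
                  el.style.outline = '';
                  delete el.dataset.mark;
                });
              `);
              return png;
            }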
      • Jayakumark 6 days ago |
        Any idea how Sonnet does this? Is the image annotated with bounding boxes (on text boxes etc.) and their coordinates before being sent to Sonnet, with Sonnet responding with a box name or a coordinate back? Or is SAM2 used to segment everything before sending it to Sonnet?
        • theredsix 6 days ago |
          They don't discuss this at all on their blog beyond "Training Claude to count pixels accurately was critical." My speculation is that it's either explicit tokenizer support with spatial encoding (similar to how single-digit tokenization improves math abilities) or extensive pretraining, like Molmo.
  • its_down_again 7 days ago |
    > but the ultimate goal is to crowdsource a high quality set of browser sessions to train an open source local model.

    Could you say more about this? I see that it's an open-source implementation of a planning loop with Selenium and Claude's computer use, but where will the "successes" of browser sessions be stored? Also, will it include an anonymization feature to remove PII from authenticated use cases?

    • theredsix 7 days ago |
      The next step will be adding functionality to convert and save a BrowserStep[] into a portable file format, plus additional conversion functions to turn those files into .jsonl that can be fed into the transformers library etc. For the PII piece, there are no current plans to introduce anonymization features, but I'm open to suggestions.
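
      Roughly something like this (the BrowserStep fields shown are a placeholder shape, not the final format):

        import { writeFileSync } from 'fs';

        // Placeholder shape for a recorded step; the real type will likely differ.
        interface BrowserStep {
          goal: string;                   // natural-language goal for the session
          screenshot: string;             // base64 PNG captured before the action
          action: string;                 // e.g. 'click', 'type', 'scroll'
          coordinate?: [number, number];  // (x, y) target, when applicable
          text?: string;                  // typed text, when applicable
        }

        // One JSON object per line, ready to feed into a fine-tuning pipeline.
        function sessionToJsonl(steps: BrowserStep[], path: string): void {
          const lines = steps.map((s) =>
            JSON.stringify({
              prompt: { goal: s.goal, screenshot: s.screenshot },
              completion: { action: s.action, coordinate: s.coordinate, text: s.text },
            })
          );
          writeFileSync(path, lines.join('\n') + '\n');
        }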
  • 0x3331 7 days ago |
    Very cool!
  • imvetri 4 days ago |
    You don't need an LLM.

    Build an interface to build a knowledge graph.

    Nodes contain words; verbs are actions, nouns are past verbs. Action is movement in space.