For the past three years I have been working on a Go port of the Plausible Analytics dashboard.

vince is a single-binary, single-user, multi-website system with zero runtime dependencies.

Key features:

- Automatic TLS
- Outbound link tracking
- File download tracking
- 404 page tracking
- Custom event tracking

And so much more: basically everything you see on the Plausible dashboard except funnels and custom properties.

You can use vince as a drop-in replacement for Plausible for personal websites.

The goal is to make the Plausible dashboard easily accessible to people who like to self-host.

All features not related to the dashboard are non-goals and hence not implemented.

Full dashboard demo hosted on a $6 Vultr instance: https://demo.vinceanalytics.com/share/vinceanalytics.com/v1/...

  • cebert 6 days ago |
    If you haven’t checked it out yet, Serverless Website Analytics is a great solution for this too. It’s easy to deploy and very inexpensive to run. I’ve been using it and am quite happy with it. https://github.com/rehanvdm/serverless-website-analytics
    • gernest 6 days ago |
      Interesting, I just checked the readme. Very similar but looks like it only works with AWS and has a lot of moving pieces.

      How do you deal with location data? Do you purchase a MaxMind DB license or use their free version?

      The free versions of city data from both MaxMind and DB-IP are missing city geo ID values, rendering the city data useless in many cases.

      With vince, I had to embed and index the whole city dataset from the GeoNames database to work around this.

      • reincoder 5 days ago |
        > How do you deal with location data? Do you purchase a MaxMind DB license or use their free version?

        > The free versions of city data from both MaxMind and DB-IP are missing city geo ID values, rendering the city data useless in many cases.

        I work for IPinfo.

        I think you might find my conversation with Goatcounter's dev interesting: https://github.com/arp242/goatcounter/issues/765

        I pitched our free country database to him because of MaxMind's EULA issues. MaxMind does not permit distribution of the database and requires end users to use their own token. Moreover, they actually charge thousands of dollars when you distribute the "free" database with commercial intent.

        Now, we have a free IP-to-Country database that we offer under a straight CC-BY-SA 4.0 license without an EULA. It is free, comes with daily updates, has full accuracy, and you can even commercially redistribute the database (provided you give us attribution).

        I understand we do not have a free city database to offer, nor is our database lightweight because we have full accuracy. But you can check it out if you are interested. We do have a version with ASN (ISP) information as well.

  • 8ig8 6 days ago |
    Matomo is another one…

    https://matomo.org/

  • rgbrgb 6 days ago |
    > Full dashboard demo hosted on a $6 Vultr instance: https://demo.vinceanalytics.com/share/vinceanalytics.com/v1/...

    404 page not found

  • aaronbrethorst 6 days ago |
    Looks interesting. What sort of memory requirements does it have and how does it persist data?
    • gernest 4 days ago |
      The demo, which survived the HN hug of death, is running on a $6 Vultr instance.

      RAM: 1 GB

      Storage: 25 GB

      Bandwidth used so far: 3.6 GB

      So you can successfully deploy vince on low-spec servers, depending on your expected traffic.

  • colesantiago 5 days ago |
    Great project, keep it up. It's good to see competition in this space.

    Plausible gets crazy expensive on their hosted option, and it's complex to set up yourself (needs Elixir + high memory requirements).

    If Vince gets 1:1 parity with plausible and has the option to use clickhouse, I'll consider moving a few servers and people I know over.

    Love that Vince is also a single binary as well.

  • just-tom 5 days ago |
    The screenshot on your homepage looks very similar to plausible's https://plausible.io/ which is also open-source analytics software. Is it based on it? What are the differences?

    Edit: Just noticed the feature comparison in the readme.

    • dewey 5 days ago |
      Also, Plausible is almost stock Tailwind UI elements, including the default color, so many sites look like that.
  • brokegrammer 5 days ago |
    This is amazing! I self host Plausible but don't like depending on Clickhouse and Postgres because they're annoying to upgrade.

    What kind of database is this using though? I don't know enough Go to figure it out from the source.

    • tricked 5 days ago |
      I checked the go.mod and it seems to be importing a module named pebble by cockroachdb; I assume that's where everything is stored.

      https://github.com/cockroachdb/pebble

    • akshayshah 5 days ago |
      It uses Pebble, the key-value store that backs CockroachDB.
      • colesantiago 5 days ago |
        Just saw this notice:

        > WARNING: Pebble may silently corrupt data or behave incorrectly if used with a RocksDB database that uses a feature Pebble doesn't support. Caveat emptor!

        Slightly worrying for running this in prod for now if there is a risk of silent data corruption, but hopefully in a few years Vince will have drivers for Postgres / ClickHouse.

        • rickette 5 days ago |
          This just warns about using Pebble with an existing RocksDB database, which isn't the case here. Pebble powers CockroachDB, which is a Serious Database.
          • kamikazechaser 2 days ago |
            And Ethereum's state store. Which is an even more serious "database".
        • dangoodmanUT 5 days ago |
          Reread the sentence: it says that applies if you mix it with RocksDB (another database with a compatible file format).
  • t0mas88 5 days ago |
    It says GDPR compliant and no cookies on the project page. How are unique visitors calculated? And I'm assuming it can't link conversions to campaigns without some cookie-alternative?
    • withinboredom 5 days ago |
      No idea, but generally, a bloom filter would get you there without any identifying information being stored. The counts would merely be estimates at that point, not exact values.
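      A rough Go sketch of that idea (illustrative only, not what vince or Plausible actually do; names and sizes like m=2^20 bits and k=4 hashes are made up): insert each visitor key into a fixed-size Bloom filter and estimate the unique count from the fill ratio, so nothing reversible is ever stored.

        package main

        import (
            "fmt"
            "hash/fnv"
            "math"
            "math/bits"
        )

        // bloomCounter approximates the number of distinct visitors without
        // storing anything that maps back to an individual IP address.
        type bloomCounter struct {
            bitset []uint64 // m bits
            m, k   uint64   // number of bits, number of hash functions
        }

        func newBloomCounter(m, k uint64) *bloomCounter {
            return &bloomCounter{bitset: make([]uint64, (m+63)/64), m: m, k: k}
        }

        // Add sets k positions derived from the visitor key (e.g. an IP address),
        // using double hashing: pos_i = h1 + i*h2.
        func (b *bloomCounter) Add(key string) {
            h := fnv.New64a()
            h.Write([]byte(key))
            x := h.Sum64()
            h1, h2 := x>>32, x&0xffffffff
            for i := uint64(0); i < b.k; i++ {
                pos := (h1 + i*h2) % b.m
                b.bitset[pos/64] |= 1 << (pos % 64)
            }
        }

        // Estimate returns the approximate number of distinct keys added,
        // using n ≈ -(m/k) * ln(1 - X/m) where X is the number of set bits.
        func (b *bloomCounter) Estimate() float64 {
            var set uint64
            for _, w := range b.bitset {
                set += uint64(bits.OnesCount64(w))
            }
            return -(float64(b.m) / float64(b.k)) * math.Log(1-float64(set)/float64(b.m))
        }

        func main() {
            c := newBloomCounter(1<<20, 4)
            c.Add("203.0.113.7")
            c.Add("198.51.100.23")
            c.Add("203.0.113.7") // repeat visit barely moves the estimate
            fmt.Printf("approx. unique visitors: %.0f\n", c.Estimate())
        }

      The error of the estimate depends on how you size m and k relative to the expected traffic.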
    • pdyc 5 days ago |
      Most likely through one-way IP hashing bounded by a time duration. If you have UTMs in your URL then it can track that, otherwise probably not.
    • beeb 5 days ago |
      At least for Plausible, they state this (https://plausible.io/blog/google-analytics-cookies):

      > Instead of tagging users with cookies, we count the number of unique IP addresses that accessed your website. Counting IP addresses is an old-school method that was used before the modern age of JavaScript snippets and tracking cookies.

      > Since IP addresses are considered personal data under GDPR, we anonymize them using a one-way cryptographic hash function. This generates a random string of letters and numbers that is used to calculate unique visitor numbers for the day. Old salts are deleted to avoid the possibility of linking visitor information from one day to the next. We never store IP addresses in our database or logs.
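      A minimal Go sketch of the scheme described in that quote (illustrative only; the exact inputs Plausible mixes in, such as the user agent, and how they rotate and discard the salt are simplified here):

        package main

        import (
            "crypto/rand"
            "crypto/sha256"
            "encoding/hex"
            "fmt"
        )

        // dailySalt would be regenerated (and the old one discarded) every 24 hours.
        var dailySalt = mustRandomSalt()

        func mustRandomSalt() []byte {
            s := make([]byte, 32)
            if _, err := rand.Read(s); err != nil {
                panic(err)
            }
            return s
        }

        // visitorID derives the per-day identifier used to count unique visitors.
        // Inputs are simplified; the real scheme may mix in more fields.
        func visitorID(domain, ip string) string {
            h := sha256.New()
            h.Write(dailySalt)
            h.Write([]byte(domain))
            h.Write([]byte(ip))
            return hex.EncodeToString(h.Sum(nil))
        }

        func main() {
            fmt.Println(visitorID("example.com", "203.0.113.7"))
        }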

      • chrismorgan 5 days ago |
        > Since IP addresses are considered personal data under GDPR, we anonymize them using a one-way cryptographic hash function.

        Um... hashing IPv4 addresses, even with salt, does literally nothing to anonymise (assuming the output space is at least ~32 bits, which I think is safe to assume): they’ll still be PII. IPv6 addresses I’m not so confident about; maybe it would be sufficient for some parts, but it’s definitely inadequate for some concerns.

        (For IPv4, enumerating all four billion inputs is so completely practical that “one-way” is nonsense.)

        I’m almost certain this is legal theatre.

        • gizzlon 5 days ago |
          hm.. are you saying they need scrypt or something similar?
          • kadoban 5 days ago |
            The problem, in general with hashing IP addresses (especially ipv4) is that there's not that many of them.

            If I tell you the value is either 1 or 2, but I hashed it with sha256 to make it secure, that's bullshit, right? You can just hash both and see which it is.

            Same concept applies regardless of the hash algo, and still applies if you have more than 2 possible values; 4 billion or so possible IPv4 addresses is _not_ that many values to a computer.

            Other common places this problem occurs is with any other restricted set of values, eg phone numbers and email addresses (most are at like 5 domains and are easy to guess/know).
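            To make that concrete, here is a toy Go sketch (hypothetical names and values): given a leaked salt and a stored hash, brute-forcing the whole IPv4 space recovers the address in at most a few billion SHA-256 calls, which is minutes to hours on commodity hardware.

              package main

              import (
                  "bytes"
                  "crypto/sha256"
                  "encoding/binary"
                  "fmt"
                  "net"
              )

              // recoverIP brute-forces every possible IPv4 address until one hashes
              // to the target value, which is why a salted hash of an IPv4 address
              // is not meaningfully anonymised once you hold the salt and the hash.
              func recoverIP(salt, target []byte) (net.IP, bool) {
                  var buf [4]byte
                  for i := uint64(0); i <= 0xffffffff; i++ {
                      binary.BigEndian.PutUint32(buf[:], uint32(i))
                      ip := net.IP(buf[:])
                      sum := sha256.Sum256(append(append([]byte{}, salt...), ip.String()...))
                      if bytes.Equal(sum[:], target) {
                          return ip, true
                      }
                  }
                  return nil, false
              }

              func main() {
                  salt := []byte("leaked-daily-salt")
                  stored := sha256.Sum256(append(append([]byte{}, salt...), "203.0.113.7"...))
                  if ip, ok := recoverIP(salt, stored[:]); ok {
                      fmt.Println("recovered:", ip) // recovered: 203.0.113.7
                  }
              }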

          • chrismorgan 5 days ago |
            The “PII” label is taint that is probably impossible to dispel completely/perfectly, and difficult to dispel sufficiently (and deanonymising is an arms race).

            Lossless techniques do nothing to dilute that taint.

            Lossy techniques are necessary to get anywhere, such as disregarding certain bits of the address, or Bloom filters.

        • kadoban 5 days ago |
          If what they're doing is using a secure salt and then throwing the salt away once a day, that _might_ be doing something.
          • chrismorgan 5 days ago |
            What I understand they’re doing is storing the salt in one place, a set of hashed IP addresses in another place, then daily trashing the lot after counting the number of elements in the set and storing that.

            Information-theory-wise, this is no different to just storing the actual IP addresses (and deleting them daily after tallying, as before). It does mean that you need to obtain two things instead of just one, but if you get access to it all, it’s straightforward to reverse the lot (though computationally a little expensive), and easy to check a single value for a match.

            The technique may be considered reasonable effort at protecting against casual abuse, but it’s not technically effective of itself, and it doesn’t stop the data from being PII. The important aspect is that the PII is deleted within 24 hours. My personal opinion is that the hashing part should probably be considered snake oil and whitewash, at least for what they’re claiming—I don’t say it’s useless, but it definitely doesn’t do what they’re touting it for.

            Unless they’re actually keeping the hashed values for some reason after one day, and associating them with other records? In which case, disregard part of what I say, it’s obviously better than persisting IP addresses long-term! But also it’s extremely dubious to call that anonymisation as they do, because you can so often tie things together, behavioural patterns and such, to deanonymise. It’s frighteningly effective.

            • tingletech 5 days ago |
              If you throw away the daily random salt (but keep the obscured IP address), how can you check a single value for a match the next day?
              • chrismorgan 5 days ago |
                Refer to my understanding in the first paragraph—I don’t think they’re retaining the hashed values after a day either? If they are, sure, apply my last paragraph, you can’t do a single match any more. (But the whole thing would still definitely be susceptible to deanonymisation.) But at the very least, it’s easily reversible for up to 24 hours.
        • Semaphor 5 days ago |
          One way if you have a salt? Enumerating won’t help, you need to know the salt, which gets deleted.

          That said, the whole IP thing is weird to me. Not only are we allowed to log IPs directly for security reasons, we even *have* to log IPs in certain cases (newsletter subscriptions).

          • kadoban 5 days ago |
            > That said, the whole IP thing is weird to me. Not only are we allowed to log IPs directly for security reasons, we even have to log IPs in certain cases (newsletter subscriptions).

            The point of designating something as PII isn't that we then _never_ store or use it, it's to carefully consider if we actually need it or not (and what protections we can add for the values we do need to store/use).

            We're meant to stop the practice of just collecting and storing all data, without consideration for the harms that causes.

        • jszymborski 5 days ago |
          What matomo does is mask parts of the IP address (you choose how much).
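          Roughly like this in Go (a sketch; the number of masked bits is configurable in Matomo, this just shows the idea):

            package main

            import (
                "fmt"
                "net"
            )

            // maskIP zeroes the trailing bits of an address before it is stored,
            // e.g. masking 16 bits turns 203.0.113.7 into 203.0.0.0.
            func maskIP(ip net.IP, maskedBits int) net.IP {
                if v4 := ip.To4(); v4 != nil {
                    return v4.Mask(net.CIDRMask(32-maskedBits, 32))
                }
                return ip.Mask(net.CIDRMask(128-maskedBits, 128))
            }

            func main() {
                fmt.Println(maskIP(net.ParseIP("203.0.113.7"), 16)) // 203.0.0.0
            }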
        • alkonaut 3 days ago |
          Couldn't this be done with a Bloom filter in such a way that (in exchange for a small error rate) you'd not keep any individual hashes?
    • awongh 4 days ago |
      As a side consideration, judging by the varying opinions in response to this question, it’s not really clear what constitutes PII (personally identifiable information).

      When I researched this topic it was strange to me that no one seems to agree. Is it just armchair internet answers? Or is the letter of the law actually ambiguous? What are the real-world consequences of using this when it possibly violates GDPR? Or, what are the chances there would be consequences?

      • t0mas88 a day ago |
        PII in the general public's definition is a name, address etc. The confusion in these discussions comes from European regulations defining browsing behaviour as personal data, which makes GDPR applicable to it. Even if that browsing behaviour data is in layman's terms anonymous "and thus not PII" it is considered personal data under EU rules.
  • drchaim 5 days ago |
    this is great, congrats!
  • lovegrenoble 5 days ago |
    Is it a Plausible clone? https://plausible.io
    • __jonas 5 days ago |
      From the Readme:

      > vince started as a Go port of plausible with a focus on self hosting.

  • samdung 5 days ago |
    This is great. I'm def going to use it.

    Minor bug: the "See Live Demo Dashboard" URL points to the wrong place.

  • pdyc 5 days ago |
    Looks exactly like Plausible; maybe change the UI a bit to avoid legal issues.
    • carlosjobim 5 days ago |
      I was going to say that it looks exactly like BeamAnalytics, and now I'm confused as to who's copying whom...
      • dewey 5 days ago |
        • huhtenberg 5 days ago |
          It's not just the looks that are the same. The UX / mechanics are way too similar too, e.g. how you can apply filters (by URL, by referrer, by browser, etc.) to narrow down the stats view.
          • rkuodys 5 days ago |
            I would say the idea is pretty much as follows: "Let's do it so users already know how to use it before we are big", and once you're big enough you can set the trend. But at the beginning it's just not worth it and highly risky.
      • serial_dev 5 days ago |
        I'm wondering when copying becomes just following industry best practices...

        Twitter, Threads, Mastodon, Bluesky all look the same. Project management apps all reuse the same UI patterns. The "AI" logo looked pretty much the same for all companies for a while. Video sharing websites all use YouTube's layout. Forums like Reddit and HN share quite a lot in their looks.

        If you want to display website analytics, you will want to show the most important metrics at a glance, you'll need graphs showing visitors over time, top sources and pages... There is only so much you can do to display those and have users understand what's going on on your website.

    • NelsonMinar 5 days ago |
      What legal issues are you imagining?
  • notRobot 5 days ago |
    The dashboard demo isn't working :(
  • zoidb 5 days ago |
    My go-to self-hosted GA alternative is goatcounter https://www.goatcounter.com. It would be interesting to know what advantages vince has over it.
    • james-bcn 5 days ago |
      Oh I like that main dashboard. Very simple.
      • TravisPeacock 5 days ago |
        If you like that, there is https://www.piratepx.com/ which is even more minimal (though less data). I also built something even MORE minimal (only API calls): https://github.com/teamcoltra/ninjapx but I'm certainly not recommending it. It is super simplistic (also the readme is embarrassing).
    • huhtenberg 5 days ago |
      Does it allow filtering the visited-page list by a specific referrer and vice versa?
      • zoidb 4 days ago |
        Yes, it does if I understand what you mean. You can see the traffic distribution (what paths were accessed) broken down by referrer.
  • cpursley 5 days ago |
    How would y’all go about building analytics into a professional marketplace type of app where you can provide the professional with their own profile page stats (in a reliable way)?
  • Oras 5 days ago |
    If you don't have plans to offer saas, what are you trying to achieve from it?

    I mean, it is quite nice to have binary installation hosted on a single VPS, but will you support it?

    • rrr_oh_man 5 days ago |
      FOSS lives!
  • rasso 5 days ago |
    Does this work on your average 10,-/month shared hosting server? If so, it might really be „for everyone“. Otherwise, we are stuck with matomo.
    • diggan 5 days ago |
      > Does this work on your average 10,-/month shared hosting server?

      Since they usually offer software via cPanel and the like, it seems unlikely unless you give it lots of time, first for the project to get popular enough to be on the admin panels' radar, and second for them to integrate it.

      Besides, do people really pay 10 USD/month for shared hosting? Sounds really expensive when you can grab VPSes for half that price and run whatever software you want, not just what they've packaged for you. I guess ongoing maintenance is included in that price, but it still sounds kind of expensive for what you get.

      • rasso 5 days ago |
        I don‘t know… around here (Germany), that‘s pretty common. No need to manage anything, no usage-based cost, … my favourite is https://all-inkl.com. OG no-bs hosting for boring tech.
  • manishsharan 5 days ago |
    I think the reason some of us continue using Google Analytics is its demographic data. That information is not available elsewhere as far as I know, which I admit is not a lot.
  • paradite 5 days ago |
    Not sure why I would use this over Plausible CE on docker. Does it consume less memory/CPU?

    Also I am pretty sure Plausible CE doesn't limit number of sites / events, unlike what's listed in "Comparison with Plausible Analytics".

  • written-beyond 5 days ago |
    Code quality is pristine, really great job! I see that you've used protocol buffers, can you expand on why? I am aware of the benefits it offers but I think it adds a bit of mental overhead initially due to it being an additional type system you have to understand.

    Also why are you using pebble exactly? I was interested in seeing how you're managing your geo databases because that's usually the most mind numbing part of handling analytics if your cloud provider doesn't add that information into the request header already. However, I can't understand why you'd use pebble over something like sqlite.

    • gernest 5 days ago |
      Thanks,

      > Why protocol buffers?

      They are very good for defining API boundaries; in vince we only use them for configuration and admin structures. We use Roaring Bitmap based storage, so the fundamental units persisted are bitmap containers.

      > Also why are you using pebble exactly?

      Well, vince is write-heavy and any LSM-based key-value store would have been fine. It happens that Pebble is the best option for us.

      Also, we don't use transactions (we batch writes and use snapshots for reads), combined with the fact that we rely on Pebble's batch Merge API.

      The merge API allows us to do efficient updates. Since we only store bitmap containers, when doing an update we just do a container union of the observed values for a key.

      Bitmap unions are pretty fast and efficient.
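      Roughly, the merge operator looks something like this (a simplified sketch using github.com/cockroachdb/pebble and github.com/RoaringBitmap/roaring, not the exact code in vince):

        package store

        import (
            "github.com/RoaringBitmap/roaring"
            "github.com/cockroachdb/pebble"
        )

        // bitmapMerger accumulates a union of every bitmap value written to a key.
        type bitmapMerger struct {
            acc *roaring.Bitmap
        }

        func (m *bitmapMerger) union(value []byte) error {
            other := roaring.New()
            if err := other.UnmarshalBinary(value); err != nil {
                return err
            }
            m.acc.Or(other) // container-level union, no read-modify-write needed
            return nil
        }

        func (m *bitmapMerger) MergeNewer(value []byte) error { return m.union(value) }
        func (m *bitmapMerger) MergeOlder(value []byte) error { return m.union(value) }

        func (m *bitmapMerger) Finish(includesBase bool) ([]byte, error) {
            return m.acc.ToBytes()
        }

        // Open configures Pebble with a bitmap-union merge operator, so writers
        // can call db.Merge(key, bitmapBytes, pebble.Sync) instead of transactions.
        func Open(path string) (*pebble.DB, error) {
            return pebble.Open(path, &pebble.Options{
                Merger: &pebble.Merger{
                    Name: "bitmap-union",
                    Merge: func(key, value []byte) (pebble.ValueMerger, error) {
                        m := &bitmapMerger{acc: roaring.New()}
                        if err := m.union(value); err != nil {
                            return nil, err
                        }
                        return m, nil
                    },
                },
            })
        }

      With that in place, recording a new observation for a key is just another Merge of a small bitmap; Pebble folds the unions together during compaction and reads.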

      I hope I covered all your questions.

      • written-beyond 5 days ago |
        It answered them alright, but really opened a few hundred more. I appreciate your time!
  • skeptrune 5 days ago |
    Cool that there are so many of these now. Currently self hosting plausible and it does seem quite barebones. Will have to give this a shot!
  • gonafr 5 days ago |
    How does this compare to umami (https://umami.is/)?
    • arcastroe 5 days ago |
      I'm also interested in this. They seem to have very similar UI
  • vextea 5 days ago |
    There seems to be some mentions of selling licenses (and pricing) in the source. What are the plans around that?

    https://github.com/vinceanalytics/vince/blob/f0c2c3cc38cbd8c...

    • gernest 5 days ago |
      When I started working on vince, I thought I could bootstrap a sustainable business; that was about 3 years ago.

      My dream for a business is practically dead now. That snippet is a relic of the early days of vince and I will remove it.

      I am currently looking for work, and will be maintaining vince as usual (I do a lot of open source stuff) since I also use it with my hobby projects.

      I'm struggling to find remote roles now, since remote now means Remote US or Remote EU, and I'm stuck here in Tanzania.

      So, don't worry, I also use vince so I will keep hacking on it.

      • vextea 5 days ago |
        Makes sense, wish you the best of luck!
  • sira04 5 days ago |
    Looks great!

    I found a small bug: if you click Expand in the Top Pages section, the Time on Page column has NaNs.

    Dark mode for the dashboard and showing realtime current visitors in the <title> would be great.

  • cchance 5 days ago |
    "see live dashboard" button on main page just... goes to the top of the page lol
  • slyall 5 days ago |
    Going through the docs, I find you don't actually have a section about how to make your website use it. I mean, I can work it out and it'll be obvious to proper front-end developers, but at no point do you say:

    "Add the following line to you page source to send data to Vince"

  • lomkju 4 days ago |
    Nice Work! Very easy to install and use.

    I deployed this on our cloud (excloud.in) in less than 2 mins.

    Anyone can use the below k8s manifest to deploy it to their k8s cluster. Just change the admin password before doing so.

    https://gist.github.com/lomkju/90fe7500d8cf854bf3b7c2f26aa58...

    • gernest 4 days ago |
      Thanks, that is a very nice setup.

      Does it always pull the latest vince image?

      Just FYI, we also have simple helm charts, and the repository is hosted on https://vinceanalytics.com/charts

      • lomkju 4 days ago |
        > Just FYI, we also have simple helm charts, and the repository is hosted on https://vinceanalytics.com/charts

        Oh cool, didn't see that in the docs.

        > Does it always pull the latest vince image?

        Yes, I haven't specified any tag, so it should default to latest.

  • kukkeliskuu 4 days ago |
    This is great. For me the commercial Plausible is just not plausible. I have a site with 2M page views, with most of the pages cached, which keeps the server costs minimal; I pay around 50 USD per month. I don't get much revenue from the site. I want to show visit counts on the site. For 2M page views, Plausible (with the stats API) would cost 189 USD per month, quadrupling my costs.
    • gernest 4 days ago |
      This is one of the reasons I created vince.

      For reference, the demo is hosted on a $6 Vultr instance; over the last 3 days it handled about 11.9K pageviews with 4.3K unique visitors.

      I have just checked the vultr dashboard.

      Bandwidth = 3.37 GB, vCPU usage = 1% (yep, one percent), current charges = $1.06.

      The majority of the bandwidth is outgoing data serving the dashboard.

      I carefully designed vince to be extremely efficient for web analytics workloads.

      Please give vince a try.

    • openplatypus 4 days ago |
      Hi, just FYI, Wide Angle Analytics (my product) will cost you between 30 and 90 EUR per month for 1M and 10M page views respectively.

      There are many web analytics providers with surprisingly high prices.

      We are cheaper and are even planning on creating a free tier by making smart use of resources and avoiding overpriced cloud providers.

    • maeil 3 days ago |
      2M page views and not much revenue does sound like a choice. I have no affiliation to Plausible but 2M pageviews per month has such high revenue potential that if you'd monetize it (which frankly is the logical assumption they'd operate on), $189/month would be a trivial expense.
      • kukkeliskuu 2 days ago |
        You are partly correct, although it really depends. My site is in Finnish, which makes Google AdSense the only really viable option, unless I want to spend lots of time finding affiliate marketing revenue. That pays approximately 1.3 euros per 1000 page views, and does not work well with mobile page views on my site. I get 2M page views in high season; now it is off-season and visit counts are lower. I really get only around 20 euros per day in ad revenue, which makes around 600 euros per month. A 200 euros per month cost is not "trivial". I have some other revenue, but that is small as well. Header bidding companies are interested in working with you if you have 5M+ page views per month. In the longer term, I think there is potential, but sure, I have made the decision to make the site foremost a public service, and revenue is secondary.
        • maeil a day ago |
          That's an incredibly low CPM, especially since your visitors will almost solely be from Finland, right? It's not uncommon to earn ~$1k/month purely from ads on blogs with ~100k unique visitors/month. So your CPM seems >20 times less than that.

          Do the better paying ad networks all reject you solely because of the language?

  • QuasarLogic 4 days ago |
    can we compare it with Shynet? https://github.com/milesmcc/shynet

    Shynet is similarly self-hostable and has a tiny footprint.