Show HN: A Better Log Service
138 points by williebeek 15 hours ago | 83 comments
Hello everyone, there are many log services available and this is my attempt at a better one.

Most online logging tools feature convoluted UIs, arbitrary mandatory fields, questionable AI/insights, complex pricing, etc. I hope my application fixes most of these issues. It also has some nice features, such as automatic Geo IP checks and public dashboards.

Although I've created lots of software, this is my first open source application (MIT license); I hope the self-hosting tutorial is sufficient! Most of my development career has been with C#, NodeJS and PHP. For this project I've used PHP (8.3), which is an absolute joy to work with. The architecture is very scalable, but I've only tested up to a few billion logs. The current version has been used in production for a few months now. Hope you enjoy/fork it as you see fit!

  • that_guy_iain 14 hours ago |
    This looks very interesting!

    My suggestion for the self-hosting is to create docker images and use docker-compose. The self-hosting currently is a bit of effort to setup.
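    To illustrate the suggestion, a minimal docker-compose.yml for the stack the author describes elsewhere in this thread (PHP app, MySQL for metadata, ClickHouse for logs, Redis as a buffer) might look something like this; service names and image tags are assumptions, not taken from the project:

```yaml
# Hypothetical sketch only: images, versions, and env vars are
# illustrative and would need to match the project's actual setup.
services:
  app:
    image: php:8.3-apache
    ports:
      - "8080:80"
    depends_on: [mysql, clickhouse, redis]
  mysql:
    image: mysql:8.4
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - mysql-data:/var/lib/mysql
  clickhouse:
    image: clickhouse/clickhouse-server:24.8
    volumes:
      - ch-data:/var/lib/clickhouse
  redis:
    image: redis:7
volumes:
  mysql-data:
  ch-data:
```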

    I also wonder if PHP is a good language for this. For the UI, yea that's fine and makes sense. But for the log processor that's going to need to handle a high throughput which PHP just isn't good at. For the same resources, you can have Go doing thousands of requests per second vs PHP doing hundreds of requests per second.

    • majkinetor 14 hours ago |
      No benefit to using Go over C#, IMO, and I am also baffled by the switch
      • that_guy_iain 14 hours ago |
        I just used Go as an example; any compiled language would be better.
        • withinboredom 14 hours ago |
          I highly suspect it wouldn't be better in brainfuck...
    • majkinetor 14 hours ago |
      It uses Clickhouse, though, which should be extremely fast for this.
      • that_guy_iain 14 hours ago |
        Yes. But PHP still needs to process it before it goes to Clickhouse. PHP is the bottleneck.
        • axelthegerman 13 hours ago |
          If that "bottleneck" is thousands of requests per second then it doesn't really matter for smaller deployments does it? (Which seems to be the target audience and not FAANG)

          I'm not a big fan when folks call out languages as bottlenecks when they have no proof on the actual overhead and how much faster it would be in another language.

          • that_guy_iain 13 hours ago |
            To tweak a PHP deployment to handle hundreds of requests per second (a very realistic load for basic logging on a mid-sized application), you're looking at a very beefy server setup.

            Most PHP deployments barely reach a hundred per server.

            And since this is an open source project, it should be designed to handle basic production workloads, which it could, but it'll cost you a bunch more than if you used the correct language.

            > I'm not a big fan when folks call out languages as bottlenecks when they have no proof on the actual overhead and how much faster it would be in another language.

            Honestly, I thought it was so obvious that an interpreted language is not good for high throughput endpoints that it didn't need to be proven. I also thought it was obvious that a logging system is going to handle lots and lots of data.

            It could be easily proven by doing a bunch of work but obviously there is no point in me proving it.

            • withinboredom 11 hours ago |
              I rebuilt durable-functions in PHP. Durable Functions is a C# actor-model runtime. My PHP implementation meets or exceeds the C# version on the same benchmarks.

              > It could be easily proven by doing a bunch of work but obviously there is no point in me proving it.

              Because you cannot prove it... :) I wrote this post a few years ago that actually spurred some improvements in C#... so here you go: https://withinboredom.info/2022/03/16/yes-php-is-faster-than...

              • that_guy_iain 11 hours ago |
                No, it's because I've got more productive things to do than redo benchmarks that have already been done repeatedly. The only way to get PHP to the same speed as compiled languages for web requests is to use experimental tooling.

                I notice your benchmarks are over 10 runs?! That's not a good sample size. And even more importantly, it's not in the same context.

                Sure, once you compile PHP and have it running, it'll run fast. But PHP has a very specific usage which is web applications. It's been well known for years that PHP's performance issues come from it being an interpreted language that has to be reinterpreted every time, but if you compile it and run it repeatedly, it can perform extremely well. Which is why every performance-minded PHP nerd is working on experimental tools to do exactly that.

                • mrngm 9 hours ago |
                  I'm sure you've heard of PHP's opcache by now. That's not experimental and actually caches the interpreted code in memory for faster subsequent execution.
                  • withinboredom 9 hours ago |
                    Yes, and in 8.4, uses an actual IR for machine code generation and is pretty cool!
                  • that_guy_iain 9 hours ago |
                    Yes, it makes it faster, but it does not deal with the core performance issues, which is why RoadRunner, FrankenPHP, etc. exist.
                    • Implicated 8 hours ago |
                      I see, you're just trolling at this point.
                • withinboredom 9 hours ago |
                  > That's not a good sample size.

                  Like I said in the blog post, if I tell you the sky is blue and you don't believe me; run them yourself. FWIW, C# is faster now for that particular use case. Also, like I mentioned in a previous blog post ... which one would you rather maintain:

                  - https://github.com/TheAlgorithms/C-Sharp/blob/master/Algorit... -- merge sort in C# 130 lines

                  - https://www.w3resource.com/php-exercises/searching-and-sorti... -- merge sort in PHP 60 lines

                  PHP is often far more concise than C#, and many other languages. I code more in Go than C# or PHP these days, but even Go has its limitations where it would be easier to express in PHP than Go. There are even certain classes of algorithms that are butt-ugly in Go but quite pretty in PHP.

                  PHP is still my favorite language, even though I hardly get to use it these days.

                  > PHP has a very specific usage which is web applications.

                  Originally, yes. But it outgrew that about 10 years or so ago. It's much more general purpose now.[1][2]

                  [1]: https://nativephp.com/ -- desktop applications in php

                  [2]: https://static-php.dev/ -- build self-contained, statically compiled clis written in php

                  • neonsunset 8 hours ago |
                    You do realize that you are comparing two different implementations with different type systems that use different abstractions? Clearly you can't be serious. So, unless you are being intentionally misleading, this raises questions about the quality of "PHP solution" that is being worked on.
                    • withinboredom 7 hours ago |
                      I'm being serious in using it as an example of maintainability/expressiveness. The difference is deliberate, not accidental. I've written 15 PHP lines that would be hundreds of lines in C#, and I've written 15 lines of Go that would be hundreds of lines of PHP. Every language has its own strengths and weaknesses and levels of complexity. PHP fits into a sweet spot (IMHO) of low-levelness and high-levelness, but it is often not seriously considered due to its reputation in the 00's.
                  • phillipcarter 8 hours ago |
                    The C# to PHP comparison is not fair, as the link you gave for the C# code uses abstractions to support "arrays" that could also be backed by file storage. An equivalent translation of the PHP code is about 60 lines as well, before applying any code golfing (and including comments and whitespace).
            • mrngm 9 hours ago |
              Well, looking at our bespoke logging system in PHP handling some 15-20+ million log entries per day on a virtualized dual-core system... it's mostly disk I/O on the underlying MySQL database (currently duplicating to Clickhouse, where we'll eventually store everything). And that is central application logging for about 100 servers (think syslog), some 400 "microservices" (parts of a larger application), and a handful of backend systems.

              We'll run out of disk space long before PHP becomes a bottleneck here.

            • Implicated 8 hours ago |
              > To tweak a PHP deployment to handle hundreds of requests per second (a very realistic load for basic logging on a mid-sized application), you're looking at a very beefy server setup.

              There's just no way that you're at all familiar with PHP of the last 10 years to think this is true.

              > It could be easily proven by doing a bunch of work but obviously there is no point in me proving it.

              Prove it. Please, show me the context and environment you think PHP would struggle to serve "hundreds of requests per second". I'd venture a bet that a plain Laravel installation on the cheapest digital ocean droplet would top this and Laravel is "slow" in relation to vanilla PHP.

        • robocat 8 hours ago |
          Are you sure PHP is the bottleneck?

          The author writes that Clickhouse takes 0.1s for an example request: https://news.ycombinator.com/item?id=42666703

          PHP would need to be adding 0.1s CPU time for processing the request for the PHP code to become the bottleneck. That seems unlikely.

          • thunky 3 hours ago |
            That 0.1s is to write 4k rows to clickhouse, not per (log write) request.
    • withinboredom 14 hours ago |
      PHP is arguably the best solution here. If a log ingestion process breaks everything, no other logs are harmed (a default shared-nothing architecture). Using something like Go, C#, etc, it might be "faster" but less resilient -- or more complex to handle the resiliency.

      > But for the log processor that's going to need to handle a high throughput which PHP just isn't good at.

      I'm sorry, but wut? PHP is probably one of the fastest languages out there if you can ignore frameworks. It's backed by some of the most tuned C code out there and should be just about as fast as C for most tasks. The only reason it is not is the function call overhead, which is by far the slowest aspect of PHP.

      > you can have Go doing thousands of requests per second vs PHP doing hundreds of requests per second.

      This is mostly due to nginx and friends ... There is FrankenPHP (a frontend for PHP running in Caddy, which is written in Go), which can easily handle 80k+ requests per second.

      • that_guy_iain 14 hours ago |
        I'm going to have to also reply with, sorry but what?!

        PHP is one of the fastest interpreted languages. But compiled languages are going to be faster than interpreted ones pretty much every time. It loses benchmarks against every language. That's not to mention it's slowed down by the fact that it has to rebuild everything per request.

        As a PHP developer for 15+ years, I can tell you what PHP is good at and what PHP is not good at. High throughput API endpoints such as log ingestion are not a good fit for PHP.

        Your argument is that if it breaks, it's fine. Yeah, who wants a log system that will only log some of your logs? No one. It's not mission critical, but it's pretty important to keep working if you want to keep your system working. And in fact, in some places it is a legal requirement.

        • withinboredom 14 hours ago |
          > It loses benchmarks against every language.

          Every language loses benchmarks against every other language. That's not surprising. Since you didn't provide a specific benchmark, it's hard to say why it lost.

          > High throughput API endpoints such as log ingestion are not a good fit for PHP.

          I disagree; but ultimately, it depends on how you're doing it. You can beat or exceed compiled languages in some cases. PHP allows some low-level stuff directly implemented in C and also the high-level stuff you're used to in interpreted languages.

        • hipadev23 12 hours ago |
          How have you worked with PHP for 15 years and still have absolutely no idea how it works, or even its baseline performance metrics?
          • Implicated 8 hours ago |
            He's not being truthful. Literally, there's no way that what he's saying is true. Either about various aspects of modern PHP or his experience with it.
    • williebeek 13 hours ago |
      Thanks for the tip, I will check if inserting rows with Go is any faster. For reference, inserting a log takes three steps: first the log data is stored in a Redis Stream (memory), then a number of logs are taken from the stream and saved to disk, and finally they are inserted in batches into ClickHouse. I've built it so you can take the ClickHouse server offline without losing any data (it will be inserted later).

      For reference, moving about 4k logs from memory to disk takes less than 0.1 second. This is a real log from one of the webservers:

      Start new cron loop: 2024-12-18 08:11:16.397...stored 3818 rows in /var/www/txtlog/txtlog/tmp/txtlog.rows.2024-12-18_081116397_ES2gnY3fVc (0.0652 seconds).

      Storing this data in ClickHouse takes a bit more than 0.1 second:

      Start new cron loop: 2024-12-18 08:11:17.124...parsing file /var/www/txtlog/txtlog/tmp/txtlog.rows.2024-12-18_081116397_ES2gnY3fVc

      * Inserting 3818 row(s) on database server 1...0.137 seconds (approx. 3021.15 KB).

      * Removed /var/www/txtlog/txtlog/tmp/txtlog.rows.2024-12-18_081116397_ES2gnY3fVc

      As for Docker, I'm too much of a Docker noob but I appreciate the suggestion.

    • herbst 13 hours ago |
      On the other hand, some people (me) are happy to have an actual self-hosting setup and not be forced into a Docker setup with unknown overhead.
      • xinu2020 12 hours ago |
        Why not both? It's not much trouble to publish a Dockerfile while still documenting a normal installation.
        • herbst 12 hours ago |
          It's not, but more often than not it's just a Dockerfile.
    • ryanianian 13 hours ago |
      PHP trivially scales up to multiple nodes behind an LB. You're really only limited by your backend storage connection count and throughput.

      Go and friends may make for more efficient resource utilization, but it will be marginal in the grand scheme of things unless there are plans to do massively different things.

      As it is this code is very simple. I haven't used PHP in 15 years and I was able to trace through this from front-end to back-end in less than 3 minutes.

      To me it looks like a really great level of complexity for the problem it solves.

      Keep it up, OP.

    • hipadev23 13 hours ago |
      > PHP doing hundreds of requests per second.

      You may want to update your understanding of PHP's and Go's speed. Both of your estimates are off by a couple of orders of magnitude on commodity hardware. There are also numerous ways to make PHP extremely fast today (e.g. swoole, ngx_php, or frankenphp) instead of the 1999 best practice of apache with mod_php.

      Go is absolutely an excellent choice, but your opinion on PHP is quite dated. Here are benchmarks for numerous Go (green) and PHP (blue) web frameworks: https://www.techempower.com/benchmarks/#hw=ph&test=fortune&s...

      • p_ing 11 hours ago |
        As soon as you add C#, ASP.NET Core shoots to the top of the Fortune stack.
      • that_guy_iain 11 hours ago |
        What you're talking about is generally not considered production-ready. While you can use these tools you will almost certainly run into problems. I know this because as an active PHP developer for over a decade I'm very much paying attention to that field of PHP.

        What we see here is a classic case of benchmarks saying one thing when the reality of production code says something else.

        Also, I used Go as a generic example of compiled languages. But what we see is production-grade Go frameworks outperforming non-production-ready experimental PHP tooling.

        And if we go to look at all of them https://www.techempower.com/benchmarks/#hw=ph&test=fortune&s...

        We'll see that even the experimental PHP solution ranks 43rd and is being beaten by compiled languages.

        • Implicated 8 hours ago |
          > ... you can have Go doing thousands of requests per second vs PHP doing hundreds of requests per second.

          > I know this because as an active PHP developer for over a decade I'm very much paying attention to that field of PHP.

          <insert swaggyp meme here>

          As an active PHP developer as well it sounds like you have no idea what you're talking about.

          > While you can use these tools you will almost certainly run into problems.

          Which tools are "generally not considered production-ready"? From what I'm seeing on the linked list of benchmarks...

          - vanilla php
          - workerman
          - ubiquity
          - webman
          - swoole

          I'd venture to bet all of these are battle tested and production ready - years ago now.

          As someone who has built a handful of services that ingest data in high volume through long-running PHP processes... it's stupidly easy and bulletproof. It might not be as fast as Go, but to say these libraries or this tech isn't production-ready is rather naive.

        • hipadev23 8 hours ago |
          Nobody is suggesting PHP beats compiled. We’re arguing with you about your utter lack of expertise in the language, knowledge of the ecosystem and “production-ready” status of the many options, and your overall coding ability when it comes to PHP.
      • kgeist 8 hours ago |
        Sure, PHP can process logs of any volume, but it would require 5–10 times more servers to handle the same workload as something like Go. Not to mention Go just works out of the box, while for PHP you must set up all those additional daemons you listed and make sure they work: more machinery to maintain, and usually with quite a lot of footguns, too. For example, recently our website went down at just 60 RPS because of a bad interaction between PHP-FPM (and its max worker count settings) and Symfony's session file locks. For Go on a similar machine, 60 RPS is nothing, but PHP can barely process it unless you're a guru of process manager settings.

        In a different PHP project, we have a bunch of background jobs which process large amounts of data, and they routinely go OOM because PHP stores data in a very inefficient way compared to Go. In Go, it's trivial to load hundreds of thousands of objects into memory to quickly process them, but PHP already starts falling apart before we hit 100k. So we have to use smaller batches (= make more API calls), and the processing itself is much slower as well. And you can't easily parallelize without lots of complex tricks or additional daemons (which you need to set up and maintain). It's just more effort, more wasted time, and more RAM/CPU for no particular gain.

        • Implicated 8 hours ago |
          This isn't a PHP problem, this is a configuration problem. You shouldn't be using the filesystem to handle your sessions in a production application.
          • kgeist 8 hours ago |
            Anything that unexpectedly blocks a process can bring down your entire PHP server because you will run out of worker processes. For example, imagine you experience a spike in requests while another server you're trying to call is timing out. You can't set the maximum worker count to a very high value because the operating system has an upper limit. Since the limit must remain low enough, you can quickly run out of your worker processes.

            In contrast, Go can efficiently manage thousands of such blocked goroutines without issue. Sure, you can address this problem in PHP, but you need to:

            - understand PHP-FPM (or whatever you use) configs and their footguns

            - understand NGINX configs and their footguns

            - fiddle with PHP configs and optimize your code to fit within PHP's maximum limits

            - rent larger servers to have the same throughput

            • Implicated 7 hours ago |
              True. I stand corrected.

              This is a footgun, regardless of whether it's a block from file systems or remote requests or whatever.

              My claim that it's a configuration problem is just a 'fix'; there are ultimately an unlimited number of ways this same thing can come back to bite you. Well, outside of aggressive timeouts, and even then, with enough volume of requests that's not even going to save you :D

        • Implicated 8 hours ago |
          > In Go, it's trivial to load hundreds of thousands of objects into memory to quickly process them, but PHP already starts falling apart before we hit 100k.

          I'm not going to argue that PHP is _better_ than Go. Just starting off with that.

          But if your background jobs are going OOM when processing large amounts of data, there are likely better ways to do what you're trying to do. It is true that it's easy to be lazy with memory/resources in PHP due to the assumption that it'll be used in a throwaway fashion (serve request -> die -> serve request -> die), but it's also perfectly capable of running long-lived/daemonized processes without memory issues, rather trivially.

  • majkinetor 14 hours ago |
    We need a glorified (rip)grep instead of ELK and friends, which have a huge learning curve. I welcome this effort.
    • remram 13 hours ago |
      I'm pretty satisfied with Loki. It just ingests the logs and offers a powerful query language to extract data at query time (e.g. parse JSON, run regexes, plot). It can store data in a local folder or S3-compatible storage. I also gave up on configuring ELK in the past...
      • leeoniya 13 hours ago |
        yep, loki is basically "distributed grep"
      • toabi 13 hours ago |
        I moved from an ELK stack to Loki and it's sooooo much easier/better/just-works.
      • ptman 10 hours ago |
        VictoriaLogs is similar, but a nice improvement over Loki
  • bdcravens 14 hours ago |
    Very nice. A lot of the complexity you described is why I've settled on using CloudWatch logs for anything I have on AWS. I don't need a fancy UI, just a powerful querying language for investigation and debugging. With that said, it would be nice to see at least some mechanism for building aggregate queries (for example, 4* results in the last 24 hours by user), but if it's ClickHouse underneath, I assume that's easy using standard ClickHouse tools.
    • stingraycharles 14 hours ago |
      I hate how Cloudwatch itself is so fragmented, and they have three different query languages for logs.

      It’s all cognitive overhead I don’t want to learn.

      • bdcravens 14 hours ago |
        I will say that the language isn't the most intuitive, and a project like this one, with some simple querying and the (presumed) ability to drop down to SQL for power use, is probably the ideal solution. (Doable with CloudWatch logs and Athena, but that's another can of complex worms.)
      • infecto 14 hours ago |
        I would be happy to pay a premium for a better CloudWatch. For me it is never intuitive, which I am sure is driven by my limited use.
  • drchaim 14 hours ago |
    Don't ask me why, but I've developed an instinct that recognizes solutions that use Clickhouse under the hood :)
    • HatchedLake721 14 hours ago |
      Tell us more!
      • williebeek 13 hours ago |
        It uses both: MySQL for the metadata and ClickHouse for the logs. The selfhost page explains a bit more about the architecture.

        edit: the connection to ClickHouse uses the MySQL driver. This is actually a very nice CH feature: you can connect to CH using the regular mysql or postgresql client tools. The PHP MySQL PDO driver works seamlessly. One catch: using advanced features like CH query timeouts requires a CTE function; check the model/txtlogrowdb.php file if you're interested.

  • dobin 14 hours ago |
    Pretty unrelated, but I like how it displays large amounts of potentially diverse JSON events. It would need some better filtering and sorting, hiding of keys, etc. Products which do this well are Elastic and Splunk, but they are too heavy for my taste.
    • szundi 13 hours ago |
      I've always played with the idea that logs could be viewed as packets of some protocol, so you could use Wireshark to filter them and view related logs in the kind of "stream" view that Wireshark provides.
  • rednafi 14 hours ago |
    This is nice.

    At work, we use Datadog for logging, and I have previously used CloudWatch, Splunk, and Honeycomb. Among these, only Honeycomb makes implementing canonical log lines [1] easier. I want arbitrarily wide, structured logs [2] without paying exorbitant costs for cardinality.

    Our Datadog costs are outrageous, and it seems like no one cares at this point. Pydantic Logfire is also doing some good work in Python-specific environments. I use both Python and Go, but Logfire wasn’t as ergonomic in Go.

    [1]: https://stripe.com/blog/canonical-log-lines

    [2]: https://www.honeycomb.io/blog/structured-events-basis-observ...

  • reacharavindh 14 hours ago |
    My current log solution is based on Clickhouse. The one I'm tinkering with in my free time is VictoriaLogs: https://docs.victoriametrics.com/victorialogs/
  • mooreds 13 hours ago |
    I've heard good things about Axiom[0], especially for high scale needs.

    0: https://axiom.co/

    • mdaniel 12 hours ago |
      If you like them, please submit the link on its own, rather than taking away from someone's MIT-licensed "Show HN" to plug a non-open-source project.
  • mdaniel 12 hours ago |
    What in the world does this mean? https://txtlog.net/doc#:~:text=use%20your%20local%20time%20w... That's made twice as bad by the "we throw away Z because you were just kidding by including it". That leads me to believe that any RFC 3339 that isn't automatically Z (e.g. 1996-12-19T16:39:57-08:00 <https://datatracker.ietf.org/doc/html/rfc3339#section-5.8>) is ... well, I don't know what it's going to do but it likely won't be good

    It also appears that your documentation is currently a very verbose version of an OpenAPI spec, so you might save your readers some trouble by actually publishing one, with the added advantage that OpenAPI renders come with a "Try it" button.

    That would allow you to save the natural language parts for describing things that are not API-centric (such as the "but WWWWHHHHYYY mysql AND clickhouse" that you alluded to elsewhere but wasn't mentioned at all in /doc nor /selfhost)

    • tyingq 12 hours ago |
      The date treatment isn't great, but the repo seems to indicate it's existed as a public thing for 22 days. So perhaps just an early compromise to get it working.
      • mdaniel 12 hours ago |
        For all the folks championing how awesome PHP is in this thread, one would surely hope it has rfc3339 aware date parsing, no? But I guess that <https://www.php.net/manual-lookup.php?pattern=rfc%203339&sco...> and <https://www.php.net/manual-lookup.php?pattern=iso8601&scope=...> both being :shruggle: doesn't do it any favors. However, it seems it is just a search stupidity because https://www.php.net/manual/en/datetimeimmutable.createfromfo...

        I do love this, since it 100% squares with my mental model of PHP's approach to life: you're holding it wrong https://www.php.net/manual/en/function.date-parse-from-forma...

        • Implicated 9 hours ago |
          Given the tone and wording of your comments I hesitated to even reply but, alas, my love for PHP was strong enough to push me through.

          You are, actually, doing it wrong.

          https://carbon.nesbot.com/docs/

          I forgive you, being that you're clearly not familiar with modern PHP and its incredibly mature and diverse library ecosystem and first-class package manager.

          > However, it seems it is just a search stupidity ...

          You're searching a list of thirty (30) functions. I don't even know how you found that list of functions but, surely, you don't think that's an exhaustive place to search for a specific date format? Surely you're not being purposely obtuse. (As you likely found, if you just plop your search term in the search at the top of the PHP website you would have found the DateTime class and how to handle these various formats)

          Anyway - for anyone who may happen across this odd chain of comments, dealing with dates in PHP is an actual breeze using Carbon\Carbon.

          • tyingq 5 hours ago |
            Pretty sure that doesn't handle the 'Z' timezone offset; I saw the same with various PHP built-ins. Some ignore offsets; some don't, but handle only specific formats and not others, including the Z. So you still need some kind of wrapper.
    • voytec 11 hours ago |
      Off-topic, but thanks for the neat trick with

          url#:~:text=blah
      • mdaniel 11 hours ago |
        It's actually a standard! https://developer.mozilla.org/en-US/docs/Web/Text_fragments It can do a bunch of awesome stuff, but the text= one is the one I use the most

        I finally started using it when it landed on Firefox release (although, in true Firefox fashion, they give no fucks about the UX forcing me to install an extension that is "create link to selection")

        • Implicated 9 hours ago |
          I too must thank you for this, I had no idea this existed and likely will be making regular use of it now :)
  • adriand 12 hours ago |
    I’m curious about the open source nature of this and how you / people in general manage a project where you are hosting it and need to maintain its security, but are also presumably merging pull requests as people contribute to the project. I would be quite paranoid about this, ie concerned that someone might slip a line of code in with the intent of breaching the service that I would not catch during code review. I know this is true of any open source project but it feels especially fraught when you are also hosting it and letting people sign up and pay for it. I’m wondering if you or others have experience with this and what approaches and practices mitigate this risk.
    • gabeio 11 hours ago |
      Just because a project is “open source” doesn’t actually mean you must accept or even merge PRs from others. After reading others point this out, my opinion of managing open source projects has significantly changed. Of course, you can entertain PRs and see if the idea behind them is sound, but not accept the raw code from others and instead implement the features the way you envision. Keep in mind it’s always possible to have a vulnerability without anyone else’s assistance. This is especially true if you use dependencies, as you don’t keep track of every line of code they add.
      • withinboredom 11 hours ago |
        > This is especially true if you use dependencies, as you don’t keep track of every line of code they add.

        You absolutely should vendor your dependencies and review them before accepting the new version. Even though they are dependencies, you are ultimately responsible for using them. "They are just dependencies" doesn't absolve you of responsibility.

        • dlln 9 hours ago |
          Great points about dependencies and reviewing PRs. In addition to manual reviews, layering security tools within your CI/CD pipeline is key. Tools like static code analyzers, dependency scanners, and security linters help catch vulnerabilities early. Open source can also be a valuable way to uncover security gaps, but having a secure channel for reporting vulnerabilities is crucial to address them quickly. Leveraging techniques like Content Security Policies (CSPs) adds extra layers of protection, promoting proactive security throughout development and deployment.
    • skeeter2020 9 hours ago |
      For users of open source projects, a very common approach is to clone into a private repo, then only pull upstream changes on your own timeline/process, and potentially open public PRs at some point after working in private; i.e., you do your business in private and share in public as and when that works. For the project maintainer, people can open PRs whenever they want, but you are under no obligation to accept them or use any of the code; they're doing this to help others but don't need to for their own scenario.
  • hk1337 12 hours ago |
    It's a minor thing, but I would remove the jQuery dependency. You're not doing much with it that plain JavaScript couldn't do just as well, if not better. Plain JS has come a long way since jQuery first came out.
  • nesarkvechnep 12 hours ago |
    Some people praised Go as a better language for the use case than PHP. I’d say Elixir is even better. It can handle massive concurrency easily, can be made distributed easily, has a built-in in-memory key-value store (ETS), and is probably the best high-level language for anything facing the network.
    • lukevp 9 hours ago |
      I've really been interested in learning more about Elixir and how it accomplishes these things, because I constantly hear the same opinions from others. Do you have some good resources you'd recommend for getting started with Elixir for a principal engineer that wants to understand these at-scale issues and how Elixir solves them better than other languages?
      • nesarkvechnep 8 hours ago |
        Yes, two books. To get a feel for the language - “Elixir in Action” by Sasa Juric. To discover how Elixir and the platform it’s built on excel in scalability and fault-tolerance - “Designing for Scalability with Erlang/OTP” by Francesco Cesarini.
  • thomquaid 11 hours ago |
    The 'easy to use' / 'view' was very nice. If you could add the actual session logs in, it would be amazing.
  • TripleChecker 8 hours ago |
    It looks like that's a PHP codebase. I'm curious why one should use this solution instead of more performant Go/Rust log backends?

    Also, one of the login links takes you to a 404 page: https://triplechecker.com/s/jDTmQa/txtlog.net

    • giraffe_lady 6 hours ago |
      They said

      > Most of my development career has been with C#, NodeJS and PHP

      and then

      > The architecture is very scalable, but I've only tested up to a few billion logs.

  • piterrro 7 hours ago |
    > there are many log services available and this is my attempt at a better one.

    Out of curiosity, can you describe how your service is better than others?

    >I hope my application fixes most of these issues

    Do you care to elaborate on the "how"?