What does it do? How did you make it? How much time did you spend making it? How often do you use it?
- Python-based pipe-objects-not-strings shell with cluster and database support deeply integrated. Twenty years later, it has turned into Marcel (https://marceltheshell.org), and I’m still the only one using it AFAIK.
- Backup utility for Linux, with characteristics of Time Machine. I've been using it for five years; it keeps daily, monthly, and yearly full backups, relying on hard links for files that don't change. Handles both local and remote targets (remote being my Raspberry Pi).
I found no free optical design software that would run on Mac, so I coded something up to do some paraxial ray tracing (maybe more; I'd have to dig up the code) and (this is the good part) draw lens diagrams from the specifications.
Pretty simple, but it was fun to do. Very little available for Linux either. Physics and optics people want to have fun, too.
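Paraxial ray tracing reduces to 2×2 ray-transfer (ABCD) matrices acting on a (height, angle) vector; a minimal sketch of the standard technique, with an assumed thin lens and propagation distance (not the commenter's actual code):

```python
# Paraxial (ABCD) ray trace: a ray is (height y, angle u); each optical
# element is a 2x2 matrix acting on that vector.

def propagate(d):
    """Free-space propagation over distance d."""
    return [[1.0, d], [0.0, 1.0]]

def thin_lens(f):
    """Thin lens of focal length f."""
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def apply(m, ray):
    y, u = ray
    return (m[0][0] * y + m[0][1] * u,
            m[1][0] * y + m[1][1] * u)

# A collimated ray at height 1 through an f=100 lens focuses after 100 units:
ray = (1.0, 0.0)
ray = apply(thin_lens(100.0), ray)   # bends the ray: u = -y/f = -0.01
ray = apply(propagate(100.0), ray)   # height at the focal plane ~ 0
```

Chaining these matrices through a lens prescription is also exactly the data you need to draw the lens diagram.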
Much of it was just parsing the input data.
I do recall a design and/or analysis program written in Basic, but it wanted a particular Basic interpreter, and I forget whether it had porting problems. It must have, since I don't recall ever using the program.
Oddest bit was something I did on my own for Sun flex office. I would get the list of scheduled occupants and their office choice and overlay that on a map of the office suite, for a "who is where" map.
On a "real" work task, I learned how to write graphics commands in Illustrator 3 format. I may have used that on this project.
But more generally, tacking the AI header code to the file made it valid Postscript/Illustrator format.
Someone once told me that some local sports reporters' weekly tips are apparently used with some seed money, with the proceeds given to charity, and I was intrigued enough to spend a few days building something to test that out.
I forked it back in 2005 because the maintainer wasn't interested in the direction my patches were going. My version has diverged dramatically from the current version.
I have no idea how many hours I've put into it over 19 years. It has needed surprisingly little care and feeding (which I'd attribute to it being a simple PHP app).
I've used it nearly daily in the last 19 years.
Edit: I also maintain a set of scripts to import my SMS from phone backups into my IMAP mailbox. Having a single place to search for my written communication is wonderful.
Initially the changes were for handling enclosures. The developer had no interest in supporting them. I wanted to use tt-rss as a podcatcher. That necessitated adding some database schema (tracking enclosure URLs) and UI (a "request download" button in the entry list and entry detail panes for those podcasts where I only download selected episodes, a "download all enclosures" checkbox in the preferences UI for podcasts where I want every episode downloaded).
I also added schema for multiple users to share the same database. It was basically per-user preferences and read/unread flags. My grand intention was to add "social" features and eventually a suggestion algorithm. The developer's reaction re: "social" features was, basically, "Why?" (I see that the project has since gained multi-user support...)
I never did much with my multi-user schema. I never even switched my production copy over to it. Amusingly, I've ended up running three separate instances supporting my blog reading, podcatcher, and TV computer (YouTube feeds). If I'd finished the multi-user work I could be using that instead.
That was the end of my interaction w/ the developer.
In later years I added virtual feeds for the podcatcher and tt-rss itself to report errors downloading or parsing feeds.
Edit: I'd heard about the developer being uncivil. He never was to me, but the reputation is apparently justified: https://community.tt-rss.org/t/how-to-contribute-code-via-pu...
I think it's easy to get caught up in the software-is-for-business mindset and forget that there is an infinite number of use cases for things you could build just for fun or for personal/small-community benefit.
The most recent is a Chrome extension that plays a "server down" tone any time the word "critical" appears on our system monitoring web page (Netdata). It plays that tone when the number of "critical" words goes up, and plays a "server up" tone when the number goes down. It's dead simple and works to give me audio alerts so that when I'm hyper-focused on something, I can get pulled out of it by the "server down" tone. It's gone over well with my coworkers as well.
It uses an on-device model for language detection, and results are sub-0.3 s thanks to Groq.
If someone wants to try: https://testflight.apple.com/join/GBxPMw2h
For instance, I wrote a tool that tells me how many simultaneous users I need to simulate X number of real users who are hitting the server only intermittently.
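That conversion is essentially Little's Law: the average number of concurrent sessions equals arrival rate times time-in-system. A sketch of the arithmetic (the request-rate model here is my assumption, not necessarily what the commenter's tool does):

```python
def concurrent_users(real_users: int, requests_per_hour: float,
                     seconds_per_request: float) -> float:
    """Little's Law: L = lambda * W.
    Each real user generates requests_per_hour requests, each occupying
    the server for seconds_per_request; the product is the average number
    of simultaneously active requests you need to simulate."""
    arrival_rate = real_users * requests_per_hour / 3600.0  # requests/sec
    return arrival_rate * seconds_per_request

# 10,000 intermittent users making 30 requests/hour at 0.5 s each
# keep only ~42 requests in flight on average:
print(concurrent_users(10_000, 30, 0.5))
```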
If you write tools for your own use, you tend not to bother with coding-style conventions unless the tools are really big. The important thing is being able to code it up quickly.
I used to write everything in Perl, but I've switched to Python. It's a great rapid-prototyping language, and basically everything I do is a prototype.
It allows me to launch vscode projects and devcontainers I often work on very quickly. Saves me so much time!
I open-sourced it 8 years ago, but that was not the original intention. I wrote it for myself. https://github.com/akalenuk/16counters
I wrote it in MASM32. It's therefore a tiny .exe file of about 7 KB. I spent maybe a few hours initially, but I've been adding features one-by-one for several years. I use it a few times a month.
My solution also manages SSL via Cloudflare and integrates with Stripe for simple fixed-price subscription billing models. The idea here is to be able to iterate on product ideas quickly without spending a day each time figuring out authentication and billing.
I did set up a marketing site at the time so that others could use it, but I don't have any users, and I'm happy to maintain it just for my own projects (half a dozen now).
It took me 2-3 weeks to make so on net I have probably not saved much time, but it really helps reduce the friction of launching things which I think is valuable.
- My blog, which has lots of weird pages for stats, time jumps, old games/tools/utilities.
- Flight tracker app.
- Home control app (because HA was a pain to keep updated).
- Woodworking tool helper for my dad.
And more. It's liberating.
I built a pretty simple web app that tracks a bunch of vendors and emails me when items matching my filters come in stock!
I originally thought more people might use it, but I have basically 6 teachers.
Hit me up at [email protected] if you have some time; I might be interested in what you are doing!
I think it took me like an hour to make and I use it several times a week.
I love the results, though I have a strong hunch that there's probably a different processing approach that would be 10x faster.
unifi-dns-scraper[0]: a simple tool that logs into my Unifi console to get all the hosts and then creates a hosts file that my local DNS servers can use.
unifi-doh-blocker[1]: as part of my efforts to better control my network, I don’t want random devices ignoring my local DNS by using DoH. This gets various lists of public DNS over HTTPS servers and updates a blocklist on my Unifi Dream Machine Pro. With a few other firewall rules this essentially forces all my DNS through local servers which then do encrypted DNS queries to a third party DNS service.
These tools make me happy and were fun to write.
[0] https://github.com/pridkett/unifi-dns-scraper [1] https://github.com/pridkett/unifi-doh-blocker
Apart from me and the team at my company, nobody uses it.
After I made it, a lot of solutions appeared in this space, but IMO this is still the easiest one to use.
I used it to buy a secondhand car based on several parameters (price, doors, features, reliability, etc.), assigning a weight to each parameter.
Python; in total a day or so of work (including tweaking and adding features). Only used it a couple of times, to buy the car.
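The weighted-scoring core of such a tool is only a handful of lines; the criteria, weights, and ratings below are illustrative placeholders, not the ones actually used:

```python
# Weighted decision matrix: score = sum(weight * rating) per candidate.
weights = {"price": 0.4, "reliability": 0.3, "features": 0.2, "doors": 0.1}

cars = {
    "Car A": {"price": 7, "reliability": 9, "features": 5, "doors": 10},
    "Car B": {"price": 9, "reliability": 6, "features": 9, "doors": 5},
}

def score(ratings):
    return sum(weights[k] * ratings[k] for k in weights)

best = max(cars, key=lambda name: score(cars[name]))
```

Normalizing each rating to a common scale before weighting keeps one criterion from silently dominating the total.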
Created with an early version of Phoenix (and re-created with LiveView)
Here's a link to a room so you see it with multiple users: https://pokershirt.dev/r/hn
I once wrote a Discord bot that interfaced with our Jira system. I wrote a whole chat interface that walked users through the steps required to produce a useful ticket, and the resulting tickets got filed into a user-report category.
The interface was quite thorough and basically forced users to follow a template designed to extract the relevant details. You could even upload files or screenshots.
It was quite popular with power users as well as our own team. But most importantly, it gave angry users a place to complain instead of directly yelling at a developer.
It was over 10 years ago and I think it's still the project I'm most proud of. The bot as a whole was extremely complex and could interface with several external applications like google sheets and trello. It also had a very nice feature where it would answer my DMs for me. I was a pretty public figure in the community and got a lot of messages that would be better answered by just asking in the general chat. The bot would (respectfully) explain that I can't give 1:1 support to everyone and direct them to better support channels. It also explained that if something really did require my direct attention, I could still be reached by sending another message. Almost nobody ever did. I couldn't believe how effective it was. The DMs still existed in my inbox, but the bot marked them as read so I wouldn't get notified. I'd review them occasionally, I never found any false positives.
But of course that kind of thing is ancient history and will get your discord account banned nowadays. Using a bot to log in as a normal user has been prohibited for a long time. Too bad.
I built a bot that runs every two weeks, scrapes my local movie theater's currently playing titles, runs them against Rotten Tomatoes to filter out movies rated below 85%, then looks them up on YouTube and posts the trailers to my Discord server.
How did you make it?
Python script that runs in Google Cloud Functions, triggered on a bi-weekly schedule
How much time did you spend making it?
An evening
How often do you use it?
Every two weeks
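The pipeline itself is little more than filter-and-post; a rough Python sketch of its shape, where the scraper and lookups are stubbed placeholders (the real versions would hit the theater's site, Rotten Tomatoes, and YouTube, and the final step would POST each pick to a Discord webhook):

```python
ROTTEN_THRESHOLD = 85

def now_playing():        # placeholder: scrape the theater's listings
    return ["Movie A", "Movie B"]

def tomatometer(title):   # placeholder: look up the Rotten Tomatoes score
    return {"Movie A": 92, "Movie B": 61}[title]

def trailer_url(title):   # placeholder: search YouTube for the trailer
    return f"https://youtube.com/results?search_query={title}+trailer"

def picks():
    """Trailer links for everything playing at or above the threshold."""
    return [trailer_url(t) for t in now_playing()
            if tomatometer(t) >= ROTTEN_THRESHOLD]

# Each pick then goes to the Discord server, e.g. via a webhook:
#   requests.post(webhook_url, json={"content": url})
```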
Looks like about half my library would be filtered out.
Back in 2011, my girlfriend was working at a catering company that announced shifts via a webpage and workers had to sign up for them. Other workers tended to pick them up very quickly, so it was hard to get too many shifts.
I wrote a quick web scraper to automatically accept any shift offered and email her.
For a couple weeks it was great, suddenly she had all the work she needed.
Then one day she woke up late to find a voicemail telling her she was fired.
Earlier that morning the script had detected a last minute job announced just an hour before start time and immediately accepted it, resulting in her not showing up to it. I had not accounted for the possibility they would announce a job so last minute, since it had never happened before.
I find userscripts much easier to update than an extension, so that's what I've been sticking with for quite a few years now.
I don't remember how I got started initially, there was some investigation of the network protocol it used. It was some crusty old standard that used an odd command scheme, but I managed to divine what magic packets to send to wake it up and select an input.
I ended up rolling it all into my first and only Android app. It eventually became a full remote control for the thing, including a sleep timer that would turn it off after an hour.
It was extremely basic and horribly ugly. I never looked into automatic discovery of the device on the network, so it had a hardcoded IP address. But it worked well enough to get a free projector running.
Apart from that, every programmer has thrown together innumerable scripts and throwaway programs for one-off tasks. My most recent was a thing that takes in a Diablo 2 save, then sets the version number and recalculates the checksum. My pirated copy is a bit old and won't accept saves made with modern editors. I don't know how many scripts I've written that walk through a file system to find or change something in the contained files.
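For anyone curious about the save-file part: the .d2s checksum is, as commonly documented in the modding community, a byte-wise rotate-left-and-add over the whole file with the checksum field zeroed. A hedged Python sketch (the 0x0C offset is the commonly cited checksum location; verify it against your save version):

```python
def d2s_checksum(data: bytes) -> int:
    """Rotate-left-and-add checksum over a Diablo 2 save, treating the
    4 checksum bytes (offset 0x0C) as zero while summing."""
    buf = bytearray(data)
    buf[0x0C:0x10] = b"\x00\x00\x00\x00"
    total = 0
    for b in buf:
        carry = (total >> 31) & 1              # bit rotated out on the left
        total = ((total << 1) & 0xFFFFFFFF) + b + carry
        total &= 0xFFFFFFFF
    return total
```

The fixer then writes the target version number and this value back into the header so the game accepts the file.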
I also have a version of Klondike solitaire written in C++ with SFML. I initially wanted to build a neural network to play solitaire, but after building the game itself, I found in my research that solitaire is actually a very difficult problem and far beyond my skills.
It basically takes a couple of JSON files for configuration, plus some Markdown.
Also made a notes app for Android that had just one text view, with only options to edit text and save it. I used it for years for pasting random stuff, gave it to a Twitter friend, and forgot about it. Connected with that friend years later and he said he still used that app as well :)
I spend about 30 minutes a month maintaining it on average and I recently rewrote it in Go so that I could bring it back into my /x/ monorepo.
I have iOS automations that query and post to it about once every day. Eventually it's also going to handle photo uploads so that I can yeet it a photo and get the embed code for it shoved into a buffer note on my phone.
A news bot: https://github.com/qznc/mrktws-news (the output is public, does it count?)
A TiddlyWiki server: https://github.com/qznc/tiddlywiki-py
Such stuff usually costs me a few frantic evenings to build the first version and then minor maintenance.
The billing data is pulled from an external vendor's portal. The contact data is pulled from our internal CRM. Both sets of data are then cleaned up and merged with PowerQuery, and then VBA is used to send emails out to clients.
I probably spent in the range of 3-4 hours getting a working version going and ~20 hours optimizing during downtime at work. I genuinely find it enjoyable to work on—there is something immensely satisfying about automating rote work away.
I use this once per month (a billing cycle). It will probably never see the light of day for anyone else, at least in its current state, because I work in a low-tech, nonprofit environment and using this kind of tool would be daunting for my co-workers (for reference, mail merging is sometimes intimidating at my workplace).
Two highlights:
Enso (https://enso.sonnet.io) - I use it every morning and generally keep it open throughout the day: https://untested.sonnet.io/Stream+of+Consciousness+Morning+N...
Sit. (https://sit.sonnet.io) - I use it every morning and on some afternoons (I start my main job after lunch)
I made those for myself and made them public after chatting with friends/colleagues; that's why they're available online now.
I also have a bunch of small scripts/apps/productivity tools (automate boring job tasks).
I published it on pypi/github out of habit, but it's primarily about scratching my itch and I doubt anybody except me will really use it. There's some interest in doing this type of thing with org mode but it all centers around LISP and emacs (neither of which I like).
- A Mac app that generates playlists that read text aloud via the OpenAI TTS API (my gf can't use the API)
- An automatic visual scraper that can get arbitrary information from any website (e.g., go to Target and get the top 3 milk prices, or get the licensing information from this open-source project)
- An ETL tool (called tabmaster) that can sync arbitrary JSON APIs into Postgres tables, with automatic schema inference and deduplication logic
- A really good OCR tool that enriches scanned PDFs with accurate words, not the BS that Adobe ships
All really useful and used daily.
EDIT: formatting
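Schema inference for a JSON-to-Postgres sync boils down to walking sample records and widening each column's type as new values arrive. A toy sketch of that idea (the type lattice below is my simplification, not tabmaster's actual logic):

```python
# Map each JSON field to the narrowest Postgres type that fits every
# value seen so far, widening as needed (boolean -> bigint -> float -> text).
WIDENING = ["boolean", "bigint", "double precision", "text"]

def pg_type(value):
    if isinstance(value, bool):   # checked first: bool is a subclass of int
        return "boolean"
    if isinstance(value, int):
        return "bigint"
    if isinstance(value, float):
        return "double precision"
    return "text"

def infer_schema(records):
    schema = {}
    for rec in records:
        for key, value in rec.items():
            t = pg_type(value)
            if key in schema:
                # keep whichever type is wider on the lattice
                t = max(schema[key], t, key=WIDENING.index)
            schema[key] = t
    return schema
```

The inferred mapping then becomes a `CREATE TABLE`/`ALTER TABLE` statement, with deduplication handled separately (e.g., by a unique key plus `ON CONFLICT`).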
Sounds interesting. How does this work exactly? How do I input that I want the top 3 milk prices, and how does it retrieve that data?
First one: cloud services are expensive and come with a lot of overhead. I do enjoy the idea of lambda/cloud functions, but I didn't want to rely on cloud services for them, especially since I have a ton of hardware to self-host stuff.

So I built a service that piggybacks on DinD (Docker-in-Docker), where I can add templates for such lambdas in a few languages (just Rust and Python, because that's mostly what I use), allocate some resources, and build tiny containers. To avoid wasting resources they are all off by default, but through the magic of what I built (management plus something like a reverse proxy and container management for DinD), I can call an endpoint, which either finds a running container and proxies the request and response to and from it, or spins the container up, does the same, and shuts it down after a predefined amount of time.

You could argue that I could have achieved the same with minikube or k3s, but those proved to use a lot more resources than a single Rust binary I can run on anything x86 with Docker on it. The downside is that I can't distribute it across multiple servers, so it only runs locally, but that fits my needs perfectly. There might be some people who would be interested in it, but I never got around to open-sourcing it, so I'm the only user. I've been planning to open-source it "next week" for the past two years or so.
Second one: I love data, specifically unstructured, raw data, and I've been collecting a ton of articles from the internet over the past two years or so. The ingestion pipeline automatically processes them and extracts data/makes them a lot less unstructured: events, locations, timestamps, what happened where, and all that jazz. It's a long list of ML models, data transformations, validations, three LLMs fighting each other, and a bunch of other stuff. It kinda feels like a child to me, so I don't want to open-source it. I know that I could hypothetically convert it into a product and make some money off it, but my past entrepreneurial experiences have been very unsuccessful, I have limited time, and I lack motivation. So at the moment I'm simply enjoying the data getting spit out on a Grafana dashboard such as this [1].
Or at least the tool(s) you use?
I have the same need but it's surprisingly difficult to get it right, at least with the `camelot` or `fitz` python packages.
PDF to text (using either a Python or Java lib), which is then turned into a "header" structure with dates and balances via configuration-driven regexes, and a "body" structure containing the transactions. The transactions themselves go through an EBNF parser to extract the date(s), narration, amount, and balance if reported. The narration text gets run against a custom merchant database for payee and categorization. It is a painful problem! The code is Clojure so there is not much of it, and there are high-abstraction libraries like Instaparse that make it easy to use grammars as primitives. And the Rube Goldberg machine has yielded balance-validated data for me for the last several years from half a dozen financial providers.
I have been incorporating local LLMs, running on an RTX 3090, into some other workflows I have, hope over the summer to see if those can help simplify some of the workflow.
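To give a flavor of the transaction-extraction step, here is a regex sketch in Python (the original uses an EBNF grammar via Instaparse in Clojure; the line format below is invented for illustration, since real statements vary per provider, which is where the pain lives):

```python
import re
from datetime import datetime

# One possible transaction-line shape: date, narration, amount, balance.
LINE = re.compile(
    r"(?P<date>\d{2}/\d{2}/\d{4})\s+"
    r"(?P<narration>.+?)\s+"
    r"(?P<amount>-?[\d,]+\.\d{2})\s+"
    r"(?P<balance>-?[\d,]+\.\d{2})$"
)

def parse_txn(line: str):
    """Parse one statement line into a dict, or None if it doesn't match."""
    m = LINE.match(line)
    if not m:
        return None
    return {
        "date": datetime.strptime(m["date"], "%m/%d/%Y").date(),
        "narration": m["narration"],
        "amount": float(m["amount"].replace(",", "")),
        "balance": float(m["balance"].replace(",", "")),
    }

txn = parse_txn("01/15/2024  ACME COFFEE #42  -4.50  1,234.56")
```

The balance-validation guarantee then falls out of checking each reported balance against the running sum of parsed amounts.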
Right now, it's ASP.NET/.NET 8 on the backend and still plain jQuery on the frontend.
I display RSS feeds, US National Weather Service data, and comics in it. I also have it send some things to friends and family as emails periodically.
Hangfire works on the backend to actually fetch new data at appropriate intervals.
I occasionally have to modify something and manually push a new build because something remote changes but it feels fairly stable right now (knock on wood).
I want to redo it to use ASP.NET AssemblyParts and work towards essentially giving each little box its own DLL as a sort-of-plugin system, so that I feel more comfortable adding more types of boxes (stocks, different weather data, etc.) and maybe one day can open-source it. (I'd like to, so I can point prospective employers at it and say, "See, I can actually write reasonable real-world code.")
https://github.com/stevenlandis/kilojoule
I've found it to be a pleasant multi-tool for interacting with the shell when I need something a little more than bash.
It took a couple of weekends and evenings to get working but was a really fun way to learn about parsers, interpreters and Rust.
I assume that most things I build will only be used by me and a small group of users, though sometimes they eventually get used by others. I generally only build tools that I use every day. For example:
- iCloud/Dropbox alternative macOS, iOS, and web app
- task management GTD macOS, iOS, and web app
- photos api for digital photo frames
- media management
Mostly built using go for backends and SwiftUI for macOS and iOS frontends.
MP3Renamer (2002) - The age of music piracy is still rife and I have downloaded my share from Napster, university file shares, etc. However, most filenames are horrendous and not clean. So my first utility was a Java program that would analyze filenames based on common garbled patterns and rename them into [Artist] - [Songname].mp3. It worked surprisingly well for 90% of the use cases.
CombatLogAnalyzer (2008) - My wife and I are in the throes of World of Warcraft arena, which is a competitive dueling system. We only play 2v2 and we both suck at it. So I enabled combat logs in WoW and then wrote a parser, analyzer, and visualizer for every arena game, showing which spells were used, where the damage came from, and the highest contributor of damage, broken down by playable class. By the end, we had learnt what was killing us, and the statistics showed our strengths and weaknesses. Suffering high latency and poor skills, we managed to crawl from 800 rating to 1800 rating! We just couldn't go beyond that! (I was the crutch.) This was done in .NET WinForms and I really learnt how to use LINQ.
Space Commander (2019) - My daughter is almost 4 years old and I think she is ready for computer games. I decide to learn MonoGame and I make a Space Commander clone. It is a HIT!
HappyMrsChicken (2019) - From my smash-hit game above, I make a clone of HappyMrsChicken, except this one is set in a forest where there is corn the chicken has to eat, and there is competition from a mysterious goblin creature who also goes after the corn. Who will win?? Turns out, I cheated and gave the chicken a boost. My daughter won a lot!!
OptionsTrading (2020) - It is COVID and I am locked in a quarantine facility for 28 days. Like a lot of retail noobs, we are getting into trading stocks and options. I decide the IBKR interface sucks and I can do better. While spending those 28 days in isolation from family, I learn React to write a frontend and Python to write a backend that displays all our trades, statistics, UIs, loss calculators, PnL, etc. My wife and I use this to date, but I am too chicken-shit to publish it.
My personal favourites are CombatLogAnalyzer, OptionsTrading, and HappyMrsChicken, in that order.
I spent a decent amount of time tweaking the UI, improving performance, adding filters, providing different file output formats, etc. Never shared it with anyone.
It's not that it is meant for my use only (any capable reverse engineer familiar with Ghidra should be able to pick it up and use it), nor that it will never see the light of day (it's open-source). However, it is such an esoteric capability, and outright heresy according to computer science, that I'm having a hard time just finding people who can wrap their heads around the concept, let alone people who would actually want to use it. Simply put, it's so far out there that I don't even know where or how I could advertise it in the first place, which makes it de facto for my own use only.
A couple of people did end up reaching out to me in the last couple of weeks about it, so it might be on the cusp of sprouting a very tiny user base. Still, I've made it for my own use and until very recently I've never expected anybody else would use it.
If someone wants to check out the dark magic: https://github.com/boricj/ghidra-delinker-extension (disclaimer: might give nightmares to linker developers).
Not sure about heresy according to computer science. Sure, it's not intended, but it's a very clever thing to be able to do.
It's when you get creative and throw ABIs out the window in order to create some cursed chimeras that this really becomes heresy.
It does allow a couple of nifty tricks, like pervasive binary patching (if the program is chunked into relocatable object files, then you're no longer constrained by the original program memory layout when patching/replacing stuff). It's also useful for decompilation projects, where you can reimplement a program one piece at a time until you no longer have binary pieces left and still create a fully working program at each step (you don't even need perfectly matching decompilation since the linker will mend stuff back together anyway).
I think there's the Witchcraft Compiler Collection if you want a freestanding option [1], although I haven't looked into it too closely.
The problem is that object files are made up of section bytes, a symbol table and a relocation table. You can't just rip bytes out of a program and call it a day, you need to recreate all that information in order to actually delink code into relocatable object files.
Doing that isn't a trivial problem, it requires a lot of analysis and metadata, especially if you don't have debugging symbols or symbols at all. Leveraging Ghidra allows me to concentrate on the specifics of delinking, which can get very tricky depending on the platform (MIPS in particular is a nightmare to deal with).
I'm also trying to solve delinking in general and not just for one platform/ISA pair, so reinventing the wheel for every architecture out there is a nonstarter in that context.
Without this relocation table on hand, you'll have to recreate it in order to make the section bytes relocatable again. This means analyzing code/data and identifying relocation spots, like you've said. But that `0x00400000` integer constant within an instruction or a word data, is it referring to the function at that address or is it the TOSTOP flag value? Who knows, but each time you get it wrong you'll corrupt four bytes in that object file.
I'm dealing with one rather gnarly scenario, which is a PlayStation video game without any source code, symbols [1] or linker map, just a bag of bytes in an a.out-like format. The MIPS architecture also happens to be an absolute nightmare to delink code from (one of the many pitfalls for example is the interaction between HI16/LO16 relocation pairs, branch delay slots and linkers with a peephole optimizer).
I've been at it for two years and I've only recently managed to pull it off on the entire game code [2]. Writing out the object file when you have the program bytes, the symbol table and the relocation tables is the easy part. Writing an analyzer that recreates a missing relocation table for the 80% of easy cases isn't too difficult. Squashing out the remaining 20% of edge cases is hard. All it takes is one mistake that affects a code path that's taken for some very exotic undefined behavior to occur in the delinked code.
Delinking with a missing relocation table (and without manually annotating the relocation spots yourself) is a thing that looks easy at first glance, but is deceptively hard to nail all of the edge cases. I'd gladly be proven wrong, but if you do have the full, original relocation table on hand then you're cheating with `cc -r` on code you just built yourself. Almost no real-world artifact spotted in the wild one would care about is ever built with that flag.
[1] I did end up recovering lots of data out of a leftover debugging symbols file from an early build later on, but that's a story for another time.
[2] Note that I'm working on top of a Ghidra database that contains symbols, type definitions and references, so the bulk of analysis is actually performed upstream of my tooling. Even then, the MIPS relocation synthesizer is a thousand lines of absolute eldritch horrors, but I do acknowledge that the x86 relocation synthesizer I have is quite tame in comparison.
Wow I only really know amd64, arm64, and i8086. What is it about MIPS that makes it so evil?
On MIPS, loading a pointer is classically done in two instructions, a LUI and an ADDIU, which forms a HI16/LO16 relocation pair I need to identify precisely in order to delink code. I'm using Ghidra's references in my analyzers, but these are attached to only one instruction operand, typically a register or an immediate constant.
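The reason those pairs are so fiddly: the LO16 half lives in a sign-extended ADDIU immediate, so the HI16 half must carry-compensate when the low half is "negative". A minimal illustration of the split the relocation synthesizer has to recover (standard MIPS ABI behavior, not code from the extension):

```python
def hi16_lo16(address: int) -> tuple[int, int]:
    """Split a 32-bit address into LUI/ADDIU immediates.
    ADDIU sign-extends its 16-bit immediate, so when the low half is
    >= 0x8000 the high half must be incremented by one to compensate."""
    lo = address & 0xFFFF
    hi = ((address + 0x8000) >> 16) & 0xFFFF
    return hi, lo

def reassemble(hi: int, lo: int) -> int:
    """What the CPU computes: LUI loads hi << 16, ADDIU adds signed lo."""
    lo_signed = lo - 0x10000 if lo >= 0x8000 else lo
    return ((hi << 16) + lo_signed) & 0xFFFFFFFF

# 0x1234ABCD: lo = 0xABCD is "negative", so hi becomes 0x1235, not 0x1234.
assert hi16_lo16(0x1234ABCD) == (0x1235, 0xABCD)
assert reassemble(*hi16_lo16(0x1234ABCD)) == 0x1234ABCD
```

Note that the HI16 immediate alone doesn't equal the address's top half, which is why the two instructions must be matched up precisely before either relocation can be emitted.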
So my MIPS analyzer has to traverse the graph of register dependencies for a reference within an instruction and find which two instructions are the relocation pair. It's trickier than it sounds because references can have an addend that's baked in the immediate constants (so we can't just search for the pattern of the address bits inside the instructions) and complex memory access patterns inside large functions can create large graphs (ADDU in particular generates two branches to analyze, one per source register). It's bad enough that I have one method inside my analyzer in particular that is recursive and takes six arguments, four of which rotate right one step at each iteration.
But that graph traversal can't be done in reverse program order, because there are instruction patterns that can terminate the graph traversal too early with the right mix of branches, instruction sequencing and register reuse. I've had to integrate code flow analysis to figure out which parent code block has to be actually considered during the register graph traversal.
But the most evil horror is the branch delay slot. One particular peephole optimizer consists of vacuuming up a branch target instruction inside a branch delay slot and shift the branch target one instruction forward, which effectively shortens the execution flow by one instruction. It also duplicates the instruction, which is catastrophic if it had a HI16 relocation because now we have LO16 relocations with multiple HI16 parents, which can't be represented by object file formats. I have to detect and undo that optimization on the fly by shifting the branch targets one instruction back, which I accomplish by adjusting the relocation addends for the branches.
I've only written relocation analyzers for x86 and MIPS so far. I don't know what other horrors are lurking inside other architectures, but I expect that all RISC architectures with split relocations will require some form of that register graph traversal and code flow analysis [2]. What I do know is that my MIPS relocation analyzer [3] is probably the most algorithmically complex piece of code I've ever written so far, one that I've rewritten a half-dozen times over two years due to all the edge cases that kept popping up. I also had to create an extensive regression test suite to keep the analyzer from breaking down in subtle ways every time I need to modify it. I expect that there are still edge cases to fix in there that I haven't encountered yet.
[1] I've written about some of them here, but it's far from the whole story: https://boricj.net/tenchu1/2024/05/15/part-10.html
[2] That piece of code is split off in its own Java class: https://github.com/boricj/ghidra-delinker-extension/blob/mas...
[3] In case you're curious: https://github.com/boricj/ghidra-delinker-extension/blob/mas... (remember that the register graph and code flow bits are split off inside another class)
By that logic, it should be written in JS so you don't need to install a C build toolchain for it.
The only part of that software that ever became public was the animated syntax highlighter[0], but I don't believe any other part would help anyone else accomplish anything.
It's a media player that sits in the console.
It evaluates streams against a database of 100 or so "quirks" that identify either general issues or issues that will only manifest on certain player libraries. For instance, specific "in spec" encodings which are actually non-standard in practice get flagged.
Built on TypeScript/node/Docker over the course of maybe 18 months. Used it fairly often when I was working in the space, not at all these days. Originally the plan was to license it as an enterprise vid tool.
(I've been considering open-sourcing it - would YOU use it if so?)
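The quirks-database idea described above can be sketched roughly like this. The real tool is built on TypeScript/node; this is a Python sketch for brevity, and the quirk names, fields, and severity labels are all made up for illustration:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class StreamInfo:
    codec: str
    profile: str
    pixel_format: str = "yuv420p"

@dataclass
class Quirk:
    name: str
    severity: str          # "general" or "player-specific"
    check: Callable[[StreamInfo], bool]

QUIRKS = [
    # Example: 10-bit H.264 is technically in spec, but many hardware
    # decoders choke on it in practice.
    Quirk("h264-10bit", "player-specific",
          lambda s: s.codec == "h264" and s.pixel_format == "yuv420p10le"),
    Quirk("h264-high444", "player-specific",
          lambda s: s.codec == "h264" and s.profile == "High 4:4:4"),
]

def evaluate(stream: StreamInfo) -> list[str]:
    """Return the names of every quirk the stream trips."""
    return [q.name for q in QUIRKS if q.check(stream)]
```

Each quirk is just a named predicate over extracted stream metadata, so growing the database to 100-odd entries is a matter of appending to a list.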
Some more:
- https://weather-sense.leftium.com: weather app with the trendcast just the way I like it. WIP, but already using it on daily basis.
- https://multi-launch.leftium.com: quick link launcher; can launch multiple links at the same time. I use this multiple times a day.
- https://tt.leftium.com: tool to streamline conversions I frequently need. When the input type is detected on paste, the converted value is automatically put into the clipboard; paste also works from anywhere on the page. A super-niche hidden feature: if I paste the outerHTML of my SoFi relay accounts list, it will transform it into TSV format for pasting into a Google Sheets balance sheet. I use it a few times a month.
- https://ff.leftium.com: tool to calculate the time I needed to do something in a game I used to play. Automatically updated a calendar event with notifications.
- https://orbs.leftium.com: another tool to help with planning in the game I used to play.
Finding a service that integrated with Dropbox the way I wanted probably took more time than the actual development.
The multiple lines per graph kinda reminds me of the meteograms that yr.no used to have on their site and in their app. I really like their presentation: concise, and much faster to digest than a table of data. Sadly, they eventually split them out into multiple graphs, with no UI option to put them back on one graph.
- Different units have differing plots (mm represented as bars, percentages as area graphs, and temperatures as lines).
- The y-axis is not labeled. For exact measurements, you can hover over the graph and read the exact values at the top.
- The legend at the top actually uses checkboxes. Eventually, you'll be able to toggle individual metrics on and off.
It supports Markdown plus a custom template language to convert Markdown-plus-more documents into a website, which allows me to add footnotes, sidenotes, images with specific formatting, custom markers, etc. It has specific support for parsing English and splitting sentences, so each one is in their own span in the resulting HTML. This is used for specific typographic layout.
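A minimal sketch of the sentence-per-span step, assuming a naive splitter (real English sentence splitting has to handle abbreviations, quotes, etc., which this toy regex does not):

```python
import re

# Split on terminal punctuation followed by whitespace and a capital letter.
SENTENCE_RE = re.compile(r'(?<=[.!?])\s+(?=[A-Z])')

def sentences_to_spans(paragraph: str) -> str:
    """Wrap each sentence of a paragraph in its own <span>."""
    parts = SENTENCE_RE.split(paragraph.strip())
    return " ".join(f'<span class="sentence">{s}</span>' for s in parts)
```

With each sentence in its own span, CSS can then apply the typographic layout per sentence rather than per paragraph.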
So I built a little CLI to generate my invoices. 100 LOC and super simple.
I miss the 70s when programming was the default way to solve problems. Excessive abstractions and proprietary software often slow us down.
The bank is called Millennium Bank.
What's up with Portuguese and Poland? (Millennium, Biedronka, what else?)
When you get deep enough into writing a custom tool, it starts doing things a generic tool would never accomplish (or would have to be bloated with features no one needs). Hard-coding values and constraints for personal use makes for elegantly simple interfaces.
For example, my agenda is a beautiful Boulder Dash-like grid of icons. The data is a set of arrays [1,2,1,0,0,1,1] and an object for special days. There are no settings; it has zero buttons to press.
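The data model described above might look something like this. Everything here (the icon glyphs, the meaning of the codes, the override date) is a made-up illustration of the idea, not the actual tool:

```python
from datetime import date, timedelta

ICONS = {0: ".", 1: "#", 2: "@"}          # activity code -> icon
WEEK_PATTERN = [1, 2, 1, 0, 0, 1, 1]      # Mon..Sun activity codes
SPECIAL = {date(2024, 12, 25): 0}         # overrides for special days

def agenda_row(start: date, days: int = 7) -> str:
    """Render one row of the agenda grid starting at `start`."""
    cells = []
    for i in range(days):
        d = start + timedelta(days=i)
        code = SPECIAL.get(d, WEEK_PATTERN[d.weekday()])
        cells.append(ICONS[code])
    return " ".join(cells)
```

Because the weekly pattern and special days are hard-coded data, there is nothing to configure: the grid just renders.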
I've made countless silly things other people could use if only they knew it existed.
https://go-here.nl/real-salary-calculator.html
https://title-spider.go-here.nl
https://salamisushi.go-here.nl
endless things I've made for personal use. I think something like 60% needs one or two lines of love to work in 2024.
My hours get billed, I've got my notes for the cycle, and payments to my LLC get automatically routed to the right financial targets based on stuff I integrated with mercury.com.
Doesn't do the taxes, but cuts a lot of the low value/high cut vendors out of my revenue cycle and that makes me happy.
I’m pretty sure nobody else uses it, but I use it a lot for DNA design work for my company https://github.com/koeng101/dnadesign
I am curious if you might explain how/why this stack diverges from the upstream one (bebop/poly)? I see for example dnadesign has a version two of the seqhash algorithm that looks rather interesting.
Basically, I developed a whole lot of bebop/poly, but I had some disagreements with the owner of the repo (Timothy Stiles) about direction in the later stages of development. For example, I wanted to standardize all the parsers to use a generic interface, so that they're all used in the same way, while he didn't really want to change anything. There were other features I wanted to add as well; you can see a full list in the changelog (which covers the changes since diverging from upstream).
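The "one generic interface for all parsers" idea can be sketched as follows. dnadesign itself is written in Go; this Python sketch with made-up names just shows the shape of the design, using a toy FASTA parser as one implementation:

```python
import io
from typing import Iterator, Protocol, TextIO

class SeqParser(Protocol):
    """One interface every file-format parser implements."""
    def parse(self, stream: TextIO) -> Iterator[tuple[str, str]]: ...

class FastaParser:
    def parse(self, stream: TextIO) -> Iterator[tuple[str, str]]:
        name, seq = None, []
        for line in stream:
            line = line.strip()
            if line.startswith(">"):
                if name is not None:
                    yield name, "".join(seq)
                name, seq = line[1:], []
            elif line:
                seq.append(line)
        if name is not None:
            yield name, "".join(seq)

def load(parser: SeqParser, text: str) -> list[tuple[str, str]]:
    # Every parser is consumed the same way, regardless of format.
    return list(parser.parse(io.StringIO(text)))
```

A GenBank or GFF parser would then plug into `load` identically, which is the whole point of standardizing the interface.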
Use it almost every day. It’s nice to keep up with channels I care about while escaping the algorithm.
Something I work on-and-off on. Could be interested in open-sourcing it fully if folks are interested in helping out. I'm a pretty junior engineer, so work is slow and I've never been satisfied enough with the code to publish yet.
My most used tool I have is a note taking web app that automatically saves markdown files to S3. The most important feature I added was a little button that would automatically make a note for the current day, and put it in the daily notes folder, all sorted and organized. I also have full text search which is very helpful for finding old notes.
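The "daily note" button described above boils down to a date-stamped S3 key plus a create-if-missing call. This is a hedged sketch, not the author's code; the bucket layout and key prefix are assumptions, and the `s3` argument is any boto3-style S3 client:

```python
from datetime import date

def daily_note_key(day: date, prefix: str = "notes/daily") -> str:
    # Zero-padded ISO dates sort chronologically in S3 listings,
    # so the daily folder stays organized for free.
    return f"{prefix}/{day.isoformat()}.md"

def ensure_daily_note(s3, bucket: str, day: date) -> str:
    """Create today's note if it doesn't exist yet; return its key."""
    key = daily_note_key(day)
    try:
        s3.head_object(Bucket=bucket, Key=key)
    except Exception:
        s3.put_object(Bucket=bucket, Key=key,
                      Body=f"# {day.isoformat()}\n".encode())
    return key
```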
I also use an RSS reader I made. Instead of just showing you the text of the page like a standard RSS reader, it proxies the web page so you can read the article directly from the website, but with the added benefit that since it's a proxy, I can control all the HTML, CSS, and Javascript on that page.
Another daily one I use is a random background music playlist for Spotify that is auto generated daily. Using the Spotify API you can find random music, then find random instrumental music from that random music. I use this to discover new songs to listen to while working.
Basically, making your own software is fun. Making production software is much less fun. I don't need to worry about a million things when I make my own software. Sure, I make $0, but if I spent months making, for example, my notes app production ready, and tried marketing it, I'm guessing I'd still be sitting at $0.
On-device copy of all the notes? Text index in another database? Just download them all from s3 when you search?
I have a very similar thing, mine uses a dropbox folder as a backend so I can easily browse on laptop and use whatever. I like the daily notes idea though!
For this, I made a Spotify playlist with 41 hours of instrumental soundtrack music that helps me focus. It's not random like yours, but with that many hours, there's enough variation if you put it on shuffle.
It's mostly epic and uplifting movie scores. Or suspenseful and building up to something... none of that 8-bit video game beep boop shrill stuff.
https://open.spotify.com/playlist/31buZEaVGW9f5Y4cEcKtbt?si=...
Which API endpoint do you use for this?
It is madness in script form which will never see the light of day, but I've never succeeded in rewriting it or making a nice Web interface for it 'cause what I've got Just Works for me. I interact with this script almost every day, making it the single most-used software I've ever written, either professionally or personally. It's great having a self-hosted streaming service. Run via Bash.
Mwuahahahaha.
Rough guess would be that I spent 50 hours actually working on the software.
There's a handful of raspberry pis involved. I wrote everything in elixir and used https://nerves-project.org. The dashboard is written with phoenix live view. One of the raspberry pis is the "brain" and basically runs the dashboard and controls devices. The devices are all in an elixir cluster. I also run timescale db for some basic history of metrics.
Once I start a grow I don't use it that much actively, but it passively runs all the time. I check in every few days or week to make sure nutrients are looking good.
I've grown strawberries, lettuce, jalapenos, and cayenne peppers.
I use standard 3/4” sprinkler valves from the big box stores, connected to a manifold via unions on each side. This enables me to swap them if needed, but these are ruggedized and will last a while. They do take 12 VAC, so you need a transformer and relays to turn them on, but they work very well.
The usual sprinkler valves at hardware stores need quite a bit of water pressure to change state which is probably what most people have a problem with, especially if they're trying to feed them with the kind of pumps they get at hydroponic stores.
[1] https://www.amazon.com/Motorized-Stainless-Electrical-U-S-So...
Appreciate you sharing; it helps to see others are thinking about it too.
Is there a reason you went with hydroponics vs aeroponics?
I've loved making my own crushed red pepper. And there's something fun about growing plants in the middle of a cold snowy winter in the basement.
I released the Rust library that downloads and reassembles media segments from a DASH stream (https://github.com/emarsden/dash-mpd-rs). Won't release the web scraping bits because they are against website terms and conditions, and because annoying countermeasures will be implemented if too many people use them.
Basically I enter the transactions and it shows a dashboard of my contribution rooms, how much is left, how much I have already contributed, etc.
Nothing fancy; it's just a Remix frontend with MantineUI backed by an SQLite db inside Dropbox. Took me about 6 hours, and I only made it after I botched some changes I made to the Google Sheet and broke a bunch of formulas.
I thought about making it into a public app, but it is so tuned to what I want that it is probably not really that valuable to others.
I use it every time I save. Used to be weekly but lately monthly.
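The dashboard math described above amounts to summing transactions per account against a limit. A minimal sketch, assuming made-up account names and limits (these are illustrative, not actual contribution figures):

```python
from dataclasses import dataclass

@dataclass
class Txn:
    account: str
    amount: float  # positive = contribution

LIMITS = {"TFSA": 7000.0, "RRSP": 31560.0}

def room_left(txns: list[Txn]) -> dict[str, dict[str, float]]:
    """Per account: limit, total contributed, and room remaining."""
    out = {acct: {"limit": lim, "contributed": 0.0, "left": lim}
           for acct, lim in LIMITS.items()}
    for t in txns:
        row = out[t.account]
        row["contributed"] += t.amount
        row["left"] = row["limit"] - row["contributed"]
    return out
```

Which is also why a botched spreadsheet formula hurts: the entire app is essentially this one aggregation plus a UI.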
It handles lights, fans, cameras, sensors, locks, doorbells, speakers and TVs, HVAC and more. It runs automations based on time, weather, presence of people, and other events. It tracks energy consumption, sends alerts, etc. There’s a strong focus on local control (as opposed to cloud).
My favorite thing about it is that the client and server run from the same codebase and share state over a WebSocket connection. I’m still working on it occasionally but mainly it just runs quietly and my people seem to like it. The whole thing is configurable (connectivity, behavior and interface) so theoretically it could be used by someone else, but for many reasons I doubt it ever will :)
I never released it because 1. it's perpetually 98% done and 2. I don't feel like offering technical support for it and dealing with people who don't like it or find bugs. I may just open source it, but then I get to be a maintainer which is an even more thankless job.
I've written a really crappy Notion API client (before the official API existed) to act as a CMS for a headless site.
I'm at a weird point in my life that I need to work on these little hacky projects because they bring me genuine joy. It's like lego to me.
It can also send follow-ups at set times on my behalf or handle some vague inbounds. It's not the best, but it's ok.
I spent a total of probably 20 hours on it and use it 5/7 days a week.
I built it in the process of teaching a CLI tool, and I use it extensively to move files between computers, especially to servers.
Among other things it uses ChatGPT to extract structured data from PDF invoices for reporting expenses.
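The extraction step usually comes down to asking the model for strict JSON and validating the reply before it touches the expense report. A hedged sketch of that validation layer; the prompt wording and field names are assumptions, not the author's actual schema, and the model call itself is left out:

```python
import json

FIELDS = {"vendor", "date", "total", "currency"}

def build_prompt(invoice_text: str) -> str:
    """Prompt the model to return only the fields we expect, as JSON."""
    return ("Extract the following fields from this invoice as a JSON "
            f"object with keys {sorted(FIELDS)}:\n\n{invoice_text}")

def parse_reply(reply: str) -> dict:
    """Validate the model's reply before trusting it for reporting."""
    data = json.loads(reply)
    missing = FIELDS - data.keys()
    if missing:
        raise ValueError(f"model reply missing fields: {sorted(missing)}")
    return data
```

Validating up front means a hallucinated or truncated reply fails loudly instead of silently corrupting the expense report.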
I went from writing my own tricked out Web 2.0 blog, writing my own 6502 NES games, writing myriad utilities and scripts to… remodeling my home, learning how to operate and maintain a pool, work on vehicles, garden and homestead.
I think in hindsight my hobby isn’t programming, it’s learning and I just enjoyed learning programming and kicked out into a good career.
I'd say it's a rite of passage to write software for yourself:
1) It gives new programmers some practice.
2) It can help you understand software development better.
3) It reinforces the concept of dogfooding what you create (even though in this case others won't get to use your software), again making you a better developer. You'd be surprised how many people write programs that they barely use or test, never really knowing how useful the result is to others.
A blue full-sized bar on the left side of the screen? The CPU is under 100% load. A purple bar on the right side of the screen that reaches 3/4 of the way up? Guess my RAM usage is getting close to the limit; better not get too eager with the tabs, maybe close a program.
It actually wasn't that bad, but shortly after I just bought some more RAM. That was also before I had my own Gogs or Gitea instance, so I don't think I have the source anymore, it wasn't too hard to do on Windows though (nowadays I'd probably just put Linux Mint on the system or something, it needs a bit less memory in general). Oh, also now I have like 4 monitors for my main PC and don't do too much computing on the go. The 8 GB MacBook that I have for when I'm out and about feels rather slim, though.
Aside from that, there's the CMS I use for my website, though I'm not showing it to anyone because it's so bad that it makes me want to climb into a hole and disappear off the face of the planet. It still works though, so I don't necessarily see myself changing it out for something else just yet.
My bad habit is not properly archiving these little programs, so I invariably end up recreating them from scratch each time.
With this visualization I was able to determine the best way to package this on the vehicle with the minimal amount of deflection to avoid bump-steer and death-wobble.
I suppose it would be useful to other people building radius-arm/link-suspensions that incorporate a track-bar but I haven't got around to hosting it any where.
A TODO list program; the list is periodically printed on my desktop with Conky.
Software to automatically book reservations at good restaurants on Resy. Supported proxies, multi account, automatically running every day and getting reservations in a certain time range, and even something with a USB GSM modem to respond to the confirmation texts you get a day before. Used it until I got banned from Resy :\
Family photo search using CLIP (actually uform) and face labels from Synology NAS. So you can search “winter +christopher” and it will only show pictures of me and sort by the most winter related ones. You can also filter out certain names, search multiple names, click on an image to get images most like it, filter by year, or any combination thereof. Took a couple days to make with Flask, pgvector, and some code to scrape data from the Synology web interface. My family uses it sometimes too though.
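The query combination described above (similarity ranking plus required/excluded names) can be sketched with toy vectors. The real tool uses CLIP/uform embeddings in pgvector; here cosine similarity is computed by hand and "photos" are plain dicts, purely for illustration:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def search(query_vec, photos, require=(), exclude=()):
    """Rank photos by similarity, keeping only those whose face labels
    include every required name and none of the excluded ones."""
    hits = [p for p in photos
            if set(require) <= set(p["faces"])
            and not set(exclude) & set(p["faces"])]
    return sorted(hits, key=lambda p: cosine(query_vec, p["vec"]),
                  reverse=True)
```

In pgvector the same thing would be a `WHERE` clause on labels plus an `ORDER BY` on vector distance, but the logic is identical.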
- My blog mailing list (Sendy) was having a bot signup problem. I wanted to deploy a captcha, but the mailing list software didn't support it. So, I put a simple proxy that validates the captcha before inserting the record to the software.
- Wanted a way to apply an email template to the Sendy mailing list, so I wrote a software that consumes my RSS feed, applies a template, and creates a draft in my mailing list software.
- Wrote a "Scoreboard" that aggregates MRR across Stripe accounts into a graph, and sends auto-emails to friends when I cross thresholds.
- Wrote a script that emails me when the HN "whoishiring" job goes live every month. (Simple crons like Zapier don't work)
There are a few other things in there, too.
A big chunk of these are proof of concept programs, test code that is good for trying an idea but won't be complete enough for a test suite in the end result.
Others are random data fixes when you're migrating file formats or end up with a bad CSV or something.
I have a toy program that just tells me what a keystroke looks like via raw mode.
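A minimal sketch of that kind of toy on a POSIX terminal: put stdin in raw mode, read one keystroke (possibly a multi-byte escape sequence), and print its bytes. This is an illustrative version, not the author's program, and the formatting helper is split out so it works without a TTY:

```python
import sys
import termios
import tty

def describe(data: bytes) -> str:
    """Render raw key bytes readably, e.g. an arrow-key escape sequence."""
    return " ".join(f"0x{b:02x}" for b in data) + "  " + repr(data)

def read_key() -> bytes:
    """Read one keystroke from stdin in raw mode (no buffering, no echo)."""
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    try:
        tty.setraw(fd)
        return sys.stdin.buffer.read1(16)  # one keystroke, up to 16 bytes
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)
```

Pressing the up arrow, for instance, yields the three-byte sequence ESC `[` `A` rather than a single character, which is exactly the sort of thing the tool exists to reveal.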
I've considered putting in a little bit more elbow grease and turning it into a packaged product, but I'm not sure if there's demand for it out there.