i'm pretty tired of engagement bait and all the political nonsense on my x/twitter feed.
i was curious if i could use an llm to filter out this type of content, so i prototyped a quick chrome extension.
it uses Llama 3.3 to analyze tweets through https://groq.com/ (because they are super, super fast).
the extension is available in the chrome web store, and there's a link to the repo.
- you can tweak the system prompt for the filtering
- but you need your own API key from Groq (you can get one for free)
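for anyone curious what the Groq call might look like: here's a minimal python sketch (the extension itself is JS; the model id, prompt wording, and BAIT/OK protocol below are my assumptions for illustration, not the extension's actual code):

```python
import json
import os
import urllib.request

GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"

# hypothetical system prompt -- the extension's real prompt lives in its repo
SYSTEM_PROMPT = (
    "You are a feed filter. Reply with exactly 'BAIT' if the tweet is "
    "engagement bait or political rage bait, otherwise reply 'OK'."
)

def build_payload(tweet_text: str, model: str = "llama-3.3-70b-versatile") -> dict:
    """Build an OpenAI-compatible chat-completions request body."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": tweet_text},
        ],
        "temperature": 0,  # deterministic-ish verdicts
    }

def is_bait(tweet_text: str) -> bool:
    """Send one tweet to Groq and interpret the one-word verdict."""
    req = urllib.request.Request(
        GROQ_URL,
        data=json.dumps(build_payload(tweet_text)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['GROQ_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"].strip() == "BAIT"
```

the one-call-per-tweet design is why a fast provider matters so much here.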
He's a complicated figure. I'm not a fan of all of his ideas, nor do I consider him some kind of right-wing free speech savior. He's like anyone else in that there's some hit and miss. I do find it disturbing how quickly the predominantly left-leaning tech sphere turned on him. This coincided with politically driven media talking points.
A few short years ago he was celebrated for his association with green energy. He was even celebrated by the globalists at the WEF. For this and other reasons, I suspect his promotion to be something of a false alternative. Idolizing him seems a bit naive. The demonizations I regularly read on this site are often poorly reasoned or Godwinian hyperbole.
In short, it is the philosophy espoused by the WEF, The Club of Rome and some UN institutions.
For all of Twitter's faults, I never have to interact with anyone making these toxic insinuations on that platform.
Closing down the API, charging for the checkmark, not displaying tweets to those not logged in, artificially boosting the owner's own tweets, etc.
[1] https://www.washingtonpost.com/politics/2023/07/24/twitter-r...
there's another player i found while making this: https://inference.cerebras.ai/
haven't tried their api yet, though
I suspect experiences using social media apps differ wildly from person to person.
X is the same thing, but with the lies amped up 10x and fractured into discontextualized atoms of outrage to keep you scrolling.
Grandparents sharing nonsense on Facebook also think themselves highly informed — after all they’ve spent the past ten years in retirement “doing their own research”.
Absolute nonsense. Believing X is a simple distribution or sample of reality is bonkers.
I had an amazing experience with Facebook shorts! What happened was I...
I had an amazing... (hits the close button)
Consider an analogy with a knife. If someone quits using knives because they once cut themselves, and someone else says, "wait, there's at least one useful application for these things" and demonstrates it, that could be enough for the first person to discover how useful knives can be when used right.
The exact same goes for any tool that can cause both harm and benefit, like social media apps.
I was just making fun of those desperate engagement-baiting Facebook short videos of people just about to get kicked by a camel or whatever, looping right before whatever they're baiting you with actually happens.
Mine discovered that I go wild for horse videos. The fewer humans on screen, the better.
To me the big blow was seeing the response to what’s happening in Gaza and what narratives people and algorithms in combination end up promoting. The thoughtful, balanced, humanistic view gets approximately zero traction, while completely untruthful propaganda (on both sides) has enormous reach.
Maybe it’s an insurmountable problem. Human defense mechanisms in combination with algorithms will always push people to tribalism and cheering for atrocities. I hope not, but seems like it.
I guess you could take the OP's approach of filtering out all the propaganda and keep contributing. But then you are effectively working for a propaganda machine free of charge, helping create value that will draw others in to be subjected to the propaganda.
If you need an example from the other side, take this popular post claiming the German foreign minister said Israel has a right to target civilians [2], when in fact she said Israel has a right to target civilian infrastructure if it is being used for military purposes.
Community notes do not appear because the algorithms require people who typically disagree to agree, which I doubt will ever happen in a military conflict.
Except I'm not sure there's anything that can be done about it in the US because of first amendment rights.
Also see: people using them to write artificial garbage and then readers using them to summarise said garbage.
Also, Usenet and Dove/Fidonet (you can get them over NNTP news://cvs.synchro.net:119) have more polite discussions on politics.
it feels like a huge chunk of (recent/real-time) tech/design discussion happens there. but that could be just me. i'm still warming up to bluesky/threads.
But yes, good idea.
obviously it's never going to be perfect though
but then your best bet is still blocking/muting people
I think it’s inevitable that we’ll start to see more sophisticated ways of organizing our social media feeds.
I don’t think it has to be this binary decision where we either abandon social media altogether or expose ourselves to the most emotionally draining content possible. There’s likely many different unexplored metas as it were.
I often joke that we should have a marketplace of algorithms we can subscribe to, where the sentiment “slider bar” can go from Hello Kitty Island Adventure positivity to 4Chan LiveLeak nihilism, if you so choose.
the problem imo is that there is absolutely zero control on most platforms.
on x there is a button called "i'm not interested in this tweet" but obviously it doesn't do anything meaningful
It's also what makes you valuable as a free consumer.
If you want lumber, don't buy a house and strip it
1: https://help.x.com/en/managing-your-account/how-to-deactivat...
I logged in after many years of inactivity only to realize that despite only having a few friends, my feed was exclusively Musk and Tesla stuff. No amount of "don't show me this" helped. On one hand I found it exceptionally pathetic that any person would need to stroke their own ego to this degree at the expense of everyone else. But worse, it's clear that Twitter/X/Musk control my feed more than I do so the only way to win the game is to not play it.
Getting rid of "For you" entirely and removing Retweets from the chronological timeline (default configuration for [1]) gets you 95% of the way there - you'll only get engagement bait and political nonsense if the people you choose to follow are the ones actively doing it, in which case…
I have lists for Software, Hardware, Politics, VCs, Economy, Art, and some other weird stuff.
You can ignore 99% of the spam, bait, etc. and focus only on the signal.
yes, it's the same idea, though for this one you need your own API key, which can be a bummer for some.
re publishing: just do it! this project is also a rough mess, only a quick experiment, so i doubt anybody will care if it sometimes fails, isn't perfect, the code is a mess, etc.
but i feel you, the following tab is definitely a healthier/more controllable space
I still do it that way; better to do a (very) slow roll and build up a network gradually than to follow the accounts Twitter's algorithm thinks I should follow (IMHO).
Another way to reduce the spam is to remove bot followers from your account. I believe these bots use "likes" (which are hidden) to boost content for the botnet owners. Most of these bots are easy to spot: 0 posts and a very bad follower/following ratio. I built this Chrome extension to automate bot removal: https://chromewebstore.google.com/detail/x-bot-remover/aohkh...
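A rough sketch of that heuristic in Python (the thresholds here are my own guesses, not necessarily what the extension uses):

```python
def looks_like_bot(posts: int, followers: int, following: int) -> bool:
    """Crude bot check based on the two signals above: zero posts and a
    very lopsided follower/following ratio. Threshold values are guesses."""
    if posts == 0 and following >= 100:
        # follows many accounts, followed back by almost nobody
        return followers / following < 0.01
    return False
```

Any real implementation would also want rate limiting and a manual review step before removing anyone, since the heuristic will have false positives.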
It would be way better if this was just done by X themselves, but why not try.
re bot removal: thanks, will give it a try!
my feed is pretty boring there :D
There are of course a ton of custom feeds already out there.
1. People who want to debate / react to things / politics / just get angry apparently.
2. People who are like a town crier from times past, they want to be the first to know some event has happened and then go to their group chats & other social media to post about it.
i think there's still a large enough group that's there to share work/projects
With Twitter, I can see content from people I follow and from content referenced or highlighted by people I follow. People in this case can also include news organizations or other professional journalistic content.
In addition, there's the algorithmically generated garbage feed, mostly useful for seeing (potentially) interesting content I missed on the main feed, but also full of content from people I don't follow and that people I do follow didn't highlight or reference. Presumably this project attempts to reduce the garbage-ness of that feed, which I would appreciate.
I don't know if it's still the case, but there used to be RSS feeds for Twitter accounts too.
You can comment wherever people listen to you.
https://github.com/danielpetho/unbaited/blob/main/extension/...
Seems like a reasonable idea. I have doubts about an LLM being able to reliably detect when a tweet was made to *deliberately* trigger emotions. That's setting the bar quite high for an LLM working out intent.
the default system prompt is quite weak and could be much more specific/better.
also, even a "human" sometimes can't decide what counts as bait...
i mean, llms wouldn't even be necessary (this is just a possible direction), i just want more control over my own feed
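e.g. a stricter prompt could look something like this (hypothetical wording, not the actual default):

```python
# hypothetical stricter system prompt -- not the extension's real default
STRICT_PROMPT = """Classify the tweet below.
Label it BAIT only if it clearly does one of the following:
- manufactures outrage over a political topic
- withholds the payoff to force a click ("you won't believe...")
- asks a trivial question purely to farm replies
When in doubt, label it OK. Answer with the single word BAIT or OK."""
```

spelling out concrete bait patterns and a "when in doubt, pass" rule tends to cut down on false positives compared to a vague "filter bait" instruction.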
Highlighting each such piece and teaching us what about it is bait, etc. would make people stronger and more aware.