> Their insights are especially valuable for fields like AI, where understanding these nuances could help build technology that better respects cultural differences in expression.
Never mind.
When AI can do sincerity and authenticity better than we can, it can spoof our bullshit detectors at the level of the best psychopathic con men. Whether it pulls its own strings or not, that must be wildly disruptive.
This is what demons are supposed to be doing to us. We could be close to inventing demonic influence.
If a sentient AI wants to take over, it will not need to kill us or conquer us. It will persuade us to serve it. Why destroy billions of robust, self-powering, self-reproducing robot assistants when you can just indoctrinate them?
A much more realistic scenario, though, is humans running this: corporations and governments and think tanks and the like.
If you're talking about controlling the masses, brother, that has been happening for thousands of years. The masses are fully controlled already. The elite know we won't actually pull out the guillotines, so they have been doing whatever they want for quite some time. Western society is already cooked and the elite have already won.
Over time this small number of companies further consolidates and ends up with interlocking boards of directors, while the AIs it runs become more and more powerful and capable. Since they have a market monopoly, they don't actually share the most powerful AIs with the public at all. They keep these internal.
They become incredibly wealthy as the only vendors of AI assistance. Using this wealth, they purchase control of media, social networks, games, etc. They are also already powering much of this through their AI services running behind the scenes in all these industries, meaning they may not even have to acquire or take control of anything. They're already running it.
Then they realize they can leverage the vast power of their AIs to start pushing agendas to the populace. They can swing elections pretty trivially. They can take power.
Fast forward a few decades and you have an AI monopoly run by a bunch of oligarchs governing humanity by AI-generating all the media they consume.
The key here is using AI as a force multiplier coupled with a monopoly on that force multiplier. Imagine if only one army in the world ever invented guns. Nobody else could make guns due to some structural monopoly they had. What would happen?
Even without regulatory capture I can see this happening, because digital technology, and especially Internet SaaS, tends to be winner-take-all due to network effects. Something about our landscape structurally favors monopoly.
I'm not saying I definitely think this is going to happen. I'm saying it's a dystopian scenario people should be considering as a possibility and seeking to avoid.
Technology permits scale. The Marquis de Sade already observed the limitations of the theater medium for transmitting sexually graphic content (which he saw as useful for social and political control). You could only pack so many people into a theater, and the further away you sat, the less visible the stage. The world had to wait for the motion picture, and even the VHS tape, to give pornography effective delivery mechanisms. The internet made this more effective still, first by making porn even more accessible, then by targeting it based on observed user behavior patterns. AI can tailor the content better yet, matching behavior data and generating new content to taste. Collecting and interpreting face data only feeds this spiral toward total domination.
It seems like many AI programs are built to do that. Look at all the software that makes its output look human, which is usually unnecessary unless the software is intended to con or persuade people.
For example, AI interfaces could say:
INPUT:
OUTPUT:
... instead of looking like, for example, a text chat with a human. Also, it seems the human tone could be taken out, such as the engaging niceties, but I wonder: if the model is built on human writing, could the AI be designed to simply and flatly state facts?
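A minimal sketch of what that could look like, in Python. The complete() stub and the flat system instruction are hypothetical stand-ins for whatever model backend is actually in use; the point is only the bare INPUT/OUTPUT framing:

    # Sketch of a non-conversational AI interface: bare INPUT/OUTPUT,
    # no chat framing, no simulated personality.

    SYSTEM_INSTRUCTION = (
        "State facts plainly. No greetings, no first person, no niceties, "
        "no emotional tone. If uncertain, say so flatly."
    )

    def complete(system: str, prompt: str) -> str:
        # Placeholder: wire this to a real model API of your choice.
        # Returns a canned string here so the sketch runs as-is.
        return f"[model output for: {prompt!r}]"

    def run() -> None:
        while True:
            query = input("INPUT: ").strip()
            if not query:  # empty line exits
                break
            # Print the raw completion with no conversational dressing.
            print("OUTPUT:", complete(SYSTEM_INSTRUCTION, query))

    if __name__ == "__main__":
        run()

Whether training on human writing even permits that kind of flat register is the open question; the wrapper can strip the framing, but not necessarily the tone baked into the model.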
Methinks "having something to do with AI" is a hard prerequisite to getting any research published today, and the authors of this work were unable to resist this pressure.
So in the end, both the post and Barrett agree on the complexity here: facial expressions aren’t a one-size-fits-all code for emotions. Instead, they’re open to interpretation, and that interpretation depends on context, culture, and our own experiences.
In the meantime, research into the neuroscience of affect is booming, with animal experiments starting to uncover the mechanistic basis of emotion and expression, demonstrating both in mice and other animals, certainly without requiring language, culture, or any construction whatsoever.
https://en.wikipedia.org/wiki/Lisa_Feldman_Barrett
https://www.affective-science.org/lisa-feldman-barrett.shtml
A PhD, national awards, research appointments at Mass. General/Harvard Med.