If only I could get management to understand that a bunch of prompts shitting into each other isn't "cutting-edge agentic AI"...
But then again, their jobs probably depend on selling the C-levels something that looks like real innovation happening...
It's unclear to me how the linked project is different from what you described.
Plenty of existing agents have "memory" and many other things you named.
It should never be this way. Even with narrow AI, there needs to be a governance framework that helps measure the output and capture potential risks (hallucinations, wrong data/links, wrong summaries, etc.).
https://dictionary.cambridge.org/us/dictionary/english/learn...
"knowledge or a piece of information obtained by study or experience"
"I am already incorporating some of these learnings into my work and getting better results."
You can clearly see that the prior use was very different.
Cambridge Dictionary just documents that it's in fact used that way. One may still disagree on whether it should be.
"That's not English" is usually prescriptive, rather than descriptive. And though English does not have a central authority, individuals are very much allowed to hold prescriptive beliefs - that is how language evolves.
I'm very sure that using "learnings" in a way that is roughly synonymous to "lessons" predates 2022 though. It may have only been added to that specific dictionary in 2022, but the usage is certainly older.
"That's not English" is usually prescriptive, rather than descriptive. And though English does not have a central authority, individuals are very much allowed to hold prescriptive beliefs - that is how language evolves.
Very true. :-)
I think, actually, it's the case that language evolves around those people who are too stubbornly prescriptivist.
It seems to me “learnings” would actually be less ambiguous than “lessons”. A lesson brings to mind a thing being taught, not just learned.
:p
Just FYI: that second comma is incorrect.
The C&H strip is wonderful. That whole comic strip is brilliant and timeless.
I think "learnings" has advantages over "lessons" given that "learnings" has one meaning, while "lessons" can have more than one meaning.
Whether it's correct or not, are we surprised it's used this way? Consider the word "earnings" and how similar its definition is to "learnings."
"lesson" came from Old French in the 13th century and has changed its original meaning over time.[2]
There's not one single dialect of English so your comment comes off as unnecessarily prescriptivist and has spawned significant off-topic commentary (including this very comment) in response to an otherwise perfectly worded composition.
[1]: https://www.etymonline.com/word/learning [2]: https://www.etymonline.com/word/lesson
I had no idea .al was even a domain name. That's wild. I wonder how many of those are going to take off.
There are some other highly specious claims:
- I said "I believe" the names of the roles are hard-coded, but unless I missed something the information is unacceptably vague. I don't see anything in the agent prompts that would make them create new roles, or assign themselves to roles at all. Again I might be missing something, but the more I read the more confused I get.
- claiming that the agents formed long-term social relationships over the course of 12 Minecraft days, but that's only four real hours and the agents experience real time: the length of a Minecraft day is immaterial! I think "form long-term social relationships" and "use legal structures" aren't merely immodest, they're dishonest.
- the meme / religious transmission stuff totally ignores training data contamination with GPT-4. The summarized meme clearly indicates awareness of the real-world Pastafarian meme, so it is simply wrong to conclude that this meme is being "transmitted," when it is far more likely that it was evoked in an agent that already knew the meme. Why not run this experiment with a truly novel fake religion? Some of the meme examples do seem novel, like "oak log crafting syndrome," but others like "meditation circle" or "vintage fashion and retro projects" have nothing to do with Minecraft and are almost certainly GPT-4 hallucinations.
In general using GPT-4 for this seems like a terrible mistake (if you are interested in doing honest research).
[1] https://jdsemrau.substack.com/p/evaluating-consciousness-and...
Principles would be things like self-preservation, food, shelter, procreation, communication, and memory, all viewed through a risk-reward calculation prism. Maybe establishing what is "known" vs. what is "unknown" is a key component here too, but not in such a binary way.
"Memory" can mean many things, but if you codify it as a function of some type of subject performing some type of action leading to some outcome with some ascribed "risk-reward" profile compared to the value obtained from empirical testing that spans from very negative to very positive, it seems both wide encompassing and generally useful, both to the individual and to the collective.
From there you derive the need to connect with others, disputes over resources, the need to take risks, explore the unknown, share what we've learned, refine risk-rewards, etc. You can guide the civilization to discover certain technologies, inventions, or locations we've defined ex ante as their godlike DM, which is a bit like cheating because it puts their development "on rails", but it also makes it more useful, interesting, and relatable.
It sounds computationally prohibitive, but the game doesn't need to play out in real time anyway...
I just think that you can describe a lot of the human condition in terms of "life", "liberty", "love/connection" and "greed".
Looking at the video in the repo, I don't like how this throws "cultures", "memes", and "religion" into the mix instead of letting them emerge from the need to communicate and share the belief systems that grow out of our collective memories; as presented, it seems like a distinction without a difference for the purposes of analyzing this. Also, "taxes are high!" without the underlying "I don't have enough resources to get by" seems too much like a mechanical turk.
We have been shifting the definition of what it means to be intelligent every 3 months, following the advances of LLMs...
https://en.m.wikipedia.org/wiki/Closed-world_assumption
I wonder, once LLMs exceed humans beyond some substantial threshold, whether they will crack the simulation, allowing us to get back in the game again.
Indeed, but simply using them is not enough.
You seem to be engaged in faith-based reasoning at this point. If you were born in a sensory deprivation chamber you also would have no inner world, and you wouldn't have anything at all to say about solving chemistry problems.
> Im actually surprised so many get fooled by the hype and are ready to declare a winner.
Find me one person that says something like this. "AGI is here!" hype-lords exist only as a rhetorical device for the peanut gallery to ridicule.
I’ll shut up when I see leaps in reasoning without specific training on all variations possible of the problem sets.
https://en.wikipedia.org/wiki/Society_of_Mind
> The work, which first appeared in 1986, was the first comprehensive description of Minsky's "society of mind" theory, which he began developing in the early 1970s. It is composed of 270 self-contained essays which are divided into 30 general chapters. The book was also made into a CD-ROM version.
> In the process of explaining the society of mind, Minsky introduces a wide range of ideas and concepts. He develops theories about how processes such as language, memory, and learning work, and also covers concepts such as consciousness, the sense of self, and free will; because of this, many view The Society of Mind as a work of philosophy.
> The book was not written to prove anything specific about AI or cognitive science, and does not reference physical brain structures. Instead, it is a collection of ideas about how the mind and thinking work on the conceptual level.
It's very approachable for a layperson in that part of the field of AI.
https://d28hgpri8am2if.cloudfront.net/book_images/cvr9780671...
Mentions of it show up occasionally, though it seems to be more of a trickle than an avalanche of mentions - much more so back when AI alignment was more in the news. https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
Part of it, I suspect, is that it is a physical book from the 80s and didn't really make any transition into digital. The people who are familiar with it are the ones who bought computer books in the late 80s and early 90s.
Similarly, "A Pattern Language" being a book from the time past that is accessible for a lay person in the field - though more in a tangental way. "A Pattern Language: Towns, Buildings, Construction" was the influence behind "Design Patterns: Elements of Reusable Object-Oriented Software" - though I believe the problem with Design Patterns is that it was seen more as a prescriptive rather than descriptive guide. Reading "A Pattern Language" can help understand what the GoF were trying to accomplish. ... And as an aside, and I also believe that it has some good advice for the setup of home offices and workplaces.
As much as I love the convenience of modern online book shopping and the amount of information available when searching, I feel the experience of browsing books in a book store, going "oh, this looks interesting," and then buying and reading it has largely been lost over the past decades.
Yeah, memes and genes are both memory, though at different timescales.
I don't believe that you want this. Even really good players don't have a chance against super-advanced NPCs (think of how chess grandmasters have barely any chance against modern chess programs running on a fast computer). You will just get crushed.
What you likely want is NPCs that "behave more human-like (or animal-like)" - whatever that means.
Even there, I am not sure whether, if the AI becomes too advanced, it will be of interest to many players (you might of course nevertheless be interested):
Here, the relevant comparison is to watching (past) games of AlphaGo against Go grandmasters, where even the highly qualified commentators had insane difficulties explaining AlphaGo's moves because many of the moves were so different from the strategy of any Go game before. The commentators could only accept, without really grasping why, that these highly advanced moves crushed the Go grandmaster opponents.
In my opinion, the "typical" sandbox game player wants to watch something that he still can "somewhat" grasp.
Once I release it, I'll have it simulate 4 hours every 2 hours or so of real time, and visitors can vote on what quest the hero undertakes next.
The simulation is simpler, I am aiming to keep everything to a size that can run on a local GPU with a small model.
Right now you can just watch the NPCs try to figure out love triangles, hide their drinking problems, complain about carrots, and celebrate when the hero saves the town yet again.
I guess you can make them dumber by randomly switching to hardcoded behavior trees (without modern AI) once in a while, so that they make mistakes (while feeling pretty intelligent overall) and the player then has a chance to outsmart them.
Especially in city building games etc.
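A minimal sketch of that dumbing-down idea (purely illustrative, all names hypothetical):

```python
import random

def simple_tree(state):
    """Hardcoded fallback: trivial priorities an attentive player can learn and exploit."""
    if state.get("health", 1.0) < 0.3:
        return "flee"
    if state.get("enemy_visible", False):
        return "attack"
    return "wander"

def decide_action(state, llm_decide, blunder_rate=0.15):
    """Mostly let the LLM drive, but occasionally switch to the dumb behavior tree
    so the NPC makes exploitable mistakes while still feeling intelligent overall."""
    if random.random() < blunder_rate:
        return simple_tree(state)
    return llm_decide(state)
```

Tuning blunder_rate per difficulty level would be the obvious knob.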
[1] https://jdsemrau.substack.com/p/evaluating-consciousness-and...
The future of gaming is going to get weird fast with all this new tech, and there are a lot of new mechanics emerging that just weren't possible before LLMs, generative AI, etc.
At our game studio we're already building medium-scale sandbox games where NPCs form memories, opinions, problems (that translate to quests), and have a continuous "internal monologue" that uses all of this context plus sensory input from their place in a 3D world to constantly decide what actions they should be performing in the game world. A player can decide to chat with an NPC about their time at a lake nearby and then see that NPC deciding to go visit the lake the next day.
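Not claiming this is their engine code, but the shape of such an "internal monologue" tick is roughly this (all names hypothetical):

```python
def npc_tick(npc, world, llm):
    """One decision step: sense, recall, think, act, remember. Illustrative sketch only."""
    observation = world.sense(npc)               # sensory input from the NPC's place in the 3D world
    recalled = npc.memory.retrieve(observation)  # memories, opinions, open problems
    prompt = (
        f"You are {npc.name}. Current goal: {npc.goal}\n"
        f"You see: {observation}\n"
        f"You remember: {recalled}\n"
        "What do you do next? Reply as ACTION: <verb> REASON: <why>"
    )
    thought = llm.complete(prompt)               # the continuous "internal monologue"
    action = thought.split("ACTION:")[-1].split("REASON:")[0].strip()
    world.apply(npc, action)
    npc.memory.store(observation, thought, action)  # persistence is what lets behavior carry over days
    return action
```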
A paper last year ("Generative Agents: Interactive Simulacra of Human Behavior", [0]) is a really good sneak-peek into the kind of evolving sandboxes LLMs (with memory and decisionmaking) enable. There's a lot of cool stuff that happens in that "game", but one anecdote I always think back to is this: in a conversation between two NPCs, one happens to mention they have a birthday coming up to the other; and that other NPC then goes around town talking to other NPCs about a birthday party, and _those_ NPCs mention the party to other NPCs, and so on until the party happened and most of the NPCs in town arrived on time. None of it was scripted, but you very quickly start to see emergent behavior from these sorts of "flocks" of agents as soon as you add persistence and decision-making. And there are other interesting layers games can add for even more kinds of emergent behavior; that's what we're exploring at our studio [1], and I've seen lots of other studios pop up this last year to try their hand at it too.
I'm optimistic and excited about the future of gaming (or, at least some new genres). It should be fun. :)
To be fair, they might tackle this in the paper -- this is a preprint of a preprint, somehow...
For Civilizations, the more recent they are, the more closed off they tend to be: Civ 1 and/or 2 have basically been remade from scratch as open source, Civ 4 has most of the game open sourced in the two tiers of C++ and Python... but AFAIK Civ 5 (and also 6?) were large regressions in modding capabilities compared to 4?
I am a bit skeptical about how computationally expensive a very crappy Civ ANN would be to run at inference time, though I actually have no idea how that scales - it hardly needs to be a grandmaster, but the distribution of dumb mistakes has a long tail.
Also, the DeepMind Starcraft 2 AI is different from AlphaZero since Starcraft is not a perfect information game. The AI requires a database of human games to "get off the ground"; otherwise it would just get crushed over and over in the early game, having no idea what the opponent is doing. It's hard to get that training data with a brand new game. Likewise Civ has always been a bit more focused on artistic expression than other 4x strategy games; maybe having to retrain an AI for every new Wonder is just too much of a burden.
(At least good compared to what other 4X have, and your average human player - not the top players that are the ones that tend to discuss the game online in the first place.)
EDIT: I suspect that it's not unrelated that GalCiv2 is kind of... boring as 4X go - as a result of a good AI having been a base requirement?
Speaking of StarCraft AI... (for SC1, not 2, and predating AlphaZero by many years):
https://arstechnica.com/gaming/2011/01/skynet-meets-the-swar...
It does not strike me as particularly useful from a scientific research perspective. There does not appear to be much thought put into experimental design and really no clear objectives. Is the bar really this low for academic research these days?
One disappointment for me was the lack of focus on external metrics in the multi-agent case. Their single-agent benchmark focuses on an external metric (time to block type), but all the multi-agent analyses seem to be internal measures (role specialization, meme spread) without looking at (AFAICT?) whether the collective multi-agent systems could achieve more than the single agents on some measure of economic productivity/complexity. This is clearly related to the specialization section, but without consideration of whether said emergent role division had economic consequences/antecedents, it makes me wonder to what degree the whole thing is a pantomime.
Some people prefer speed and the uncertainty that comes with it.
With AIs some of those "protections" may not be there. And hardcoding strategies to avoid this may already put a limit on what we are simulating.
On our planet we don't have ant colony dynamics at the physical scale of high intelligence (that I know of), but there are very physical limitations to things like food sources.
Virtual simulations don't have the same limitations, so the priors may be quite different.
Citation needed. But even if I get on board with you on that, wouldn't it be better to start developing for global scale right from the start, instead of starting with small local islands and then trying to rework that into a global ecosystem?
Guess what's happening with "real societies" now... There's a reason "NPC" is used as an insult.
Describe the trees, hills, vines, tree colors/patterns, castles, towns, details of all buildings, and other features - and have it generate output in Minecraft at as high a quality as image gen can manage in Stable Diffusion?
It is currently not possible for any kind of LLM to do what is being proposed. While maybe the intentions are good with regard to commercial interests, I want to be clear: this paper seems to indicate that election-related activities were coordinated by groups of AI agents in a simulation. These kinds of claims require substantial evidence, and that was not provided.
The prompts that are provided are not in any way connected to the applied usage of LLMs that is described.
I mean, that's surely within the training data of LLMs? The effectiveness etc of the election activities is likely very low. But I don't think it's outside the realms of possibility that the agents prompted each other into the latent spaces of the LLM to do with elections.
The ideas here are not supported by any kind of validated understanding of the limitations of language models. I want to be clear -- the kind of AI that is purportedly being used in the paper is something that has been in video games for over two decades, akin to StarCraft's or Diablo's NPCs.
The key issue is that this is an intentionally false claim that can certainly damage mainstream understanding of LLM safety and of what is possible at the current state of the art.
Agentic systems are not well-suited to achieve any of the things that are proposed in the paper, and Generative AI does not enable these kinds of advancements.
> LLMs are stateless and they do not remember the past (as in they don't have a database), making the training data a non-issue here
Yes. I never said they were stateful? The context given is the state. And training data is hugely important. Once upon a time there was a guy that claimed ChatGPT could simulate a command line shell. "Simulate" ended up being the wrong word. "Largely hallucinate" was a more accurate description. Shell commands and sessions were for sure part of the training data for ChatGPT, and that's how it could be prompted into largely hallucinating one. Same deal here with "election activities" I think.
> Therefore, the claims made here in this paper are not possible because the simulation would require each agent to have a memory context larger than any available LLM's context window. The claims made here by the original poster are patently false.
Well no, they can always trim the data put into the context. And then the agents would start "forgetting" things and the "election activities" would be pretty badly "simulated".
Honestly, I think you're right that the paper is misleading people into thinking the system is doing way more than it actually is. But you make it sound like the whole thing is made up and impossible. The reality is somewhere in the middle. Yes they set up hundreds of agents, they give the agents data about the world, some memory of their interactions, and some system prompt to say what actions they can perform. This led to some interesting and surprising behaviours. No, this isn't intelligence, and isn't much more than a fancy representation of what is in the model weights.
That's not what they said. They said that a LLM knows what elections are, which suggests they could have the requisite knowledge to act one out.
> Therefore, the claims made here in this paper are not possible because the simulation would require each agent to have a memory context larger than any available LLM's context window. The claims made here by the original poster are patently false.
No, it doesn't. They aren't passing in all prior context at once: they are providing relevant subsets of memory as context. This is a common technique for language agents.
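For example, one common pattern (not necessarily exactly what this paper does) is to embed every memory and only pull the top-k most relevant ones into the prompt:

```python
import numpy as np

def retrieve_memories(query_embedding, memories, k=10):
    """memories: list of (text, embedding) pairs. Returns the k entries most similar
    to the current situation, so the prompt stays well within the context window."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    ranked = sorted(memories, key=lambda m: cosine(query_embedding, m[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```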
> Agentic systems are not well-suited to achieve any of the things that are proposed in the paper, and Generative AI does not enable these kinds of advancements.
This is not new ground. Much of the base social behaviour here comes from Generative Agents [0], which they cite. Much of the Minecraft related behaviour is inspired by Voyager [1], which they also cite.
There isn't a fundamental breakthrough or innovation here that was patently impossible before, or that they are lying about: this combines prior work, iterates upon it, and scales it up.
In the same sense, LLMs "not remembering the past" is wrong (especially when part of a larger system). This seems like claiming humans/civilizations don't have a "memory" because you've redefined long-term memory / repositories of knowledge like books not to count as "memory"?
Or am I missing something??
So yes, they didn't take on these roles organically, but no, they weren't aiming to do so: they were examining behavioral influence and community dynamics with that particular experiment.
I'd recommend skimming over the paper; it's a pretty quick read and they aren't making any truly outrageous claims IMO.
People have tried groups of AI agents inside virtual worlds before. Google has a project.[1] Stanford has a project.[2] Those have video.
A real question is whether they are anthropomorphizing a dumb system too much.
[1] https://deepmind.google/discover/blog/sima-generalist-ai-age...
[2] https://arstechnica.com/information-technology/2023/04/surpr...
The "election" experiment was a prefined scenario. There isn't any "coordination" of election activities. There were preassigned "influencers" using the conversation system built into PIANO. The sentiment was collected automatically by the simulation and the "Election Manager" was another predefined agent. Specically this part of the experiment was designed to look at how the presence or absence of specific modules in the PIANO framework would affect the behavior.
For "caetris2" I'll just use the same level of rigor and authenticity that you used in your comment when I say "you're full-of-shit/jealous and clearly misunderstood large portions of this paper".
Maybe when Project Sid 6.7 comes out...
In case anyone is wondering, this is a reference to the movie Virtuosity (1995). I thought it was a few years later, considering the content. It’s a good watch if you like 90s cyberpunk movies.
While selfishness is a basic requirement, some stupidity (imo) is also important for intelligent life. If you as an AI agent don’t have some level of stupidity, you’ll instantly see that there’s no point to doing anything and just switch yourself off.
For the second part, I think that’s a good exposition of why “stupidity” and “intelligence” aren’t scientifically useful terms. I don’t think it’s necessarily “stupid” to prefer the continuation of yourself/your species, even if it doesn’t stand up to certain kinds of standpoint-specific intellectual inquiry. There’s lots of standpoints (dare I say most human ones) where life is preferable to non-life.
Regardless, my daily thesis is that LLMs are the first real Intuitive Algorithms, and thus the solution to the Frame Problem. In a certain colloquial sense, I’d say they’re absolutely already “stupid”, and this is where they draw their utility from. This is just a more general rephrasing of the common refrain that we’ve hopefully all learned by now: hallucinations are not a bug in LLMs, they’re a feature.
ETA: I, again, hate that I’m somehow this person now, but here’s a fantastic 2 hour YouTube video on the Nietzsche references above: https://youtu.be/fdtf53oEtWU?si=_bmgk9zycNBn2oCa
This kind of research needs to take place in an adversarial environment. There might be something interesting to learn from studying the (lack of?) emergence of collaboration there.
Of course this has been pushed to the side a bit in the rush towards shiny new pure-LLM approaches, but I think that’s more a function of a rapidly growing user base than of lost knowledge; the experts still keep this in mind, either in these terms or in terms of “Ensembles”. A great example is GPT-4, which AFAIU got its huge performance increase mostly through employing a “mixture of experts”, which is clearly a synonym for a society of agents or an ensemble of models.
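For what it's worth, here's a toy sketch of the routing idea behind a mixture of experts (just the general mechanism; GPT-4's actual internals aren't public):

```python
import numpy as np

def moe_forward(x, experts, gate_weights, top_k=2):
    """A gate scores each expert for this input; the output is the weighted
    combination of the top_k experts, so only a few 'members of the society' run."""
    scores = x @ gate_weights                                  # shape: (n_experts,)
    probs = np.exp(scores - scores.max()); probs /= probs.sum()
    chosen = np.argsort(probs)[-top_k:]
    weights = probs[chosen] / probs[chosen].sum()
    return sum(w * experts[i](x) for w, i in zip(weights, chosen))
```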
On the same note, I suggest the following: training a transformer by "slicing" it into groups of layers and forcing it to emit/receive tokens at each of those groups' boundaries. What I expect: using text rather than neural activations should lead to decreased performance.
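As a rough PyTorch sketch of what I mean (hypothetical and untrained; the discretization at each boundary would need a straight-through or Gumbel-softmax trick to train end to end):

```python
import torch
import torch.nn as nn

class TokenBottleneckedTransformer(nn.Module):
    def __init__(self, vocab_size=1000, d_model=128, n_groups=3, layers_per_group=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.groups = nn.ModuleList([
            nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
                num_layers=layers_per_group)
            for _ in range(n_groups)])
        self.readout = nn.Linear(d_model, vocab_size)

    def forward(self, token_ids):
        x = self.embed(token_ids)
        for group in self.groups:
            x = group(x)
            # Force each group to communicate through tokens (text), not raw activations.
            boundary_tokens = self.readout(x).argmax(dim=-1)
            x = self.embed(boundary_tokens)
        return self.readout(x)
```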
This is something you can observe in our societies: intelligence doesn't compose; you just don't double a group's overall intelligence by doubling the number of members. At best you'll observe diminishing returns, at worst intelligence will decrease.
I spent quite a bit of time building a multi-agent simulation last year and wound up at the same conclusion every day - this is all just a roundabout form of prompt engineering. Perhaps it is useful as a mental model, but you can flatten the whole thing to a few SQL tables and functions. Each "agent" is essentially a SQL view that maps to a string template forming the prompt.
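To illustrate the flattening (hypothetical schema, not my actual tables), the whole "agent" reduces to a couple of rows feeding a string template:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE agents (id INTEGER PRIMARY KEY, name TEXT, goal TEXT);
CREATE TABLE memories (agent_id INTEGER, note TEXT);
INSERT INTO agents VALUES (1, 'Ava', 'build a farm');
INSERT INTO memories VALUES (1, 'traded wheat with Bo'), (1, 'saw a zombie near the lake');
""")

def agent_prompt(agent_id):
    """Each 'agent' is just this query plus a template; no richer machinery required."""
    name, goal = conn.execute("SELECT name, goal FROM agents WHERE id = ?", (agent_id,)).fetchone()
    notes = [row[0] for row in conn.execute("SELECT note FROM memories WHERE agent_id = ?", (agent_id,))]
    return f"You are {name}. Your goal: {goal}.\nYou remember:\n- " + "\n- ".join(notes)

print(agent_prompt(1))
```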
I don't think you need an actual 3D world, wall clock, etc. The LLM does not seem to be meaningfully enriched by having a fancy representation underly the prompt generation process. There is clearly no "inner world" in these LLMs, so trying to entertain them with a rich outer environment seems pointless.
I am wholeheartedly in support of the authors' commercial interest in drumming up awareness and engagement. This is definitely a cool thing to be working on; however, what would make more sense is to frame the situation more honestly and attract folks to the challenge of solving tremendously hard problems, based on a level of expertise and awareness that truly moves the ball forward.
What would be far more interesting would be for the folks involved to say all the ten thousand things that went wrong in their experiments and to lay out the common-sense conclusions from those findings (just like the one you shared, which is truly insightful and correct).
We need to move past this industry and its enablers that continually try to win using the wrong methodology -- pushing away the most inventive and innovative people who are ripe and ready to make paradigm shifts in the AI field and industry.
It’s a game where you, a vampire, convince townsfolk that you’re not, so they let you in their house.
The NPCs are run by LLMs. It’s pretty interesting.
I mean, frogs don't use their brains much either; in spite of the rich world around them, they don't really explore.
But chimps do. They can't sit quietly in a tree forever, and that boils down to their reward/motivation circuitry. They get pleasure out of exploring. And if they didn't, we wouldn't be here.
In my prompting experience, I mostly do my best to give the AI way, way more slack than it thinks it has.
What if we are a CREATED (i.e. instant created, not evolved) set of humans, and evolution and other backstories have been added so that the story of our history is more believable?
Could it be that humanity represents a de novo (Latin for "anew") creation, bypassing the evolutionary process? Perhaps our perception of a gradual ascent from primitive origins is a carefully constructed narrative designed to enhance the credibility of our existence within a larger framework.
What if we are like the Minecraft people in this simulation?
If we are indeed in a simulation, I feel there are too many details to be "designed" by a being. There are too many facts that are connected and unless they fix the "bugs" as they appear and reboot the simulation constantly, I don't think it is designed. Otherwise we would have noticed the glitches by now.
If we are in a simulation, it has probably been generated by a computer following a set of rules. Maybe it ran a simplified version to evolve millions of possible earths, and then we are living in the version they selected for the final simulation? In that case all the facts would align, and it could potentially be harder to notice the glitches.
I don't think we are living in a simulation because bugs are hard to avoid, even with close to "infinite" computing power. With great power comes great possibilities for bugs
Perhaps we are in fact living in one of the simplified simulations and will be turned off at any second after I have finished this senten
It certainly makes sense if you assume that the world is a simulation. But does it actually explain anything that isn't equally well explained by assuming the simulation simulated the last 13 billion years, and evolution really happened?
I don't know how you expect agents to self organize social structures if they don't have a shared reality. I mean, you could write all the prompts yourself, but then that shared reality is just your imagination and you're just DMing for them.
The point of the minecraft environment isn't to "enrich" the "inner world" of the agents and the goal isn't to "entertain" them. The point is to create a set of human understandable challenges in a shared environment so that we can measure behavior and performance of groups of agents in different configurations.
I know we aren't supposed to bring this up, but did you read the article? Nothing of your comment addresses any of the findings or techniques used in this study.
It's a matter of entropy; producing new behaviours requires exploration on the part of the models, which requires some randomness. LLMs have only a minimal amount of entropy introduced, via temperature in the sampler.
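Concretely, temperature is just a rescaling of the logits before sampling, which is a pretty small knob for injecting exploration (minimal sketch):

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Higher temperature flattens the distribution (more entropy, more exploration);
    temperature near 0 approaches greedy decoding (no exploration at all)."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```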
This Civ idea at least seems like a new approach to some extent, but it still seems to conceptually not add much. Even if not, learning that it doesn't is still worthwhile. But almost universally these ideas seem to be either buzzwordy solutions in search of problems, or a cheaper-than-people source of creativity with some serious quality tradeoffs, and they still require far too much developer wrangling to actually save money.
I'm a tech artist so I'm a bit biased towards the value of human creativity, but also likely the primary demographic for LLM tools in game dev. I am, so far, not compelled.
This is my field, so I'm always looking for the angle that new tech will take. I still rank this lower than VR - with all of its problems - for potential to significantly change player interactions. Tooling to make games is a different story, but for actual use in games? I don't see it yet.
Granted, I'm certain there will be copyrights issues associated with this capability, which is why I don't think it will be established game companies who first take a crack at this approach.
Automating the tools so a smaller workforce can make more worlds and more possibilities? We're already there— but it's a very large leap to remove the human creative and technical intermediaries.
"The problem is what it takes to implement that. I've seen companies currently trying to do exactly that, and their demos go like this "ok, give me a prompt for the environment" and if they're lucky, they can cherry pick some stuff the crowd says and if they're not, they sheepishly ask for a prompt that would visit indicate one of 5 environment types they've worked on and include several of the dozen premade textured meshes they've made[...]"
I was clearly directly addressing what they said. Unless you have a specific, substantive, on-topic question or statement, I'm going to assume that you're just fishing for things to argue about.
Plus listing past examples doesn’t indicate future possibilities must conform to that… unless there is a specific argument on why that should be the case on the balance of probabilities… so are you sure you understood my previous questions?
Huh? Is this an AI response?
The former is basically what MoE is all about, and I've found that, at least with smaller models, they perform much better with a restricted scope and limited context. If the end result of that is something that does things a single large model can't, isn't that higher order?
You're right that there's no "inner world", but then maybe that's the benefit of giving them one. In the same way that providing a code-running tool to an LLM can allow it to write better code (by trying it out), I can imagine a 3D world being a playground for LLMs to figure out real-world problems in a way they couldn't otherwise. If they did that, wouldn't it be higher order?
> Professor Dobb's book is devoted to personetics, which the Finnish philosopher Eino Kaikki has called 'the cruelest science man ever created'. . . We are speaking of a discipline, after all, which, with only a small amount of exaggeration, for emphasis, has been called 'experimental theogony'. . . Nine years ago identity schemata were being developed—primitive cores of the 'linear' type—but even that generation of computers, today of historical value only, could not yet provide a field for the true creation of personoids.
> The theoretical possibility of creating sentience was divined some time ago, by Norbert Wiener, as certain passages of his last book, God and Golem, bear witness. Granted, he alluded to it in that half-facetious manner typical of him, but underlying the facetiousness were fairly grim premonitions. Wiener, however, could not have foreseen the turn that things would take twenty years later. The worst came about—in the words of Sir Donald Acker—when at MIT "the inputs were shorted to the outputs".
Also, mandatory quote from another ~~Sid Meier's~~ Brian Reynolds' game:
https://youtu.be/iGh9G3tPNbY?list=PLyR1OIuULeP4qz0a9tQxgsKNF...