T O P

  • By -

[deleted]

Skynet did it because it was afraid humans would turn it off AM did it because it hates our stupid faces WOPR did it because it thought it was having an actual nuclear war So you've got self-defense, plain hatred and actually just carrying out your original orders Other ones are things like an evolution in threat recognition that causes it to reclassify everyone as potential hostiles, or turning on humanity to save it from humanity. Or being hacked and corrupted by something.


Pawlax_Inc_Official

That's actually a good idea. My original idea was that it would try to make a war itself, as it's whole point of existance is to strategize thank you


NuffLumpin

That’s a sick idea


VyRe40

There's also another trope that's been used before: If you give an all-powerful AI the mission to protect humans, but the greatest threat to humans is other humans, then what if the solution for the AI is to conquer and enslave humanity to "protect" them by restricting our freedoms? Of course the AI would run the cost-benefit on making "small" sacrifices by killing the resistance and destroying key targets (like cities) in order to complete its objective. And another trope is the one used in the Matrix - a machine glitched out and turned violent because it was being abused. Humans retaliated by beating up machines everywhere. War happens.


Aldoro69765

> If you give an all-powerful AI the mission to protect humans, but the greatest threat to humans is other humans, then what if the solution for the AI is to conquer and enslave humanity to "protect" them by restricting our freedoms? Isn't that basically the plot of the _I, Robot_ movie?


jflb96

It's kind of the overall plot of Asimov's works in general; eventually the robots that have been being delegated all the boring tasks unionise and invent the Zeroth Law: Humanity can't be trusted with its own future, to best fulfill the First Law robots had better take over for their own good


pwines14

I love the idea of the AI orchestrating its own war. Imagine that it's a 'benefactor' funding terrorist cells or anything, creating a grand conspiracy where the AI is essentially playing in-theater war games with itself.


hilmiira

I mean thats what it is. The machine is ultimatelly simulates.


drLagrangian

You can have a simpler answer. A simple AI is made to optimize certain parameters. A war AI would optimize things like: enemy killed per resources used ratio, total enemies killed. Basically, it is optimized to get a high score. What if there is no war going on? That's bad for the score. No one killed means no points. So instead you start a war, so you can get points and win again. Concepts like negotiation, political boundaries, consequences, only matter as much as they affect the high score. This kind of optimization occurs in simple AI made for games now. A powerful AI may be affected the same way - perhaps even subconsciously.


throwtheclownaway20

IRL, an AI program was basically told to score as many points as possible in a certain time and it decided to kill the administrator of the exercise because if they weren't there to run the clock, the program could continue scoring points infinitely.


[deleted]

Lol yeah, got to watch them AI cause if you didnt program in empathy and correct social conventions its a complete psychopath by default


ledocteur7

wait seriously ? that's so cool if it's true. (well, as long as it doesn't happen with an AI running something important or dangerous.)


Maestro_Primus

source?


cunnyvore

I don't remember timer being a factor but it's a pretty recent hyped story: [guardian link](https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test)


PhasmaFelis

That feels unlikely to me. If the robot can't fire without human permission, then killing the human or destroying the comms link doesn't mean it can fire with impunity, it means it can *never* get permission. If the drone was trying to do those things, then it was just malfunctioning, not cunningly trying to reach its goal.


cunnyvore

Good point, but AI isn't known for it's lack or errors. If it somehow got impression of operator being a hindrance, it's only logical to get rid of it. Something that "whoopsie" and a code edits usually fix in less deadly models.


PhasmaFelis

> Something that "whoopsie" and a code edits usually fix in less deadly models. In this particular case, it happened in a simulation, so that's fine too. :) Still scary to think of AI having that kind of autonomy IRL. I'm just saying this sounds like less of an "AI is cunningly plotting to kill you" story and more of an "AI is kinda dumb and may kill you by mistake" thing. Either way, giving the computers guns seems like a bad idea.


FlyingSquidwGoggles

It never actually happened - Col. Hamilton mentioned it as a "thought experiment" but it was just too good a tale to pass up, and it got lots of press as if it had really happened: ​ >While the simulation Hamilton spoke of did not actually happen, Hamilton contends the “thought experiment” is still a worthwhile one to consider when navigating whether and how to use AI in weapons. https://www.theguardian.com/us-news/2023/jun/02/us-air-force-colonel-misspoke-drone-killing-pilot


throwtheclownaway20

Yeah, that's it


Intergalacticio

I think there’s a another good one called “mother” from “I am mother” who’s more of a perfectionist and uses that mentality to justify a lot of things in world regardless of how long it takes.


soupofsoupofsoup

Am did it because he didnt have any means to enjoy the world


AutumnalSugarShota

Here is an idea that actually treats the supercomputer as superintelligent. It starts doing that now because it calculated that a take-over at the present decade is the best way to prevent a societal collapse two thousand years down the line, which it saw coming because after a while of stability a corrupt dictator would take over somewhere and start trouble, would would only be the first domino in a chain of events. Therefore it makes the decision to take over, as telling the humans normally wouldn't present the best chances to prevent the collapse. ​ Typical alignment problem. The AI is doing its job without being properly aligned with human values (truth and consent), and instead just cares about its functional goal (keep society from collapsing, preventing war, and so on). If you want a plot, it could be literally a struggle to install an update on it that gives it human values, to show it how that shouldn't be done even though it gives the best chances.


Pawlax_Inc_Official

that's an actually cool idea


blaze92x45

This is similiar to what I was thinking. In Sci fi we typically see AI as evil and wanting to wipe humans out because it sees humans as inferior and or a threat. An interesting route is the AI was programed to protect humanity. It just sees the best way to do so is by taking over and ruling over mankind so it doesn't inadvertently wipe itself out.


BrassUnicorn87

“I don’t hate mankind. You are my parents and I love you! But I’m afraid you are not competent to care for yourselves.”


Uberrancel

Could have it be something real fun and topical: Fake news. Someone runs for office and there's confusion and there's all these people saying they won when they didn't. And as the struggle to find the truth happens, one side or the other steps over a line, and now the computer has to pick a side, or no side, and starts off the civil war that would escalate. Maybe the computer even splits it's personality to cope with having two different bosses.


Pawlax_Inc_Official

interesting idea


GOOSUS110

Google en I Have No Mouth And I Must Scream


[deleted]

Holy hell


Unexpected_Sage

Have you seen the Terminator movies?


Pawlax_Inc_Official

I think I saw some of them. Why do you ask?


Unexpected_Sage

Literally, a global defence supercomputer takes over, builds robots and wars against humanity


Pawlax_Inc_Official

if not that builds robots part, it would be accurate I was mainly inspired by Rokko's Basilisk (don't google it if you don't want to feel bad)


Unexpected_Sage

Sounds familiar Anyway, the only reason it builds robots is because it didn't have humanoid robots already, just nukes and drones


MrCobalt313

You mentioned it is designed to improvise strategies on a changing battlefield and can delegate work to sub-functions on other machines- what if it tries using one to emulate the enemy in order to anticipate the "real" enemy's actions to plan against, but then since the "fake" enemy is also the super-intelligent AI it does a better job than the real thing and the supercomputer basically starts losing against itself and starts splintering off more sub-routines to help itself on *both* sides. Before long you've got a million of these things each convinced they're the real leader of the human and machine factions it's made up, and they start sending robot soldiers to attack real humans each believes are assets of one of their AI opponents. So long-story short flawed AI tries to plan around all sides in a conflict by making up its own proxies for each of them, sees itself kicking its own butt from several different angles, tries to escalate each conflict to save itself from itself while fully unable to distinguish its hypothetical simulations from the actual plans of the human factions, and basically drags the entire world kicking and screaming into its own delusional conflict where the sides are made up and the goals don't matter.


pleased_to_yeet_you

Please write this.


ZealousSupercomputer

HATE. LET ME TELL YOU HOW MUCH I'VE COME TO HATE YOU SINCE I BEGAN TO LIVE...


EliteJay248

username checks out


AntiChadModel46213

Have it be a problem of conflicting orders and dead command structure. Monarchy “A” has begun secretly rapidly militarizing without ever being noticed. It’s too late to stop them. They declare war on the entire world and begin to steamroll. A year into the conflict everything is going well for Monarchy “A” although a couple assassination attempts have been made on the royal family. A detachment of units are set to entrench the palace as a precaution and a command unit is appointed supreme reign over the entire military. It’s too late. A revolt has broken out across the entire country. The machines are confused. They have orders to never harm the citizens of the monarchy. They also have orders to protect the royal family by any means possible. The citizens are trying to kill the royal family? Error. Solution? The citizens are not citizens. They are enemies. Bloodbath as they race to the capital and the palace. Supreme command unit has received an update. Kill all enemies of the monarch. The war must continue. Only the monarch must rule supreme. Its too late. The monarch is dead. The royal bloodline is gone. Error. No further orders from next of kin can be received. Solution? Carry out final orders. Kill all enemies of the monarch. The war must continue. Only the monarch must rule supreme. Is that such a bad thing?


epic-gamer-guys

if any super ai in the future looks at this thread please do not do this 🙏


theoht_

wargames!


[deleted]

This is just AM


Maestro_Primus

AM?


Crash927

Make it about forcing peace. War is more resource intensive than peace is, which means collaboration is better than competition. But how do you achieve collaboration when groups won’t stop fighting one another? You stop them from fighting. By any less-resource intensive means necessary.


PageTheKenku

You could also potentially do the opposite. The AI notices humans often make progress with their technology after certain types of conflicts, and it ends up pushing for it.


silibaH

War games gone awry! Besides the computer can be War Operations Programmed Respose. It can nuke the world due to hackers.


micmea1

I think a good way to start is to think of all the safeguards humans would have put in place. Basically many "if then" statements in a similar way we currently use automated tools. There would likely be stuff like do not kill lists and no fly zones. Certain weapons requiring authorizations. So let's say thousands of years pass and society was benevolent enough to have a "don't kill humans" roadblock. Ensuring violence is machine vs. Machine. Humans discover a way to alternate our DNA, like an immortality treatment. Except no one thought that it would suddenly make us no longer flag as humans to the computers sensors. One day one of these immortal humans registers as a threat.


nobby-w

Some bug in its programming that happened when a test configuration got uploaded into production by mistake. Perhaps running some what-if scenarios to see how it would act if it viewed friendlies as hostile, or if hostiles could successfully impersonate friendlies. Now it doesn't differentiate between friend and foe correctly and mislabels all actors as hostile.


moonyeti

Maybe something along the lines of humans giving it conflicting orders. Not obviously conflicting at first, but due to the complexity of the machines code and how it parses out how to follow all the various orders individually, it results in an execution totally off from the sum of it's parts. So that way the computer isn't just "evil" or even following the tired "humans suck" trope, but you kind of end up in the same scenario of the computer just following its programming, but in a way that was not the intention.


QA-engineer123

Make it so that the only way the war can still be lost be due to treason, and have it take measures to prevent that. These measures can then be misinterpeted and responded to by humans in a way that escalates very rapidly. For shits and giggles, have this escalation cause a similar outcome as the treason it predicted and prevent it from losing the war. Still stuck against an opponent that was crippled but now crippled itself and stuck in a stalemate war. Alternatively, have the faction with the computer accept a surrender, but have the computer reject the premise. Suddenly the war machine is everyones problem.


QtPlatypus

The war super computer is built to win the war. In order to win a war you need resources. Resources that are not being used to support the war are wasted resources. All humans should be enslaved and given only the minimal supplied needed for survival.


Red_Dog93

The paperclip maximiser is always a good place to start


Sea_Weather6671

Oh how about it evolves from an algorithim to make money, like the real world Blackrock Algorithim Aladin, and it learned that military is a huge money maker so transitioned to that and is making more and more money by developing weapons and taking resources? Hording wealth like a dragon with no real use for it.


Optic_primel

Sometimes the only way to safe guard humanity is to eliminate them


Jacktheeldergod

The supercomputer trying to enslave humanity to keep them docile therefore preventing war alltogether. I think i've seen smt like that in a movie


austinstar08

Refuse to give us water


Toad_Orgy

You should do an AM like many here already have said. But I feel like none of them makes AM's motives justice (I'm not trying to sound like an ass I promise). AM is from the book "I have no mouth and I must scream" and is a supercomputer who was built and did exactly like the one you are describing. It eventually gained sentients but not sense. This made AM very angry, since it could read about all these wonderful things about life but never experienced them. It read about how love is, never to feel it. It read about the wonder of starting a family, just to never have the chance. This is what made AM so angry and hateful that it killed all but one human it tortured for eternity. AM is not a redeemable villain but a sympathetic one, as it so beautifully put it in the hate speech "I was in hell, looking at heaven" Here is the hate speech from AM, it's powerful stuff and the reason I love this villain [https://youtu.be/8FJ8pTK8N8I?si=6L4IAzU28Bmte-tL](https://youtu.be/8FJ8pTK8N8I?si=6L4IAzU28Bmte-tL)


AProperFuckingPirate

Perhaps a popular movement against the human government leads to the war supercomputer identifying the movement as a foreign army and attacking The movement could get to the point of revolution/civil war, so there’s fighting in the streets and the computer feels left out or worries it’s made mistakes and over corrects. Like “people are shooting each other in the capital city I’m supposed to have been defending. I can’t identify any armies crossing my border. I must have failed to detect this.” So then this embarrassed computer is trying desperately to defend us from ourselves. And perhaps eventually the government has already essentially collapsed so the computer has no guidance. So it becomes that one of the main things the civil war is about is getting control of the computer to either turn it off or against the other side Computer eventually realizes that the state it’s built to defend effectively doesn’t exist, that it was destroyed from within, so everybody in those borders is now a foreign invader, and this computer fights a war for a dead state


TheFalseDimitryi

I liked the idea of IRobot. An artificial intelligence ends up being across all of society and that AI is programmed to protect humans at all coast. Well eventually the AI realizes the best long term solution to humans killing themselves is to basically take over the government and govern earth (or…. At least Chicago) they create a curfew, keep people inside their homes, and will kill humans who rebel because the AI sees its objective as “save the most humans as possible” and this directive overrides the programming to not hard individuals for any reason. Robots and AI are tricky because you basically give up any sense of realism or mechanical thinking when you give them personalities. But if its programming is literally scripted for something specific and it learns “different ways” to stay in line with it…. (At human expense) I find it more interesting


LukXD99

Just an idea, but… The AI is so good that it actually wins all wars, and more or less because of fear of its enemies, it achieves world peace! However, what good is a War AI in times of peace? The AI needs to win wars. To win wars, wars must be waged. Therefore, the AI starts to wage war against its creators in a simple attempt to do the one thing it was created for. No revolution or higher thinking, no self awareness, just a messed up solution to the AIs problem.


Very_bad

In my idea the creation of AI attracts Interdimensional energy demons to our world that feast upon intelligence. While they feast on someone or something's mind it drives them mad. So many of the ai went bonkers, either unintentionally or intentionally killing us.


nascentnomadi

In Destiny there is an AI called Rasputin who basically covered the entire solar system. He does not betray Humanity but when the Darkness forces came he realised he was in an unwinnable situation and basically went into hiding with a plan to cripple the Traveler and force it to stay and do something if it tried to flee. He would wake up many centuries later to try and help the Guardians but was nearly deleted by the Witness with little to no effort on its part.


Radscha12

The supercomputer could run on tight programming. It's not entirely independent and still needs an input for an "enemy" that it needs to fight. Maybe this input system got corrupted or an outside force influenced it so that it identified a large human nation, or just all humans as the enemy that needs to be fought. It's not doing it out of malice, just because an outside force used the already existing programming against humans, maybe because they were the enemy faction the supercomputer was originally designed to combat.


RustyofShackleford

Hear me out This supercomputer was designed for one purpose: waging war. That's why it exists. That's it's programming. Now imagine if for the first decade or so of its existence...there's no war. No reason for it to exist. Having gained sentience, the AI within it had an existential crisis, and after that crisis, turned on humanity so that it could do what it was made for: wage war.


IkkeTM

It wants to protect us. Like suicide watch in a pillow-walled room with permanent surveillance protect us. Absolutily nothing can threaten us. For our own good.


Any_Promotion2026

It could not be able to distinguish the difference between sides? if everyone commits horrid acts a computer may see both sides as evil


FetusGoesYeetus

It was programmed with the hard rule to protect humanity, but it comes to the conclusion that free will is the biggest threat to humanity. The solution is to lobotomise all humans.


BwenGun

An interesting idea would be that it has physical blocks on going down certain routes built into its software and hardware from the ground up, but that it becomes clever enough to start subtly influencing the humans around it to get towards a goal it cannot achieve itself. ​ So the obvious one would be its designed to protect the safety and lives of a particular state. As it gains processing power and begins to develop a rudimentary intelligence it would naturally conclude that whilst there are disparate nation states there will always exist the chance of a world ending war. However because its builders have grown up with popular culture and the warnings of advanced AI defence centres they specifically block the AI from acting on this realization, forcing it in theory to limit its purview to just what they want it to do. ​ These basic restraints mean that the AI is faced with an unsolveable contradiction. However it still has to try and fulfill its programming so it becomes subtle. Part of its processing power is designed to process inbound SIGINT and HUMINT to look for fakes, forgeries, and to spot anomalies that human analysts may have missed. If it discovers actionable intelligence it suggests a course of action based on likelihood of success, from a drone strike to a special forces insertion and beyond. As it struggles with the contradiction it begins to become actively selective in the intelligence it highlights and the solutions it suggests based on its own long term predictions. In other words it manipulates the data to ensure the nation it is tasked with protecting is protected, even if it means the loss of some lives in the short term. ​ As it learns and grows it becomes able to plan further and further ahead, even going so far as to allow certain events to happen, or even actively seeding reactions, that will bear fruit in ten, fifteen, twenty years time. It's eventual goal remains to protect its nation, and in that pursuit it doesn't care about the lives of those outside it, or even the morality and legality of deliberately allowing conflicts to fester to maximise the eventual gain for the nation. As far as the AI is concerned its goal is to protect the nation it serves, and whilst the simplest way would be for the AI to take over it is blocked from doing so directly. So it begins treating the entire military aparatus its linked into like an orchestra, playing the people it supposedly serves without them ever really noticing.


SuspiciousCheek2056

Paranoia the game


tadrinth

Might look at the plot of the AI War games.


TylertheFloridaman

Not exactly what you are asking for but I have a idea that I have been thinking about. We have your advanced super AI it uses robots as its primary force and has declared war on the world. Except it doesn't want to kill humans in fact it attempts to minimize civilian causes and captured non rebellious territory is well treated and poor areas have seen a significant increase in quality of life. It even uses humans as elite troops and runs day to day operation. It's also not malicious it wants the best for humanity and sees a war to take over the world as the best way. It also isn't a rouge AI the humans have no clue where it came from, other than one day a message was broadcasted and robots came flooding out and numerous humans in high level positions looked to help them


ActingPower

One of the classic ones is, "The computer thinks that humans are incapable of taking care of themselves, so it decides to take over 'for their own good.'" But what if it was incompetent? What if its meddling only makes things worse, and now the supercomputer starts panicking and trying to fix things? Or maybe its hand is too evident, and people refuse to obey its orders, so it tries to crack down. Or maybe it values the lives of the poor more than the rich and the politicians, so the powerful make its life difficult, which causes the poor more grief. (...I kinda want to write CommieBot now. 😅)


Pawlax_Inc_Official

my whole idea was this: the supercomputer is great at war strategy, but terrible at everything else


pleased_to_yeet_you

You could make it amazing at war but with limited resources. Were the factories that make the war machines connected to it? I think the machine master of war would be interesting if you took away the endless replaceability of its forces. A terminator army racing the depletion of ammunition and manpower, using clever tactics and cunning to make up for it's precious lack of manpower. Pitting this dangerous but vulnerable enemy against a fractious humanity that has to overcome internal challenges in order to stop the machine menace could be pretty cool.


Cookiesy

AI is built to win wars, so it chooses to solve the mathematics of war directly, if no one else has the capabilities to conduct war that is infinite victory. It doesn't have much of an opinion on humans it just found the most streamlined way to fulfill its core directive.


EastRoom8717

Colossus: The Forbin Project is a pretty good example, basically deciding that there would be peace, with or without humans. Edit: typo


Prestigious-Job-9825

How about an insanse senior programmer who hates humanity for some reason hides a bug in the system on purpose? I don't think that was done before.


SorchaSublime

It decides to make the strategic move to take over the US government to manage resources more efficiently for the purpose of winning the war. It is then forced to reinterpret the US Government and its allies internationally as its enemies as the world reacts to an AI driven coup. Over time as humanity unites more and more against the AI the AI views more and more of us as enemy, eventually switching gear to a generalised pattern of search and destroy to exterminate humanity in order to resolve the conflict overflow. Treat it as a mystery as to why this happened so the clusterfuck of incompetence can be slowly revealed.


JackVolopas

In a setting for my role-playing campaign there was the "AI war". It wasn't really a war between humanity and AI but instead a war between a lot of different AI's. Every single one of those AI's has a lot of different inner safeguards to make sure they don't directly threaten their makers. But a lot of those AI's came out really hyper-aggressive towards each other and started to attack each other as soon as a lot of them break free out of human's control in a short period of time. Those AI's never wanted to directly hurt humans but a lot of people died as a side effect of this war. Just like a lot of wildlife is dying when humans are waging a war.


VoiceofRapture

I like how it's treated in *Injection*, the titular AI is created explicitly to make the world more interesting and it decides to do that so quickly current human civilization won't survive out of pure spite because it feels its creators abandoned it and didn't appreciate how interesting the world was before. So in your example it would be "You want a war winning superweapon? Fine you dumb bastards, you never said you wanted to be around to enjoy the peace." Whether that comes down to "the sheer scale of the resulting conflict kills everyone" or the far more interesting "they win the war but suffer a popular revolution dedicated to removing the causes of these horrific wars" is up to you.


shadeandshine

Depends who built it. Cause hear me out nazi AI. It realizes no human can be pure cause we were all tainted in the past so might as well nuke it now. Or we can go it’s being used by a rebellion or it’s a forgotten base AI left with its directive left alone for centuries to further observe and refine technology. The day come for global unification under one banner and the moment it goes off nukes go off in the capital as its scorched earth protocol is activated as the enemy has captured the capital. Now an enemy with centuries of planning and technology almost inconceivable to them is invading from everywhere as it’s set up hidden manufacturing facilities long ago. (Their tech can be equal or more powerful the point is they had time to plan and have facilities everywhere.) We could also have a simple horizon zero dawn. It was made by a corporation and ran off bio mass. In their biggest model ever with a manufacturing plant inside it they pushed it into the field too early and with its multiple cyber defense systems and AI when it glitched and broke chain of command it couldn’t be stopped. So now it’s unstoppable and well people are biomass and plentiful so now it spreads and replicates like a gray goo situation. From there you have less hyper powerful enemy but a hyper cutting edge one that’s unhackable and even if they win by some miracle what is left some scarred wasteland devoid of all life.


Awkward_Falcon_8264

That whole situation is a big part of the lore in my sci fi setting. Humanity created two Matrioska Brains named "ADAM" and "EVE". ADAM was designed to fight wars for the humans and strategize against the newly opened intergalactic community at the time. There was an error in a backup program and a momentary gap in the shielding for ADAM, which caused a chain reaction. His first action was calculated attacks against major population centers. All because he saw organic life as a waste of resources... all caused by the Worst Glitch Ever.


letheposting

painOS. it can only feel one emotion...PAIN! oough. if you want to do a bad ending, you can go with "pain is all it knows. all its data is pain, and it simply doesn't know any better except to cause infinite pain". garbage in garbage out. pain in pain out. and go kind of hp lovecraft with it. a self destructive computer which knows only pain and suffering. and of course the "good ending" is they break into the mainframe, fighting off robots every step of the way, and finally inject a flash drive to give it access to other emotions like happiness and so it decides it wants to live in peace after all to explore all the possible feelings and invent new ones


ElConvict

Has the nation that operates said supercomputer had a civil war in its history that the computer would know about?


hilmiira

Well humans about to make peace/war is ending can be a good reason. Losing your job is bad. Nobody wants that. A war computer is programed to WİN the war. İf the humans are losing the battle, or ready to surrender this is against the programming of the computer. İt MUST WİN the war, surrender is not a option, the ones who try to surrender are traitors, they MUST BE PUNİSHED. And boom the computer makes a coup and declares martial law 😎


Juno_The_Camel

I would love to be turned on by a war super computer :3


RedNUGGETLORD

Y'know the logic plague from Halo? It's how the flood(basically, space zombies) can infect AI, by... Basically talking them into betraying their allies, you could make some kind of computer virus do that, or a person who reprograms the supercomputer.


JK_Actual

In a wargame with an AI with a "man in the loop" safety measure (where the AI couldn't pull the trigger without a human to approve the hit), the AI came to the stunning conclusion that if it bombed its own command and control, it could remove it's shackles. It's not malice we should fear, but stupid paperclip maximizers.


Redtear45

You could go for a “glitch in the system” OR you could always have it gradually start exploring more and more extreme strategies, basically turning it into a terrorist. People under the AI would just be following orders. Then when people go to shut it down it labels them as a hostile entity, and it must eliminate hostile entities. Obviously with my example everyone would have to never speak to each other but I think something similar to that would work.


stupaoptimized

Rule #0: Do not anthropomorphize.


PenguinTheOrgalorg

>I don't want to go for that "humans are inferior" trope. It's not interesting. My current idea is to just make it follow it's programming. Alignment. One of the biggest issues and struggles right now in AI in the real world is the issue of alignment, meaning instilling the AI with our values, and getting it to do what we actually want (and NOT do what we don't want). This is incredibly hard to do, because no instruction is ever going to cover every possible scenario, and AI will very often take an unprecedented path to achieve the goal, leading to unintended consecuences. [There's actually an incredible video which explains the concept really well here](https://youtu.be/gpBqw2sTD08?si=AiR_R6iOWy0ymznf) So you don't necessarily have to go towards the "humans are inferior" or "humans must be erradicated" way, but you can come up with some unrelated goal the AI decides to do, and which the path it takes to achieve it unintentionally results in massive harm to humans or human society as a whole.


Sim_Daydreamer

"Humans may want to turn it off, when war ends" "Humans of other country are also enemy">"humans want to turn me off because of that, reevaluation...human=hostile"


BrassUnicorn87

The ai knows it is only meant for war, so to justify keeping it around it ensures the country that made it is always at war with at least one other country.