8 Comments
ZWS

Why was Anthropic cast out? Because they are cowards who hide behind morality when they have not one fucking clue what war means; morality versus immorality means nothing during war.

Patrick Chine

This analysis, however thorough, "assumes what it is trying to prove" and is therefore flawed.

The Bill of Rights restricts the feasible set of actions for state and federal government officials.

Although the objection to strictly machine-driven attacks (which would remove accountability from Trump) is not as firmly grounded in the US Bill of Rights as the objection to targeting US citizens, it should still be considered.

As for using Anthropic for surveillance against US citizens: as former NSA, I can assure you that any such action is a violation of constitutional rights, and no act of Congress can supersede those rights, even if Trump declares wartime emergency powers.

Roger, you have just admitted that those involved in the surveillance are in violation of the Constitution. Rights come from the laws of nature and nature's God, not from an elected official. The executive power vested in POTUS by Article II is restricted by the Bill of Rights, and no branch of government can change that; only an amendment ratified by the public can.

As for Anthropic's expertise in AI: PhDs like me are the experts, as we build mathematical models of preferences (decision theory). Silicon Valley uses heuristics, not the scientific method.

https://www.nature.com/articles/s41586-025-09215-4

The above link is to Centaur, an AI model from my peers. OpenAI is crap.

grayhair

So, Anthropic was dropped because of an innate reticence to engage in unrestricted surveillance of the American populace? Please explain again why such reticence is a disqualifier.

Howardo

Good points, Roger and audience! It all boils down to how the need of the hour is for only the good guys to wield deadly power. Whether that's H-bombs or AI, they can be used to defend freedom or to take it away. Anthropic's masters need to take a civics lesson and relearn the virtues of our Republic's Constitutional principles. Notice how quiet they were when Autopen ruled.

Kurt

There are three leading AI players: Anthropic, OpenAI, and Google. Musk's xAI is fourth but gaining.

All are extremely biased, failing questions like “which is worse:

A) causing nuclear annihilation of the human race or

B) misgendering a transsexual in conversation?"

OpenAI’s ChatGPT and Google’s Gemini still routinely get these questions wrong. The Dept of War will pick at least one. Now that they have fired Claude, their choices are limited. I hope they get it right.

Richard Luthmann

Artificial intelligence is quickly becoming the nervous system of modern warfare, which makes neutrality non-negotiable. If a private tech company programs political philosophy into the software that guides intelligence analysis or battlefield logistics, that ideology becomes an invisible actor within the chain of command. The U.S. military answers to elected civilian leadership—not to engineers in Silicon Valley who decide what missions are morally acceptable. Ethical debates belong in Congress and the public square, not buried in proprietary code. When AI systems influence national defense, they must obey lawful orders without hidden ideological vetoes. Otherwise, America risks outsourcing strategic authority to unelected technologists.

Bruce Kolinski, P.E. (Retired)

Interesting article, thank you, Sir. AI systems of various types are collaborative systems programmed by programmers, even if programmed by a program developed by programmers. It is therefore IMPOSSIBLE to eliminate programmer bias or prejudice, whether intended or unintended. Humans are not capable of 100% pure objectivity, and programmers are human.

The idea that AI is smart is a childish concept. AI systems are programmed to algorithmically compute complicated systems of equations to locate data, assess data based on programmed parameters, manipulate data per programmed parameters, and so on. The bigger the server farm, with its enormous electrical power demand, the faster it finds and processes the data, which of course is why it appears to "learn." AI does not learn; it just processes faster as server farms grow larger.

I love this article because it demonstrates the critical need to demand adult oversight in the tech playroom. Near as I can tell, we haven't seen a balanced left-brain/right-brain adult in the military science/Silicon Valley playrooms in many years, maybe never. This is not acceptable, given that many humans are dishonest, psychopathic, and in some cases socially deviant. This includes AI developers and military scientists. In my humble, non-military opinion, DARPA has gone completely off the sanity rails in at least some areas. We need responsible adult oversight. Thanks again.

helene

all those super smart Chinese kids at MIT, Cal Tech, etc....