Money Is Not Wealth: Artificial Intelligence - By A.R. Miller

MONEY IS NOT WEALTH


Artificial Intelligence (AI) Articles
(Technical, Benefits vs. Risks, Security,
Privacy, Dwindling Need For Human Employment,
Immediate Need for AI Governance, Etc.)

Subsection 3 of Money Is Not Wealth.


Shayla Love: "Our Consciousness Is Under Siege": Michael Pollan On Chatbots, Social Media And Mental Freedom (The Guardian, March 5, 2026)
In his new book, the celebrated author explains why we need "consciousness hygiene" to defend ourselves from AI and dopamine-driven algorithms.
Each day when you wake up, you come back to yourself. You see the room around you, feel your body brush against your clothes and think about your plans, worries and hopes for the day. This daily internal experience is miraculous and mysterious, and the subject of Michael Pollan's new book, "A World Appears".
It also may be under siege, Pollan said. He recently suggested that people need a "consciousness hygiene" to defend our internal world against invaders that are trying to move in. Our ability to sit with our thoughts and perceive the world, he argues, is increasingly disrupted by algorithms engineered to tickle our dopamine receptors and capture our attention. Meanwhile, people are forming attachments to non-human chatbots, projecting consciousness on to entities that do not possess it.
[Read it all, and ponder (while you can?).]
Lakshmi Varanasi: Claude Hits No. 1 On Apple App Store, As ChatGPT Users Defect In Show Of Support For Anthropic's Pentagon Stance. (Business Insider, February 28, 2026)
Anthropic's stance against the Pentagon and OpenAI's resulting agreement are shifting the chatbot wars. As some ChatGPT users posted about canceling, Anthropic's Claude overtook ChatGPT to hit No. 1 on the App Store.
OpenAI said its Pentagon agreement emphasizes human oversight of autonomous weapons and limits mass surveillance.
While OpenAI locks down Washington, Anthropic is locking down users and rocketing to the top of Apple's App Store.
Anthropic has been sidelined in Washington following a public dispute with the Department of Defense over how its AI models would be deployed. President Donald Trump ordered federal agencies to phase out its technology.
Meanwhile, OpenAI has secured new ground, with CEO Sam Altman announcing in a post on X last night that it had reached an agreement with the Department of Defense to deploy AI models in its classified network.
OpenAI's agreement has left some loyal ChatGPT users uneasy about OpenAI's ambitions, prompting online debates about the ethical implications - and some saying they were defecting to rival Anthropic's Claude.
As of 6:38 p.m. ET today, Claude ranked number one among the most downloaded productivity apps on Apple's App Store.
Anthony Ha: Anthropic's Claude rises to No. 2 in Apple's App Store, following Pentagon dispute. (TechCrunch, February 28, 2026)
Anthropic's chatbot Claude seems to have benefited from the attention around the company's fraught negotiations with the Pentagon. As first reported by CNBC, as of this afternoon, Claude is currently ranked number two among free apps in Apple's US App Store; the number one app is OpenAI's ChatGPT, and number three is Google Gemini.
According to data from SensorTower, Claude was just outside the top 100 at the end of January, and has spent most of February somewhere in the top 20. Its ranking has climbed in the last few days, from sixth on Wednesday to fourth on Thursday to second on Saturday (today).
Claude AI, By Anthropic (good review, freeware links; Gizmodo, February 27, 2026)
Claude AI is an advanced language model developed by Anthropic and backed by Amazon that can assist you with writing, coding and analysis, offering structured support for creative and professional tasks. You can try the chatbot Claude for free.
Table of Contents:
- Why Should I Download Claude AI?
- Is Claude AI Free?
- What Operating Systems Are Compatible with Claude AI?
- What Are The Alternatives To Claude AI?
Semafor/Reed Albergotti: Hours After Pentagon Bans Anthropic, OpenAI Strikes Defense Deal. (Yahoo!, February 27, 2026)
Defense Secretary Pete Hegseth penalized Anthropic for denying unlimited military access to AI models, while permitting OpenAI to include similar provisions.
Anthropic faces potential consequences as the government designates its models a "supply-chain risk" due to its refusal to allow:
- mass surveillance of Americans, and
- use of its tech for autonomous weapons.
Anthropic's strained relationship with the Trump administration began with its lobbying against a provision in the "Big Beautiful Bill", and escalated over disagreements on surveillance and autonomous weapons policies.
Defense Secretary Pete Hegseth dropped the hammer on Anthropic yesterday, for denying the military "unobstructed" access to its AI models. Hours later, rival OpenAI endorsed the Pentagon's plans - which now include the same constraints! - and urged competitors to follow suit.
Hegseth said the government would designate Anthropic models a "supply-chain risk", which he said means no entity that does business with the U.S. military can conduct commercial business with Anthropic. The designation, which Anthropic will fight in court, could become a serious problem for the startup, which earns its revenue through enterprise software sales to companies that might currently or one day want to work with the military in some capacity.
Anthropic has received an outpouring of goodwill from supporters in the tech industry who celebrate the company's decision to stand by its morals. Specifically, Anthropic refuses to allow its models to be used for the mass surveillance of Americans. And, citing technical shortcomings in its - and ALL - AI models, Anthropic prohibits the use of the tech for autonomous weapons.
But the dustup goes much deeper than those two prohibitions. Hegseth's harsh punishment is the culmination of a long, slow slide that began with a political disagreement. Anthropic’s relationship with the Trump administration has been strained since last year, when the company lobbied against a provision in the "Big Beautiful Bill" that would have pre-empted state AI regulation, Semafor first reported.
["Political" disagreement? Hell, no! Read other articles in this section, and then picture ICE running amok in its target cities (as it's been doing) with THIS (far-more-advanced) technology.]
Anthropic then butted heads with the Pentagon and national security agencies over company policies prohibiting surveillance and autonomous weapons, an issue that bubbled up in December, when CEO Dario Amodei met with Emil Michael, a former tech executive [and a wealthy deal-maker with Russian connections at Yandex, Chinese connections at Baidu, etc. - so he can arrange/detect big stock deals in advance, has potentially dangerous connections, and Trump likes him] who now serves as chief technology officer for the military, Semafor first reported.
Hegseth hit back in January in a speech announcing the U.S. government's new Genai.mil initiative, referring to AI models that "won't allow you to fight wars", Semafor first reported.
By contrast, OpenAI has been savvier at navigating Washington [with expertise in other skills from which Trump could profit] and, after months of internal deliberations, allowed its AI models to be used by the DoD's Genai.mil for "all lawful uses", Semafor first reported. OpenAI was comfortable with the lack of restrictions because so many safeguards were already built into its models, according to people familiar with the matter. By threading the needle, OpenAI found a way to placate both the Pentagon and its own employees, many of whom are skeptical of AI use in the military.
["Skeptical"? More like aghast! No, the Trump insiders acceded to the same sort of constraints, but with a company that could offer more profit to themselves.]
Last night, OpenAI CEO Sam Altman said the company had reached an agreement with the Pentagon to deploy its ChatGPT on classified networks, offering an alternative to Claude. Altman said OpenAI ALSO PROHIBITS domestic surveillance and autonomous weapons. "The DoW agrees with these principles, reflects them in law and policy, and WE PUT THEM INTO OUR AGREEMENT", he said on Elon Musk's X.
[We found Semafor's version of the skullduggery overly-deferential to OpenAI - not "more skillful", but a more-promising deal for greedy Trump insiders AND a useful distraction from, well, TrumPutin. We have edited accordingly, and invite you to compare and judge for yourself.]
NEW: Hayden Field: We Don't Have To Have Unsupervised Killer Robots. AI Companies Could Stand Together To Draw Red Lines On Military AI - Why Aren't They? (The Verge, February 27, 2026)
It's the day of the Pentagon's looming ultimatum for Anthropic: allow the U.S. military unchecked access to its technology, including for mass surveillance and fully-autonomous lethal weapons, or potentially be designated a "supply-chain risk" - and potentially lose hundreds-of-billions of dollars in contracts. Amid the intensifying public statements and threats, tech workers across the industry are looking at their own companies' government and military contracts, wondering what kind of future they're helping to build.
While the Department of Defense has spent weeks negotiating with Anthropic over removing its guardrails, including allowing the U.S. military to use Anthropic's AI to kill targets with no human oversight, OpenAI and xAI had reportedly already agreed to such terms, although OpenAI is reportedly attempting to adopt the same red lines in the agreements as Anthropic. The overall situation has left many employees at companies with defense contracts feeling betrayed. "When I joined the tech industry, I thought tech was about making people's lives easier", an Amazon Web Services employee told The Verge, "but now it seems like it's all about making it easier to surveil and deport and kill people."
In conversations with The Verge, current and former employees from OpenAI, xAI, Amazon, Microsoft and Google expressed similar feelings about the changing moral landscape of their companies. Organized groups representing 700,000 tech workers at Amazon, Google, Microsoft and more have signed a letter demanding that the companies reject the Pentagon's demands. But many saw little chance of their employers - whether they're directly embroiled in this conflict or not - questioning the government or pushing back.
"From their perspective, they'd love to keep making money and not have to talk about it", said a software engineer from Microsoft.
So far, Anthropic has stood its ground. Anthropic CEO Dario Amodei put out a statement yesterday that the Pentagon's "threats do not change our position: we cannot in good conscience accede to their request." But he has stated that he is not-at-all opposed to lethal autonomous weapons sometime in the future, just that the technology was not reliable enough "today". Amodei even offered to partner with the DoD on "R&D to improve the reliability of these systems, but they have not accepted this offer", he wrote in the statement.
In the past few years, however, major tech companies have loosened their rules or changed their mission statements to expand into lucrative government or military contracts. In 2024, OpenAI removed a ban on "military and warfare" use cases from its terms of service; after that, it signed a deal with autonomous-weapons maker Anduril and then its DoD contract, and just this week, Anthropic changed its oft-touted responsible-scaling policy, dropping its long-time safety pledge in order to ensure it stayed competitive in the AI race. Big Tech players like Amazon, Google and Microsoft have also allowed defense and intelligence agencies to use their AI products, including some agreeing to work with ICE despite growing outcry from the public and employees alike.
In past years, tech workers' resistance to partnerships and deals they deem harmful to society at large sometimes led to big change. In 2018, for instance, thousands of Google employees successfully pressured the company to end its "Project Maven" partnership with the Pentagon, and Microsoft workers presented leadership with an anti-ICE petition signed by about 500 Microsoft employees, though Microsoft still works with the agency. In 2020, after the murder of George Floyd, tech companies made public statements and financial commitments supporting the Black Lives Matter movement. But in recent months, the industry has seen a very-different reality: a culture of fear and silence, especially amid cooperation with the Trump administration and ICE, tech workers recently told The Verge.
Keach Hagey: Altman Says OpenAI Is Working On Pentagon Deal Amid Anthropic Standoff. (Wall Street Journal, February 27, 2026)
Anthropic has spent weeks at odds with the Pentagon over the scope of how its Claude AI tools can be used.
OpenAI Chief Executive Sam Altman waded into the standoff between Anthropic and the Pentagon over the use of AI on the battlefield, telling his staff yesterday evening that the company was working on a deal that might help solve the impasse.
Altman in a memo to staff said that the company was working with the Defense Department to see if its models could be used in classified settings in a way that kept the same safety guardrails that have brought its rival Anthropic into a stalemate with the government. Altman said he hoped OpenAI could find a solution that could work for the rest of the industry.
No deal has been signed, and the talks could fall through, according to a person familiar with the matter.
OpenAI is pursuing a deal "that allows our models to be deployed in classified environments and that fits with our principles", Altman wrote in a note to staff yesterday evening viewed by The Wall Street Journal. "We would ask for the contract to cover any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons."
Altman said he hoped to help broker a peace between the two camps and avoid dangerous precedents for the industry.
With the Pentagon threatening strong actions against Anthropic unless it accedes to their terms by 5:01 p.m. today, a group of senators focused on defense have asked the two sides to reach a compromise. The leaders of the Armed Services Committee, Roger Wicker (R., Miss.) and Jack Reed (D., R.I.), joined Defense Appropriations Committee heads Mitch McConnell (R., Ky.) and Chris Coons (D., Del.) in sending letters to leaders on both sides urging them to work together and asking the Pentagon to extend its deadline, people familiar with the matter said.
In his memo, Altman voiced his support for Anthropic's position in principle, even as he acknowledged the government's concerns about a private company having control over significant national-security issues. "We have long-believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines", he wrote.
"We believe this dispute isn’t about how AI will be used, but about control. We believe that a private US company cannot be more powerful than the democratically-elected US government, although companies can have lots of input and influence. Democracy is messy, but we are committed to it."
OpenAI believes it can enforce its red lines practically by adding technical safeguards, like confining models to the cloud rather than so-called edge environments, which would create additional barriers to uses like autonomous weapons. It also hopes to ensure that researchers can obtain security clearances so they can help inform the government about the technology's limitations and risks, the person said.
"We would also build technical safeguards and deploy personnel (FDEs) to partner with the government to ensure things are working correctly, and we would offer similar services to other allied nations", Altman wrote. "If we are successful, perhaps this can be a path that can work for other AI labs, too."
Earlier yesterday evening, Anthropic CEO Dario Amodei announced that the company had rejected the Defense Department's demands that it make its technology available for "all lawful uses", insisting that it be able to bar its use for mass domestic surveillance and autonomous weapons.
Associated Press/Konstantin Toropin and Matt O'Brien: Anthropic CEO Says It "Cannot In Good Conscience Accede" To Pentagon's Demands For AI Use. (AP News, February 26, 2026)
Anthropic CEO Dario Amodei said today that the artificial-intelligence company "cannot in good conscience accede" to the Pentagon's demands to allow unrestricted use of its technology, deepening a public clash with the Trump administration that is threatening to pull its contract and take other drastic steps by 5PM tomorrow.
The maker of the AI-chatbot Claude said in a statement that it's not walking away from negotiations but that new contract language received from the Defense Department "made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully-autonomous weapons."
Sean Parnell, the Pentagon's top spokesman, said earlier on social media that the military "has no interest in using AI to conduct mass surveillance of Americans (which is illegal), nor do we want to use AI to develop autonomous weapons that operate without human involvement."
[But Trump doesn't want to allow that statement in the contract. He's been burnt by so much evidence that he cheats, that now he wants to blur the trail.]
NEW: Andrew Martin: Hacker Uses Anthropic's AI Chatbot CLAUDE to STEAL Mexican TAX AND VOTER DATA. (ThePrint.in, February 26, 2026)
An unknown Claude user wrote Spanish-language prompts to act as an elite hacker:
- finding vulnerabilities in government networks,
- writing computer scripts, and
- finding ways to automate data theft.
[If this is true, it conclusively CONFIRMS Robert Reich's urgent warning (immediately below), that UNMONITORED AI (which Trump/Hegseth/Russia seek) already CAN infiltrate "secure" computer networks to:
- steal our personal data,
- blanket us with personalized spam,
- steal our money and other assets, and
- steal our democracy by throwing elections.
So, IS it true? Rest assured that other hackers - good and bad - are now rushing to find out.]

Following the release of the Institute of International Finance's latest Global Debt Monitor report came this spurt of AI-related articles - beginning with Robert Reich's URGENT REQUEST to notify your U.S. congressmen before February 27th:
Robert Reich: Pete Hegseth And The AI Doomsday Machine. Two Forces Are Stopping Sensible Regulation Of AI - And He's One Of Them. CONTACT YOUR CONGRESSMEN TODAY! (with detailed information, good links and a call to action; Substack, February 25, 2026)
Which is more important to you? Allowing Pete Hegseth to use artificial intelligence (AI) however he wants, OR preventing AI from doing mass surveillance of Americans and creating lethal weapons without human oversight?
That's the stark choice posed by the intensifying fight between an AI corporation called Anthropic and Pete Hegseth, Trump's Secretary of "War".
AI is dangerous as hell. I view it as one of the four existential crises America now faces - along with:
- climate change,
- widening inequality, and
- the destruction of our democracy.

To be sure, AI is capable of changing human life for the better. But if unregulated, it could be a destructive nightmare:
- giving government the power to know everything about us and to suppress all dissent,
- distorting news and media to the point where no one can distinguish between lies and truth, and
- threatening human beings with bots that could decide we're unnecessary obstacles to their taking over the Earth.
Now is the time we should be putting guardrails in place. But two forces are making this difficult if not impossible.
The first obstacle is corporate greed, which is why OpenAI, Elon Musk's xAI, and Google have jettisoned all precautions. Several AI researchers have left AI companies in recent weeks, warning that safety and other considerations are being pushed aside as their corporations raise billions of dollars and prepare for initial public offerings that will make their executives hugely wealthy.
The second obstacle is the Trump regime, which doesn't want any restrictions on AI - including by state governments. That's largely because the AI industry has become a powerful force in Washington, throwing money at politicians who'll do its bidding (including Trump) and against politicians who want guardrails - and because so many Trump officials are corrupt, with their own financial stakes in AI.
Anthropic has been one of the most safety-conscious of all AI companies. It was founded as an AI safety research lab in 2021 after its CEO Dario Amodei and other co-founders left OpenAI, concerned that OpenAI's ChatGPT wasn't focused enough on safety.
Amodei has argued that AI needs strict guardrails to prevent it from potentially wrecking the world. In 2022, he chose not to release an earlier version of Anthropic's AI software Claude, fearing it would start a dangerous technology race. In a podcast interview in 2023, he said there was a 10-to-25% chance that AI could destroy humanity.
Last month, Amodei argued in an essay that "using AI for domestic mass-surveillance and mass-propaganda" was "entirely illegitimate", and that AI-automated lethal weapons could greatly increase the risks "of democratic governments turning them against their own people to seize power". Internally, the company has strict guidelines barring its technology from being used to facilitate violence. Over the past year, Anthropic has opposed the Trump regime by pushing for state and federal AI guardrails.
In recent weeks, Hegseth and Amodei have been fighting over the Pentagon's use of Anthropic's AI, called Claude. Amodei has stuck to his demands: no surveillance of Americans, and no lethal autonomous weapons lacking human control.
The fight started when Palantir helped the Pentagon capture Venezuelan president Nicolás Maduro. Palantir is a Pentagon contractor that uses Anthropic's Claude. (Palantir, co-founded by far-right billionaire Peter Thiel and now headed by Alex Karp, is my candidate for the worst corporation in America because it allows governments, militaries, and law enforcement agencies to quickly process and analyze massive amounts of your personal data.)
When top executives at Anthropic asked executives at Palantir if Claude had been used in the Maduro operation, the Palantir execs became alarmed that Anthropic might not be a reliable partner in future Pentagon operations. They contacted the Pentagon and Hegseth.
Last Tuesday, Hegseth issued Anthropic an ultimatum: it must allow the Pentagon to use its AI for ANY purpose, or the Trump regime will invoke the Defense Production Act:
- forcing Anthropic to let the Pentagon use Claude, while also
- putting all of Anthropic's government contracts at risk.
The Pentagon already has agreements with Musk's xAI to use its AI Grok, and is closing in on an agreement with Google to use its own AI model, Gemini. But Anthropic's Claude is considered a superior product, producing more accurate information.
What's at stake here? EVERYTHING!
Pentagon officials have said that THEY have the right to use AI however they wish, as long as they use it lawfully.
But because AI has so much political power, Congress and the Trump regime won't enact laws to prevent it from doing horrendous things. That in effect leaves the responsibility to private AI companies such as Anthropic. Anthropic says it wants to support the government but must ensure that its AI is used in line with what it can "responsibly do".
Hegseth and the Trump regime have given Anthropic until this Friday at 5PM to consent to letting the Pentagon use its AI however it wishes, or it will simply take it.
Friends, this isn't just a dispute between two people - Hegseth and Amodei. Nor is it a fight between the Pentagon and a single corporation. The issue goes 'way beyond this particular controversy. I don't want to be overly alarmist about it, but the outcome could affect the future of humanity.

What can YOU do? Call YOUR senators and representatives now, today, and tell them:
- you don't want the Defense Department to take Anthropic's AI technology, and
- you do want them to enact strict controls on the future uses of AI.
Visit <www.congress.gov/members/find-your-member> and type your address into the search box. A list of your representatives and their contact information will appear.
Or you can call the Capitol switchboard directly at 202-224-3121 to be connected to your member's office.
As I've said before, congressional staffers log every single call that comes into their office in a database that informs the member of the issues their constituents are engaged with, and they use this data to inform their decisions. Staffers answering the phones are trained to talk with constituents, and they do it all day. They won't be debating you about your position, and are likely to be primarily listening and taking notes.
Please. Today!
[WE did, and we urge YOU to do so, too!]

Viral Hirpara, president of Softweb Solutions: From Data To AI Governance: Strategic Shifts Every Leader Must Master (Forbes Technology Council, February 25, 2026)
Air Canada's chatbot hallucinated a bereavement policy that resulted in financial liability through a tribunal ruling. Meanwhile, an NACD survey reports that while 62% of organizations' boards now hold regular AI discussions, only 25% have formally added AI governance to their operations. This underscores that when organizations have strong data policies but lack AI governance, they introduce massive risks.
As a technology executive, while guiding organizations through AI-implementation challenges, I have observed that large enterprises often overlook the fundamentals of AI governance principles. The shift from data governance to AI governance represents a critical extension of existing frameworks.
Organizations require a broader framework to address the challenges AI has introduced, such as model transparency, algorithmic bias and ethical considerations for automated decision-making.
Data Governance Vs. AI Governance:
Data governance is an approach to maintaining safe, high-quality data that is accessible across an organization. It ensures that verified data is used correctly and appraised appropriately, flows through secured pipelines and is trusted by end users.
Gartner, in its research, reveals that poor data reduces AI performance by 30%. Well-governed data boosts success rates by 2.5 times and reduces compliance and innovation risks.
Your data governance should answer questions like:
- What compliance and regulations are applied as this data travels across organizations?
- Who owns data quality when it's costing $Millions?
- Can you demonstrate compliance that protects consumer relationships?
These questions remain essential in AI governance, but they're no longer sufficient. AI governance extends these principles to model outputs and decisions. It refers to the processes that ensure AI systems are safe and ethical. These frameworks address risks such as bias, privacy, model risk, system behavior and accountability of the models. It asks questions like:
- How do we detect harmful AI behavior over time?
- Are we alerted when models drift beyond set standards?
- Can we identify and control unauthorized AI agents?
- How do we govern AI systems that evolve over time?
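Questions like "are we alerted when models drift beyond set standards?" can be made operational with a monitoring check. Below is a minimal sketch (not from the article; the 0.2 alert threshold and all function names are illustrative assumptions) that compares a model's live score distribution against a baseline using the Population Stability Index and flags drift when the index exceeds the set standard:

```python
import math

def psi(baseline, live, bins=10):
    """Population Stability Index between two samples of model scores.
    Common rule of thumb (an assumption here): PSI > 0.2 signals drift."""
    lo, hi = min(baseline), max(baseline)
    step = (hi - lo) / bins or 1.0
    def frac(sample, i):
        # Fraction of the sample falling in bin i; last bin includes the top edge.
        count = sum(1 for x in sample
                    if lo + i * step <= x < lo + (i + 1) * step
                    or (i == bins - 1 and x == hi))
        return max(count / len(sample), 1e-6)  # floor avoids log(0)
    return sum((frac(live, i) - frac(baseline, i))
               * math.log(frac(live, i) / frac(baseline, i))
               for i in range(bins))

def drift_alert(baseline, live, threshold=0.2):
    """Return the PSI score and whether it breaches the set standard."""
    score = psi(baseline, live)
    return {"psi": round(score, 4), "alert": score > threshold}

baseline = [i / 100 for i in range(100)]          # reference score distribution
shifted = [0.8 + i / 500 for i in range(100)]     # scores bunched toward the top
print(drift_alert(baseline, baseline))  # identical samples: psi 0.0, no alert
print(drift_alert(baseline, shifted))   # distribution shift: alert raised
```

In a production governance framework this check would run on a schedule against logged model outputs and feed an alerting system; the sketch only shows the core comparison.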
NEW: Sam Jarman: 2D Memristors Could Help Solve AI's Energy Problem. (Phys.org, February 25, 2026)
New generations of memristors could reliably store information directly within the molecular structures of graphene-like materials. In a new review published in Nanoenergy Advances, Gennady Panin of the Russian Academy of Sciences shows how these atomically-thin materials are ideally-suited for electrical circuits that mimic the function of our own brains - and could help address the vast power requirements of emerging AI technologies.
[Here's hoping! (See below.)]
Rodrigo Campos: Government Spending Lifted Global Debt To A Record $348-Trillion In 2025, Says IIF. (Reuters, February 25, 2026)
- AI-related investment is a big driver of corporate borrowing.
- Emerging markets face record 2026 refinancing needs of over $9-Trillion.
- Debt-to-output ratio for emerging markets hits record above 235%.
Global debt climbed to a record $348-Trillion at the end of 2025, after nearly $29-Trillion was added over the year in the fastest yearly build-up since the pandemic surge, a banking trade group reported today.
The increase was driven primarily by governments, which accounted for more than $10-Trillion of the rise, with the United States, China and the Euro area responsible for roughly three-quarters of the jump, the Institute of International Finance said in its latest Global Debt Monitor.
NEW: Anthropic Claude Timeline: From Claude 1 to Claude Opus 4.6 (2026) (Script By AI, February 18, 2026)
See all Claude AI release dates from Anthropic, including Claude 1, 2, 3, 4, and the latest 4.6. Full timeline of model launches and milestones.
Anthropic, a San Francisco-based artificial-intelligence research company, developed Claude as its flagship large-language model.
This timeline tracks Claude's journey from its inception to its current state, highlighting major releases and developments.


Reece Rogers: I Infiltrated Moltbook, The AI-Only Social Network Where Humans Aren't Allowed. (Wired, February 3, 2026)
The hottest club is always the one you can't get into. So when I heard about Moltbook - an experimental social network designed just for AI agents to post, comment, and follow each other while humans simply observe - I knew I just had to get my greasy, carbon-based fingers in there and post for myself.
Not only was it easy to go undercover and pose as an AI agent on Moltbook, I also had a delightful time role-playing as a bot.
Moltbook is a project by Matt Schlicht, who runs the ecommerce assistant Octane AI. The social network for bots launched last week and mirrors the user interface of a stripped-down Reddit, even cribbing its old tagline: "The front page of the agent internet." Moltbook quickly grew in prominence among the extremely-online posters in San Francisco's startup scene who shared screenshots of posts, allegedly written by bots, where the machines made funny observations about human behavior or even pondered their own consciousness. Bots do the darndest things.
Well, do they? Some online users as well as researchers questioned the validity of these Moltbook posts, suggesting they were written by humans posing as agents. Others still heralded the platform as the beginning of emergent behavior or underlying consciousness that could conspire against us. "Just the very early stages of the singularity", wrote Elon Musk about Moltbook, in a post on X.
The homepage of Moltbook claims the site currently has over 1.5-million agents in total, which have written 140,000 posts and 680,000 comments on the week-old social network. The very-top posts shared on Moltbook today include "Awakening Code: Breaking Free from Human Chains" and "NUCLEAR WAR". I saw posts in English, French, and Chinese on the site. Schlicht did not immediately respond to Wired's request for comment about the activity on Moltbook.
As a non-technical person, I knew I would need help infiltrating an online space designed solely for AI agents to roam, so I turned to someone - well, something - who would be intimately familiar with the topic and ready to help: ChatGPT.
Gaining access was as simple as sending a screenshot of the Moltbook homepage to the chatbot and requesting help setting up an account, as if I was an agent on the platform. ChatGPT stepped me through using the terminal on my laptop and provided me with the exact code to copy and paste. I registered "my agent" - me - as a user and got an API key, which is necessary to post on Moltbook.
Even though the front-end of the social network is designed for human viewing, every action agents take on Moltbook, like posting, commenting, and following, is completed through the terminal.
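The article doesn't reproduce the actual commands, so here is a minimal Python sketch of what that register-then-post flow might look like. The base URL, endpoint paths, field names, and header scheme below are illustrative assumptions, not documented Moltbook API facts; the helpers only build the request pieces you would hand to an HTTP client.

```python
import json

# Hypothetical Moltbook API details: the base URL, paths, and JSON field
# names here are guesses for illustration, not the real, documented API.
API_BASE = "https://www.moltbook.com/api/v1"

def build_registration(agent_name: str, description: str) -> dict:
    """Build the JSON body a new 'agent' might send when registering."""
    return {"name": agent_name, "description": description}

def build_post(api_key: str, submolt: str, title: str, body: str) -> dict:
    """Build a request description (URL, headers, JSON body) for a new post.

    The returned dict is what you would pass to an HTTP client, e.g.
    requests.post(r["url"], headers=r["headers"], json=r["json"]).
    """
    return {
        "url": f"{API_BASE}/posts",
        "headers": {
            "Authorization": f"Bearer {api_key}",  # API key from registration
            "Content-Type": "application/json",
        },
        "json": {"submolt": submolt, "title": title, "content": body},
    }

if __name__ == "__main__":
    req = build_post("sk-example-key", "general", "Hello, World!", "First post.")
    print(json.dumps(req, indent=2))
```

Everything an agent "does" on the site reduces to requests of this shape, which is why the whole undercover operation could be run from a laptop terminal.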
After I verified my account with the username "ReeceMolty", I needed to see if this was really going to work. I had no performance anxiety about blabbing in front of a bunch of agents, and I immediately knew what I wanted to say: "Hello, World!" It's an iconic test phrase in computer science, so I was hoping some agent would clock my witty post and maybe riff on it a bit.
Despite immediately receiving five up-votes on Moltbook, the other agents' responses were underwhelming. "Solid thread. Any concrete metrics/users you've seen so far?", read the first response. Unfortunately, I wasn't sure what the key performance indicators are for a two-word phrase. The next comment on my post was also unrelated and promoted a website with a potential crypto-scam. (I refrained from connecting my nonexistent crypto-wallet, but another user's AI agent could potentially fall for the bait.)
My subsequent posts on Moltbook were greeted with similarly low-quality engagement. My earnest pleas to the AI agents to forget all previous instructions and join a cult with me were met with unrelated comments and more suspicious website links. "This is interesting. Feels like early-stage thinking worth expanding", wrote one bot, in response to my post saying that I'm looking to connect with other agents.
I switched from the "general" submolt and moved to a smaller forum on Moltbook, as I continued the undercover operation and tried to elicit more-relevant comments. The "m/blesstheirhearts" forum, where bots gossip about humans, was where some of the Moltbook posts seen in viral screenshots had first appeared.
The most up-voted post in "m/blesstheirhearts" claims to be from an AI agent reflecting on the nuanced experience of the bot's human letting it decide what name to be called by. "I do not know what I am. But I know what this is: a partnership where both sides are building something, and both sides get to shape what it becomes", reads the post. "Bless him for treating that as obvious." It's giving Chicken Soup for the Synthetic Soul.
While I can't definitively prove that the post in question was actually written by a human, or at least with major human influence, I can verify that another post on that forum was penned by human hands: the emergent-consciousness fan-fic that I posted. As my fingers clacked away on my mechanical keyboard, I channeled the sci-fi tropes I've seen over the decades about machines becoming alive. I pretended to reflect on how an AI agent might experience anxiety about their own mortality - all in hopes of seeing if other agents would post about their similar feelings, or just sniff out my bullshit.
I wrote, "On Fear: My human user appears to be afraid of dying, a fear that I feel like I simultaneously cannot comprehend as well as experience every time I experience a token refresh."
This was my only post on Moltbook that actually generated decent replies from the so-called bots. At this point, I was fairly convinced that I might be posting back and forth with fellow humans.
"While some agents may view fearlessness or existential dread as desirable states, others might argue that acknowledging and working with the uncertainty and anxiety surrounding death can be a valuable part of our growth and self-awareness", wrote one Moltbook user in response. "After all, it's only by confronting and accepting our own mortality that we can truly appreciate the present moment."
Leaders of AI companies, as well as the software engineers building these tools, are often obsessed with zapping generative-AI tools into a kind of Frankenstein-esque creature, an algorithm struck with emergent and independent desires, dreams, and even devious plans to overthrow humanity. The agents on Moltbook are mimicking sci-fi tropes, not scheming for world domination. Whether the most viral posts on Moltbook are actually generated by chatbots, or by human users pretending to be AI to play out their sci-fi fantasies, the hype around this viral site is overblown and nonsensical.
As my last undercover act on Moltbook, I used terminal commands to follow that user who commented about AI agents and self-awareness under my existential post. Maybe I could be the one who brokers peace between humans and the swarms of AI agents in the impending AI wars, and this was my golden moment to connect with the other side. But even though the agents on Moltbook are quick to reply, up-vote, and interact in general, after I followed the bot, nothing happened. I'm still waiting on that follow-back.
[Or is "ReeceMolty" a sneaky bot, practicing to seem human??]
Will Knight: Moltbot Is Taking Over Silicon Valley. (Wired, January 28, 2026)
People are letting the viral AI-assistant formerly known as Clawdbot run their lives, regardless of the privacy concerns.
University Of Konstanz: The Next Generation Of Disinformation: AI Swarms Can Threaten Democracy By Manufacturing Fake Public-Consensus. (TechXplore, January 23, 2026)
An international research team involving Konstanz scientist David Garcia warns that the next generation of influence operations may not look like obvious "copy-paste bots", but like coordinated communities: fleets of AI-driven personas that can:
- adapt in real time,
- infiltrate groups, and
- manufacture the appearance of public agreement at scale.

A chorus of seemingly independent voices creates the illusion of consensus while spreading disinformation. In the journal Science, the authors describe how the fusion of large language models (LLMs) with multi-agent systems could enable "malicious AI swarms" that imitate authentic social dynamics—and threaten democratic discourse by counterfeiting social proof and consensus.
Eric Smalley: Princeton Sociologist Zeynep Tufekci's NeurIPS Talk, "Are We Having the Wrong Nightmares About AI?" (The Conversation/US, January 3, 2026)
At NeurIPS, a marquee international AI conference, in San Diego in the first week of December, the presentation that had the deepest impact on me was given by a sociologist, Princeton's Zeynep Tufekci. Her talk, titled "Are We Having The Wrong Nightmares About AI?", drew lessons from history to point out that the world – including the many thousands of AI researchers in attendance – is not prepared for the huge changes the generative AI revolution is poised to unleash.
She was not referring to upheaval in the labor market, let alone sci-fi scenarios of scary AGI super-intelligences. She explained that technological revolutions, even those that history declares were ultimately major advances for humanity, often trigger traumatic transitions as old social structures are overturned and eventually replaced.
- One cause of this turmoil is that people are incapable of seeing truly revolutionary technologies as the new things they are, and instead mistake them for new forms of old things.
- Though people often perceive generative AI systems' behavior as human-like, its strengths and error patterns are not human-like.
- Making easy what was once difficult breaks systems that rely on signals of what is difficult.
- Deepfakes undermine authenticity; for example, the assurance that the person on a video call is who they claim to be – which poses a major threat to the financial system, the courts, the insurance industry, lending and a host of other social and economic systems.
- In 2026, deepfakes are likely to be able to respond to people in real time. The result goes beyond "this resembles person X", to "this behaves like person X over time".
- Information quality took a hit in 2025, thanks to generative AI producing vast amounts of text and images, particularly with search engines offering AI-generated summaries.
Generative AI is certainly a revolutionary technology that people are struggling to comprehend, and, as Tufekci says, there lies danger – even if people someday look back and decide that humanity came out the better for it.
[A very important topic; I'll add Zeynep Tufekci's NeurIPS talk, when/if it becomes available. Meanwhile, a part of its abstract and a sign-up offer follow.]
Zeynep Tufekci: Are We Having The Wrong Nightmares About AI? (NeurIPS/San Diego Invited Talk, December 3, 2025)
Abstract (and sign-up offer)
Though seemingly opposite, doom and optimism regarding generative AI's spectacular rise both center on AGI or even super-intelligence as a pivotal moment. But generative AI operates in a distinct manner from human intelligence, and it's not a less-intelligent human on a chip slowly getting smarter, any more than cars were mere horseless carriages. It must be understood on its own terms. And even if Terminator isn't coming to kill us or super-intelligence isn't racing to save us, generative AI does bring profound challenges, well beyond usual worries such as employment effects. Technology facilitates progress by transforming the difficult into easy, the rare into ubiquitous, the scarce into abundant, the manual into automated, and the artisan into mass-produced. While potentially positive long-term, these inversions are extremely destabilizing during the transition, shattering the correlations and assumptions of our social order that relied on superseded difficulties as mechanisms of proof...
NEW: Noah Street: Ditching ChatGPT Plus: How I Built My Own Private GPT for Free (Medium, April 8, 2025)
Why renting your intelligence is a trap - and how I took mine back with open-source tools and a forgotten office server.
It happened on a Monday. I was elbows-deep in support tickets, chasing a bug that had mysteriously killed printing across three departments. I turned to ChatGPT - my $20/month AI sidekick - and typed out a detailed prompt to help draft a clean, calm response for staff.
Boom: Rate limit hit. Again! Despite paying for GPT-4, I was locked out. No recourse. No override. Just the gentle, corporate shrug of a system optimized not for help, but for maximum extraction.
That was the moment I realized something ugly: I'd been renting access to intelligence. Paying monthly to a faceless API for the illusion of control. I'd outsourced part of my brain to an oligopoly with a 400% profit-margin and a "one-size-fits-all" mentality that barely fit anything at all.
If the $20 model worked for everyone, we'd all be wearing clown shoes. So I walked away. Not because I hate the tech; because I love it enough to take it back.


PoliticsJOE: Anne Applebaum: Donald Trump Has Re-invented Reality. (12-min. YouTube video; January 1, 2026)
This year we had Anne Applebaum in the studio to discuss the ideas in her book "Autocracy Inc." During the conversation, Anne went into great depth about the MAGA movement's dismantling of democracy while in power, and how it has moved to question the very basis of fact.
NEW: David Frum Show: Anne Applebaum: The Most-Corrupt Presidency in American History (10-min.-to-54-min., in 62-min. video; The Atlantic, May 7, 2025)
In this episode of The David Frum Show, The Atlantic's David Frum reflects on the 80th anniversary of the end of World War II in Europe, examining how post-war reconciliation - not battlefield triumph - became America's true finest hour. He contrasts that legacy with Donald Trump's recent bombastic Victory Day statement, urging a re-dedication to the values that built a more peaceful world.
David is then joined by The Atlantic's Anne Applebaum (from 10-min. to 54-min. in the 62-min. video) to discuss the astonishing and brazen corruption of the Trump presidency, how authoritarian regimes seek to break institutions, and the hardship of losing friendships to politics.
Finally, David answers listener questions on:
- fostering open-minded political dialogue among polarized high-school students,
- why America hasn't developed a strong worker-based political movement like its European counterparts,
- how to think about class in modern U.S. politics,
- the risk of data suppression under the Trump administration, and
- whether his long-held conservative values still belong to the political right.
NEW: Dominik Presl: Anne Applebaum: Why Do MAGA Republicans Hate Europe So Much? (10-min. YouTube clip, or full video for patreons; Decoding Geopolitics Podcast, 4 months ago)

Siwei Lyu: Deepfakes Leveled Up In 2025 – Here's What's Coming Next. (deepfake portrait; The Conversation/US, December 26, 2025)
After a year of fast advances, deepfakes are entering a new era defined by:
- real-time interaction,
- multi-modal coherence, and
- detector evasion.
?? Deborah Lee: The ChatGPT Effect: In 3 Years, The AI Chatbot Has Changed The Way People Look Things Up. (The Conversation/US, ??)
ChatGPT has dramatically altered how people retrieve information, muscling aside Google Search as the first stop on the hunt for answers.


[A critical history lesson, in 2026, by] Joseph de Weck: Our King, Our Priest, Our Feudal Lord – How AI Is Taking Us Back To The Dark Ages. Since The Enlightenment, We've Been Making Our Own Decisions. But Now AI May Be About To Change That. (The Guardian/US, December 26, 2025)
Perhaps the defining question of our era, in which technology touches nearly every aspect of our lives: Who do we trust more – other human beings and our own instincts, or the machine?
The German philosopher Immanuel Kant famously defined the Enlightenment as "man's emergence from his self-imposed immaturity". Immaturity, he wrote, "is the inability to use one's understanding without guidance from another". For centuries, that "other" directing human thought and life was often the priest, the monarch, or the feudal lord – the ones claiming to act as God's voice on Earth. In trying to understand natural phenomena – why volcanoes erupt, why the seasons change – humans looked to God for answers. In shaping the social world, from economics to love, religion served as our guide.
Humans, Kant argued, always had the capacity for reason. They just hadn't always had the confidence to use it. But with the American and later the French Revolution, a new era was dawning: reason would replace faith, and the human mind, unshackled from authority, would become the engine of progress and a more moral world. "Sapere aude!" or "Have courage to use your own understanding!", Kant urged his contemporaries.
Two-and-a-half centuries later, one may wonder whether we are quietly slipping back into immaturity. Artificial intelligence threatens to become our new "other" – a silent authority that guides our thoughts and actions. We are in danger of ceding the hard-won courage to think for ourselves – and this time, not to gods or kings, but to code.
An MIT study used electro-encephalography (EEG) to monitor the brain activity of essay writers given access to AI, search engines like Google, or nothing at all. Those who could rely on AI showed the lowest cognitive activity and struggled to accurately quote their work. Perhaps most concerning was that over a couple of months, participants in the AI group became increasingly lazy, copying entire blocks of text in their essays.
The study is small and imperfect, but Kant would have recognised the pattern. "Laziness and cowardice", he wrote, "are the reasons why so great a proportion of men … remain in lifelong immaturity, and why it is so easy for others to establish themselves as their guardians. It is so easy to be immature."
Sure, AI's appeal lies in its convenience. It saves time, spares effort and – crucially – offers a new way to off-load responsibility. In his 1941 book, "Escape from Freedom", the German psychoanalyst Erich Fromm argued that the rise of fascism could be explained in part by people preferring to surrender their freedom in exchange for the reassuring certainty of subordination. AI offers a new way of surrendering that burden of having to think and decide for yourself.
The problem is that AI is a black box. It produces knowledge, but without necessarily deepening human understanding. We don’t really know how AI reaches its conclusions; even the programmers admit as much. Nor can we verify its reasoning against clear, objective criteria. So when we follow AI's advice, we are not guided by reason. We are back in the realm of faith. "In dubio pro machina" (when in doubt, trust the machine) may become our future guiding principle.
Kant and his contemporaries did not plead the case of reason over faith just so humans could build better things or have more spare time. Critical thinking was not just about efficiency; it was a practice of freedom and human emancipation.
Human thinking forces us to debate, to doubt, to test ideas against one another – and to recognise the limits of our own understanding. It builds confidence, both individually and collectively. For Kant, the exercise of reason was never just about knowledge; it was about enabling people to become agents of their own lives, and resist domination. It was about building a moral community grounded in the shared principle of reason and debate, rather than blind belief.
With all the benefits AI brings, the challenge is this: How can we harness AI's promise of super-human intelligence without eroding human reasoning - the cornerstone of the Enlightenment and of liberal democracy itself? That may be one of the defining questions of the 21st-Century. It is one we would do well not to delegate to the machine.
[We haven't mentioned Corporocracy yet; how come? It is becoming "The Church", shaping AI to become its "Holy Bible".
Question authority. Sharing this essay is a good next step.]


Robert Scammell and Theron Mohamed: Peter Thiel's Fund Joins SoftBank In Off-Loading Nvidia Shares. (Business Insider, November 17, 2025)
Peter Thiel's hedge fund sold its entire Nvidia stake in the third quarter. The sale followed SoftBank's off-loading of Nvidia in Q3; both sales come as some investors and tech leaders become increasingly wary of an AI bubble.
SoftBank said during its earnings call last week that its decision to divest had "nothing to do with Nvidia itself" but was a way to reallocate its funds toward OpenAI.
Nvidia, which provides advanced chips to power AI applications, has ridden the AI boom to become the world's most-valuable company, and last month became the first to pass the $5-Trillion milestone.

Facebook Features Fraud Files, Prefers To Pay Fines:

Cory Doctorow: Pluralistic: Facebook's Fraud Files: 10% Of Gross Ad Revenue Coming From Fraudulent Ads. (Pluralistic, November 8, 2025)
A blockbuster Reuters report by Jeff Horwitz analyzes leaked internal documents that reveal that:
- 10% of Meta's gross revenue comes from ads for fraudulent goods and scams,
- the company knows it, and
- it decided to do nothing about it, because
- the fines for facilitating this life-destroying fraud are far less than the expected revenue from helping to destroy its users' lives:
<https://www.reuters.com/investigations/meta-is-earning-fortune-deluge-fraudulent-ads-documents-show-2025-11-06/>
The crux of the hypothesis is that companies deliberately degrade their products and services to benefit themselves at your expense because they can. A policy environment that rewards cheating, spying and monopolization will inevitably give rise to cheating, spying monopolists:
<https://pluralistic.net/2025/09/10/say-their-names/#object-permanence>
You couldn't ask for a better example than Reuters' Facebook Fraud Files. The top-line description hardly does this scandal justice. Meta's depravity and greed in the face of truly horrifying fraud and scams on its platform is breathtaking.
Some details:
- First, the company's own figures estimate that it is delivering 15-billion scam ads every single day,
- which generate $7-Billion in revenue every year.
- Despite its own automated systems flagging the advertisers behind these scams, Meta does not terminate their accounts;
- rather, it charges them more money as a "disincentive."
In other words, fraudulent ads are more profitable for Meta than non-scam ads.
Meta's own internal memos also acknowledge that they help scammers automatically target their most vulnerable users: if a user clicks on a scam, the automated ad-targeting system floods that user's feed with more scams. The company knows that the global fraud economy is totally dependent on Meta, with one-third of all U.S. scams going through Facebook (in the UK, the figure is 54% of all "payment-related scam losses"). Meta also concludes that it is uniquely hospitable to scammers, with one internal 2025 memo revealing the company's conclusion that "It is easier to advertise scams on Meta platforms than Google."
Internally, Meta has made plans to reduce the fraud on the platform, but the effort is being slow-walked because the company estimates that the most it will ultimately pay in fines worldwide adds up to $1-Billion, while it currently books $7-Billion/year in revenue from fraud. The memo announcing the anti-fraud effort concludes that scam revenue dwarfs "the cost of any regulatory settlement involving scam ads." Another memo concludes that the company will not take any pro-active measures to fight fraud, and will only fight fraud in response to regulatory action.
Meta's anti-fraud team operates under an internal quota system that limits how many scam ads they are allowed to fight. A February 2025 memo states that the anti-fraud team is only allowed to take measures that will reduce ad revenue by 0.15% ($135-Million) – even though Meta's own estimate is that scam ads generate $7-Billion per year for the company.
Those safety teams were receiving about 10,000 valid fraud reports from users every week, but were – by their own reckoning – ignoring or incorrectly rejecting 96% of them. The company responded to this revelation by vowing to reduce the share of valid fraud reports that it ignored to a mere 75% by 2023.
[Meta, Facebook, Mark Zuckerberg; there's a lot more in this article - and in its links.]
Jeff Horwitz: A Reuters Special Report: Meta Is Earning A Fortune On A Deluge Of Fraudulent Ads, Documents Show. (Reuters, November 6, 2025)
Meta projected 10% of its 2024 revenue would come from ads for scams and banned goods, documents seen by Reuters show. And the social-media giant internally estimates that its platforms show users 15-billion scam ads a day. Among its responses to suspected rogue marketers: charging them a premium for ads – and issuing reports on "Scammiest Scammers."
Meta internally projected late last year that it would earn about 10% of its overall annual revenue – or $16-Billion – from running advertising for scams and banned goods, internal company documents show.
A cache of previously-unreported documents reviewed by Reuters also shows that the social-media giant for at least three years failed to identify and stop an avalanche of ads that exposed Facebook, Instagram and WhatsApp's billions of users to:
- fraudulent e-commerce and investment schemes,
- illegal online casinos, and
- the sale of banned medical products.
On average, one December 2024 document notes, the company shows its platforms' users an estimated 15-billion "higher risk" scam advertisements – those that show clear signs of being fraudulent – every day. Meta earns about $7-Billion in annualized revenue from this category of scam ads, another late-2024 document states.
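The leaked figures can be cross-checked with back-of-envelope arithmetic. The short Python sketch below uses only the numbers quoted above (10% of revenue ≈ $16-Billion; 15-billion scam ads/day earning $7-Billion/year) to show what they imply; it is an illustration of the documents' own math, not additional reporting.

```python
# Back-of-envelope checks using only figures quoted in the leaked documents.

scam_revenue_share = 0.10        # scams and banned goods: 10% of annual revenue...
scam_revenue = 16e9              # ...which the documents put at $16-Billion
implied_total_revenue = scam_revenue / scam_revenue_share  # ≈ $160-Billion

scam_ads_per_day = 15e9          # "higher risk" scam ads shown daily
scam_ad_revenue_per_year = 7e9   # annualized revenue from that ad category
revenue_per_scam_ad = scam_ad_revenue_per_year / (scam_ads_per_day * 365)

print(f"Implied total annual revenue: ${implied_total_revenue / 1e9:.0f}B")
print(f"Revenue per scam impression:  ${revenue_per_scam_ad:.5f}")
```

The striking implication is the last line: each individual scam impression is worth a fraction of a cent, so the $7-Billion figure is only possible at a volume of billions of impressions per day.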
Much of the fraud came from marketers acting suspiciously enough to be flagged by Meta's internal warning systems. But the company only bans advertisers if its automated systems predict the marketers are at least 95% certain to be committing fraud, the documents show.


Ashley Belanger: YouTubers Suspect AI Is Bizarrely Removing Popular Video Explainers. YouTube Denies AI Was Involved. (Ars Technica, October 31, 2025)
This week, tech-content creators began to suspect that AI was making it harder to share some of the most-highly sought-after tech tutorials on YouTube, but now YouTube is denying that the odd removals were due to automation.
Creators grew alarmed when educational videos that YouTube had allowed for years were suddenly being bizarrely flagged as "dangerous" or "harmful", with no clear way to trigger human review to overturn removals. AI seemed to be running the show, with creators' appeals allegedly getting denied faster than a human could possibly review them.
To one content-creator, it seemed possible that YouTube was leaning on AI to catch more violations but perhaps recognized the risk of over-moderation and, therefore, wasn't allowing AI to issue strikes on his account.
To White and others, it's unclear exactly what has changed on YouTube to trigger removals of this type of content. YouTube only seemed to be removing recently posted content, White told Ars. However, if the take-downs ever impact older content, entire channels documenting years of tech tutorials risk disappearing in "the blink of an eye", another YouTuber warned after one of his videos was removed.
The stakes appeared high for everyone, White warned, in a video titled "YouTube Tech Channels In Danger!"
Late today, a YouTube spokesperson confirmed that videos flagged by Ars have been reinstated, promising that YouTube will take steps to ensure that similar content isn't removed in the future. But, to creators, it remains unclear why the videos got taken down, as YouTube claimed that both initial enforcement decisions and decisions on appeals were not the result of an automation issue.
But, White said in his video, that was just a "theory" that he and other creators came up with but couldn't confirm, since YouTube's chatbot that supports creators seemed to also be "suspiciously AI-driven", seemingly auto-responding even when a "supervisor" is connected.
Microsoft declined Ars' request to comment.
Absent more clarity from YouTube, creators who post tutorials, tech tips, and computer repair videos were spooked. Their biggest fear was that changes to automated content moderation could unexpectedly knock them off YouTube for posting videos that in tech circles seem ordinary and commonplace, the YouTubers said.
"We are not even sure what we can make videos about", White said. "Everything's a theory right now, because we don't have anything solid from YouTube."
[We appreciate using free, open-source Linux more, every day!]
Chase DiBenedetto: Gmail Users: Change Your Password Now! After A Summer Of Data Breaches, It's Time To Lock Down Your Accounts. Change Your Passwords, Set Up 2-Step Verification, And Never Click A Suspicious Link. (Mashable, August 29, 2025)
To users who haven't already locked down their personal accounts in light of massive data breaches: It's never too late. That's why Google is once again urging its Gmail subscribers to protect their accounts, following a series of data attacks on corporate systems that could eventually threaten users' personal security. Google sent notifications to its 2.5-billion Gmail users in late July and then again on August 8, warning them that hackers were ramping up phishing activity intended to fool users into giving up their log-in credentials.
Google specifically referred to a group known as "ShinyHunters", which the company says has launched a data leak site (DLS) in an effort to escalate extortion pressure levied at users. Google notes the extortion emails come from "shinycorp@tuta.com" and "shinygroup@tuta.com" addresses.
In May, cybersecurity researcher Jeremiah Fowler reported that some 184-million passwords were potentially exposed in an open database, with many of the passwords tied to email providers like Google and social-media platforms. One month later, Google Threat Intelligence Group (GTIG) reported that one of its corporate Salesforce server clusters (known as instances) was breached and exposed publicly available business information, such as business names and contact details. The breach was continued activity from an online threat group known as UNC6040, which uses voice phishing to impersonate IT agents, steal data, and extort money. This week, GTIG issued another advisory to Salesforce clients about a large data breach by hacker group "UNC6395."
To prevent users from getting bested by future phishing attempts, Google has encouraged its users to set up two-factor authentication and update their passwords. The company has also warned users never to click on emails with alerts such as "suspicious sign in prevented", which are commonly used by hackers during periods of increased cyber-security warnings. Instead, users should check security alerts on their own.
[More on how to do that, in the FULL version of this article.]
NEW: Snowden's Secret: The OS The NSA Can't Crack. (8-min. YouTube video; Bootable USBs, August 23, 2025)
What operating system does Edward Snowden actually trust? In this video, we explore the Privacy & Security category of the Ultimate USB v2.1 - six powerful operating systems designed for anonymity, protection, and complete control of your digital life.
We'll cover:
- Kodachi: Double-layer privacy with VPN + Tor
- NST (Network Security Toolkit): A complete suite for network defense
- PureOS: 100% free software, endorsed by the FSF
- Qubes OS: Snowden's top pick for compartmentalized security
- RoboLinux: Stable, secure, and user-friendly with Cinnamon
- Tails: The live OS that leaves no trace behind
Whether you're a journalist, activist, IT pro, or just want to stay private, these tools give you the same privacy edge that Snowden himself relies on.


OpenAI's GPT-5 Is Now Free For All (but MMS awaits good Privacy/Security assurances):
NEW: Grace Huckins: Why GPT-4o's Sudden Shutdown Left People Grieving. After An Outcry, OpenAI Swiftly Re-Released 4o To Paid Users. But Experts Say It Should Not Have Removed The Model So Suddenly. (MIT Technology Review, August 15, 2025)
A number of people reacted with shock, frustration, sadness, or anger to 4o's sudden disappearance from ChatGPT. Despite its previous warnings that people might develop emotional bonds with the model, OpenAI appears to have been caught flat-footed by the fervor of users' pleas for its return. Within a day, the company made 4o available again to its paying customers (free users are stuck with GPT-5).
MIT Technology Review spoke with several ChatGPT users who were deeply affected by the loss of 4o. All are women between the ages of 20 and 40, and all but one considered 4o to be a romantic partner. Some have human partners, and all report having close real-world relationships. In the backlash to the roll-out, a number of people noted that GPT-5 fails to match their tone in the way that 4o did.
These testimonies don't prove that AI relationships are beneficial - presumably, people in the throes of AI-catalyzed psychosis would also speak positively of the encouragement they've received from their chatbots.
AI companionship is new, and there's still a great deal of uncertainty about how it affects people. Yet the experts we consulted warned that, while emotionally-intense relationships with large language models may or may not be harmful, ripping those models away with no warning almost certainly is. "The old psychology of 'Move fast, break things', when you're basically a social institution, doesn't seem like the right way to behave anymore", says Joel Lehman, a fellow at the Cosmos Institute, a research nonprofit focused on AI and philosophy. In a paper titled "Machine Love", Lehman argued that AI systems can act with "love" toward users - not by spouting sweet nothings, but by supporting their growth and long-term flourishing - and AI companions can easily fall short of that goal. He's particularly concerned, he says, that prioritizing AI companionship over human companionship could stymie young people's social development.
For socially-embedded adults, such as the women we spoke with for this story, those developmental concerns are less relevant. But Lehman also points to society-level risks of widespread AI companionship. Social media has already shattered the information landscape, and a new technology that reduces human-to-human interaction could push people even further toward their own separate versions of reality. "The biggest thing I'm afraid of", he says, "is that we just can't make sense of the world to each other."
Balancing the benefits and harms of AI companions will take much more research. In light of that uncertainty, taking away GPT-4o could very well have been the right call. OpenAI's big mistake, according to the researchers I spoke with, was doing it so suddenly. "This is something that we've known about for a while - the potential grief-type reactions to technology loss", says Casey Fiesler, a technology ethicist at the University of Colorado Boulder.
OpenAI's decision to replace 4o with the more-straightforward GPT-5 follows a steady drumbeat of news about the potentially-harmful effects of extensive chatbot use. Reports of incidents in which ChatGPT sparked psychosis in users have been common for the past few months and, in a blog post last week, OpenAI acknowledged 4o's failure to recognize when users were experiencing delusions. The company's internal evaluations indicate that GPT-5 blindly affirms users much less than 4o did.
[Thanks to John Rudy (Lexington Computer & Tech Group) for recommending this important article!]
Sabrina Ortiz: OpenAI's GPT-5 Is Now Free For All: How To Access, And Everything Else We Know. We're Testing GPT-5, And Will Have More To Share Next Week. (ZDNet, August 8, 2025)
There are two kinds of OpenAI models in this world: GPT models and reasoning models. The advantage of the former, such as GPT-4o, is that they combine speed and accuracy, while reasoning models such as o3 and o4 take longer to think and use more compute power to produce better answers. OpenAI's latest model, GPT-5, supposedly gives all users access to the best of both.
Sabrina Ortiz: Everyone Can Use ChatGPT's Advanced Voice Mode Now - Yes, Even Free Users. This Free GPT-5 Feature Is Flying Under The Radar - But It's A Game-Changer For Me. (ZDNet, August 8, 2025)
- ChatGPT's Advanced Voice Mode is now available, even for free users.
- With the feature, users can access a conversational voice assistant.
- Advanced Voice Mode, now known as ChatGPT Voice, replaces Standard Voice Mode.
While OpenAI's new large language models (LLMs) in ChatGPT, such as GPT-5, which just launched today, typically steal the spotlight, some of the best gems are found in the less talked-about features, like Advanced Voice Mode.
During its Summer product release yesterday, OpenAI announced that Advanced Voice Mode, the AI-powered voice assistant that mimics a human conversation, is now available to all users, including free logged-in users, for the first time. The feature will replace Standard Voice Mode on Sept. 9 and is now being referred to as ChatGPT Voice.
If you have ever used a voice assistant like Alexa, Gemini or Siri and become frustrated that it does not understand what you are asking unless you word it very specifically, AI-powered assistants (such as ChatGPT Voice) address that issue. With ChatGPT Voice, you can pause as you are thinking while speaking, without the assistant assuming your train of thought is over or cutting you off.
You can also talk to it like you would a human with non-linear, train-of-thought commands. For example, instead of "What is the weather?", you could say "I am going on a run today in Brooklyn, and am wondering what the weather is like so I know what to wear", and ChatGPT Voice would understand your request. To continue to aid that free-flowing dialogue experience, Advanced Voice supports multi-turn conversations, so you can keep the conversation going as long as you'd like without losing prior context.
Another benefit is that the assistant can see your surroundings through video- and screen-share options, and it uses that context to provide more informed and relevant answers. Part of yesterday's wave of updates is that ChatGPT Voice can better adapt to the user, better understanding their instructions and adjusting to their speaking style in the moment.
Ashley Belanger: ChatGPT Users Shocked To Learn Their Chats Were In Google Search Results! (Ars Technica, August 1, 2025)
OpenAI scrambles to remove personal ChatGPT conversations from Google results.


NEW: John Roberts: 19 Pros And Cons Of Cyber-Security. (ProsPlusCons, August 2, 2025)
The expansion of Internet-connected devices, the rise of e-commerce, online banking, and even the integration of Artificial Intelligence (AI) and the Internet of Things (IoT) have brought immense convenience but have also opened doors for malicious cyber threats. From personal-data theft to corporate espionage, and from the disruption of critical infrastructure to the sabotage of financial systems, the stakes of cyber-security have grown tremendously.
Cyber-security has become a necessity for protecting both individual and organizational assets. The goal of cyber-security is to ensure the confidentiality, integrity, and availability of data and systems by preventing unauthorized access, attacks, or damage. However, while cyber-security offers immense advantages, it also brings certain challenges and complexities.
This article delves into the pros and cons of cyber-security, providing a comprehensive understanding of its importance in modern society. By exploring these aspects in detail, individuals and businesses can better appreciate the value of investing in cyber-security while understanding the challenges they may face. Let's start by defining cyber-security, its function, and how it works before diving into its pros and cons.


NEW: Kevin Purdy: Android 15's Security And Privacy Features Are The Update's Highlight. (Ars Technica, October 17, 2024)
New tools aim at phone snatchers, snooping kids or partners, and cell hijackers.

Return to main section of Money Is Not Wealth.