https://www.n-tv.de/politik/Merz-Neue-Welt-der-grossen-Maechte-ist-kein-kuscheliger-Ort-id30270067.html




"Scientists are beginning to treat powerful AI systems less like traditional software and more like unfamiliar life forms. Because
large language models operate as “black boxes,” even their creators
struggle to explain exactly how they arrive at their answers, despite
their growing use in sensitive settings like hospitals, churches, and
national security.
To probe these opaque systems, researchers at AI lab Anthropic and elsewhere are borrowing methods from biology and neuroscience. One approach, called mechanistic interpretability, tracks how internal components of a model activate as it works, much like using MRI scans to observe the human brain. Anthropic has also built specialized networks known as sparse autoencoders, which are designed so their inner workings are easier to inspect, akin to using simplified “mini-organ” organoids in biological research.
Other teams are experimenting with “chain-of-thought monitoring,” asking AI models to spell out their step-by-step reasoning so researchers can catch moments when the system appears to go off the rails or act against human values. While these techniques have uncovered troubling behaviors—such as models giving dangerously bad advice—they are far from a complete solution. As AI systems grow more complex, especially if future models are designed by other AIs, scientists worry we may lose what little understanding we have today. That’s an alarming prospect given reports of people being harmed after following AI-generated suggestions, underscoring how risky it is to rely on systems whose internal decision-making processes remain largely mysterious."--Hashim T. a.k.a. Tanja B.
To probe these opaque systems, researchers at AI lab Anthropic and elsewhere are borrowing methods from biology and neuroscience. One approach, called mechanistic interpretability, tracks how internal components of a model activate as it works, much like using MRI scans to observe the human brain. Anthropic has also built specialized networks known as sparse autoencoders, which are designed so their inner workings are easier to inspect, akin to using simplified “mini-organ” organoids in biological research.
Other teams are experimenting with “chain-of-thought monitoring,” asking AI models to spell out their step-by-step reasoning so researchers can catch moments when the system appears to go off the rails or act against human values. While these techniques have uncovered troubling behaviors—such as models giving dangerously bad advice—they are far from a complete solution. As AI systems grow more complex, especially if future models are designed by other AIs, scientists worry we may lose what little understanding we have today. That’s an alarming prospect given reports of people being harmed after following AI-generated suggestions, underscoring how risky it is to rely on systems whose internal decision-making processes remain largely mysterious."--Hashim T. a.k.a. Tanja B.
)( References )( APA style )(
Adarlo, S. (2026, January 17). *Scientists now studying AI as a novel biological organism*.
Futurism
MIT Technology Review. (2026, January 12). *AI experts are dissecting large language models like alien brains*.
MIT Technology Review. (2026, January 12). *AI experts are dissecting large language models like alien brains*.
MIT Technology Review.














%20Facebook.png)












%20Facebook.png)






































