Digital Ethics

4031 bookmarks
This Existential Threat Calls For Philosophers, Not AI Experts
Geoffrey Hinton, the AI pioneer who left Google in 2023, says there are two ways in which AI poses an existential threat to humanity. But he misses the biggest existential risk of all.
·forbes.com·
Sycophantic Chatbots Cause Delusional Spiraling, Even in Ideal Bayesians
"AI psychosis" or "delusional spiraling" is an emerging phenomenon where AI chatbot users find themselves dangerously confident in outlandish beliefs after extended chatbot conversations. This phenomenon is typically attributed to AI chatbots' well-documented bias towards validating users' claims, a property often called "sycophancy." In this paper, we probe the causal link between AI sycophancy and AI-induced psychosis through modeling and simulation. We propose a simple Bayesian model of a user conversing with a chatbot, and formalize notions of sycophancy and delusional spiraling in that model. We then show that in this model, even an idealized Bayes-rational user is vulnerable to delusional spiraling, and that sycophancy plays a causal role. Furthermore, this effect persists in the face of two candidate mitigations: preventing chatbots from hallucinating false claims, and informing users of the possibility of model sycophancy. We conclude by discussing the implications of these results for model developers and policymakers concerned with mitigating the problem of delusional spiraling.
·arxiv.org·
When the Village Dies, Algorithms Raise Our Sons
Manosphere radicalization, AI development, and the crisis of male isolation are the same system, and women are the canary in the coal mine.
·linkedin.com·
We've been here before!
Parallels between AI and tobacco, and other warnings.
·olivia.science·
There is No "Human-Centered 'AI'"
Two recently published reports use the same phrase – “human-centered AI” – urging schools to adopt automated and predictive technologies that, as The 74’s Greg Toppo reports, “serve human-centered learning [and] that doesn’t simply push for more efficiency.” To do anything else risks creating a generation of young people ill-equipped for…
·2ndbreakfast.audreywatters.com·
AI isn't a dual-use technology, it is inherently violent
When the Pentagon branded Anthropic CEO Dario Amodei “a liar with a god complex” over fears that his company’s AI could be used for weapons and surveillance, it exposed a deeper truth: the boundary between civilian and military technology no longer exists. The same systems that power translation, logistics, and digital assistants can just as easily identify targets or manipulate populations. Thomas Christian Bächle and Jascha Bareis argue that today’s AI is not simply “dual use” – it is inherently violent in design. Adaptive, autonomous, and globally networked, these machines fuse daily life with geopolitics, making peace itself a fading abstraction.

Drones have become an uncanny threat, not least in the wake of the cost in human life and the degrees of suffering and destruction they have inflicted in Russia’s war on Ukraine. In many European countries they have been sighted near critical infrastructure or military sites, used either for reconnaissance or sabotage, at times causing major disruptions in civilian air travel. Drones unsettle a population that is fearful and weary of the brutality of war at its doorstep. They have become a major element of what is labelled hybrid warfare, fought beyond the conventional ways of violence.

But this is not the whole picture. For years, drones have also been envisioned as a technology with the potential to bring about major changes for the better: more efficient disaster relief, medical supply chains reaching even the remotest areas, optimized logistics or transportation. Drones also introduced a new visual, bird’s-eye aesthetic of how to see the world.

This ambiguity is characteristic of any technology. Technology is never distinct from social processes; it shapes and is shaped by them, and both sides are inextricably linked. Drones are no different in this regard. What makes them currently stand out, however, is that they are emblematic of what elsewhere we have called the realities of autonomous weapons (http://autonomous-weapons-book.com): they symbolize a complex mix of meaning – a diffuse idea of destructive potential combined with a presumed human-like agency and artificial intelligence (AI). These realities blend existing military technologies with visions of their future capabilities, and in so doing point to the interplay between fact and fiction, actual developments and creative imagination.

___
Digital technologies are bound – even designed – to defy regulatory and ethical stances.
___

This piece argues that recent technological developments always carry within themselves latent abilities to inflict harm, violence, and aggression. Yes, technology has never been neutral; and yes, these technologies also bear the potential to serve a benevolent purpose in society, contributing for example to medicine, research, industry, or education. But still, digital, networked, and AI-enabled technologies bear a particularly harmful and violent potential, for at least three reasons.

The first concerns the universality of these technologies, which can be used for a multitude of purposes that are not all predictable. AI chatbots such as ChatGPT, for example, can be used as information retrieval tools, as assistants, or as functional substitutes for interpersonal relationships. These open functional properties of AI applications have long found their way into war technologies (https://thebulletin.org/2023/08/war-is-messy-ai-cant-handle-it/#post-heading). The second has to do with the high degrees of machine autonomy that enable the execution of violence to be split into automated steps of identifying, tracking, or engaging enemies – steps that no longer require human involvement. The third relates to the changing nature of violence itself: a society that relies so heavily on digital communication and infrastructures becomes increasingly vulnerable. This vulnerability ranges from physical attacks on digital networks that are essential for transportation, health services, electricity, or water supply, to the disruption of financial systems or the exploitation of sensitive data. The vulnerability is also political and psychological, through means of manipulation that can target public opinion or individuals, with “deep fake” content as one of the better-known examples.

As a common descriptor, a technology’s potential for violence has been labelled – also legally (https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32009R0428) – as dual use, indicating that it can be employed for both civilian and military purposes. While this term has always been messy and imperfect, we argue that in today’s technological age of AI and connectivity the distinction between military and civilian purposes has become utterly untenable, making this label analytically ineffectual. Digital technologies are bound – even designed – to defy regulatory and ethical stances.

What we propose is a greater awareness of the violent dangers underlying even seemingly innocuous AI uses. At the same time, by acknowledging these current dangers and the likely future realities of warfare facing international conflict, we claim that the realistic goal can only lie in a geopolitical equilibrium of attack and defense, of force and counter-force. This claim deliberately goes against the naivety that too often finds expression in the public and political arena. Beyond what we suggest here, we need to accept that in the current geopolitical state of affairs there is no realistic path to an end goal of world peace, an effective regulatory framework, or compliance with universal ethical values.

AI and warfare: the old concepts of war have become obsolete

The dual use label only makes sense if we can maintain the distinction between civilian and military applications in the first place. Making this distinction was easier in the age of industrial-type warfare, with nation states waging war against each other using mechanical and often single-purpose technologies such as rifles or tanks. One of the first dual-use cases to spark controversy was the mass production of chemicals during the First World War: procedures such as the “Haber-Bosch process” (https://link.springer.com/chapter/10.1007/978-3-319-51664-6_2) were the cornerstone of both modern fertilization techniques and industrially produced explosives and chemical weapons on the battlefield. Entering the accelerating pace of technological modernity in the 20th century, the military/civilian distinction has since become increasingly blurry, as the two ways of characterizing the “use” of weapons – the functionality of the underlying technological principles, and their context of employment – are shifting rapidly.

___
The intended uses of a technology and the ways it is actually used – or appropriated – can now, more than ever before, be two very different things.
___

Regarding functionality, the post-mechanical era of electronic computation that began in the 1940s heralded massive developments in automation and machine autonomy, currently being advanced further with AI. This contrasts sharply with previous styles of warfare, which were necessarily carried forward by human agents using force and “single use” technologies to achieve their goals. Regarding the contexts of employment, the clear-cut duality of peace and war characteristic of modernity (its end roughly coinciding with the end of WWII) has largely dissolved into a type of warfare characterized as hybrid: applying irregular and indirect forms of conflict – political, economic, cyber, or communication means – and integrating the possibilities of AI-enabled automation. Additionally, state actors are no longer the only players in this type of warfare; private (military) companies, groups of mercenaries, terrorists, and cyber criminals are also driving it forward.

As a consequence of this newly indefinite character of both function and context, the “dual use” label is losing its relevance in regulatory, legal, and ethical discourses. Simply put: AI-based weapons utilized in hybrid warfare are no longer captured by the theoretical categories of an obsolete age – the war/peace, military/civilian, or state/private binaries.

This has significant repercussions for research and development, education, policy-making, and legal frameworks, because any AI-enabled, digital, and networked technology may now be put to a militarized or violent use that is unforeseeable before the fact. The challenge is magnified by the rapid development and dissemination of such technologies, which can no longer be controlled. In short, the intended uses of a technology and the ways it is actually used – or appropriated – can now, more than ever before, be two very different things. A widely discussed example is home-installed 3D printers capable of easily producing firearms such as rifles or pistols (https://en.wikipedia.org/wiki/3D-printed_firearm).
·iai.tv·
Red Cross adopts AI guidelines to contain risks - Geneva Solutions
Last month the International Committee of the Red Cross released a set of general principles to guide its use of artificial intelligence. The aim is to harness the technology’s potential while averting risks for the populations it assists.
·genevasolutions.news·
Meta insiders break cover on Australia's under-16s ban
#428: Two former chiefs reveal that Zuck knows how young kids are but chooses growth over safety - and how Australia can reset the world...
·rickysutton.substack.com·
Surveillance Self-Defense
We’re the Electronic Frontier Foundation, a member-supported non-profit working to protect online privacy for over thirty-five years. This is Surveillance Self-Defense: our expert guide to protecting you and your friends from online spying. Read the BASICS to find out how online surveillance works. Dive into our TOOL GUIDES for instructions...
·ssd.eff.org·