Digital Ethics


4042 bookmarks
AI isn't a dual-use technology, it is inherently violent
When the Pentagon branded Anthropic CEO Dario Amodei “a liar with a god complex” over fears that his company’s AI could be used for weapons and surveillance, it exposed a deeper truth: the boundary between civilian and military technology no longer exists. The same systems that power translation, logistics, and digital assistants can just as easily identify targets or manipulate populations. Thomas Christian Bächle and Jascha Bareis argue that today’s AI is not simply “dual use”—it is inherently violent in design. Adaptive, autonomous, and globally networked, these machines fuse daily life with geopolitics, making peace itself a fading abstraction.

Drones have become an uncanny threat—not least given the cost in human life and the degrees of suffering and destruction they have inflicted in Russia’s war on Ukraine. In many European countries they have been sighted near critical infrastructure or military sites, used for reconnaissance or sabotage, at times causing major disruptions to civilian air travel. Drones unsettle a population that is fearful and weary of the brutality of war at its doorstep. They have become a major element of what is labelled hybrid warfare, fought beyond the conventional means of violence.

But this is not the whole picture. For years, drones have also been envisioned as a technology with the potential to bring about major changes for the better: more efficient disaster relief, medical supply chains reaching even the remotest areas, optimized logistics and transportation. Drones have also introduced a new, bird’s-eye visual aesthetic: a new way of seeing the world.

This ambiguity is characteristic of any technology. Technology is never distinct from social processes; it shapes them and is shaped by them, and the two sides are inextricably linked. Drones are no different in this regard. What makes them stand out at present, however, is that they are emblematic of what we have elsewhere called the realities of autonomous weapons (http://autonomous-weapons-book.com): they symbolize a complex mix of meanings—a diffuse idea of destructive potential combined with a presumed human-like agency and artificial intelligence (AI). These realities blend existing military technologies with visions of their future capabilities, and in so doing point to the interplay between fact and fiction, between actual developments and creative imagination.

___

Digital technologies are bound—even designed—to defy regulatory and ethical stances.

___

This piece argues that recent technological developments always carry within them latent abilities to inflict harm, violence, and aggression. Yes, technology has never been neutral; and yes, these technologies also have the potential to serve benevolent purposes in a society, contributing for example to medicine, research, industry, or education. Still, digital, networked, and AI-enabled technologies carry a particularly harmful and violent potential, for at least three reasons.

The first concerns the universality of these technologies, which can be used for a multitude of purposes that are not all predictable. AI chatbots such as ChatGPT, for example, can be used as information retrieval tools, as assistants, or as functional substitutes for interpersonal relationships. These open functional properties of AI applications have long found their way into war technologies (https://thebulletin.org/2023/08/war-is-messy-ai-cant-handle-it/#post-heading).

The second has to do with the high degree of machine autonomy, which enables the execution of violence to be split into automated steps of identifying, tracking, or engaging enemies that no longer require human involvement. The third relates to the changing nature of violence itself: a society that relies so heavily on digital communication and infrastructures becomes increasingly vulnerable. This vulnerability ranges from physical attacks on digital networks essential for transportation, health services, electricity, or water supply, to the disruption of financial systems or the exploitation of sensitive data. The vulnerability is also political and psychological, through means of manipulation that can target public opinion or individuals, with “deep fake” content as one of the better-known examples.

As a common descriptor, a technology’s potential for violence has been labelled, also legally (https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32009R0428), as dual use, indicating that it can be employed for both civilian and military purposes. While this term has always been messy and imperfect, we argue that in today’s technological age of AI and connectivity the distinction between military and civilian purposes has become utterly untenable, making the label analytically ineffectual. Digital technologies are bound—even designed—to defy regulatory and ethical stances.

What we propose is a greater awareness of the violent dangers underlying even seemingly innocuous uses of AI. At the same time, acknowledging these current dangers and the likely future realities of warfare and international conflict, we claim that the realistic goal can only lie in a geopolitical equilibrium of attack and defense, of force and counter-force. This claim deliberately goes against the naivety that too often finds expression in the public and political arena. Beyond what we suggest here, we need to accept that in the current geopolitical state of affairs there is no realistic path to an end goal of world peace, an effective regulatory framework, or compliance with universal ethical values.

AI and warfare: the old concepts of war have become obsolete

The dual-use label only makes sense if we can maintain the distinction between civilian and military applications in the first place. That distinction was easier to draw in the age of industrial-type warfare, with nation states waging war against each other using mechanical and often single-purpose technologies such as rifles or tanks. One of the first dual-use cases to spark controversy was the mass production of chemicals during the First World War. Procedures such as the “Haber-Bosch process” (https://link.springer.com/chapter/10.1007/978-3-319-51664-6_2) were the cornerstone of both modern fertilization techniques and industrially produced explosives and chemical weapons on the battlefield. With the accelerating pace of technological modernity in the 20th century, the military/civilian distinction has become increasingly blurry, as the two ways of characterizing the “use” of weapons—the functionality of the underlying technological principles, and their context of employment—are shifting rapidly.

___

The intended uses of a technology and the ways it is actually used—or appropriated—can now, more than ever before, be two very different things.

___

Regarding functionality, the post-mechanical era of electronic computation that began in the 1940s heralded massive developments in automation and machine autonomy, currently being advanced further with AI. This contrasts sharply with previous styles of warfare, which were necessarily carried forward by human agents using force and “single use” technologies to achieve their goals. Regarding the contexts of employment, the clear-cut duality of peace and war characteristic of modernity (its end roughly coinciding with the end of WWII) has largely dissolved into a type of warfare characterized as hybrid: applying irregular and indirect forms of conflict, such as political, economic, cyber, or communication means, as well as integrating the possibilities of AI-enabled automation. Moreover, state actors are no longer the only players in this type of warfare: private (military) companies, groups of mercenaries, terrorists, and cyber criminals are also driving it forward.

As a consequence of this newly indefinite character of both function and context, the “dual use” label is losing its relevance in regulatory, legal, and ethical discourses. Simply put: AI-based weapons deployed in hybrid warfare are no longer captured by the theoretical categories of an obsolete age of war/peace, military/civilian, or state/private binaries.

This has significant repercussions for research and development, education, policy-making, and legal frameworks, because any AI-enabled, digital, and networked technology may now be put to a militarized or violent use that is unforeseeable before the fact. The challenge is magnified by the rapid development and dissemination of such technologies, which can no longer be controlled. In short, the intended uses of a technology and the ways it is actually used—or appropriated—can now, more than ever before, be two very different things. A widely discussed example is home-installed 3-D printers capable of easily producing firearms such as rifles or pistols (https://en.wikipedia.org/wiki/3D-printed_firearm).
·iai.tv·
Red Cross adopts AI guidelines to contain risks - Geneva Solutions
The International Committee of the Red Cross released last month a set of general principles to guide its use of artificial intelligence. The aim is to harness the technology’s potential while averting risks for the populations it assists.
·genevasolutions.news·
Meta insiders break cover on Australia's under-16s ban
#428: Two former chiefs reveal that Zuck knows how young kids are but chooses growth over safety - and how Australia can reset the world...
·rickysutton.substack.com·
Surveillance Self-Defense
We’re the Electronic Frontier Foundation, a member-supported non-profit working to protect online privacy for over thirty-five years. This is Surveillance Self-Defense: our expert guide to protecting you and your friends from online spying. Read the BASICS to find out how online surveillance works. Dive into our TOOL GUIDES for instructions...
·ssd.eff.org·
AI Is a Burnout Machine
AI may be accelerating productivity, but it's also quickly driving the programmers that use it towards burnout.
·futurism.com·
AI Doesn’t Reduce Work—It Intensifies It
An eight-month study found that these tools made productivity surge—as well as cognitive fatigue, unsustainable hours, and other problems.
·hbr.org·
How Hackers Are Fighting Back Against ICE
A few enterprising hackers have started projects to do counter surveillance against ICE, and hopefully protect their communities through clever use of technology.
·eff.org·
Resisting Algorithms: Human Rights in the Age of Platforms
In a digital age, where content creators are booming, shaping culture, influencing politics, and building entire livelihoods online, we find ourselves in a world governed by algorithms we didn’t…
·pca.st·
AI Data Centers Pushing Electric Grid Into Meltdown
AI data centers are using up so much energy that grid operator PJM may be forced to enact rolling blackouts on its customers.
·futurism.com·
‘ELITE’: The Palantir App ICE Uses to Find Neighborhoods to Raid
Internal ICE material and testimony from an official obtained by 404 Media provides the clearest link yet between the technological infrastructure Palantir is building for ICE and the agency’s activities on the ground.
·404media.co·
AI Is Hollowing Out Higher Education
Olivia Guest & Iris van Rooij urge teachers and scholars to reject tools that commodify learning, deskill students, and promote illiteracy.
·project-syndicate.org·
What Does 'Human-Centred AI' Mean?
While it seems sensible that human-centred artificial intelligence (AI) means centring "human behaviour and experience," it cannot be any other way. AI, I argue, is usefully seen as a relationship...
·arxiv.org·
Post by @olivia.science — Bluesky
Boiling here at home in Cyprus but I put the finishing touches a couple of days ago on this preprint: What Does 'Human-Centred AI' Mean? https://doi.org/10.48550/arXiv.2507.19960 Wherein I analyse HCAI & demonstrate through 3 triplets my new tripartite definition of AI (Table 1) that properly centres the human. 1/n
·bsky.app·
The Battle for Your Time: Exposing the Hidden Costs of Social Media
Do we truly comprehend how much of our time and attention is given to technology? In his talk, Dino Ambrosi reframes how we think about our relationships to devices, and shares his ideas on how to create healthy digital habits. Dino Ambrosi, the founder of Project Reboot, is an expert at guiding teens and young adults to relationships with technology that empower them. While studying at UC Berkeley, he created a popular course called Becoming Tech Intentional which he taught to over 60 of his fellow students who reduced their screen time by an average of over 3 hours per day. After graduating in May, he embarked on a mission to spread the contents of his course to a broader audience. Through school assemblies, workshops, and consulting, he has worked with over 500 students and parents to raise awareness about the addictive potential of our devices, drive conversations around digital wellness, and deliver practical strategies to build healthy digital habits. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx
·ted.com·
Apropos DeepSeek by Dan Bergh Johnsson
DeepSeek is an open LLM model promising innovation in both efficiency and transparency – but how much do we really know about what’s happening under the hood...
·m.youtube.com·
Rep. Zooey Zephyr (@zoandbehold.bsky.social)
"And I never say no" We need to have a serious talk about the way "AI companion" apps not only prey on the vulnerable, but are priming their users to ignore consent and to conflate love with control. We need AI regulations across so many sectors, but this area is particularly horrifying.
·bsky.app·
Anil Dash (@anildash.com)
Remember: When Tylenol was poisoned *by an outsider* and killed people, the company recalled all their products & redesigned them. When Intel’s Pentium had a bug so obscure it affected 1 in _9 billion_ long division calculations, they recalled their chips. ChatGPT was made deadly *by its team*. [contains quote post or other embedded content]
·bsky.app·