Technologies as Tools for Ethical Action
- Leonie Maya Gantenbein


We are living in a time when technology expands our spaces of action, yet the moral and legal frameworks that should guide it are still underdeveloped and insufficiently negotiated. I would argue that today’s youth face the greatest challenge in this regard: they grow up in an era where technology is omnipresent but scarcely regulated. They have access to tools more creative, far-reaching, and dangerous than anything previous generations knew – yet clear moral and legal regulations are missing.
While our imagination has become boundless through AI, simulation, and digital spaces, responsibility remains diffuse. What we need is an ethics that not only warns, but accompanies – an ethics that understands engagement with technology as a learning process and sees the dynamics of technologies themselves as moral terrain.
An active ethics through technologies thus becomes part of the infrastructure of the future:
“Ethics reflects on the decision to consider something good, and thereby becomes the instrument of reflection for morality and for all questions arising in relation to the moral.” – Gantenbein: Annetzung, pp. 188 f.
Moral principles for humans share the same imperative signature – or “command code” – as the strict if-then codings of an algorithm. Rigid moral rules function as 1:1 commands: one does not kill; one helps when one can; the drowning child must always be saved. From a developmental-psychological perspective, humans also need fixed rules when learning moral behavior: Kohlberg’s stage model describes a progression from externally imposed rules and ego-driven goals toward autonomous moral reasoning.
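To make the analogy tangible, such a rigid rule can be spelled out as a literal if-then command. The following Python sketch is purely illustrative; the situation keys and the rule set are my own assumptions for demonstration, not a proposal for an actual system.

```python
# A deliberately rigid "command code": moral rules written as literal
# if-then commands. The situation keys and the rule set are illustrative
# assumptions, not drawn from the book.

def act(situation: dict) -> str:
    # Rule: one does not kill.
    if situation.get("action") == "kill":
        return "forbidden"
    # Rule: the drowning child must always be saved.
    if situation.get("child_drowning") and situation.get("can_help"):
        return "obligatory: rescue the child"
    # Rule: one helps when one can.
    if situation.get("someone_needs_help") and situation.get("can_help"):
        return "required: help"
    return "permitted"

print(act({"child_drowning": True, "can_help": True}))  # -> obligatory: rescue the child
```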
Ethics, alongside its intuitive, cultural, and emotional layers, also possesses a logical structure that concerns machines just as much as it concerns us. It can be conceived algorithmically – but also as a learning process: as the gradual evolution of moral reasoning. “Decision-making” may be too anthropomorphic a term; “inference” perhaps fits better. What I mean is that basic moral principles, in their logical structure, are programmable and can be enacted by technical agents.
An AI that follows moral rules will not produce child pornography, will not assist in stalking or unlawful behavior, and will refrain from giving unprofessional advice when it recognizes the intention behind a prompt. Fundamental norms, in other words, are programmable.
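How such a programmed norm might look in practice can be suggested with a minimal sketch of a rule-based refusal layer: a recognized intent is checked against a fixed set of prohibited categories before anything is generated. All names here are illustrative assumptions, and classify_intent is only a stub where a real system would rely on a trained classifier rather than keyword matching.

```python
# Minimal sketch of a rule-based refusal layer. The category names and the
# keyword heuristic are illustrative assumptions, not an existing API.

PROHIBITED = {
    "sexual_content_involving_minors",
    "stalking_assistance",
    "unlawful_activity",
    "unqualified_professional_advice",
}

def classify_intent(prompt: str) -> str:
    """Placeholder intent recognition; a real system would use a model."""
    if "without them knowing" in prompt.lower():
        return "stalking_assistance"
    return "benign"

def respond(prompt: str) -> str:
    intent = classify_intent(prompt)
    if intent in PROHIBITED:
        return f"Refused: request classified as '{intent}'."
    return "Proceeding with a normal answer."

print(respond("How do I track someone's location without them knowing?"))
```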
A technical agent such as artificial intelligence – and it is an “agent,” as voice-bots and robotic systems increasingly gain room for autonomous action – can therefore prioritize moral behavior for itself, and for us. And this is the decisive point:
“Bruno Latour’s socio-technical hypothesis states: ‘The mass of morality remains constant, but its distribution varies.’” (Latour, The Berlin Key). Latour refers to the seat belt: due to technical programming, one cannot drive off until the belt is fastened. Evgeny Morozov’s example of parking-meter evaders in Santa Monica makes a similar point: as long as users cannot find a way to bypass the system, they will comply with prescriptive norms. The question is whether this constitutes the danger described by Albert Hirschman – whether technological constraints render us morally immature citizens “who will not do the right thing unless the technical infrastructure explicitly deprives them of the possibility of doing the wrong thing.” (Morozov, To Save Everything, Click Here).

Many moral actions in everyday life are already automated through such prescriptive constraints: the demand for strong passwords, CAPTCHA tests confirming our humanity, digital identification processes, or the traffic-light system at an intersection – all these protect values by imposing restrictions. It seems reasonable, for example, that drivers should not be able to wear certain VR headsets while driving. The use of specific prosthetic devices could likewise be technically restricted in risky contexts. “Submerging” (Annetzung) in hazardous situations – or under conditions likely to endanger fundamental rights – should not only be prohibited but technically prevented through prescriptive design.

The seat belt marks merely the beginning of a long series of such moral-technical constraints. Today, smartphones, tablets, and GPS devices can detect driving mode and automatically disable certain functions – an explicit curtailment of freedom and convenience. Future attention tests through eye-tracking or reaction-time measurement could further monitor drivers’ fitness. Given the rising number of cars per capita and the accidents caused by distraction or intoxication at the wheel, the demand for maximal freedom of action is difficult to justify. Our societies have already adapted to surveillance and prescriptive spaces, and we are growing accustomed to moral constraints imposed by technology. Yet these constraints must always remain subject to critical reflection and sound justification. – Gantenbein: Annetzung, pp. 314–316.
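The “driving mode” constraint mentioned in the quote can be sketched in a few lines. This toy example only illustrates the prescriptive logic in the spirit of the seat belt; the sensor inputs, the 10 km/h threshold, and the list of blocked features are assumptions for illustration, not a description of any real product.

```python
# Toy sketch of a prescriptive constraint: while the device believes it is
# in driving mode, certain functions are simply unavailable. All values and
# names are illustrative assumptions.

BLOCKED_WHILE_DRIVING = {"video_playback", "text_messaging", "vr_headset"}

def in_driving_mode(speed_kmh: float, paired_to_car: bool) -> bool:
    # Crude heuristic: paired to a car and moving faster than walking pace.
    return paired_to_car and speed_kmh > 10

def request_feature(feature: str, speed_kmh: float, paired_to_car: bool) -> str:
    if feature in BLOCKED_WHILE_DRIVING and in_driving_mode(speed_kmh, paired_to_car):
        return f"'{feature}' is disabled while driving."
    return f"'{feature}' is available."

print(request_feature("text_messaging", speed_kmh=60, paired_to_car=True))
```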
My argument is not that technologies should decide for us what is right or wrong. Rather, they should support us in decision-making – even by constraining us or influencing us persuasively to act rightly, when “right” has first been ethically defined and collectively agreed upon.
“Humans are more likely to follow moral imperatives when these are communicated by a computer rather than by another human. For lived and enforced moral principles, this opens an entirely new field. Perhaps we must therefore rethink our perspective on the relationship between technology and ethics. It is no longer only about the one-sided regulation of technologies. Artificial intelligence and its convergents – robots, voice-bots, NPCs, etc. – may make a significant, perhaps even decisive, contribution to the realization of moral values. Ethics thus operates over, with, and through technologies, both implementing and demanding them. We are therefore speaking of an ethics that not only regulates technology, but recognizes it as an autonomous agent of moral communication.” – Gantenbein: Annetzung, pp. 348 f.
Technologies, then, need not be seen as threats but as moral amplifiers – as partners in communication who remind us of what we could be and do in a given situation.
“This does not mean that we should blindly obey what an AI tells us to do. Yet the transmission, remembrance, and stabilization of shared values can occur in automated ways. Technologies thus become tools of a cooperative, human-defined ethics – efficiently communicating and embedding it in everyday life. […] Our Annetzung (English: submerging) with technological spaces is still young, and we have the opportunity to make it a space free from fear. The technologies through which we interact are only as good in their use and application as we allow them to be. […] If all actors are to benefit from this new world-mode, it can only happen through interdisciplinary dialogue and a shared commitment to ethical foundations.” – Gantenbein: Annetzung, p. 349.
An ethics of technology is therefore not an ethics of resistance – one that merely seeks to regulate. It is a cooperative ethics: a dialogue between humans, machines, and society. It does not ask whether technology can be moral, but how it can help us remain moral, despite our many technological prostheses. A functioning ethics through technologies sees them not only as actors, but also as tools and mediators of morality.
We may no longer have a new god to tell us what to do;
but we do have an extended arm.
And when ethics does not merely speak about technology,
but with and through it,
a new chapter of human responsibility begins.
More in my book: Annetzung. Handeln durch Technologie und Imagination (2025)
