I began to wonder if AI presents the biggest ethical concern since the development of atomic energy: the tension between developing it for peaceful purposes and using it for war. Have there been other social, scientific, or political issues between the atomic bomb Then and the AI Now that presented greater opportunities for both peace and disaster? Let us know in the comments box below.
To move the discussion along, I decided to consult with my techno/philosophical café table companion, Steve Mays.
I had coffee (well, he had an espresso and I had a Diet Coke) with Steve last Saturday morning. We enjoy discussing philosophical issues, and I mentioned to him that in my next “Notes from a Quiet Hill” entry I was going to ask if Artificial Intelligence presented the greatest ethical issue for humanity since the creation of the Atomic Bomb.
On one hand it can identify cancer before it’s cancer. On the other hand, it can replace humans in an uncomfortable number of ways.
We quickly agreed it might be the great ethical issue of our present time. And then Steve—who many years ago bludgeoned me into acknowledging that this internet thing might be useful—decided to ask one of the AI apps on his iPhone to answer the question. (Steve is an Applelarian, an early adopter who at times stood in line before sunrise for the latest iteration of his cell phone, and I an Androidian who last week was happy to stroll into Best Buy and go back to the future by buying a flip phone.)
Yes, after 20 or more years carrying around what seemed to be increasingly larger cell phones, I have sought a smaller pocket load.
I chose this color because it’s harder to forget where I put it.
This is the response from AI:
Is AI Humanity’s Greatest Ethical Issue Since the Atomic Bomb?
The question of whether AI is humanity’s greatest ethical issue since the atomic bomb is not a settled one — experts are deeply divided, and the answer depends on how one defines “greatest” and “since the atomic bomb.”
Historical context
The atomic bomb’s ethical weight came from its unprecedented destructive power, the existential threat it posed, and the moral responsibility of its creators and deployers. AI, by contrast, is a general-purpose technology with dual-use potential — it can be deployed for immense good (e.g., medical diagnostics, climate modeling) or for profound harm (e.g., autonomous weapons, deepfakes, mass surveillance).
Why AI is ethically urgent
- Scale and pervasiveness: Unlike nuclear weapons, AI systems are now embedded in nearly every sector — healthcare, finance, law, education, and governance — making them a constant presence in human life.
- Autonomy and decision-making: AI can make decisions without human oversight, raising questions about accountability, consent, and the erosion of human agency.
- Bias and fairness: AI systems can perpetuate or amplify societal biases, affecting everything from hiring to criminal justice.
- Truth and authority: Generative AI challenges traditional notions of authorship, truth, and expertise, potentially undermining trust in information.
- Existential risk: Some experts warn of long-term risks such as job displacement on a scale unseen since industrialization, or AI systems that could act against human interests.
Why it may not yet be “greatest”
While AI’s ethical stakes are high, the atomic bomb’s moral gravity was tied to its immediate, catastrophic consequences and the nuclear arms race. AI’s risks are more diffuse, long-term, and systemic, but they are also more preventable through regulation, transparency, and ethical design. Many scholars argue that AI’s ethical challenges are significant but not yet on the same existential scale as nuclear weapons.
Conclusion
AI is arguably humanity’s most pressing ongoing ethical challenge since the atomic bomb — not because it matches the bomb’s immediate destructiveness, but because it is a pervasive, evolving technology with the potential to reshape society, values, and human relationships in ways that demand urgent, global ethical governance. Whether it is “the greatest” depends on one’s priorities: immediate existential threat, long-term societal transformation, or the erosion of human control.
In short, it is—as was the case with atomic energy and every other revolutionary technology, such as movable type, the cotton gin, the internal combustion engine, broadcasting, and satellite communication and imaging—up to humans to decide how a significant new technology will be used.
We have only ourselves to trust. Or, sometime in the future, to blame.
-0-
