Posts tagged technology

An ethical advancement, regardless of religion

You might have heard last week that Shinya Yamanaka, a scientist at Kyoto University, won the Nobel Prize for Medicine for his work showing how induced pluripotent stem cells could be derived from adult cells and substituted, in both research and therapy, for embryonic stem cells.

In other words: Yamanaka found a way to get the potential benefits of embryonic stem cells from non-embryonic stem cells. 

Yamanaka’s achievement was undoubtedly driven by his religious convictions, and his award is certainly a victory for religious outfits such as the Roman Catholic Church and the National Right to Life Committee — which is precisely why secular liberals might be less than joyous over the news.

However, Slate columnist William Saletan says secular liberals should put aside their religious (or non-religious) baggage and accept, if not celebrate, Yamanaka’s work:

And we shouldn’t turn away from the moral aspect of this achievement just because it gratifies the conservative side of the old stem-cell debate. Yamanaka transformed that debate forever. He tore down the wall between preserving embryos and saving lives. He did what only a scientist could have done: He made it possible for both sides to win. In the words of Julian Savulescu, an ethicist and supporter of embryonic stem-cell research, Yamanaka “deserves not only a Nobel Prize for Medicine, but a Nobel Prize for ethics.”

Rationally designed babies

On Friday I posted about a new scientific study that suggests babies are not born with a basic sense of right and wrong, but instead largely depend on their social development — for example, parenting — to gain moral understanding. This is not necessarily a problem; it’s just what the science says is (probably) true.

Yet I’m guessing that’s not how Julian Savulescu, a professor of practical ethics at Oxford University, sees the situation. According to a recent article in The Telegraph, Savulescu thinks parents have a moral duty to genetically engineer ethical babies.

Yes, you read that correctly. Here’s Savulescu making his case:

By screening in and screening out certain genes in the embryos, it should be possible to influence how a child turns out. In the end, he said that “rational design” would help lead to a better, more intelligent and less violent society in the future.

"Surely trying to ensure that your children have the best, or a good enough, opportunity for a great life is responsible parenting? … So where genetic selection aims to bring out a trait that clearly benefits an individual and society, we should allow parents the choice. … To do otherwise is to consign those who come after us to the ball and chain of our squeamishness and irrationality."

"Indeed, when it comes to screening out personality flaws, such as potential alcoholism, psychopathy and disposition to violence, you could argue that people have a moral obligation to select ethically better children. … They are, after all, less likely to harm themselves and others. … If we have the power to intervene in the nature of our offspring — rather than consigning them to the natural lottery — then we should."

Whether this would work is one question. The more important question is whether, regardless of effectiveness, this kind of procedure is ethically sound. I’m not sure I have an answer to that yet, or even a compelling counter-question. But just in case you think this should be outright rejected as too radical an idea:

He said that we already routinely screen embryos and foetuses for conditions such as cystic fibrosis and Down’s syndrome and couples can test embryos for inherited bowel and breast cancer genes.

Rational design is just a natural extension of this, he said.

He said that unlike the eugenics movements, which fell out of favour when it was adopted by the Nazis, the system would be voluntary and allow parents to choose the characteristics of their children.

Empathy for robots?

David Gunkel, a professor at Northern Illinois University, has written an interesting new book in which he argues our moral considerations are too restrictive. Gunkel, who holds a Ph.D. in philosophy, told the college newspaper NIU Today that we should expand our ethical circle to include, well … robots.

“Historically, we have excluded many entities from moral consideration and these exclusions have had devastating effects for others,” Gunkel says. “Just as the animal has been successfully extended moral consideration in the second-half of the 20th century, I conclude that we will, in the 21st century, need to consider doing something similar for the intelligent machines and robots that are increasingly part of our world.”

Well, that’s interesting, but what does it mean? How should we “consider” robots? Should we not eat them? Should we not deconstruct and destroy them? Do they have workers’ rights? Should I feel bad when I harm them?

No, says Gunkel. He is merely arguing that we should consider the influence of robots and technology on our moral beliefs and actions. He doesn’t want to propose specific positions so much as he wants to get a conversation going:

Gunkel says he was inspired to write “The Machine Question” because engineers and scientists are increasingly bumping up against important ethical questions related to machines.

“Engineers are smart people but are not necessarily trained in ethics,” Gunkel says. “In a way, this book aims to connect the dots across the disciplinary divide, to get the scientists and engineers talking to the humanists, who bring 2,500 years of ethical thinking to bear on these problems posed by new technology.

“The real danger,” Gunkel adds, “is if we don’t have these conversations.”

With that, I agree. Which is good, because I’m not about to start feeling empathy for robots.

Note: you can read an excerpt from The Machine Question here.

The moral case against drones

Earlier in the week I posted about Bradley Strawser, a politically liberal philosophy professor who not only defends the use of unmanned drones in warfare, but also makes the case that the use of drones is moral. 

As one would expect, Strawser’s arguments have drawn harsh critiques. Here are two worth reading.

Salon writer Murtaza Hussain:

Drones are thus not just a new weapon with which to fight conventional wars; they represent a sea change in the way conflicts in general are approached. Low-cost, low-risk killing will mean fewer questions and less scrutiny and ever higher body counts as the number of drones in the air continues to increase exponentially. The real ethical obligation is to remain vigilant against morally cretinous arguments such as the one put forth by Strawser and to fight against the normalization of a new, dangerous and in many respects fundamentally immoral form of warfare. That there is “no downside,” as Strawser claims, is only from the perspective of the military establishment he is a mouthpiece for; for the rest of us the downside is very real.

Historian and professor Mark LeVine:

And now, at least one philosopher, Bradley Jay Strawser, has taken up the challenge of offering a viable justification for the use of drones. A recent hire at the Naval Postgraduate School, his arguments have caused enough of a stir to warrant a profile and opinion piece in the Guardian. Strawser now claims that the Guardian profile in fact misrepresented some of his views; but after reading two of his published papers on the subject, the profile in fact underplays the glaring problems in his arguments. When applied to US policy more broadly, they reveal just how far into a moral and ethical quagmire the United States has sunk under the Bush and Obama administrations. 

More on the moral case for drones

Yesterday I posted about a feature article on Bradley Strawser, a politically liberal philosophy professor who not only defends the use of unmanned drones in warfare, but also makes the case that the use of drones is moral. 

It turns out Strawser was not entirely happy with how the article represented his views, so The Guardian gave him an opportunity to make the case for drones in his own words. Here’s what he had to say:

My view is this: drones can be a morally preferable weapon of war if they are capable of being more discriminate than other weapons that are less precise and expose their operators to greater risk. Of course, killing is lamentable at any time or place, be it in war or any context. But if a military action is morally justified, we are also morally bound to ensure that it is carried out with as little harm to innocent people as possible.

The best empirical evidence suggests that drones are more precise, result in fewer unintended deaths of civilian bystanders, and better protect their operators from risk than other weapons, such as manned aircraft, carrying out similar missions. Other things being equal, then, drones should be used in place of other less accurate and riskier weapons. But they should be used only for morally justified missions, in pursuit of a just cause.

Thus, my claim about drones is entirely conditional: they should be used only if the mission is just. As with all conditional claims, if the antecedent is false, then the entire claim is invalidated. In this case, if the current US policy being carried out by drones is unjust and wrong, then, of course, such drone use is morally wrong, even if it causes less harm than the use of some other weapon would.

Keep reading here.

The moral case for drones

The moral debate on drones keeps rolling on. The latest installment: a politically liberal philosophy professor who not only defends the use of unmanned drones in warfare, but also makes the case that the use of drones is moral:

At first sight, Bradley Strawser resembles a humanities professor from central casting. He has a beard, wears jeans, quotes Augustine and calls himself, only half in jest, a hippie. He opposes capital punishment and Guantánamo Bay, calls the Iraq invasion unjust and scorns neo-conservative foreign policy hawks. “Whatever a neocon is, I’m the opposite.”

His office overlooks a placid campus in Monterey, an oasis of California sun and Pacific zephyrs, and he lives up the road in Carmel, a forested beauty spot with an arts colony aura. Strawser has published works on metaphysics and Plato and is especially fond of Immanuel Kant.

Strawser is also, it turns out, an outspoken and unique advocate for what is becoming arguably the US’s single most controversial policy: drone strikes. Strawser has plunged into the churning, anguished debate by arguing the US is not only entitled but morally obliged to use drones.

Why? According to Strawser:

"It’s all upside. There’s no downside. Both ethically and normatively, there’s a tremendous value. You’re not risking the pilot. The pilot is safe. And all the empirical evidence shows that drones tend to be more accurate. We need to shift the burden of the argument to the other side. Why not do this? The positive reasons are overwhelming at this point. This is the future of all air warfare. At least for the US."

Keep reading this thought-provoking article here.

Note: thanks to Tauriq Moosa for the link.

Ethics for robots

No, the headline is not in reference to ethics that robots should be obliged to follow. Rather, it refers to ethics that humans should apply to robots in order to better manage the relationship between flesh and technology.

In the classic science-fiction film “2001”, the ship’s computer, HAL, faces a dilemma. His instructions require him both to fulfil the ship’s mission (investigating an artefact near Jupiter) and to keep the mission’s true purpose secret from the ship’s crew. To resolve the contradiction, he tries to kill the crew.

As robots become more autonomous, the notion of computer-controlled machines facing ethical decisions is moving out of the realm of science fiction and into the real world. Society needs to find ways to ensure that they are better equipped to make moral judgments than HAL was.

You can read more in The Economist here.

Morally culpable robots?

Well, this is an interesting article:

As militaries develop autonomous robotic warriors to replace humans on the battlefield, new ethical questions emerge. If a robot in combat has a hardware malfunction or programming glitch that causes it to kill civilians, do we blame the robot, or the humans who created and deployed it?

Some argue that robots do not have free will and therefore cannot be held morally accountable for their actions. But psychologists are finding that people don’t have such a clear-cut view of humanoid robots.

The author goes on to discuss a recent study that found many humans — regardless of whether they think machines have free will — do blame robots in certain circumstances. Of course, this doesn’t mean robots ought to be blamed for their mistakes. It simply means some humans think they should.

Which raises a deeper and more important question: who — if anyone — should be held accountable for the robot’s mistake? Because you can’t seriously argue that robots should be put on trial or thrown in jail.

Or can you?

Should we boycott Apple?

If you’re like me, you’re currently sitting within arm’s reach of an iPhone, an iPad, and a MacBook (the device on which I am typing this post). Excessive? Perhaps, but I absolutely love Apple products. They make my life easier and more enjoyable.

This is precisely why I was so disturbed last week to read an article in the New York Times on the horrid conditions faced by the workers who build Apple devices in China, and the company’s apparent disregard for its workers’ troubles. According to Thane Rosenbaum, the Times exposé should make people like me think twice about supporting Apple going forward:

Apple shareholders and aficionados are now faced with a moral dilemma—revel in the company’s stock price and whisper sweet words of indifference to Siri, the virtual personal assistant in the iPhone4S, all the while remaining silent about the grim workplace conditions of Apple’s suppliers?

Yet I can’t help but wonder: why such a close focus on Apple fans? Isn’t Rosenbaum’s argument applicable across the board? Aren’t many, or even most, of our electronic devices built in foreign countries by people working in what we might deem poor conditions? Or, for that matter, aren’t most of the products we buy generally built in such situations? Should Americans boycott every company that outsources the construction of its goods? Is that realistic? Would it do any good? Or are there other, better options to get companies to change their ways?

These are the questions I pose to you. Now, back to my iPad …

Advances in science demand an earlier introduction to ethics

You might recall an article I posted in July that made the case for moral instruction at the high school level. David Briceno wrote that:

Even though most young people are not immoral, criminal or evil, there still needs to be secular (nonreligious) ethics classes in America’s high schools that teach modern moral issues so that teens can be well-informed when it comes to making right moral choices in their lives.

This sentiment was echoed last week by Paul O’Donoghue, a clinical psychologist and president of the Irish Skeptics Society, who wrote in the Irish Times that:

In an increasingly multicultural and secular environment, which continues to undergo rapid change, it is crucial that formal education and training in the methods of ethical and moral reasoning and analysis be provided as early as possible in the education system.

I agree with both Briceno and O’Donoghue. It is a shame that the American public school system, among others, ignores formal instruction on basic and foundational concepts like right and wrong, good and bad, and moral character. Children’s beliefs about such concepts shape the kind of people they will become, and the kinds of acts they will perform.

Yet O’Donoghue has another compelling reason for an earlier introduction to ethics. Advances in science and technology — which promise to eradicate disease, extend human life, improve cognitive abilities, and allow parents to choose the sex, and know the future health, of their babies — raise ethical questions not previously considered by much of humanity.

O’Donoghue writes:

There is also an evolving philosophical movement that concerns itself with future possibilities in science that are likely to generate new ethical challenges. … Given scientific advances, what may now seem like science fiction may rapidly become real and available. To preserve a vibrant and effective democracy we need our citizens to be well informed, competent and critical thinkers. To achieve this we need to ensure that the necessary experience is available to students at the earliest appropriate stage in their education.

Just a thought: if the general argument for ethics instruction (i.e., that ethics is important) does not resonate with people, perhaps this more modern and specific argument will. I’d like to think so.