I interviewed Dr. Davis Barch, Senior Software Engineer in IBM’s neural networking group, where he develops programs for pattern recognition and computer vision in a neural network simulation environment. A neural network (NN) is a type of computer that mirrors the brain’s neurons; the chip Dr. Barch’s program models has over a million neurons working in parallel, making it efficient at tasks that require massive parallel processing, like pattern recognition. NNs also use less electricity than standard computers, and their computing power increases linearly with size. The downside is that each neuron runs much slower than a standard computer, so processes that don’t fully utilize the NN’s parallel nature take longer.
After failing to get into medical school, Dr. Barch pursued an M.S. in computer science and a Ph.D. in vision science, then worked in a vision lab at UC Berkeley and at UNISYS and Apple. Because of this background, Dr. Barch’s role at IBM lies on the side of NN work concerned with mirroring biology and with computational algorithms.
IBM is interesting from an engineering ethics perspective because it is funded heavily by DARPA, and many of its projects have potential military applications. The NN-based pattern recognition that Dr. Barch works on, for example, could be used for surveillance or missile guidance systems. Interestingly, Dr. Barch is opposed to working on weapons. For example, he stated that although some argue nuclear weapons prevented WWIII, he’d never work on them, because they’re unnecessary and the potential for misuse is too high. Dr. Barch’s work group also avoids projects with direct military application, at its bosses’ direction.
But Dr. Barch believes the chip is ethically sound because it has so many purely positive uses – recognizing medical anomalies in scans, optimizing traffic flow, or discerning the health and type of plants, allowing crops to be grown more sustainably and with less pesticide. From hearing Dr. Barch talk about the risks and benefits of NN vision technology, it sounded like the projects are thought of in terms of utilitarian ethics. IBM makes no value judgment about what researchers work on, military or not. But because IBM’s projects could potentially be dangerous, its legal department restricts which countries employees can cooperate with. Rather than value-based, this seems like normative ethics to me: IBM kicked a few Stanford professors out of a project group for not signing an agreement not to talk to other countries, even though keeping the professors could have helped advance IBM’s values. IBM’s generally laissez-faire policy on project type seems to work fine – Dr. Barch hasn’t heard of problems with project topics, and the company has a good reputation. And as Dr. Barch says, “any technology can be used for good purposes or for obnoxious purposes. Or downright evil purposes.”

In case researchers are undecided about the ethics of something, I would think some sort of manual of ethics that covers possible issues from both sides would help them form a more informed opinion, more for personal use than official. Also, if IBM employees do want to discuss personal ethics or opinions regarding some project with the company, there is no process for doing so. I would recommend IBM adopt some sort of organized forum for employees to discuss the implications of projects.
The closest Dr. Barch has come to an ethical issue was when he found that the NN chip was terrible at Fourier transforms, worse than standard chips. He could have obscured this to benefit his team, but didn’t. I suppose IBM’s hiring process is sort of an ethical resource here – they hire trustworthy people. IBM also has a yearly employee ethics training program, which is good, because IBM doesn’t really have a peer review process; people would have a hard time understanding other groups’ work. Instead, people write reports on the results of their research for IBM consumption. Checking that things are done right is supposed to happen within a project group. I haven’t heard of any issues with bad IBM research, and IBM does have a reputation for being reliable, so I suppose this is empirically adequate. I would think a more formal review process inside the company could prevent future issues, though, perhaps based on the usual one for journals.
IBM’s ethics training program focuses largely on dealing with clients. Employees are not to spend much money on potential clients; depending on the client, they may not even be allowed to buy them lunch. The training also focuses on providing contacts, like the legal department, for employees to use if they encounter ethical issues. Dr. Barch has never used these contacts, though, so he couldn’t say how useful they are. He did say that there’s no contact for discussing personal ethics, and no real way to discuss the ethics of the company. IBM has a lot of bureaucracy, which partially contributes to its reliable reputation but also makes it harder for feedback to make a difference. Despite being a tech company, a lot of IBM’s systems are obsolete and depressing; people endure these systems rather than fixing them. And should someone have an ethical concern with the company, they would have an equally hard time effecting change. It’s hard and dangerous to get rid of bureaucracy, but I think employee feedback is important to a company – I would suggest that IBM devote some of its resources to listening and talking to employees more, and hearing both their ethical and other thoughts about the company.
In conclusion, what I learned about engineering ethics is that, for one company at least, it primarily involves avoiding the risk of large, very bad occurrences, like data leaking to nefarious governments, while remaining more hands-off when it comes to personal research reliability and project topics. It sounds like ethics in engineering relies largely on the engineers themselves.