Can Laws and Ethics Keep Pace With Technology?

Article: http://www.technologyreview.com/view/526401/laws-and-ethics-cant-keep-pace-with-technology/

Source: MIT Technology Review by Vivek Wadhwa

This article discusses the regulatory gap in laws pertaining to technology and the problems created by this lag. The gaps are widening as technology advances ever more rapidly. Wadhwa gives historical examples of this gap and how it was plugged, but does not really offer a solution, instead ending with this statement: “The problem is that the human mind itself can’t keep pace with the advances that computers are enabling.”

There is a legitimate dichotomy illustrated by how employers cannot ask interviewees about their religion, political views, or relationship status, but are free to use social media to glean information that might bias them. An important example is the Genetic Information Nondiscrimination Act of 2008, which prohibits the use of genetic information in health insurance and employment but provides no protection from discrimination in long-term-care, disability, and life insurance.

Another example is the Telecommunications Act of 1996 that mandated that phone companies share their lines to allow long distance firms to enter local markets and vice versa — the idea being that the consumer then would have more choices. Though the act did have provisions for emerging technologies, it did not fully anticipate that cable companies would one day offer voice services, phone companies would offer video services, and that there would be Web and wireless services that offer a hybrid of both. Because of the 1996 act, the phone companies had to share their wires, whereas the cable services did not. The rates for services that are described as “telecommunications” are regulated, whereas those classified as “information” are not. Due to that uncertainty, critics say, the United States now lags behind countries such as South Korea and Japan in Internet and wireless development, where it once was the leader.

I want to discuss ways in which this lag can be narrowed. I propose that the government adopt a more open, free-market approach, much like the Internet itself, which has evolved remarkably well on its own.

Another important area is privacy. Internet-related cases are tricky because they confront new and unaddressed areas of American law. How can society balance accountability with free speech? And if information — from private thoughts to public data — is so readily available, how do we define what constitutes privacy? This is where we debate the competing ideas of free speech and privacy. Another challenge for the law is the way the Web crosses state and international borders. 

 Thanks to the Internet, it’s now relatively easy to find the value of a person’s home or the extent of their political contributions and leanings. Meanwhile, people use social media applications like Twitter or Facebook to share personal details with the world. The result is a blurring of the lines between what ought to be considered private and public.

I first want to explore whether, even if the law were to catch up to technology, it could actually remedy situations like these, where there is a substantial grey area. I believe that the law can only act on societal consensus, and that it is only effective at policing the most extreme and outrageous cases.

 

Will the Woolly Mammoths Make Us Gods (Dolly certainly did not)?

3/14/2014: Woolly mammoths are coming back.

Genetic engineering has seen a great number of technological and scientific breakthroughs in the past few years, and with them a vast trove of ethical questions. This one discovery, in my opinion, almost perfectly sums up the ethical concerns that are implicit in many of these questions.

About eleven months ago, in the permafrost of Siberia’s Maly Lyakhovsky Island, scientists found the frozen carcass of a woolly mammoth. What made this discovery so special was just how well-preserved the specimen was. According to one of the scientists on the team: “The carcass is more than 43,300 years old, but it has preserved better than a body of a human buried for six months.” Not surprisingly, the team successfully extracted various blood and tissue samples, including haemolysed blood containing erythrocytes and migrating cells in the lymphoid tissue. The latter two finds are particularly important to the cloning process, indicating a very high likelihood that the scientists can use these samples to clone a woolly mammoth. If the scientists do choose to clone the mammoth, however, the result will not be an exact copy of the discovered animal, as a female elephant would be used as the surrogate mother in the process.

The issue immediately raised in the article is the danger that the scientists are “playing God” by creating what will essentially be a new species of animal. The article also stresses the need to have the right motives for cloning the mammoth, beyond mere curiosity. But, looking at the past, there are many cases of cloning where this issue clearly did not faze the scientists. There was the case of Dolly the sheep, which did spark a lot of controversy, mainly over Dolly’s resultant health complications. Many more animals were cloned after Dolly; perhaps the most relevant case to this article is that of the Pyrenean ibex. The ibex, a then-extinct animal, was cloned from a preserved tissue sample but died shortly after birth due to health complications. Therefore, I feel that the central issue isn’t whether or not we should clone extinct animals; the article on the ibex even mentions how cloning can be used as a tool to support endangered species. The central issue I see regarding cloning is how stable the result is. Previous cloned animals, such as Dolly and the ibex, suffered health complications, particularly with the lungs, and died at an early age. I believe that the choice of whether or not to follow through with cloning should be determined by how the cloned animal will live on this planet, particularly with regard to its own health and its potential impact on the ecosystem.

Therefore I pose two questions: are we truly “playing God” through cloning? Or are we simply examining the wrong set of reasons?

Can you really fall in love with technology?

http://www.usatoday.com/story/news/nation/2013/01/23/love-algorithms-online-dating/1856853/

01/23/2013: Online dating has changed everything

Technology has made us all interconnected. Wherever we go, whatever we do, we have the opportunity to stay connected with all of our “friends” virtually through our smartphones and the internet. But although we can communicate continuously, are we really closer to all of the people with whom we converse online? USA Today’s Sharon Jayson interviewed Dan Slater, the author of Love in the Time of Algorithms: What Technology Does to Meeting and Mating, about how the age of technology has affected our love lives.

I agree with Slater’s point that online dating has made relationships more disposable. Before the age of the internet, we did not have a database of potential mates at our fingertips. When people were forced to meet organically, you really needed to give the people you met a chance; if a friend set you up, you would sincerely consider that person as a potential partner. In today’s day and age, you don’t have to like the people you already know. If one is optimistic about meeting people online, he or she can form a new relationship every week until the dream partner is found. The abundance of potential partners online leads people to take for granted the relationships they already have.

Behind the matching, there are a lot of data-driven algorithms at work. When users sign up for the popular dating site OkCupid, they are asked a series of questions about the traits of potential partners and asked to rank how important each trait is to them. Based on these questions, OkCupid calculates a percentage match with every user on the site, which is supposed to tell users how compatible they are. How accurate are these algorithms in predicting a couple’s success? According to Slater, “psychological science has not provided the ability to predict long-term compatibility between a couple who have never met.” However, what online dating seems to have improved is how people get along on a first date. I believe that because people are matched based on common interests, they will have a lot of surface-level conversation topics. So on a first date, when you are first meeting someone, you will have plenty to talk about, leading to a good time. It still seems, though, that online dating has not been able to find the deep-seated emotions that make two people bond.
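Behind that percentage is fairly simple arithmetic. OkCupid has publicly described its method as weighting each question by the importance a user assigns it, scoring how well each person satisfies the other, and combining the two scores with a geometric mean. The sketch below illustrates that style of calculation; the questions, point values, and the omission of OkCupid’s error margin are my own simplifications, not the site’s actual code.

```python
# Illustrative match-percentage calculation in the style OkCupid has
# described. All questions, answers, and point values are hypothetical.
import math

# Points a user's question can carry, based on its stated importance.
IMPORTANCE_POINTS = {
    "irrelevant": 0,
    "a_little": 1,
    "somewhat": 10,
    "very": 50,
}

def satisfaction(answers, preferences):
    """Fraction of importance-weighted points one user earns from the other.

    `answers` maps question -> the other user's answer.
    `preferences` maps question -> (set of acceptable answers, importance).
    """
    earned = possible = 0
    for question, (acceptable, importance) in preferences.items():
        points = IMPORTANCE_POINTS[importance]
        possible += points
        if answers.get(question) in acceptable:
            earned += points
    return earned / possible if possible else 0.0

def match_percentage(a_answers, a_prefs, b_answers, b_prefs):
    """Geometric mean of how well each user satisfies the other."""
    s_ab = satisfaction(b_answers, a_prefs)  # how well B satisfies A
    s_ba = satisfaction(a_answers, b_prefs)  # how well A satisfies B
    return 100 * math.sqrt(s_ab * s_ba)

# Hypothetical two-question example.
alice_answers = {"smokes": "no", "wants_kids": "yes"}
alice_prefs = {"smokes": ({"no"}, "very"), "wants_kids": ({"yes"}, "somewhat")}
bob_answers = {"smokes": "no", "wants_kids": "no"}
bob_prefs = {"smokes": ({"no"}, "somewhat"), "wants_kids": ({"yes", "no"}, "a_little")}

print(round(match_percentage(alice_answers, alice_prefs, bob_answers, bob_prefs)))
```

The geometric mean is a deliberate design choice: a pairing where one person is delighted and the other indifferent scores lower than one where both are moderately satisfied, which matches the intuition that compatibility must be mutual.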

Although psychology has not proven online dating’s link to successful marriages, it does seem to help soul mates get together more quickly. According to statisticbrain.com, people who get married after meeting online date for an average of 18.5 months, while people who get married after meeting offline date for an average of 42 months. One could argue, however, that people who meet online may be more actively looking for a spouse, leading to a shorter courtship.

While technology has allowed us to meet more people, it changes the way we value relationships along with the way we interact with each other. This new frontier of communication has enabled countless people to get together, while also preventing people from forming a deeper, more intimate connection. As our society continues to get more connected through technology, online dating will continue to grow in popularity. I, for one, hope that people will appreciate those around them as much as they can, but still utilize the technology available so they end up with someone they’ll be happy with.

Other sources:

http://www.theguardian.com/commentisfree/2011/jul/25/online-dating-love-product
http://www.statisticbrain.com/online-dating-statistics/
http://www.huffingtonpost.com/cristen-conger/online-dating-facts_b_823816.html

Post-Fukushima Condition of US Nuclear Industry

http://www.nytimes.com/2013/04/09/us/ex-regulator-says-nuclear-reactors-in-united-states-are-flawed.html

Gregory B. Jaczko, the former chairman of the Nuclear Regulatory Commission, said on April 8, 2013, that all US nuclear reactors face safety problems. He argued that the problem cannot simply be fixed, so all US nuclear reactors should be shut down gradually (Wald). As noted in the article, it is concerning when the former chairman of the NRC doubts the safety standards of US nuclear reactors. His opinion is a common point of view among anti-nuclear advocates after the Fukushima Dai-ichi accident. After some research, I would argue that the post-Fukushima condition of US nuclear reactors has been tested and proven to be safe.

The Fukushima incident raised concerns about the safety of our reactors, and many critics have called for a moratorium on the nuclear industry. However, the Nuclear Regulatory Commission (NRC) has confirmed that US nuclear plants are safe to operate. US plants have implemented a defense-in-depth system to ensure the safety of the public: multiple layers of protection are installed to prevent nuclear accidents. The defense-in-depth system consists of independent layers of protection, so no single layer is relied on exclusively (Nuclear Regulatory Commission 25).

The flooding caused by the tsunami in the Fukushima accident is an example of a beyond-design-basis accident. Although such accidents are very unlikely to happen, nuclear safety design also includes measures for these types of incidents. Station blackout (SBO) is an example of a beyond-design-basis event (Nuclear Regulatory Commission 9). Station blackout refers to a complete loss of onsite and offsite AC power. An SBO can result in loss of cooling, which causes meltdown, as seen in the Fukushima incident. Knowing the severity of SBO, the NRC passed the SBO rule in 1988, and it has since been amended, most recently in 2007 (Nuclear Regulatory Commission). As of 2013, the NRC was working on a new version of the SBO rules in response to the Fukushima incident to ensure that an SBO would not cause harmful damage to the public.

As for public opinion, 69% of the population supports nuclear energy and 29% oppose it (Bisconti 1). Likewise, 69% of Americans support new reactors at existing sites, while 26% oppose them (Bisconti 2). In general, the public still supports the nuclear industry. Also, the NRC has been constantly checking and improving the safety standards of US reactors. In my opinion, the nuclear industry has a promising future, but it requires more investment in terms of time and money.

Citations

Charles et al. “Recommendations for Enhancing Reactor Safety in the 21st Century.” NRC.gov. Nuclear Regulatory Commission, 12 July 2011. Web. 11 April 2014.

Bisconti, Ann, PhD. “Perspective on Public Opinion.” NEI.org. Nuclear Energy Institute, November 2012. Web. 11 April 2014.

Wald, Matthew. “Ex-Regulator Says Reactors Are Flawed.” New York Times. New York Times, 8 April 2013. Web. 11 April 2014.

 

PG&E Indicted on a Dozen Charges for a Pipeline Explosion: How Do We Prepare for the Future?

Several years ago, a small neighborhood in San Bruno, California was rocked when a natural gas pipeline exploded, killing eight people. The explosion also injured 58 and created a crater 67 feet long and 27 feet deep. The pipeline was originally installed in the 1940s and finally failed after decades of continuous use.

Ethically, I believe this incident directly calls into question the ability of engineers employed by PG&E to protect the public. Following the incident, PG&E Chairman and Chief Executive Tony Earley said in a statement, “We’ve taken accountability [for the accident] and are deeply sorry,” which anyone would hope to be true after such a horrible event. The biggest problem I find with this statement, though, is why the company did not understand the system in place before an incident like this occurred. Over the course of 70 years, someone must have performed maintenance or tested the system to ensure its integrity. This lack of oversight points to a serious issue within the bureaucracy of PG&E: how can we trust them to hold themselves accountable on these ethical issues?

According to the company, the pipeline in question had never been pressure tested, or else the records were lost and forgotten. The company knew of these deficiencies during this period but kept business as usual. Its lack of accountability regarding its employees is also worrisome, as it may be hiring incompetent people. Nevertheless, PG&E will continue to supply power and gas to a majority of Northern California for the near future, as it owns the infrastructure.

As a resource, natural gas use is rapidly expanding in the first half of the 21st century as great reserves are found through fracking. According to the EIA, consumption of natural gas has been growing rapidly in the industrial and power generation sectors, putting yet more stress on this system. The infrastructure devoted to the distribution of gas will become increasingly necessary as more homes are built and vehicles potentially begin to burn compressed natural gas instead of petroleum. Without proper oversight, this incident could be the first of many as other companies’ demand grows. The problem must be addressed now in order to ensure the safety of individuals in the future.

In this incident, PG&E had absolutely no idea what was going on. Even after the explosion occurred, it took the company’s ground staff more than an hour to shut off the lines and stop the flow. This level of ignorance and malpractice is absolutely unacceptable in today’s society. Given all these known facts, I now ask: does the responsibility in this situation lie with the engineers or the administrative staff at PG&E? How do we create an adequate system that provides oversight for these companies? Is legal action enough, or do we need more?

Someone must be held accountable, and thankfully these charges stuck, so complacent companies will be more careful with citizens’ lives in the future.

Link: http://www.latimes.com/local/la-me-pipeline-charges-20140402,0,4974930.story#axzz2yGp9kmWT

Managing the biosecurity of gene synthesis

Gene synthesis is the production of a physical DNA strand given the sequence information (i.e. the sequence of A’s, C’s, G’s, and T’s in a computer). The cost of gene synthesis has been falling, which has been a very good thing for synthetic biology.

The genetic sequences of deadly pathogens, such as smallpox and Ebola virus, are publicly available: <http://www.ncbi.nlm.nih.gov/nuccore/L22579.1> and <http://www.ncbi.nlm.nih.gov/nuccore/NC_002549.1>. It is now technically feasible to synthesize these genes. Given the physical DNA or RNA, it is then not difficult to recreate the pathogens themselves.

To counter this, gene synthesis companies screen sequence orders for dangerous genes <http://gspp.berkeley.edu/iths/Maurer_IASB_Screening.pdf>. Some companies compare the order against the select agents list <http://www.selectagents.gov/Select%20Agents%20and%20Toxins%20List.html> and automatically flag any hits. Others employ humans to compare the sequence against GenBank (or a similar large database of sequences) to determine whether the sequence matches anything pathogenic.
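As a sketch of what automated flagging might look like, the toy screen below flags any order that shares a long exact substring with a blocklisted sequence. Real screening pipelines use alignment tools such as BLAST against full databases like GenBank, with longer and fuzzier match criteria; the window length, function names, and sequences here are illustrative only, not real pathogen data.

```python
# Toy sequence-order screen: flag an order if it shares any exact
# 20-base substring (k-mer) with a blocklisted sequence.

WINDOW = 20  # real screens use longer windows and inexact alignment

def kmers(seq, k):
    """All length-k substrings of a DNA sequence, as a set."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen_order(order, blocklist, k=WINDOW):
    """Return names of blocklisted sequences sharing a k-mer with the order."""
    order_kmers = kmers(order.upper(), k)
    hits = []
    for name, pathogen_seq in blocklist.items():
        if order_kmers & kmers(pathogen_seq.upper(), k):
            hits.append(name)
    return hits

# Placeholder "pathogen" sequences, not real genomes.
blocklist = {
    "toy_pathogen_A": "ATGCGTACGTTAGCCGATCGATCGGCTAAGCTT",
    "toy_pathogen_B": "TTGACCTAGGCATCGGATCCAAGTTCGGAATTC",
}

clean_order = "GGGGCCCCAAAATTTTGGGGCCCCAAAATTTT"
risky_order = "AAAA" + "ATGCGTACGTTAGCCGATCGATCG" + "TTTT"  # embeds a fragment of A

print(screen_order(clean_order, blocklist))   # no hits
print(screen_order(risky_order, blocklist))   # toy_pathogen_A flagged
```

Even this toy version exposes the third problem discussed below: an exact-match screen can only catch sequences someone has already put on the list.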

There are several problems with this arrangement. Firstly, the screening is voluntary; there is no government-enforced process. Secondly, with the technology advancing so rapidly, independent groups may soon be able to economically synthesize their own genes, which would circumvent any screening performed by large suppliers. Thirdly, screening against a list of known sequences might not be sufficient; it would not capture to-be-discovered pathogens. Fourthly, having humans screen orders would lengthen the turnaround time and/or increase the cost of synthesis, creating a conflict for companies between biosecurity and business competitiveness. Fifthly, if in the future people submit sequences derived from their own genes, there might be a privacy issue: there have been cases where anonymous participants were identified in human genome databases <http://www.nature.com/news/genetic-privacy-1.12238>.

Thus, the efficacy of current screening practices is questionable, and there may be privacy issues with screening.

Carbon Credit? Another “Dirty” Currency?

http://www.theguardian.com/environment/2012/oct/15/pacific-iron-fertilisation-geoengineering

The article describes Russ George, an American businessman who conducted a geo-engineering experiment by dumping 100 tons of iron sulfate into one of the most biologically diverse parts of the Pacific Ocean. He claimed the intervention would aid salmon restoration, which is crucial to the livelihood and culture of the locals. In truth, dumping iron into the ocean violates the UN Convention on Biological Diversity because of the damage it does to marine ecosystems. Russ advertised that dumping 100 tons of iron sulfate into the ocean would promote a phytoplankton bloom, which would then absorb greenhouse gases from the atmosphere. However, what really happens is more complicated than his publicity suggests. From my research, I found three factors that weaken his claim. First, shrimp eat the phytoplankton before the bloom can reach a size significant enough to affect the climate system. Second, a diatom group of phytoplankton consumes more iron and grows in size instead of reproducing, depleting the supply of iron available for regular phytoplankton. Third, each iron atom that is not consumed by the algae reacts with water to produce three additional hydrogen ions, which can lower the pH of the local marine ecosystem. This can in turn disrupt the norm, harm marine life, and even cause local extinctions.
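Taking the article's figure of three hydrogen ions per iron atom at face value, and assuming the unconsumed iron ends up as ferric iron (Fe³⁺), the acidification step corresponds to the hydrolysis of ferric iron. This is a simplified sketch that ignores the buffering effect of seawater carbonate chemistry:

```latex
\mathrm{Fe^{3+}} + 3\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{Fe(OH)_3} + 3\,\mathrm{H^+}
```

Each dissolved iron ion that precipitates as iron hydroxide thus leaves three protons behind, which is what drives the local drop in pH.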

This article reminds me of a news story I read a long time ago about bogus companies that generate nothing but pollution in order to earn carbon credits. According to Wikipedia, carbon credits and carbon markets are a component of national and international attempts to mitigate the growth of greenhouse gases; one carbon credit is equal to one ton of carbon dioxide or equivalent gases, and carbon trading is an application of an emissions trading approach. Because of the way carbon credits are regulated, more and more companies are trading and capitalizing on them without producing any useful products, which runs counter to the original intent of establishing carbon credits and markets.

From my point of view, the carbon credit has become a dirty currency for emitting greenhouse gases. It seems to me that Russ George was trying to earn carbon credits when he illegally dumped the iron into the ocean. Whose fault is that, and who gains the most from it? In Russ George’s case, he pursued a short-term fix by promoting phytoplankton growth but did not consider the long-term impacts on the entire ecosystem. This raises a lot of ethics questions. As engineers, should we only solve the problem in front of us and ignore the long-term repercussions, or should we consider the problem as a whole, including its time dimension, without taking any shortcuts? Should we apply a time discount factor when we try to solve a problem? Do regulators and organizers play an important role in promoting either approach in this case? Companies pay engineers big checks to act in the companies’ interest, and the relationships between companies and regulators are hidden from the public eye. How does this chain influence engineering decisions?


The Surgical Robot: A Tool? A Toy? An Advance?

03/2014: The Surgical Robot: A Tool? A Toy? An Advance?

http://www.anesthesiologynews.com/ViewArticle.aspx?d=Technology&d_id=8&i=March+2014&i_id=1045&a_id=26071

With advances in modern technologies, the fields of robotic engineering and medicine have meshed together to create exciting new methods of handling surgery. While previously, surgeons relied upon experience and gathered knowledge to perform their operations, the surgeries of today have progressed even to the realms of science fiction.

Although robotic surgery was originally developed for military use in the field, it was soon adopted in another fashion when medical specialists realized it was more applicable as an on-site tool. The advancement of robotic medical systems can be traced from a first prostate exam, to intra-abdominal surgery with the da Vinci robotic system, and finally to the colorectal surgeries of today. Surgeons can now use robots to conduct precise operations, helping surgeries go smoothly. With lowered costs and such precision, the future of robotic medicine seems bright and beautiful. At least, that is how it would seem. However, many ethical concerns must be addressed before the use of robotics becomes the norm for surgery.

An important concern is, of course, accountability when surgeries fail. As the “pilot” that guides the robot, does the surgeon bear the brunt of the blame, or is the robot somehow responsible? It might seem obvious that the surgeon would be to blame, as the machine is incapable of completing the task without the surgeon’s guidance. However, several flaws have come to light regarding the capabilities of the robots themselves. Because the robots lack tactile sensation and tensile feedback, robotic arms have been known to cause unintentional tissue damage during movement.

On another note, a fundamental question must be asked: is it ethical to employ these robots at all? Problems have been documented since the FDA took a closer look at da Vinci robot operations, with an increasing number of accidents and even several deaths. Furthermore, it is suspected that many problems linked with robotic surgery are actually underreported, given the rapid adoption of such technologies and insufficient methods of analyzing complications.

The adoption of robots for surgery also has questionable effects on the skillset of surgeons. The robot is certainly more precise and lacks shaking hands, but surgeons also gain valuable practical experience from manual cases. When complications arise, it is up to the surgeon to draw upon his repertoire of knowledge to resolve the situation; a primary shift toward robot use, however, limits the experience level of practicing surgeons. If reliance on these machines becomes widespread, the health industry may eventually train robot operators rather than surgeons. Such a notion threatens the trust a patient must place in the practitioner’s hands.

For all its ethical dilemmas, the field of robotic surgery is still in a tentative growth state with the potential to improve surgeries to a whole new level. It is important that we continue to develop this tool in the hopes of future benefits to society.

Dead or Alive?

02/03/2014: When is Someone Dead?

Article Link

 

This article discusses the difficulty of determining when someone is dead, given current technology. In particular, the widespread use of ventilators in hospitals has caused a lot of confusion and scrutiny about how cases are handled; each case treats the idea of “brain death” in a different manner. Recently, a case involving a 13-year-old named Jahi McMath has prompted discussions about what it means to be dead.

Jahi suffered a major hemorrhage after throat surgery and fell into a deep coma despite full life support. A few days later, doctors confirmed that she had suffered irreversible loss of all brain function. By the legal standards of all 50 states, Jahi was now dead, but her parents refused to accept this conclusion, as Jahi still had a beating heart. A mechanical ventilator breathes for her and supplies her body with sufficient oxygen, so it has continued to function.

In my opinion, as long as the brain death is irreversible, there should be no more discussion about keeping the person alive. We have to trust that the judgment of the doctor is accurate and well-informed, and there is no benefit to giving the family false hope while draining both their resources and the hospital’s. I think people have been too greatly influenced by TV dramas and movies that show miraculous recoveries at a far more exaggerated rate than reality. Not to mention that most people only hear about these extremely rare recoveries on the news; it is rare to see a normal, expected death talked about. It’s similar to why people believe they have a better chance of winning the lottery than they really do: they only hear about the few lucky ones who have won and not the millions who lose every time.

This raises the increasingly difficult question of determining when someone is dead as technology continues to progress. It is not hard to imagine that sometime in the future we will be able to transplant a working brain into a new body. Would that person be the same person? Or what if we become so effective at creating artificial organs and tissues that a person is able to swap out all his old organs for new ones? There is also the difficulty of deciding who gets the final say on when to pull the plug. There have been some controversies in the opposite direction, where the hospital believes it is correct to keep the patient alive but the family believes the patient would have wanted to pull the plug, as in the well-known Terri Schiavo case. Schiavo was in a persistent vegetative state, but her family believed she would not have wanted to live that way. There has also been another recent case, that of Marlise Munoz, who was kept alive because it is illegal to pull the plug on a pregnant woman, despite the family’s wishes to take her off the ventilator.

Neuroscience and Law

Business Insider SEP 26 2013 –
The Bizarre Case Of a Guy who says Brain Surgery made him Obsessed with Child Porn

http://www.businessinsider.com/radiolab-story-on-klverbucy-syndrome-2013-9

With the recent advent of fMRI and other powerful neural imaging techniques, scientists are able to diagnose brain disorders more reliably. This in turn has led to the field of neurolaw, where accountability for crime is being questioned.

This article is especially interesting as it describes a patient, “Kevin,” who was by all societal measures considered normal until he was treated for epilepsy with neural surgery. After the surgery, Kevin was obsessed with sexual desires. He downloaded massive amounts of porn, including child pornography, animal pornography, and whatever else he could get. About a year after his surgery, he was arrested for possession of child pornography. Kevin argued in his defense that he couldn’t help himself because of the surgery; in fact, his exact pathology has been identified as Klüver-Bucy syndrome. The prosecution, however, argued that Kevin did have some amount of free will, because he was not on the child porn websites 24/7 and he could have sought help.

I thought this was a rather interesting article because of the broader implications of science affecting judicial proceedings. Law was first introduced to punish crime. The biblical “eye for an eye” was the literal way of punishing people: if there was evidence of the crime being committed, then the corresponding punishment was given out. However, in modern Western society, law has moved away from judging the crime to judging the person’s soul. The accused is suddenly humanized, and his actions are judged according to his context. The jury is presented with two distinct characterizations of a person’s personality (one from the defense and the other from the prosecution), and the actions are then connected as an extension of that character. The crime itself has lost its meaning.

Now the soul is becoming quantified. Neural networks are being carefully mapped, and pathological conditions are being named. Currently this is employed only for simple, very apparent brain disorders, but soon we may be able to quantify the reasons for every crime. If there is nothing more to a person than his neural connections, we will be able to judge the range of control a person had in committing a crime and then judge the person according to his neural systems.

This brings up two very interesting scenarios. First, people without a criminal record, or who have committed only minor offenses, could be judged on their disposition to commit future crime. Second, the idea of crime itself will have to be redefined. In a prior world, free will existed, and thus it was possible to judge a man for committing a crime. Now, with no free will, a man is judged for being himself, and the crime is a manifestation of his existence. Since he can’t help it, do we let him go after some neural treatment? But if the crime is a heinous act against our community, where is the justice? What happened to the victim?

There clearly has to be a line in judgement. Currently, due to the crudeness of locating neurological pathologies, most cases are judged to have insufficient neural data, and the crime is judged by traditional means. However, in the future this will not be the case.