Changes for page West Nelson Aff
Summary
Objects (2 modified, 2 added, 6 removed)
Details
- Caselist.CitesClass[2]
- EntryDate
@@ -1,1 +1,1 @@
1 -2016-09-18 14:24:24.0
1 +2016-09-18 14:24:24.709
- Caselist.CitesClass[3]
- Cites
@@ -1,66 +1,0 @@
1 -AC:
2 -FWK
3 -The standard is consistency with utilitarianism.
4 -Psychological evidence proves we don’t identify with our future selves. Continuous personal identity doesn’t exist. Opar 14
5 -(Alisa Opar is the articles editor at Audubon magazine; cites Hal Hershfield, an assistant professor at New York University’s Stern School of Business; and Emily Pronin, a psychologist at Princeton) “Why We Procrastinate” Nautilus January 2014 AT
6 -The British philosopher Derek Parfit espoused a severely reductionist view of personal identity in his seminal book, Reasons and Persons: It does not exist, at least not in the way we usually consider it. We humans, Parfit argued, are not a consistent identity moving through time, but a chain of successive selves, each tangentially linked to, and yet distinct from, the previous and subsequent ones. The boy who begins to smoke despite knowing that he may suffer from the habit decades later should not be judged harshly: “This boy does not identify with his future self,” Parfit wrote. “His attitude towards this future self is in some ways like his attitude to other people.” Parfit’s view was controversial even among philosophers. But psychologists are beginning to understand that it may accurately describe our attitudes towards our own decision-making: It turns out that we see our future selves as strangers. Though we will inevitably share their fates, the people we will become in a decade, quarter century, or more, are unknown to us. This impedes our ability to make good choices on their—which of course is our own—behalf. That bright, shiny New Year’s resolution? If you feel perfectly justified in breaking it, it may be because it feels like it was a promise someone else made. “It’s kind of a weird notion,” says Hal Hershfield, an assistant professor at New York University’s Stern School of Business.
“On a psychological and emotional level we really consider that future self as if it’s another person.” Using fMRI, Hershfield and colleagues studied brain activity changes when people imagine their future and consider their present. They homed in on two areas of the brain called the medial prefrontal cortex and the rostral anterior cingulate cortex, which are more active when a subject thinks about himself than when he thinks of someone else. They found these same areas were more strongly activated when subjects thought of themselves today, than of themselves in the future. Their future self “felt” like somebody else. In fact, their neural activity when they described themselves in a decade was similar to that when they described Matt Damon or Natalie Portman. And subjects whose brain activity changed the most when they spoke about their future selves were the least likely to favor large long-term financial gains over small immediate ones. Emily Pronin, a psychologist at Princeton, has come to similar conclusions in her research. In a 2008 study, Pronin and her team told college students that they were taking part in an experiment on disgust that required drinking a concoction made of ketchup and soy sauce. The more they, their future selves, or other students consumed, they were told, the greater the benefit to science. Students who were told they’d have to down the distasteful quaff that day committed to consuming two tablespoons. But those that were committing their future selves (the following semester) or other students to participate agreed to guzzle an average of half a cup. We think of our future selves, says Pronin, like we think of others: in the third person. The disconnect between our present and time-shifted selves has real implications for how we make decisions. We might choose to procrastinate, and let some other version of our self deal with problems or chores. 
Or, as in the case of Parfit’s smoking boy, we can focus on that version of our self that derives pleasure, and ignore the one that pays the price. But if procrastination or irresponsibility can derive from a poor connection to your future self, strengthening this connection may prove to be an effective remedy. This is exactly the tactic that some researchers are taking. Anne Wilson, a psychologist at Wilfrid Laurier University in Canada, has manipulated people’s perception of time by presenting participants with timelines scaled to make an upcoming event, such as a paper due date, seem either very close or far off. “Using a longer timeline makes people feel more connected to their future selves,” says Wilson. That, in turn, spurred students to finish their assignment earlier, saving their end-of-semester self the stress of banging it out at the last minute. We think of our future selves, says Pronin, like we think of others: in the third person. Hershfield has taken a more high-tech approach. Inspired by the use of images to spur charitable donations, he and colleagues took subjects into a virtual reality room and asked them to look into a mirror. The subjects saw either their current self, or a digitally aged image of themselves (see the figure, Digital Old Age). When they exited the room, they were asked how they’d spend $1,000. Those exposed to the aged photo said they’d put twice as much into a retirement account as those who saw themselves unaged. This might be important news for parts of the finance industry. Insurance giant Allianz is funding a pilot project in the midwest in which Hershfield’s team will show state employees their aged faces when they make pension allocations. Merrill Edge, the online discount unit of Bank of America Merrill Lynch, has taken this approach online, with a service called Face Retirement. Each decade-jumping image is accompanied by startling cost-of-living projections and suggestions to invest in your golden years. 
Hershfield is currently investigating whether morphed images can help people lose weight. Of course, the way we treat our future self is not necessarily negative: Since we think of our future self as someone else, our own decision making reflects how we treat other people. Where Parfit’s smoking boy endangers the health of his future self with nary a thought, others might act differently. “The thing is, we make sacrifices for people all the time,” says Hershfield. “In relationships, in marriages.” The silver lining of our dissociation from our future self, then, is that it is another reason to practice being good to others. One of them might be you.
7 -Justifies util: in the absence of personal identity, only end states can matter. Shoemaker 99
8 -Shoemaker, David (Dept of Philosophy, U Memphis). “Utilitarianism and Personal Identity.” The Journal of Value Inquiry 33: 183–199, 1999. http://www.csun.edu/~ds56723/jvipaper.pdf
9 -Extreme reductionism might lend support to utilitarianism in the following way. Many people claim that we are justified in maximizing the good in our own lives, but not justified in maximizing the good across sets of lives, simply because each of us is a single, deeply unified person, unified by the further fact of identity, whereas there is no such corresponding unity across sets of lives. But if the only justification for the different treatment of individual lives and sets of lives is the further fact, and this fact is undermined by the truth of reductionism, then nothing justifies this different treatment. There are no deeply unified subjects of experience. What remains are merely the experiences themselves, and so any ethical theory distinguishing between individual lives and sets of lives is mistaken. If the deep, further fact is missing, then there are no unities. The morally significant units should then be the states people are in at particular times, and an ethical theory that focused on them and attempted to improve their quality, whatever their location, would be the most plausible. Utilitarianism is just such a theory.
10 -Prefer Additionally:
11 -1 Policymaking is key to critical thinking, thus the role of the ballot is to assess the desirability of an aff policy option via empirical evidence. Harwood 5
12 -(Karey, associate professor in the Department of Philosophy and Religious Studies) “Teaching Bioethics through Participation and Policy-Making” Essays on Teaching Excellence Toward the Best in the Academy Vol. 16, No. 4, 2004-2005 A publication of The Professional and Organizational Development Network in Higher Education AT
13 -Teaching bioethics to undergraduate students in the humanities and social sciences differs from teaching ethics to medical students or residents. One primary difference is that undergraduates are removed from the clinical setting, where a clinically-based case method of teaching is widely practiced and where students can develop their decision-making skills "at the bedside" through the mentoring of more senior physicians. Another difference is that undergraduates are not in training to join a profession, in this case a profession that has developed a fairly stable body of principles that are "applied" to real-life moral dilemmas (Jonsen, Siegler, and Winslade, 2002; Wear, 2002). Instead, as part of a liberal arts education, an undergraduate course in bioethics should aim to prepare students for life as an engaged citizen in a democratic society (Callahan and Bok, 1980; Kohlberg, 1981) by developing skills in critical thinking and encouraging active engagement in the deliberation of issues in the areas of medicine and biotechnology. Critical thinking, most plainly, is the ability to make well-considered judgments.
Critical thinking involves the analysis of concepts and arguments and the interpretation of concrete data or evidence (APA, 1990); but it also requires capacities for self-criticism, moral imagination, and empathy (Momeyer, 2002). It enables the discernment of better and worse arguments or better and worse courses of action, and thus rests on the premise that such judgments of value are possible. It is an essential set of skills, not because it is immediately applicable to a chosen career, but because "wide-awake, careful, thorough habits of thinking" (Dewey, 1933, p. 274) are important in all areas of human life, both individual and social. How to Teach Bioethics One way to foster the development of critical reasoning skills in the undergraduate setting is to provide groups of students with the opportunity to research, analyze, discuss, and propose public policy on emerging topics in bioethics. This type of activity simulates the work of a national bioethics commission and encourages students to view themselves as participants in a significant public debate. For example, a group of students might study stem cell research or international research on AIDS, acquiring enough scientific, medical, and historical background on these topics to be able to identify potential ethical questions. Some questions that might be considered include: Do the benefits of stem cell research justify the use of human embryos? Are all sources of human stem cells morally equivalent? Are the existing safeguards to protect human subjects adequate for international research on AIDS? Should developing countries be able to benefit from AIDS research when their citizens serve as research subjects? Without necessarily working to achieve complete agreement, students try to reach enough of a consensus to propose a policy or regulation. 
A group might decide that allowing stem cell research from "leftover" embryos created in the context of in vitro fertilization is acceptable, for example, but that creating embryos for the sole purpose of research is not. Students must give reasons for their regulations; and, in searching for and articulating these reasons, students are encouraged to examine the moral values and commitments that underlie their positions. An in-class presentation of the group’s work serves as the culminating exercise, and other students are invited to challenge and contribute to the debate about what ought to be done. Students typically relish this opportunity, seeing themselves not as a passive audience to be fed neutral information but as participants in a debate that matters. In other words, they exhibit the traits of engaged citizens. These activities are highly participatory and inquiry-guided, which means the learning is driven by the task of solving a problem: devising a public policy. Students are invested in and motivated by the group’s task and discover together what they need to learn about their topic. Included in this learning process is the integration of abstract ethical theories and concepts — ideally studied throughout the entirety of the course — into the concrete details of the case at hand. It is not a matter of simply "applying" the principle of justice to the topic of international research on AIDS, for example, just for the sake of getting something done (Evans, 2000). Students must ask: what does justice look like in this case? Does conducting an experiment to see how cheaply an individual in a developing country can be treated for AIDS promote justice, as we understand it? In asking these substantive questions, students in an undergraduate bioethics course are engaged in what Callahan calls "foundational" bioethics (Callahan, 1999). They are not merely engaged in means-end reasoning: how best to achieve an already settled goal (Wear, 2002). 
They are examining the goals themselves, and thus considering "a multiplicity of ultimate values" (Momeyer, 2002). Developing a Wide-awake Citizenry As any teacher of undergraduate ethics can attest, this kind of substantive discussion of "ultimate values" or "the good" can be murky territory. The allure of moral relativism is strong and the resources for challenging it seem limited. As Momeyer observed, "Students frequently arrive in our classrooms with very limited ways of morally engaging problematic situations, by, for instance, appealing to religious dogmas or a relentless subjectivism and/or relativism, or by privileging – as well enculturated Americans seemingly must, – the exercise of individual autonomy over all other values"(p. 412). Regardless of how one explains the allure of relativism, what is clear is that undergraduates need to develop skills in critical thinking if they are to be able to make the well-considered judgments that are inevitable and necessary in life. One benefit of a simulated bioethics commission is that it directs students’ attention toward a problem of public policy, which is to say a problem of societal significance. Discussing classic cases in medical ethics that focus on an individual patient’s dilemma, such as, famously, whether Dax Cowart’s requests to die after suffering severe burns over most of his body should have been honored by his physicians, provide essential occasions to learn about important concepts like informed consent, competence, and respect for autonomy. Indeed, effective teaching of ethics in any setting arguably requires a dynamic balance between conceptual analysis and concrete engagement of cases. But undergraduates also need opportunities to learn that their critical thinking skills will be needed in shaping the social policies of the future. Why is critical thinking a legitimate and valuable goal? And why is active engagement or participation in shaping social policies important? 
As Dewey once argued, the point of education is to teach students to think on their own because conscious thinking and participation are the hallmarks of democratic citizenship. Others have followed Dewey’s pragmatic sensibilities, including the developmental psychologist, Lawrence Kohlberg, whose "just community" schools were an outgrowth of his belief that democratic participation in the making of rules for everyone in a community fosters students’ moral development. The writings of Jürgen Habermas (1995) on discourse ethics have also influenced legions of teachers to examine anew the value of a consensus-seeking dialogue that is widely inclusive and highly participatory. Conclusion If we are to avoid living in an "administered society," where we passively receive what is handed down to us from others, it is important to develop a sense of engagement in the social policies that are made and to practice the critical reasoning skills necessary to make well-considered judgments (Bellah, et al., 1991). Fortunately, continuing developments in medicine and biotechnology offer an abundance of ethical issues to debate. Teaching bioethics in the undergraduate setting is about paying attention to these debates and having a stake in their outcome.
14 -2 Actor specificity: governments are obligated to use util. Goodin 90
15 -Robert Goodin, Professor of Government, University of Essex, Australian National Defense University, “THE UTILITARIAN RESPONSE,” p. 141-2, 1990.
16 -My larger argument turns on the proposition that there is something special about the situation of public officials that makes utilitarianism more probable for them than private individuals. Before proceeding with the larger argument, I must therefore say what it is that is so special about public officials and their situations that makes it both more necessary and more desirable for them to adopt a more credible form of utilitarianism. Consider, first, the argument from necessity. Public officials are obliged to make their choices under uncertainty, and uncertainty of a very special sort at that. All choices – public and private alike – are made under some degree of uncertainty, of course. But in the nature of things, private individuals will usually have more complete information on the peculiarities of their own circumstances, and on the ramifications that alternative possible choices might have for them. Public officials, in contrast, are relatively poorly informed as to the effects that their choices will have on individuals, one by one. What they typically do know are generalities: averages and aggregates. They know what will happen most often to most people as a result of their various possible choices, but that is all. That is enough to allow public policy-makers to use the utilitarian calculus – assuming they want to use it at all – to choose general rules of conduct.
17 -Accidents
18 -Nuclear power plants contain multiple nonlinear interactions – kills safety systems and guarantees accidents. Perrow 11
19 -Charles Perrow is an emeritus professor of sociology at Yale University and visiting professor at Stanford University.
The author of several books and many articles on organizations, he is primarily concerned with the impact of large organizations on society (Organizing America: Wealth, Power, and the Origins of Corporate Capitalism, Princeton University Press, 2001), and their catastrophic potentials (Normal Accidents: Living with High-Risk Technologies, Princeton University Press, 1999; The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters, Princeton University Press, 2011), November 1, 2011, "Fukushima and the Inevitability of Accidents" Sagepub/Bulletin of Atomic Scientists, http://bos.sagepub.com/content/67/6/44
20 -In my work on “normal accidents,” I have argued that some complex organizations such as chemical plants, nuclear power plants, nuclear weapons systems, and, to a more limited extent, air transport networks have so many nonlinear system properties that eventually the unanticipated interaction of multiple failures may create an accident that no designer could have anticipated and no operator can understand. Everything is subject to failure: designs, procedures, supplies and equipment, operators, and the environment. The government and businesses know this and design safety devices with multiple redundancies and all kinds of bells and whistles. But nonlinear, unexpected interactions of even small failures can defeat these safety systems. If the system is also tightly coupled, no intervention can prevent a cascade of failures that brings it down. I use the term “normal” because these characteristics are built into the systems; there is nothing one can do about them other than to initiate massive system redesigns to reduce interactive complexity and to loosen coupling. Companies and governments can modularize integrated designs and deconcentrate hazardous material. Actually, though, compared with the prosaic cases previously mentioned, normal accidents are rare.
(Three Mile Island is the only accident in my list that qualifies.) It is much more common for systems with catastrophic potential to fail because of poor regulation, ignored warnings, production pressures, cost cutting, poor training, and so on. All of the organizational faults I have noted have their counterpart in daily life. Like organizations and their leaders, people seek wealth and prestige and a reputation for integrity. In the process, they occasionally find it necessary to be deceitful, engaging in denials and coverups, cheating and fabrication. Everyone has violated regulations, failed to plan ahead, and bungled in crises. But people are not, as individuals, repositories of radioactive materials, toxic substances, and explosives, nor do they sit astride critical infrastructures. Organizations do. The consequences of an individual’s failures can only be catastrophic if they are magnified by organizations. The larger the organizations, the greater the concentration of destructive power. The larger the organizations, the greater the potential for political power that can influence regulations and ignore warnings.
21 -
22 -
23 -
24 -Statistical evidence proves – probability of a core melt accident in the next decade is 70 percent. Rose and Sweeting 3/2
25 -Thomas Rose Professor of sensor technology at the Münster University of Applied Science; PhD in nuclear physics. Trevor Sweeting; emeritus professor of statistics at University College London, PhD in statistics and has published statistical information in medicine, engineering, computer science and elsewhere. 3-2-2016, "How safe is nuclear power? A statistical study suggests less than expected: Bulletin of the Atomic Scientists: Vol 72, No 2," Taylor and Francis, http://tandfonline.com/doi/full/10.1080/00963402.2016.1145910?platform=hootsuiteand
26 -The 2011 Fukushima disaster in Japan suggested once more that severe nuclear accidents could be even more frequent than safety studies had predicted and Feiveson had hoped. So we decided to estimate the probability of a severe accident – that is, a core-melt accident – by relating the number of past core-melt accidents to the total number of years reactors have been operating (i.e. “reactor years”). This type of prediction often runs up against the argument that nuclear operators learn from the past. Therefore we also tried to account for any learning effects in our analysis. We restricted our analysis to accidents related to civil nuclear reactors used for power generation, as arguments about trade-offs for using nuclear technology differ depending on the application. And, because the International Atomic Energy Agency (IAEA) does not distribute comprehensive, long-term reports on nuclear incidents and accidents because of confidentiality agreements with the countries it works with, we have had to use alternative sources for information on nuclear accidents over time. By our calculations, the overall probability of a core-melt accident in the next decade, in a world with 443 reactors, is almost 70 percent. (Because of statistical uncertainty, however, the probability could range from about 28 percent to roughly 95 percent.) The United States, with 104 reactors, has about a 50 percent probability of experiencing one core-melt accident within the next 25 years.1
27 -Prefer this analysis:
28 -First, accident risk assessment based on paths towards an accident fails – using past data is the most reliable – consensus of experts. Rose and Sweeting 2
29 -Thomas Rose Professor of sensor technology at the Münster University of Applied Science; PhD in nuclear physics.
Trevor Sweeting; emeritus professor of statistics at University College London, PhD in statistics and has published statistical information in medicine, engineering, computer science and elsewhere. 3-2-2016, "How safe is nuclear power? A statistical study suggests less than expected: Bulletin of the Atomic Scientists: Vol 72, No 2," Taylor and Francis, http://tandfonline.com/doi/full/10.1080/00963402.2016.1145910?platform=hootsuiteand
30 -In the past, several studies have investigated the probability of a core melt using the probabilistic risk assessment (PRA) method. This determines probability prior to accidents by analyzing possible paths toward a severe accident, rather than using existing data to determine probability empirically. Two studies by the US Nuclear Regulatory Commission (1975, 1990) as well as a German government study (Hörtner 1980) examined seven different cases or reactors. Three calculations resulted in 1 accident in more than 200,000 reactor years, and a further three resulted in 1 accident in 11,000–25,000 reactor years. Only the result for the Zion reactor had an accident rate similar to ours, with 1 accident in 3000 years. After Chernobyl, Islam and Lindgren (1986, 691) published a short note in Nature in which, based on the known accidents (Three Mile Island and Chernobyl) and reactor years (approximately 4000) at the time, they concluded that “…the probability of having one accident every two decades is more than 95 percent.” Regarding PRA, they wrote: “Our view is that this method should be replaced by risk assessment using the observed data.” This sparked an intensive discussion of statistical issues in the following year (Edwards 1986; Schwartz 1986; Fröhner 1987; Chow and Oliver 1987; Edwards 1987); however, there was agreement on the substantive conclusions of Islam and Lindgren.
31 -Second, there’s no learning effect – humans keep making the same mistakes.
Rose and Sweeting 3
32 -Thomas Rose Professor of sensor technology at the Münster University of Applied Science; PhD in nuclear physics. Trevor Sweeting; emeritus professor of statistics at University College London, PhD in statistics and has published statistical information in medicine, engineering, computer science and elsewhere. 3-2-2016, "How safe is nuclear power? A statistical study suggests less than expected: Bulletin of the Atomic Scientists: Vol 72, No 2," Taylor and Francis, http://tandfonline.com/doi/full/10.1080/00963402.2016.1145910?platform=hootsuiteand
33 -We also wanted to see whether accidents become less frequent with more operational experience. But simply analyzing the number of severe accidents against reactor years is not very illuminating because, luckily, these accidents are rather rare. So we examined the relationship between the cumulative number of all accidents, from severe to minor ones, and cumulative reactor years. The accident rate is then estimated as the ratio of cumulative number of accidents to cumulative reactor years. If the probability of an accident remained constant over time, then a graph of the above accident-rate estimates against reactor years would exhibit no trend, whereas a learning effect would result in a decreasing accident probability and the graph would exhibit a decreasing trend. We began by plotting the data from the Guardian list, with a few exclusions.3 The graph shows a high accident rate at the beginning because of one accident in Russia in 1957. The accident rate then drops because the following years were accident-free. After around 500 reactor years, the plot appears to stabilize, varying around a constant value. This is confirmed by a detailed statistical analysis, which produces a probability for a (minor or major) accident in a nuclear power plant of about 1 in 1000 reactor years and shows no evidence of a learning effect.
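The headline numbers in the Rose and Sweeting cards can be checked with a short sketch. It uses the figures quoted in the cards (a failure rate of 1 core-melt accident per 3704 reactor years, 443 reactors worldwide, 104 in the United States) and assumes a constant-rate Poisson model; that modeling assumption is mine, since the cards do not quote the authors' exact calculation.

```python
import math

# Figures from the Rose and Sweeting cards: an estimated rate of 1 core-melt
# accident per 3704 reactor years, 443 reactors worldwide, 104 in the US.
CORE_MELT_RATE = 1 / 3704  # accidents per reactor-year

def expected_accidents(reactors: int, years: float, rate: float = CORE_MELT_RATE) -> float:
    """Expected number of accidents over the fleet's combined reactor-years."""
    return reactors * years * rate

def p_at_least_one(reactors: int, years: float, rate: float = CORE_MELT_RATE) -> float:
    """P(at least one accident), assuming accidents follow a Poisson process."""
    return 1 - math.exp(-expected_accidents(reactors, years, rate))

print(expected_accidents(443, 10))  # ~1.2: "more than one such accident" per decade
print(p_at_least_one(443, 10))      # ~0.70: the "almost 70 percent" figure
print(p_at_least_one(104, 25))      # ~0.50: the US "about 50 percent" figure
```

Under this assumption the 70 percent and 50 percent probabilities fall out of the 1-in-3704 rate directly, which is a useful sanity check when extending the cards in cross-examination.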
34 -Third, the authors analyzed every core-melt accident, despite organizations like the IAEA hiding information. Rose and Sweeting 4
35 -Thomas Rose Professor of sensor technology at the Münster University of Applied Science; PhD in nuclear physics. Trevor Sweeting; emeritus professor of statistics at University College London, PhD in statistics and has published statistical information in medicine, engineering, computer science and elsewhere. 3-2-2016, "How safe is nuclear power? A statistical study suggests less than expected: Bulletin of the Atomic Scientists: Vol 72, No 2," Taylor and Francis, http://tandfonline.com/doi/full/10.1080/00963402.2016.1145910?platform=hootsuiteand
36 -After the Fukushima disaster, the authors analyzed all past core-melt accidents and estimated a failure rate of 1 per 3704 reactor years. This rate indicates that more than one such accident could occur somewhere in the world within the next decade. The authors also analyzed the role that learning from past accidents can play over time. This analysis showed few or no learning effects occurring, depending on the database used. Because the International Atomic Energy Agency (IAEA) has no publicly available list of nuclear accidents, the authors used data compiled by the Guardian newspaper and the energy researcher Benjamin Sovacool. The results suggest that there are likely to be more severe nuclear accidents than have been expected and support Charles Perrow’s “normal accidents” theory that nuclear power reactors cannot be operated without major accidents. However, a more detailed analysis of nuclear accident probabilities needs more transparency from the IAEA. Public support for nuclear power cannot currently be based on full knowledge simply because important information is not available.
37 -Advocacy
38 -I advocate that all countries prohibit the production of nuclear power which is currently connected to the electrical grid.
To clarify, this would include nuclear power plants, but would exclude things like research reactors. I defend action via the federal governments of countries. I defend it as a phaseout. I’ll spec to whatever in CX so long as it doesn’t screw solvency – but only I determine what that is.
39 -Only phaseout solves. Lucas 12
40 -Caroline Lucas MP for Brighton Pavilion and a member of the cross-party parliamentary environment audit committee, “Why we must phase out nuclear power,” The Guardian, February 17, 2012, https://www.theguardian.com/environment/2012/feb/17/phase-out-nuclear-power
41 -Fukushima, like Chernobyl 25 years before it, has shown us that while the likelihood of a nuclear disaster occurring may be low, the potential impact is enormous. The inherent risk in the use of nuclear energy, as well as the related proliferation of nuclear technologies, can and does have disastrous consequences. The only certain way to eliminate this potentially devastating risk is to phase out nuclear power altogether. Some countries appear to have learnt this lesson. In Germany, the government changed course in the aftermath of Fukushima and decided to go ahead with a previously agreed phase out of nuclear power. Many scenarios now foresee Germany sourcing 100 percent of its power needs from renewables by 2030. Meanwhile Italian citizens voted against plans to go nuclear with a 90 percent majority. The same is not yet true in Japan. Although only three out of its 54 nuclear reactors are online and generating power, while the Japanese authorities conduct "stress tests", the government hopes to reopen almost all of these and prolong the working life of a number of its ageing reactors by up to 60 years. The Japanese public have made their opposition clear however. Opinion polls consistently show a strong majority of the population is now against nuclear power. Local grassroots movements opposing nuclear power have been springing up across Japan.
Mayors and governors in fear of losing their power tend to follow the majority of their citizens. The European-level response has been to undertake stress tests on nuclear reactors across the union. However, these stress tests appear to be little more than a PR exercise to encourage public acceptance in order to allow the nuclear industry to continue with business as usual. The tests fail to assess the full risks of nuclear power, ignoring crucial factors such as fires, human failures, degradation of essential infrastructure or the impact of an airplane crash. 42 -Advantage 1 is Structural Violence 43 -Accidents cause structural violence – marginalized people bear the brunt of them. Cousins et al. 13 44 -Elicia Cousins is a doctoral student in sociology and a research assistant in SSEHRI. She is from Tokyo, Japan and received her BA in Environmental Studies from Carleton College in Minnesota; with Claire Karban, Fay Li, and Marianna Zapanta, 2013 (the study references other studies from 2013, implying it is from at least 2013, but it does not list a formal date), "Nuclear Power and Environmental Justice: A Mixed-Methods Study of Risk, Vulnerability, and the Victim Experience," Environmental Studies Comprehensive Project, https://apps.carleton.edu/curricular/ents/assets/Cousins_Karban_Li_Zapanta.pdf 45 -The range of potential impacts on biological health, wellbeing, and health-related behavior is vast for anyone involved, regardless of age, gender, race, or socioeconomic status. However, we suggest that such stress is amplified in certain populations of heightened vulnerability. For example, the stress of parents (especially mothers) with preschool age children was particularly evident throughout the present analysis, a well-documented trend associated with technological disasters (Havenaar et al. 1996; Bromet and Schulberg 1986). Children are also particularly sensitive to physical and mental harm (Peek 2008; Bromet et al.
2000), a trend reflected in a recent survey of child evacuees from Fukushima24 that revealed stress levels double the Japanese average (Brumfiel 2013). A similar trend is evident in the United States, where low-income populations and people of color may encounter such stressors at higher levels and face greater difficulties in coping with them. Low-income populations are more likely to lack financial resources (Cutter 2006) and expansive social networks to rely upon (Dominguez and Watkins 2003) during emergencies. The differential response and recovery to Hurricane Katrina is a telling example of such trends: while those with resources left before the hurricane arrived, those without resources (mainly the poor, African American, elderly, or residents without private cars) had to remain and deal with the oncoming disaster (Cutter 2006). Low socioeconomic status is also associated with lower educational attainment, which constrains understanding of and access to disaster warnings and information on recovery (Ibid.). It follows that such populations would be more prone to stress from feelings of uncertainty. 46 -No amount of radiation is good – it kills – consensus and empirics. Mushak 7 47 -Paul Mushak magna cum laude at University of Scranton, Ph.D. in metalloorganic/organic chemistry and biochemistry at the University of Florida, postdoctoral work as a fellow in the Department of Molecular Biophysics and Biochemistry in the School of Medicine at Yale University, was on the faculty of the University of North Carolina School of Medicine in the Department of Pathology from 1971 to 1985, and was an adjunct professor from 1985 to 1993. 
From 1995 to 2010, he was a member of the Montefiore Medical Center-Second Medical University of Shanghai, China Collaborating Centers for Prevention of Childhood Lead Poisoning and a visiting professor of Pediatric Environmental Health, Department of Pediatrics, April 2007, "Hormesis and Its Place in Nonmonotonic Dose–Response Relationships: Some Scientific Reality Checks," Environmental Health Perspectives, http://pubmedcentralcanada.ca/pmcc/articles/PMC1852676/ 48 -Radiologic hormesis. Some proponents of radiologic hormesis hold that the linear no-threshold (LNT) dose-response relationship for radiologic carcinogenesis in humans and experimental animals is no longer tenable and radiologic hormesis should be the prevailing model (e.g., Calabrese 2005b; Pollycove and Feinendegen 2001). However, research into radiologic carcinogenesis in humans continues to feed a huge epidemiologic database documenting the persistence of radiologic carcinogenicity with lower dose. Little convincing evidence exists to support human radiologic hormetic responses nullifying the LNT model of low-dose carcinogenesis. Current thinking also is mixed about a conceptual context for human radiologic hormesis (Johansson 2003; Pollycove and Feinendegen 2001; Upton 2001). Recent expert consensus treatises on low-dose carcinogenicity of ionizing radiation in humans present analyses largely supporting the LNT model. The most recent report by the NAS/NRC on the biological effects of ionizing radiation, BEIR VII-Phase 2 (NAS/NRC 2006), endorsed preservation of the LNT model, that is, cancer risk proceeds in a linear fashion at lower doses without a threshold. Appendix D of the report covers radiation hormesis. The report concluded that animal and cell studies suggesting benefits or threshold to harm from ionizing radiation are "not compelling" and that, at present, any assumption of net health benefits of radiation hormesis over detrimental impacts at the same dose is unwarranted. 
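The linear no-threshold (LNT) relationship these consensus reports endorse has a simple mathematical form: excess cancer risk scales in direct proportion to dose, all the way down to zero, with no cutoff below which harm vanishes. A minimal sketch of the model contrast at issue (the slope value is an invented placeholder for illustration, not a coefficient from BEIR VII or any cited study):

```python
def lnt_excess_risk(dose_msv: float, slope_per_msv: float = 1e-5) -> float:
    """Linear no-threshold (LNT): excess risk is proportional to dose,
    with no 'safe' threshold below which the risk drops to zero."""
    return slope_per_msv * dose_msv

def threshold_excess_risk(dose_msv: float, threshold_msv: float = 100.0,
                          slope_per_msv: float = 1e-5) -> float:
    """Contrast case: a threshold model treats doses below the cutoff as harmless."""
    excess = dose_msv - threshold_msv
    return slope_per_msv * excess if excess > 0 else 0.0

# Under LNT even a very low dose carries some nonzero risk...
assert lnt_excess_risk(10) > 0
# ...whereas a threshold model would call the same dose harmless.
assert threshold_excess_risk(10) == 0.0
# Linearity: doubling the dose doubles the excess risk.
assert lnt_excess_risk(20) == 2 * lnt_excess_risk(10)
```

The whole dispute in the card reduces to which of these two functions describes low-dose exposure; the consensus reports cited above side with the first.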
The December 2004 draft report of the International Commission on Radiation Protection (ICRP 2004), the report of the United Nations Scientific Committee on the Effects of Atomic Radiation (2000), and the report of the National Council on Radiation Protection and Measurements (NCRP 2001) have all concluded the LNT model remains valid. The 2005 report of the joint French National Academy of Medicine and the Institute of France Academy of Sciences on low dose-carcinogenic effect relationships for ionizing radiation (Academie Nationale de Medecine 2005) raises doubts about use of, or recommendations for, the LNT model at low (<100 mSv) and very low (<10 mSv) doses. The French report refers to radiation hormesis only in passing and in narrow context. Several recent epidemiologic studies are consistent with an LNT dose response. The Techa River study in the Southern Urals region of Russia (Krestinina et al. 2005) reported individualized risk estimates for excess cancers from radioactivity exposures traced to a weapons plant leak in the 1950s. The extended cohort, with 29,873 people born between 1950 and 1960, provided strong evidence of low-dose radiologic carcinogenesis. Cardis et al. (2005), in their studies of 400,000 nuclear plant workers in 15 countries, concluded that 1-2% of cancer deaths among the cohort may be due to radiation. The results for the nuclear workers produced conclusions similar to those of the Techa River findings and both articles support the ICRP standard of 20 mSv/year for occupational protection. 49 -Accidents trap the poor who cannot afford to leave, while those who can leave suffer huge emotional stress. Cousins et al 13 50 -Elicia Cousins is a doctoral student in sociology and a research assistant in SSEHRI.
She is from Tokyo, Japan and received her BA in Environmental Studies from Carleton College in Minnesota; with Claire Karban, Fay Li, and Marianna Zapanta, 2013 (the study references other studies from 2013, implying it is from at least 2013, but it does not list a formal date), "Nuclear Power and Environmental Justice: A Mixed-Methods Study of Risk, Vulnerability, and the Victim Experience," Environmental Studies Comprehensive Project, https://apps.carleton.edu/curricular/ents/assets/Cousins_Karban_Li_Zapanta.pdf 51 -Technological disasters often involve the release of toxins into the environment, necessitating the displacement of residents of newly contaminated regions. First and foremost, this involves the stress arising from new financial burdens17, and the need for securing new housing and employment. However, it also involves the difficulties of adjusting to the new environment and to separation from friends, relatives, and familiar social networks. In the free-response section of the NAIIC survey, 334 respondents (4%) wrote of constant stress due to an unfamiliar environment and prolonged refugee life. Another 24 (0.3%) respondents explicitly mentioned the difficulties of building new relationships and getting along with people in their new environment, and feeling isolated and alone. One woman from Fukushima explained her feeling of being in limbo: I’m getting used to life here in Kyoto, but I can’t help but feel as though it’s just a temporary, almost “borrowed” life. I don’t have relatives or close friends in Kyoto, and it’s hard to feel like this is where I really should be, and that my feet are firmly on the ground (Fukushima City Happy Island Newspaper). Housing, job, and financial uncertainty can also serve as barriers to evacuation, heightening the stress associated with the inability to evacuate.
In Japan, only areas in which cumulative radiation exposure is projected to reach 20 mSv/year18 have been evacuated by the government, leaving residents of other areas to decide whether to evacuate “voluntarily” without government compensation. As a result, many who wish to evacuate cannot afford to; according to a 2011 survey by Friends of Earth Japan, financial and employment uncertainty were the main barriers to evacuation, and this continues to be true (Mitsuta interview). In many cases the mother and children have evacuated for the sake of the children’s health, while the father remains in Fukushima to continue his job and earn money. In the free response section of the NAIIC survey, 290 respondents (4%) wrote about how much they missed seeing members of their family. 52 - 53 - 54 -Advantage 2 is BioD 55 -Nuclear power plants irreversibly damage biodiversity from accidents and waste: laundry list, empirics, and literature review. Kabasakal and Albayrak 11. 56 -Paper presented November 17-20, 2011. Bekir Kabasakal, Department of Biology, Graduate School of Natural and Applied Sciences, Mehmet Akif Ersoy University, and Tamer Albayrak, Department of Biology, Faculty of Science and Art, Mehmet Akif Ersoy University. Paper presented at the VI. International Symposium on Ecology and Environmental Problems. "Effects of Nuclear Power Plant Accidents on Biodiversity and Awareness of Potential Nuclear Accident Risk near the Eastern Border of Turkey," Fresenius Environmental Bulletin, http://www.academia.edu/2303350/Effects_of_Nuclear_Power_Plant_Accidents_on_Biodiversity_and_Awareness_of_Potential_Nuclear_Accident_Risk_near_the_Eastern_Border_of_Turkey 57 -Previous studies indicated that NPP accidents cause permanent damage to biodiversity. Recent publications indicate that if an accident happens at MNPP, Turkey, Azerbaijan, Georgia, and Iran might be influenced.
For this reason, an urgent action plan and necessary precautions for this possible catastrophe are needed. KEYWORDS: Nuclear power plant, nuclear accident, nuclear risk, Metsamor, Metsamor Nuclear Power Plant * Corresponding author 1 INTRODUCTION Although nuclear power plants (NPP) are designed to withstand earthquakes or other natural disasters and to shut down safely in the event of major earth movement, no nuclear facility is 100% safe due to a possible meltdown of the reactor (due to loss of coolant water leading to over-heating). If an accident occurs, it would create a major public hazard and may cause human fatalities and biodiversity loss 1. Moreover, nuclear reactors produce toxic waste, which is highly radioactive and can remain in the environment for several hundred years. Because of these reasons NPPs are highly risky energy resources, even though they do not produce CO2 and other greenhouse gases. The severity of any nuclear accident is measured on the international event scale (INES) from 0 to 7 (0 means no consequences and 7 is the major accident; levels 1-3 are called incidents and levels 4-7 are called accidents). Eight accidents have been reported at level 4 or above so far (Table 1). Two of those were major accidents (INES level 7): Chernobyl (1986) and Fukushima (2011). The Chernobyl nuclear power plant disaster is the worst nuclear disaster 2. TABLE 1 - Nuclear power plant accidents and their effects (date, location, INES level): 2011 Fukushima, Japan (7); 1999 Tokaimura, Japan (4); 1986 Chernobyl, Ukraine (7); 1980 Saint-Laurent A2, France (4); 1979 Three Mile Island, USA (5); 1969 Saint-Laurent A1, France (4); 1957 Mayak, Russia (6); 1957 Windscale, UK (5). *INES event scale: level 7 = major accident; level 6 = serious accident; level 5 = accident with wider consequences; level 4 = accident with local consequences; level 3 = serious incident; 2 = incident; 1 = anomaly; level 0 = no safety significance. © by PSP Volume 21 – No 11b.
2012 Fresenius Environmental Bulletin 3435. 2 EFFECTS OF NUCLEAR POWER PLANT ACCIDENTS ON BIODIVERSITY Previous nuclear accidents showed that right after an accident happens, vast amounts of radioactive material that are harmful to living organisms, such as Iodine-131 and Caesium-137 (Cesium-137), are transported into the atmosphere 3. Once the radioactive materials are released, all living organisms can be influenced in several ways 3, 4. For instance, people can be exposed in two ways: externally, from the deposited material itself and from radioactive material in the air, or internally, through inhalation of material resuspended into the atmosphere and through the transfer of material via the terrestrial and aquatic environment to food and water 5. Most information about the effects of nuclear accidents on biodiversity and humans was gained from the Chernobyl accident. On-going studies on the consequences of the Fukushima accident have been showing that the marine ecosystem will be affected more than the terrestrial ecosystems 6. Effects of NPP accidents on biodiversity can be divided into two main topics: physiological and genetic effects of radiation, and ecological consequences of radiation 7. Physiological and genetic effects of radiation: increased morphologic, physiologic, and genetic disorders; increased mutation rates and developmental abnormalities; increase in general oncological morbidity; accelerated aging; reduction in body antioxidant levels 8-12. Ecological consequences of radiation: reduced adult survival and reproduction, and reduction in species abundance 13-16. 58 -Accidents go global. Max Planck Institute for Chemistry 12 Cites Lelieveld.
59 -Max Planck Institute for Chemistry, a non-governmental, non-profit organization dedicated to chemistry, citing Johannes Lelieveld, PhD in physics and astronomy, 5-22-2012, "Probability of contamination from severe nuclear reactor accidents is higher than expected," https://www.mpg.de/5809418/reactor_accidents 60 -Catastrophic nuclear accidents such as the core meltdowns in Chernobyl and Fukushima are more likely to happen than previously assumed. Based on the operating hours of all civil nuclear reactors and the number of nuclear meltdowns that have occurred, scientists at the Max Planck Institute for Chemistry in Mainz have calculated that such events may occur once every 10 to 20 years (based on the current number of reactors) — some 200 times more often than estimated in the past. The researchers also determined that, in the event of such a major accident, half of the radioactive caesium-137 would be spread over an area of more than 1,000 kilometres away from the nuclear reactor. Their results show that Western Europe is likely to be contaminated about once in 50 years by more than 40 kilobecquerels of caesium-137 per square meter. According to the International Atomic Energy Agency, an area is defined as being contaminated with radiation from this amount onwards. In view of their findings, the researchers call for an in-depth analysis and reassessment of the risks associated with nuclear power plants. [Image omitted. Caption: Global risk of radioactive contamination. The map shows the annual probability in percent of radioactive contamination by more than 40 kilobecquerels per square meter. In Western Europe the risk is around two percent per year.] The reactor accident in Fukushima has fuelled the discussion about nuclear energy and triggered Germany's exit from their nuclear power program.
It appears that the global risk of such a catastrophe is higher than previously thought, a result of a study carried out by a research team led by Jos Lelieveld, Director of the Max Planck Institute for Chemistry in Mainz: "After Fukushima, the prospect of such an incident occurring again came into question, and whether we can actually calculate the radioactive fallout using our atmospheric models." According to the results of the study, a nuclear meltdown in one of the reactors in operation worldwide is likely to occur once in 10 to 20 years. Currently, there are 440 nuclear reactors in operation, and 60 more are planned. To determine the likelihood of a nuclear meltdown, the researchers applied a simple calculation. They divided the operating hours of all civilian nuclear reactors in the world, from the commissioning of the first up to the present, by the number of reactor meltdowns that have actually occurred. The total number of operating hours is 14,500 years, the number of reactor meltdowns comes to four—one in Chernobyl and three in Fukushima. This translates into one major accident, being defined according to the International Nuclear Event Scale (INES), every 3,625 years. Even if this result is conservatively rounded to one major accident every 5,000 reactor years, the risk is 200 times higher than the estimate for catastrophic, non-contained core meltdowns made by the U.S. Nuclear Regulatory Commission in 1990. The Mainz researchers did not distinguish ages and types of reactors, or whether they are located in regions of enhanced risks, for example by earthquakes. After all, nobody had anticipated the reactor catastrophe in Japan. 25 percent of the radioactive particles are transported further than 2,000 kilometres Subsequently, the researchers determined the geographic distribution of radioactive gases and particles around a possible accident site using a computer model that describes the Earth's atmosphere. 
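The frequency arithmetic quoted in this card, together with the 1-per-3704-reactor-year rate from the Rose and Sweeting evidence earlier in the case, can be reproduced in a few lines. This is a back-of-the-envelope sketch only: the reactor count and rates come from the cards, but the Poisson step at the end is my own illustrative assumption, not part of either study:

```python
import math

# Max Planck estimate: total civilian reactor experience divided by
# the number of core meltdowns that have actually occurred.
operating_years = 14_500              # cumulative reactor-years (from the card)
meltdowns = 4                         # Chernobyl (1) + Fukushima (3)
years_per_meltdown = operating_years / meltdowns
print(years_per_meltdown)             # 3625.0 reactor-years per major accident

# With 440 reactors running, the expected interval between accidents worldwide:
reactors = 440
print(years_per_meltdown / reactors)  # about 8.2 years; the card's "once every
                                      # 10 to 20 years" uses the rounded
                                      # 5,000-reactor-year rate (5000/440 = 11.4)

# Rose and Sweeting's 1-per-3704-reactor-year rate implies, under an assumed
# Poisson model, this chance of at least one core melt in the next decade:
rate = reactors * 10 / 3_704          # expected accidents in 10 years (about 1.19)
print(1 - math.exp(-rate))            # roughly a 70% chance of one or more
```

The expected count above 1 is what licenses the Rose and Sweeting claim that "more than one such accident could occur somewhere in the world within the next decade."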
The model calculates meteorological conditions and flows, and also accounts for chemical reactions in the atmosphere. The model can compute the global distribution of trace gases, for example, and can also simulate the spreading of radioactive gases and particles. To approximate the radioactive contamination, the researchers calculated how the particles of radioactive caesium-137 (137Cs) disperse in the atmosphere, where they deposit on the earth’s surface and in what quantities. The 137Cs isotope is a product of the nuclear fission of uranium. It has a half-life of 30 years and was one of the key elements in the radioactive contamination following the disasters of Chernobyl and Fukushima. The computer simulations revealed that, on average, only eight percent of the 137Cs particles are expected to deposit within an area of 50 kilometres around the nuclear accident site. Around 50 percent of the particles would be deposited outside a radius of 1,000 kilometres, and around 25 percent would spread even further than 2,000 kilometres. These results underscore that reactor accidents are likely to cause radioactive contamination well beyond national borders. 61 -Biodiversity is on the brink of global collapse – that causes extinction and is a threat multiplier. Torres 2/10 62 -Phil Torres is an author, Affiliate Scholar at the Institute for Ethics and Emerging Technologies, and freelance writer with publications in Salon, Skeptic, the Humanist, American Atheist, The Progressive, Humanity+, and many others.
His forthcoming book is called The End: What Science and Religion Tell Us About the Apocalypse (Pitchstone Publishing) , 2-10-16, "Biodiversity Loss and the Doomsday Clock: An Invisible Disaster Almost No One is Talking About," Common Dreams, http://www.commondreams.org/views/2016/02/10/biodiversity-loss-and-doomsday-clock-invisible-disaster-almost-no-one-talking-about 63 -Everywhere one looks, the biosphere is wilting — and a single bipedal species with large brains and opposable thumbs is almost entirely responsible for this worsening plight. If humanity continues to prune back the Tree of Life with reckless abandon, we could be forced to confront a global disaster of truly unprecedented proportions. Along these lines, a 2012 article published in Nature and authored by over twenty scientists claims that humanity could be teetering on the brink of a catastrophic, irreversible collapse of the global ecosystem. According to the paper, there could be “tipping points” — also called “critical thresholds” — lurking in the environment that, once crossed, could initiate radical and sudden changes in the biosphere. Thus, an event of this sort could be preceded by little or no warning: everything might look more or less okay, until the ecosystem is suddenly in ruins. We must, moving forward, never forget that just as we’re minds embodied, so too are we bodies environed, meaning that if the environment implodes under the weight of civilization, then civilization itself is doomed. While the threat of nuclear weapons deserves serious attention from political leaders and academics, as the Bulletin correctly observes, it’s even more imperative that we focus on the broader “contextual problems” that could inflate the overall probability of wars and terrorism in the future. Climate change and biodiversity loss are both conflict multipliers of precisely this sort, and each is a contributing factor that’s exacerbating the other. 
If we fail to make these threats a top priority in 2016, the likelihood of nuclear weapons — or some other form of emerging technology, including biotechnology and artificial intelligence — being used in the future will only increase. Perhaps there’s still time to avert the sixth mass extinction or a sudden collapse of the global ecosystem. But time is running out — the doomsday clock is ticking. 64 -One species going extinct triggers the brink. Diner ’94 Brackets 65 -David N. Diner 1994, Judge Advocate General’s Corps of US Army, Military Law Review, Winter, 143 Mil. L. Rev. 161, l/n 66 -In past mass extinction episodes, as many as ninety percent of the existing species perished, and yet the world moved forward, and new species replaced the old. So why should the world be concerned now? The prime reason is the world's survival. Like all animal life, humans live off of other species. At some point, the number of species could decline to the point at which the ecosystem fails, and then humans also would become extinct. No one knows how many species the world needs to support human life, and to find out -- by allowing certain species to become extinct -- would not be sound policy. In addition to food, species offer many direct and indirect benefits to mankind. n68 2. Ecological Value. -- Ecological value is the value that species have in maintaining the environment. Pest, n69 erosion, and flood control are prime benefits certain species provide to man. Plants and animals also provide additional ecological services -- pollution control, n70 oxygen production, sewage treatment, and biodegradation. n71 3. Scientific and Utilitarian Value. -- Scientific value is the use of species for research into the physical processes of the world. n72 Without plants and animals, a large portion of basic scientific research would be impossible. Utilitarian value is the direct utility humans draw from plants and animals.
n73 Only a fraction of the *172 earth's species have been examined, and mankind may someday desperately need the species that it is exterminating today. To accept that the snail darter, harelip sucker, or Dismal Swamp southeastern shrew n74 could save mankind may be difficult for some. Many, if not most, species are useless to man in a direct utilitarian sense. Nonetheless, they may be critical in an indirect role, because their extirpations could affect a directly useful species negatively. In a closely interconnected ecosystem, the loss of a species affects other species dependent on it. n75 Moreover, as the number of species decline, the effect of each new extinction on the remaining species increases dramatically. n76 4. Biological Diversity. -- The main premise of species preservation is that diversity is better than simplicity. n77 As the current mass extinction has progressed, the world's biological diversity generally has decreased. This trend occurs within ecosystems by reducing the number of species, and within species by reducing the number of individuals. Both trends carry serious future implications. Biologically diverse ecosystems are characterized by a large number of specialist species, filling narrow ecological niches. These ecosystems inherently are more stable than less diverse systems. "The more complex the ecosystem, the more successfully it can resist a stress. . . . like a net, in which each knot is connected to others by several strands, such a fabric can resist collapse better than a simple, unbranched circle of threads -- which if cut anywhere breaks down as a whole." n79 By causing widespread extinctions, humans have artificially simplified many ecosystems. As biologic simplicity increases, so does the risk of ecosystem failure. The spreading Sahara Desert in Africa, and the dustbowl conditions of the 1930s in the United States are relatively mild examples of what might be expected if this trend continues.
Theoretically, each new animal or plant extinction, with all its dimly perceived and intertwined effects, could cause total ecosystem collapse and human extinction. Each new extinction increases the risk of disaster. Like a mechanic removing, one by one, the rivets from an aircraft's wings, mankind may be edging closer to the abyss. - EntryDate
-
... ... @@ -1,1 +1,0 @@ 1 -2016-10-28 20:13:23.0 - Judge
-
... ... @@ -1,1 +1,0 @@ 1 -Ryan Fink - Opponent
-
... ... @@ -1,1 +1,0 @@ 1 -Immaculate MC - ParentRound
-
... ... @@ -1,1 +1,0 @@ 1 -5 - Round
-
... ... @@ -1,1 +1,0 @@ 1 -2 - Team
-
... ... @@ -1,1 +1,0 @@ 1 -West Nelson Aff - Title
-
... ... @@ -1,1 +1,0 @@ 1 -Accidents Aff - Tournament
-
... ... @@ -1,1 +1,0 @@ 1 -Meadows
- Caselist.CitesClass[4]
-
- Cites
-
... ... @@ -1,66 +1,0 @@ 1 -AC: 2 -FWK 3 -The standard is consistency with utilitarianism. 4 -Psychological evidence proves we don’t identify with our future selves. Continuous personal identity doesn’t exist. Opar 14 5 -(Alisa Opar is the articles editor at Audubon magazine; cites Hal Hershfield, an assistant professor at New York University’s Stern School of Business; and Emily Pronin, a psychologist at Princeton) “Why We Procrastinate” Nautilus January 2014 AT 6 -The British philosopher Derek Parfit espoused a severely reductionist view of personal identity in his seminal book, Reasons and Persons: It does not exist, at least not in the way we usually consider it. We humans, Parfit argued, are not a consistent identity moving through time, but a chain of successive selves, each tangentially linked to, and yet distinct from, the previous and subsequent ones. The boy who begins to smoke despite knowing that he may suffer from the habit decades later should not be judged harshly: “This boy does not identify with his future self,” Parfit wrote. “His attitude towards this future self is in some ways like his attitude to other people.” Parfit’s view was controversial even among philosophers. But psychologists are beginning to understand that it may accurately describe our attitudes towards our own decision-making: It turns out that we see our future selves as strangers. Though we will inevitably share their fates, the people we will become in a decade, quarter century, or more, are unknown to us. This impedes our ability to make good choices on their—which of course is our own—behalf. That bright, shiny New Year’s resolution? If you feel perfectly justified in breaking it, it may be because it feels like it was a promise someone else made. “It’s kind of a weird notion,” says Hal Hershfield, an assistant professor at New York University’s Stern School of Business. 
“On a psychological and emotional level we really consider that future self as if it’s another person.” Using fMRI, Hershfield and colleagues studied brain activity changes when people imagine their future and consider their present. They homed in on two areas of the brain called the medial prefrontal cortex and the rostral anterior cingulate cortex, which are more active when a subject thinks about himself than when he thinks of someone else. They found these same areas were more strongly activated when subjects thought of themselves today, than of themselves in the future. Their future self “felt” like somebody else. In fact, their neural activity when they described themselves in a decade was similar to that when they described Matt Damon or Natalie Portman. And subjects whose brain activity changed the most when they spoke about their future selves were the least likely to favor large long-term financial gains over small immediate ones. Emily Pronin, a psychologist at Princeton, has come to similar conclusions in her research. In a 2008 study, Pronin and her team told college students that they were taking part in an experiment on disgust that required drinking a concoction made of ketchup and soy sauce. The more they, their future selves, or other students consumed, they were told, the greater the benefit to science. Students who were told they’d have to down the distasteful quaff that day committed to consuming two tablespoons. But those that were committing their future selves (the following semester) or other students to participate agreed to guzzle an average of half a cup. We think of our future selves, says Pronin, like we think of others: in the third person. The disconnect between our present and time-shifted selves has real implications for how we make decisions. We might choose to procrastinate, and let some other version of our self deal with problems or chores. 
Or, as in the case of Parfit’s smoking boy, we can focus on that version of our self that derives pleasure, and ignore the one that pays the price. But if procrastination or irresponsibility can derive from a poor connection to your future self, strengthening this connection may prove to be an effective remedy. This is exactly the tactic that some researchers are taking. Anne Wilson, a psychologist at Wilfrid Laurier University in Canada, has manipulated people’s perception of time by presenting participants with timelines scaled to make an upcoming event, such as a paper due date, seem either very close or far off. “Using a longer timeline makes people feel more connected to their future selves,” says Wilson. That, in turn, spurred students to finish their assignment earlier, saving their end-of-semester self the stress of banging it out at the last minute. We think of our future selves, says Pronin, like we think of others: in the third person. Hershfield has taken a more high-tech approach. Inspired by the use of images to spur charitable donations, he and colleagues took subjects into a virtual reality room and asked them to look into a mirror. The subjects saw either their current self, or a digitally aged image of themselves (see the figure, Digital Old Age). When they exited the room, they were asked how they’d spend $1,000. Those exposed to the aged photo said they’d put twice as much into a retirement account as those who saw themselves unaged. This might be important news for parts of the finance industry. Insurance giant Allianz is funding a pilot project in the midwest in which Hershfield’s team will show state employees their aged faces when they make pension allocations. Merrill Edge, the online discount unit of Bank of America Merrill Lynch, has taken this approach online, with a service called Face Retirement. Each decade-jumping image is accompanied by startling cost-of-living projections and suggestions to invest in your golden years. 
Hershfield is currently investigating whether morphed images can help people lose weight. Of course, the way we treat our future self is not necessarily negative: Since we think of our future self as someone else, our own decision making reflects how we treat other people. Where Parfit’s smoking boy endangers the health of his future self with nary a thought, others might act differently. “The thing is, we make sacrifices for people all the time,” says Hershfield. “In relationships, in marriages.” The silver lining of our dissociation from our future self, then, is that it is another reason to practice being good to others. One of them might be you.
Justifies util: in the absence of personal identity, only end states can matter. Shoemaker 99
Shoemaker, David (Dept of Philosophy, U Memphis). “Utilitarianism and Personal Identity.” The Journal of Value Inquiry 33: 183–199, 1999. http://www.csun.edu/~ds56723/jvipaper.pdf
Extreme reductionism might lend support to utilitarianism in the following way. Many people claim that we are justified in maximizing the good in our own lives, but not justified in maximizing the good across sets of lives, simply because each of us is a single, deeply unified person, unified by the further fact of identity, whereas there is no such corresponding unity across sets of lives. But if the only justification for the different treatment of individual lives and sets of lives is the further fact, and this fact is undermined by the truth of reductionism, then nothing justifies this different treatment. There are no deeply unified subjects of experience. What remains are merely the experiences themselves, and so any ethical theory distinguishing between individual lives and sets of lives is mistaken. If the deep, further fact is missing, then there are no unities.
The morally significant units should then be the states people are in at particular times, and an ethical theory that focused on them and attempted to improve their quality, whatever their location, would be the most plausible. Utilitarianism is just such a theory.
Prefer additionally:
1. Policymaking is key to critical thinking, so the role of the ballot is to assess the desirability of an aff policy option via empirical evidence. Harwood 5
(Karey, associate professor in the Department of Philosophy and Religious Studies) “Teaching Bioethics through Participation and Policy-Making” Essays on Teaching Excellence Toward the Best in the Academy Vol. 16, No. 4, 2004-2005 A publication of The Professional and Organizational Development Network in Higher Education AT
Teaching bioethics to undergraduate students in the humanities and social sciences differs from teaching ethics to medical students or residents. One primary difference is that undergraduates are removed from the clinical setting, where a clinically-based case method of teaching is widely practiced and where students can develop their decision-making skills "at the bedside" through the mentoring of more senior physicians. Another difference is that undergraduates are not in training to join a profession, in this case a profession that has developed a fairly stable body of principles that are "applied" to real-life moral dilemmas (Jonsen, Siegler, and Winslade, 2002; Wear, 2002). Instead, as part of a liberal arts education, an undergraduate course in bioethics should aim to prepare students for life as an engaged citizen in a democratic society (Callahan and Bok, 1980; Kohlberg, 1981) by developing skills in critical thinking and encouraging active engagement in the deliberation of issues in the areas of medicine and biotechnology. Critical thinking, most plainly, is the ability to make well-considered judgments.
Critical thinking involves the analysis of concepts and arguments and the interpretation of concrete data or evidence (APA, 1990); but it also requires capacities for self-criticism, moral imagination, and empathy (Momeyer, 2002). It enables the discernment of better and worse arguments or better and worse courses of action, and thus rests on the premise that such judgments of value are possible. It is an essential set of skills, not because it is immediately applicable to a chosen career, but because "wide-awake, careful, thorough habits of thinking" (Dewey, 1933, p. 274) are important in all areas of human life, both individual and social. How to Teach Bioethics One way to foster the development of critical reasoning skills in the undergraduate setting is to provide groups of students with the opportunity to research, analyze, discuss, and propose public policy on emerging topics in bioethics. This type of activity simulates the work of a national bioethics commission and encourages students to view themselves as participants in a significant public debate. For example, a group of students might study stem cell research or international research on AIDS, acquiring enough scientific, medical, and historical background on these topics to be able to identify potential ethical questions. Some questions that might be considered include: Do the benefits of stem cell research justify the use of human embryos? Are all sources of human stem cells morally equivalent? Are the existing safeguards to protect human subjects adequate for international research on AIDS? Should developing countries be able to benefit from AIDS research when their citizens serve as research subjects? Without necessarily working to achieve complete agreement, students try to reach enough of a consensus to propose a policy or regulation. 
A group might decide that allowing stem cell research from "leftover" embryos created in the context of in vitro fertilization is acceptable, for example, but that creating embryos for the sole purpose of research is not. Students must give reasons for their regulations; and, in searching for and articulating these reasons, students are encouraged to examine the moral values and commitments that underlie their positions. An in-class presentation of the group’s work serves as the culminating exercise, and other students are invited to challenge and contribute to the debate about what ought to be done. Students typically relish this opportunity, seeing themselves not as a passive audience to be fed neutral information but as participants in a debate that matters. In other words, they exhibit the traits of engaged citizens. These activities are highly participatory and inquiry-guided, which means the learning is driven by the task of solving a problem: devising a public policy. Students are invested in and motivated by the group’s task and discover together what they need to learn about their topic. Included in this learning process is the integration of abstract ethical theories and concepts — ideally studied throughout the entirety of the course — into the concrete details of the case at hand. It is not a matter of simply "applying" the principle of justice to the topic of international research on AIDS, for example, just for the sake of getting something done (Evans, 2000). Students must ask: what does justice look like in this case? Does conducting an experiment to see how cheaply an individual in a developing country can be treated for AIDS promote justice, as we understand it? In asking these substantive questions, students in an undergraduate bioethics course are engaged in what Callahan calls "foundational" bioethics (Callahan, 1999). They are not merely engaged in means-end reasoning: how best to achieve an already settled goal (Wear, 2002). 
They are examining the goals themselves, and thus considering "a multiplicity of ultimate values" (Momeyer, 2002). Developing a Wide-awake Citizenry As any teacher of undergraduate ethics can attest, this kind of substantive discussion of "ultimate values" or "the good" can be murky territory. The allure of moral relativism is strong and the resources for challenging it seem limited. As Momeyer observed, "Students frequently arrive in our classrooms with very limited ways of morally engaging problematic situations, by, for instance, appealing to religious dogmas or a relentless subjectivism and/or relativism, or by privileging – as well enculturated Americans seemingly must – the exercise of individual autonomy over all other values" (p. 412). Regardless of how one explains the allure of relativism, what is clear is that undergraduates need to develop skills in critical thinking if they are to be able to make the well-considered judgments that are inevitable and necessary in life. One benefit of a simulated bioethics commission is that it directs students’ attention toward a problem of public policy, which is to say a problem of societal significance. Discussing classic cases in medical ethics that focus on an individual patient’s dilemma, such as, famously, whether Dax Cowart’s requests to die after suffering severe burns over most of his body should have been honored by his physicians, provides essential occasions to learn about important concepts like informed consent, competence, and respect for autonomy. Indeed, effective teaching of ethics in any setting arguably requires a dynamic balance between conceptual analysis and concrete engagement of cases. But undergraduates also need opportunities to learn that their critical thinking skills will be needed in shaping the social policies of the future. Why is critical thinking a legitimate and valuable goal? And why is active engagement or participation in shaping social policies important?
As Dewey once argued, the point of education is to teach students to think on their own because conscious thinking and participation are the hallmarks of democratic citizenship. Others have followed Dewey’s pragmatic sensibilities, including the developmental psychologist, Lawrence Kohlberg, whose "just community" schools were an outgrowth of his belief that democratic participation in the making of rules for everyone in a community fosters students’ moral development. The writings of Jürgen Habermas (1995) on discourse ethics have also influenced legions of teachers to examine anew the value of a consensus-seeking dialogue that is widely inclusive and highly participatory. Conclusion If we are to avoid living in an "administered society," where we passively receive what is handed down to us from others, it is important to develop a sense of engagement in the social policies that are made and to practice the critical reasoning skills necessary to make well-considered judgments (Bellah, et al., 1991). Fortunately, continuing developments in medicine and biotechnology offer an abundance of ethical issues to debate. Teaching bioethics in the undergraduate setting is about paying attention to these debates and having a stake in their outcome.
2. Actor specificity: governments are obligated to use util. Goodin 90
Robert Goodin, Professor of Government, University of Essex, Australian National University, “THE UTILITARIAN RESPONSE,” p. 141-2, 1990.
My larger argument turns on the proposition that there is something special about the situation of public officials that makes utilitarianism more probable for them than for private individuals. Before proceeding with the larger argument, I must therefore say what it is about public officials and their situations that makes it both more necessary and more desirable for them to adopt a more credible form of utilitarianism. Consider, first, the argument from necessity.
Public officials are obliged to make their choices under uncertainty, and uncertainty of a very special sort at that. All choices – public and private alike – are made under some degree of uncertainty, of course. But in the nature of things, private individuals will usually have more complete information on the peculiarities of their own circumstances, and on the ramifications that alternative possible choices might have for them. Public officials, in contrast, are relatively poorly informed as to the effects that their choices will have on individuals, one by one. What they typically do know are generalities: averages and aggregates. They know what will happen most often to most people as a result of their various possible choices, but that is all. That is enough to allow public policy-makers to use the utilitarian calculus – assuming they want to use it at all – to choose general rules of conduct.
Accidents
Nuclear power plants contain multiple nonlinear interactions that kill safety systems and guarantee accidents. Perrow 11
Charles Perrow is an emeritus professor of sociology at Yale University and visiting professor at Stanford University.
The author of several books and many articles on organizations, he is primarily concerned with the impact of large organizations on society (Organizing America: Wealth, Power, and the Origins of Corporate Capitalism, Princeton University Press, 2001), and their catastrophic potentials (Normal Accidents: Living with High-Risk Technologies, Princeton University Press, 1999; The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters, Princeton University Press, 2011). November 1, 2011, "Fukushima and the Inevitability of Accidents," Sagepub/Bulletin of the Atomic Scientists, http://bos.sagepub.com/content/67/6/44
In my work on “normal accidents,” I have argued that some complex organizations such as chemical plants, nuclear power plants, nuclear weapons systems, and, to a more limited extent, air transport networks have so many nonlinear system properties that eventually the unanticipated interaction of multiple failures may create an accident that no designer could have anticipated and no operator can understand. Everything is subject to failure: designs, procedures, supplies and equipment, operators, and the environment. The government and businesses know this and design safety devices with multiple redundancies and all kinds of bells and whistles. But nonlinear, unexpected interactions of even small failures can defeat these safety systems. If the system is also tightly coupled, no intervention can prevent a cascade of failures that brings it down. I use the term “normal” because these characteristics are built into the systems; there is nothing one can do about them other than to initiate massive system redesigns to reduce interactive complexity and to loosen coupling. Companies and governments can modularize integrated designs and deconcentrate hazardous material. Actually, though, compared with the prosaic cases previously mentioned, normal accidents are rare.
(Three Mile Island is the only accident in my list that qualifies.) It is much more common for systems with catastrophic potential to fail because of poor regulation, ignored warnings, production pressures, cost cutting, poor training, and so on. All of the organizational faults I have noted have their counterpart in daily life. Like organizations and their leaders, people seek wealth and prestige and a reputation for integrity. In the process, they occasionally find it necessary to be deceitful, engaging in denials and coverups, cheating and fabrication. Everyone has violated regulations, failed to plan ahead, and bungled in crises. But people are not, as individuals, repositories of radioactive materials, toxic substances, and explosives, nor do they sit astride critical infrastructures. Organizations do. The consequences of an individual’s failures can only be catastrophic if they are magnified by organizations. The larger the organizations, the greater the concentration of destructive power. The larger the organizations, the greater the potential for political power that can influence regulations and ignore warnings.
Statistical evidence proves – the probability of a core-melt accident in the next decade is 70 percent. Rose and Sweeting 3/2
Thomas Rose, professor of sensor technology at the Münster University of Applied Sciences; PhD in nuclear physics. Trevor Sweeting, emeritus professor of statistics at University College London; PhD in statistics and has published statistical work in medicine, engineering, computer science and elsewhere. 3-2-2016, "How safe is nuclear power?
A statistical study suggests less than expected: Bulletin of the Atomic Scientists: Vol 72, No 2," Taylor and Francis, http://tandfonline.com/doi/full/10.1080/00963402.2016.1145910?platform=hootsuite
The 2011 Fukushima disaster in Japan suggested once more that severe nuclear accidents could be even more frequent than safety studies had predicted and Feiveson had hoped. So we decided to estimate the probability of a severe accident – that is, a core-melt accident – by relating the number of past core-melt accidents to the total number of years reactors have been operating (i.e. “reactor years”). This type of prediction often runs up against the argument that nuclear operators learn from the past. Therefore we also tried to account for any learning effects in our analysis. We restricted our analysis to accidents related to civil nuclear reactors used for power generation, as arguments about trade-offs for using nuclear technology differ depending on the application. And, because the International Atomic Energy Agency (IAEA) does not distribute comprehensive, long-term reports on nuclear incidents and accidents because of confidentiality agreements with the countries it works with, we have had to use alternative sources for information on nuclear accidents over time. By our calculations, the overall probability of a core-melt accident in the next decade, in a world with 443 reactors, is almost 70 percent. (Because of statistical uncertainty, however, the probability could range from about 28 percent to roughly 95 percent.) The United States, with 104 reactors, has about a 50 percent probability of experiencing one core-melt accident within the next 25 years.
Prefer this analysis:
First, accident risk assessments based on paths towards an accident fail – using past data is the most reliable method – consensus of experts. Rose and Sweeting 2
Thomas Rose, professor of sensor technology at the Münster University of Applied Sciences; PhD in nuclear physics.
Trevor Sweeting, emeritus professor of statistics at University College London; PhD in statistics and has published statistical work in medicine, engineering, computer science and elsewhere. 3-2-2016, "How safe is nuclear power? A statistical study suggests less than expected: Bulletin of the Atomic Scientists: Vol 72, No 2," Taylor and Francis, http://tandfonline.com/doi/full/10.1080/00963402.2016.1145910?platform=hootsuite
In the past, several studies have investigated the probability of a core melt using the probabilistic risk assessment (PRA) method. This determines probability prior to accidents by analyzing possible paths toward a severe accident, rather than using existing data to determine probability empirically. Two studies by the US Nuclear Regulatory Commission (1975, 1990) as well as a German government study (Hörtner 1980) examined seven different cases or reactors. Three calculations resulted in 1 accident in more than 200,000 reactor years, and a further three resulted in 1 accident in 11,000–25,000 reactor years. Only the result for the Zion reactor had an accident rate similar to ours, with 1 accident in 3000 years. After Chernobyl, Islam and Lindgren (1986, 691) published a short note in Nature in which, based on the known accidents (Three Mile Island and Chernobyl) and reactor years (approximately 4000) at the time, they concluded that “…the probability of having one accident every two decades is more than 95 percent.” Regarding PRA, they wrote: “Our view is that this method should be replaced by risk assessment using the observed data.” This sparked an intensive discussion of statistical issues in the following year (Edwards 1986; Schwartz 1986; Fröhner 1987; Chow and Oliver 1987; Edwards 1987); however, there was agreement on the substantive conclusions of Islam and Lindgren.
Second, there’s no learning effect – humans keep making the same mistakes.
Rose and Sweeting 3
Thomas Rose, professor of sensor technology at the Münster University of Applied Sciences; PhD in nuclear physics. Trevor Sweeting, emeritus professor of statistics at University College London; PhD in statistics and has published statistical work in medicine, engineering, computer science and elsewhere. 3-2-2016, "How safe is nuclear power? A statistical study suggests less than expected: Bulletin of the Atomic Scientists: Vol 72, No 2," Taylor and Francis, http://tandfonline.com/doi/full/10.1080/00963402.2016.1145910?platform=hootsuite
We also wanted to see whether accidents become less frequent with more operational experience. But simply analyzing the number of severe accidents against reactor years is not very illuminating because, luckily, these accidents are rather rare. So we examined the relationship between the cumulative number of all accidents, from severe to minor ones, and cumulative reactor years. The accident rate is then estimated as the ratio of the cumulative number of accidents to cumulative reactor years. If the probability of an accident remained constant over time, then a graph of the above accident-rate estimates against reactor years would exhibit no trend, whereas a learning effect would result in a decreasing accident probability and the graph would exhibit a decreasing trend. We began by plotting the data from the Guardian list, with a few exclusions. The graph shows a high accident rate at the beginning because of one accident in Russia in 1957. The accident rate then drops because the following years were accident-free. After around 500 reactor years, the plot appears to stabilize, varying around a constant value. This is confirmed by a detailed statistical analysis, which produces a probability for a (minor or major) accident in a nuclear power plant of about 1 in 1000 reactor years and shows no evidence of a learning effect.
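The headline probabilities in these cards follow from the estimated failure rate by a standard constant-rate calculation. A minimal sketch, assuming accidents arrive as a Poisson process at the authors' point estimate of 1 core melt per 3704 reactor years (the function name and rounding are illustrative, not from the article):

```python
import math

# Rose and Sweeting's point estimate: 1 core-melt accident per 3704 reactor years.
CORE_MELT_RATE = 1 / 3704  # accidents per reactor year

def p_at_least_one_accident(reactors: int, years: int) -> float:
    """Probability of at least one core melt across a fleet over a time span,
    assuming a constant accident rate (Poisson process)."""
    reactor_years = reactors * years
    return 1 - math.exp(-CORE_MELT_RATE * reactor_years)

# 443 reactors worldwide over the next decade -> roughly 70 percent
print(round(p_at_least_one_accident(443, 10), 2))  # -> 0.7
# 104 US reactors over the next 25 years -> roughly 50 percent
print(round(p_at_least_one_accident(104, 25), 2))  # -> 0.5
```

The wide 28–95 percent band the card mentions reflects statistical uncertainty in the rate itself, not in this formula; plugging the upper and lower rate estimates into the same expression would bracket the point estimate.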
Third, the authors analyzed every core-melt accident, despite organizations like the IAEA hiding information. Rose and Sweeting 4
Thomas Rose, professor of sensor technology at the Münster University of Applied Sciences; PhD in nuclear physics. Trevor Sweeting, emeritus professor of statistics at University College London; PhD in statistics and has published statistical work in medicine, engineering, computer science and elsewhere. 3-2-2016, "How safe is nuclear power? A statistical study suggests less than expected: Bulletin of the Atomic Scientists: Vol 72, No 2," Taylor and Francis, http://tandfonline.com/doi/full/10.1080/00963402.2016.1145910?platform=hootsuite
After the Fukushima disaster, the authors analyzed all past core-melt accidents and estimated a failure rate of 1 per 3704 reactor years. This rate indicates that more than one such accident could occur somewhere in the world within the next decade. The authors also analyzed the role that learning from past accidents can play over time. This analysis showed few or no learning effects occurring, depending on the database used. Because the International Atomic Energy Agency (IAEA) has no publicly available list of nuclear accidents, the authors used data compiled by the Guardian newspaper and the energy researcher Benjamin Sovacool. The results suggest that there are likely to be more severe nuclear accidents than have been expected and support Charles Perrow’s “normal accidents” theory that nuclear power reactors cannot be operated without major accidents. However, a more detailed analysis of nuclear accident probabilities needs more transparency from the IAEA. Public support for nuclear power cannot currently be based on full knowledge simply because important information is not available.
Advocacy
I advocate that all countries prohibit the production of nuclear power that is currently connected to the electrical grid via shutdown.
To clarify, this would include nuclear power plants, but would exclude things like research reactors. I defend action via the federal governments of countries. I’ll spec to whatever in CX so long as it doesn’t screw solvency -- but only I determine what that is.
The plan solves – shutdowns are immediate and on time. Welle 15
Deutsche Welle, Germany’s international broadcaster, 6-29-15, "How far along is Germany's nuclear phase-out?," DW, http://www.dw.com/en/how-far-along-is-germanys-nuclear-phase-out/a-18547065
Immediately after Fukushima, eight of 17 functioning nuclear plants were shut down, and the June decision established a timeline of taking the remaining plants offline by 2022. This past weekend, at midnight on Saturday (25.06.2015), the next shutdown took place: The Grafenrheinfeld power plant in Bavaria has been removed from the power grid, nearly exactly on schedule.
Advantage 1 is Structural Violence
Accidents cause structural violence – marginalized people bear the brunt of them. Cousins et al. 13
Elicia Cousins is a doctoral student in sociology and a research assistant in SSEHRI. She is from Tokyo, Japan and received her BA in Environmental Studies from Carleton College in Minnesota; with Claire Karban, Fay Li, and Marianna Zapanta, 2013 (the study references other studies from 2013, implying it is from at least 2013, but it does not list a formal date), "Nuclear Power and Environmental Justice: A Mixed-Methods Study of Risk, Vulnerability, and the Victim Experience," Environmental Studies Comprehensive Project, https://apps.carleton.edu/curricular/ents/assets/Cousins_Karban_Li_Zapanta.pdf
The range of potential impacts on biological health, wellbeing, and health-related behavior is vast for anyone involved, regardless of age, gender, race, or socioeconomic status. However, we suggest that such stress is amplified in certain populations of heightened vulnerability.
For example, the stress of parents (especially mothers) with preschool age children was particularly evident throughout the present analysis, a well-documented trend associated with technological disasters (Havenaar et al. 1996; Bromet and Schulberg 1986). Children are also particularly sensitive to physical and mental harm (Peek 2008; Bromet et al. 2000), a trend reflected in a recent survey of child evacuees from Fukushima that revealed stress levels double the Japanese average (Brumfiel 2013). A similar trend is evident in the United States, where low-income populations and people of color may encounter such stressors at higher levels and face greater difficulties in coping with them. Low-income populations are more likely to lack financial resources (Cutter 2006) and expansive social networks to rely upon (Dominguez and Watkins 2003) during emergencies. The differential response and recovery to Hurricane Katrina is a telling example of such trends: while those with resources left before the hurricane arrived, those without resources (mainly the poor, African American, elderly, or residents without private cars) had to remain and deal with the oncoming disaster (Cutter 2006). Low socioeconomic status is also associated with lower educational attainment, which constrains understanding of and access to disaster warnings and information on recovery (Ibid.). It follows that such populations would be more prone to stress from feelings of uncertainty.
No amount of radiation is good – it kills – consensus and empirics. Mushak 7
Paul Mushak, magna cum laude at the University of Scranton, Ph.D.
in metalloorganic/organic chemistry and biochemistry at the University of Florida, postdoctoral work as a fellow in the Department of Molecular Biophysics and Biochemistry in the School of Medicine at Yale University, was on the faculty of the University of North Carolina School of Medicine in the Department of Pathology from 1971 to 1985, and was an adjunct professor from 1985 to 1993. From 1995 to 2010, he was a member of the Montefiore Medical Center-Second Medical University of Shanghai, China Collaborating Centers for Prevention of Childhood Lead Poisoning and a visiting professor of Pediatric Environmental Health, Department of Pediatrics. April 2007, "Hormesis and Its Place in Nonmonotonic Dose–Response Relationships: Some Scientific Reality Checks," Environmental Health Perspectives, http://pubmedcentralcanada.ca/pmcc/articles/PMC1852676/
Radiologic hormesis. Some proponents of radiologic hormesis hold that the linear no-threshold (LNT) dose-response relationship for radiologic carcinogenesis in humans and experimental animals is no longer tenable and that radiologic hormesis should be the prevailing model (e.g., Calabrese 2005b; Pollycove and Feinendegen 2001). However, research into radiologic carcinogenesis in humans continues to feed a huge epidemiologic database documenting the persistence of radiologic carcinogenicity with lower dose. Little convincing evidence exists to support human radiologic hormetic responses nullifying the LNT model of low-dose carcinogenesis. Current thinking also is mixed about a conceptual context for human radiologic hormesis (Johansson 2003; Pollycove and Feinendegen 2001; Upton 2001). Recent expert consensus treatises on low-dose carcinogenicity of ionizing radiation in humans present analyses largely supporting the LNT model.
The most recent report by the NAS/NRC on the biological effects of ionizing radiation, BEIR VII-Phase 2 (NAS/NRC 2006), endorsed preservation of the LNT model, that is, cancer risk proceeds in a linear fashion at lower doses without a threshold. Appendix D of the report covers radiation hormesis. The report concluded that animal and cell studies suggesting benefits or threshold to harm from ionizing radiation are "not compelling" and that, at present, any assumption of net health benefits of radiation hormesis over detrimental impacts at the same dose is unwarranted. The December 2004 draft report of the International Commission on Radiation Protection (ICRP 2004), the report of the United Nations Scientific Committee on the Effects of Atomic Radiation (2000), and the report of the National Council on Radiation Protection and Measurements (NCRP 2001) have all concluded the LNT model remains valid. The 2005 report of the joint French National Academy of Medicine and the Institute of France Academy of Sciences on low dose-carcinogenic effect relationships for ionizing radiation (Academie Nationale de Medecine 2005) raises doubts about use of, or recommendations for, the LNT model at low (<100 mSv) and very low (<10 mSv) doses. The French report refers to radiation hormesis only in passing and in narrow context. Several recent epidemiologic studies are consistent with an LNT dose response. The Techa River study in the Southern Urals region of Russia (Krestinina et al. 2005) reported individualized risk estimates for excess cancers from radioactivity exposures traced to a weapons plant leak in the 1950s. The extended cohort, with 29,873 people born between 1950 and 1960, provided strong evidence of low-dose radiologic carcinogenesis. Cardis et al. (2005), in their studies of 400,000 nuclear plant workers in 15 countries, concluded that 1-2 percent of cancer deaths among the cohort may be due to radiation.
The results for the nuclear workers produced conclusions similar to those of the Techa River findings and both articles support the ICRP standard of 20 mSv/year for occupational protection.
Accidents trap the poor who cannot afford to leave, and those who can leave suffer huge emotional stress. Cousins et al. 13
Elicia Cousins is a doctoral student in sociology and a research assistant in SSEHRI. She is from Tokyo, Japan and received her BA in Environmental Studies from Carleton College in Minnesota; with Claire Karban, Fay Li, and Marianna Zapanta, 2013 (the study references other studies from 2013, implying it is from at least 2013, but it does not list a formal date), "Nuclear Power and Environmental Justice: A Mixed-Methods Study of Risk, Vulnerability, and the Victim Experience," Environmental Studies Comprehensive Project, https://apps.carleton.edu/curricular/ents/assets/Cousins_Karban_Li_Zapanta.pdf
Technological disasters often involve the release of toxins into the environment, necessitating the displacement of residents of newly contaminated regions. First and foremost, this involves the stress arising from new financial burdens, and the need for securing new housing and employment. However, it also involves the difficulties of adjusting to the new environment and to separation from friends, relatives, and familiar social networks. In the free-response section of the NAIIC survey, 334 respondents (4 percent) wrote of constant stress due to an unfamiliar environment and prolonged refugee life. Another 24 (0.3 percent) respondents explicitly mentioned the difficulties of building new relationships and getting along with people in their new environment, and feeling isolated and alone. One woman from Fukushima explained her feeling of being in limbo: I’m getting used to life here in Kyoto, but I can’t help but feel as though it’s just a temporary, almost “borrowed” life.
I don’t have relatives or close friends in Kyoto, and it’s hard to feel like this is where I really should be, and that my feet are firmly on the ground (Fukushima City Happy Island Newspaper). Housing, job, and financial uncertainty can also serve as barriers to evacuation, heightening the stress associated with the inability to evacuate. In Japan, only areas in which cumulative radiation exposure is projected to reach 20 mSv/year have been evacuated by the government, leaving residents of other areas to decide whether to evacuate “voluntarily” without government compensation. As a result, many who wish to evacuate cannot afford to; according to a 2011 survey by Friends of the Earth Japan, financial and employment uncertainty were the main barriers to evacuation, and this continues to be true (Mitsuta interview). In many cases the mother and children have evacuated for the sake of the children’s health, while the father remains in Fukushima to continue his job and earn money. In the free-response section of the NAIIC survey, 290 respondents (4%) wrote about how much they missed seeing members of their family. 52 - 53 - 54 -Advantage 2 is BioD 55 -Nuclear power plants deck biodiversity from accidents and waste: laundry list, empirics, and previous literature. Kabasakal and Albayrak 11. 56 -Paper presented November 17-20, 2011, at the VI. International Symposium on Ecology and Environmental Problems. Bekir Kabasakal, Department of Biology, Graduate School of Natural and Applied Sciences, Mehmet Akif Ersoy University, and Tamer Albayrak, Department of Biology, Faculty of Science and Art, Mehmet Akif Ersoy University. 
"Effects of Nuclear Power Plant Accidents on Biodiversity and Awareness of Potential Nuclear Accident Risk near the Eastern Border of Turkey," Fresenius Environmental Bulletin, http://www.academia.edu/2303350/Effects_of_Nuclear_Power_Plant_Accidents_on_Biodiversity_and_Awareness_of_Potential_Nuclear_Accident_Risk_near_the_Eastern_Border_of_Turkey 57 -Previous studies indicated that NPP accidents cause permanent damage to biodiversity. Recent publications indicate that if an accident happens at MNPP, Turkey, Azerbaijan, Georgia, and Iran might be influenced. For this reason, an urgent action plan and the necessary precautions for this possible catastrophe are needed. KEYWORDS: Nuclear power plant, nuclear accident, nuclear risk, Metsamor, Metsamor Nuclear Power Plant * Corresponding author 1 INTRODUCTION Although nuclear power plants (NPPs) are designed to withstand earthquakes or other natural disasters and to shut down safely in the event of major earth movement, no nuclear facility is 100% safe, due to a possible meltdown of the reactor (from loss of coolant water leading to over-heating). If an accident occurs, it would create a major public hazard and may cause human fatalities and biodiversity loss [1]. Moreover, nuclear reactors produce toxic waste, which is highly radioactive and can remain in the environment for several hundred years. For these reasons, NPPs are highly risky energy resources, even though they do not produce CO2 and other greenhouse gases. The severity of any nuclear accident is measured on the International Nuclear Event Scale (INES) from 0 to 7 (0 means no consequences and 7 is a major accident; levels 1-3 are called incidents and levels 4-7 are called accidents). Eight accidents have been reported at level 4 or above so far (Table 1). Two of those were major accidents (INES level 7): Chernobyl (1986) and Fukushima (2011). The Chernobyl nuclear power plant disaster is the worst nuclear disaster [2]. 
TABLE 1 - Nuclear power plant accidents and their effects (Date / Location / INES level*): 2011, Fukushima, Japan, 7; 1999, Tokaimura, Japan, 4; 1986, Chernobyl, Ukraine, 7; 1980, Saint-Laurent A2, France, 4; 1979, Three Mile Island, USA, 5; 1969, Saint-Laurent A1, France, 4; 1957, Mayak, Russia, 6; 1957, Windscale, UK, 5. *INES event scale: level 7 = major accident; level 6 = serious accident; level 5 = accident with wider consequences; level 4 = accident with local consequences; level 3 = serious incident; 2 = incident; 1 = anomaly; level 0 = no safety significance. © by PSP Volume 21 – No 11b, 2012, Fresenius Environmental Bulletin. 2 EFFECTS OF NUCLEAR POWER PLANT ACCIDENTS ON BIODIVERSITY Previous nuclear accidents showed that right after an accident happens, vast amounts of radioactive material harmful to living organisms, such as Iodine-131 and Caesium-137 (Cesium-137), are transported into the atmosphere [3]. Once the radioactive materials are released, all living organisms can be influenced in several ways [3, 4]. For instance, people can be exposed in two ways: externally, through irradiation from the deposited material itself or from any material resuspended into the atmosphere; and internally, through inhalation of radioactive material in the air and the transfer of material through the terrestrial and aquatic environment into food and water [5]. Most information about the effects of nuclear accidents on biodiversity and humans was gained from the Chernobyl accident. On-going studies on the consequences of the Fukushima accident have been showing that the marine ecosystem will be affected more than the terrestrial ecosystems [6]. The effects of NPP accidents on biodiversity can be divided into two main topics: physiological and genetic effects of radiation, and ecological consequences of radiation [7]. 
Physiological and genetic effects of radiation: increased morphologic, physiologic, and genetic disorders; increased mutation rates and developmental abnormalities; increase in general oncological morbidity; accelerated aging; reduction in body antioxidant levels [8-12]. Ecological consequences of radiation: reduced adult survival and reproduction, and reduction in species abundance [13-16]. 58 -Accidents spread a lot. Max Planck Institute for Chemistry 12 Cites Lelieveld. 59 -Max Planck Institute for Chemistry, a non-governmental, non-profit organization dedicated to chemistry, citing Johannes Lelieveld, PhD in Physics and Astronomy, 5-22-2012, "Probability of contamination from severe nuclear reactor accidents is higher than expected," https://www.mpg.de/5809418/reactor_accidents 60 -Catastrophic nuclear accidents such as the core meltdowns in Chernobyl and Fukushima are more likely to happen than previously assumed. Based on the operating hours of all civil nuclear reactors and the number of nuclear meltdowns that have occurred, scientists at the Max Planck Institute for Chemistry in Mainz have calculated that such events may occur once every 10 to 20 years (based on the current number of reactors) — some 200 times more often than estimated in the past. The researchers also determined that, in the event of such a major accident, half of the radioactive caesium-137 would be spread more than 1,000 kilometres away from the nuclear reactor. Their results show that Western Europe is likely to be contaminated about once in 50 years by more than 40 kilobecquerels of caesium-137 per square meter. According to the International Atomic Energy Agency, an area is defined as being contaminated with radiation from this amount onwards. In view of their findings, the researchers call for an in-depth analysis and reassessment of the risks associated with nuclear power plants. Global risk of radioactive contamination. 
The map shows the annual probability in percent of radioactive contamination by more than 40 kilobecquerels per square meter. In Western Europe the risk is around two percent per year. Image Omitted The reactor accident in Fukushima has fuelled the discussion about nuclear energy and triggered Germany's exit from its nuclear power program. It appears that the global risk of such a catastrophe is higher than previously thought, according to a study carried out by a research team led by Jos Lelieveld, Director of the Max Planck Institute for Chemistry in Mainz: "After Fukushima, the prospect of such an incident occurring again came into question, and whether we can actually calculate the radioactive fallout using our atmospheric models." According to the results of the study, a nuclear meltdown in one of the reactors in operation worldwide is likely to occur once in 10 to 20 years. Currently, there are 440 nuclear reactors in operation, and 60 more are planned. To determine the likelihood of a nuclear meltdown, the researchers applied a simple calculation. They divided the operating hours of all civilian nuclear reactors in the world, from the commissioning of the first up to the present, by the number of reactor meltdowns that have actually occurred. The total operating time comes to 14,500 reactor years; the number of reactor meltdowns comes to four—one in Chernobyl and three in Fukushima. This translates into one major accident, as defined by the International Nuclear Event Scale (INES), every 3,625 reactor years. Even if this result is conservatively rounded to one major accident every 5,000 reactor years, the risk is 200 times higher than the estimate for catastrophic, non-contained core meltdowns made by the U.S. Nuclear Regulatory Commission in 1990. The Mainz researchers did not distinguish between ages and types of reactors, or whether they are located in regions of enhanced risk, for example from earthquakes. 
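The frequency arithmetic described in the card is simple enough to check directly. A minimal sketch (figures taken from the evidence text; the implied NRC baseline is derived here from the stated 200-fold factor rather than quoted directly, and the per-year interval is an illustrative reading of the 440-reactor figure):

```python
# Back-of-envelope check of the Max Planck meltdown-frequency estimate.
# All inputs come from the press-release text; the NRC baseline below is
# inferred from the stated 200x factor, not quoted in the source.

total_reactor_years = 14_500      # cumulative civilian reactor operation
meltdowns = 4                     # Chernobyl (1) + Fukushima (3)

years_per_accident = total_reactor_years / meltdowns
print(years_per_accident)         # 3625.0 reactor years per major accident

conservative = 5_000              # the authors' conservative rounding
nrc_baseline = conservative * 200 # implied NRC (1990) figure in reactor years
print(nrc_baseline)               # 1000000

# With 440 reactors operating in parallel, the expected interval between
# major accidents worldwide under the conservative figure:
reactors = 440
print(round(conservative / reactors, 1))  # 11.4 years
```

Under the conservative 5,000-reactor-year figure, the implied worldwide interval of about 11 years sits inside the "once every 10 to 20 years" range the press release reports.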
After all, nobody had anticipated the reactor catastrophe in Japan. 25 percent of the radioactive particles are transported further than 2,000 kilometres. Subsequently, the researchers determined the geographic distribution of radioactive gases and particles around a possible accident site using a computer model that describes the Earth's atmosphere. The model calculates meteorological conditions and flows, and also accounts for chemical reactions in the atmosphere. The model can compute the global distribution of trace gases, for example, and can also simulate the spreading of radioactive gases and particles. To approximate the radioactive contamination, the researchers calculated how the particles of radioactive caesium-137 (137Cs) disperse in the atmosphere, where they deposit on the Earth’s surface and in what quantities. The 137Cs isotope is a product of the nuclear fission of uranium. It has a half-life of 30 years and was one of the key elements in the radioactive contamination following the disasters of Chernobyl and Fukushima. The computer simulations revealed that, on average, only eight percent of the 137Cs particles are expected to deposit within an area of 50 kilometres around the nuclear accident site. Around 50 percent of the particles would be deposited outside a radius of 1,000 kilometres, and around 25 percent would spread even further than 2,000 kilometres. These results underscore that reactor accidents are likely to cause radioactive contamination well beyond national borders. 61 -Biodiversity is on the brink of global collapse – that causes extinction and is a threat multiplier. Torres 2/10 62 -Phil Torres is an author, Affiliate Scholar at the Institute for Ethics and Emerging Technologies, and freelance writer with publications in Salon, Skeptic, the Humanist, American Atheist, The Progressive, Humanity+, and many others. 
His forthcoming book is called The End: What Science and Religion Tell Us About the Apocalypse (Pitchstone Publishing), 2-10-16, "Biodiversity Loss and the Doomsday Clock: An Invisible Disaster Almost No One is Talking About," Common Dreams, http://www.commondreams.org/views/2016/02/10/biodiversity-loss-and-doomsday-clock-invisible-disaster-almost-no-one-talking-about 63 -Everywhere one looks, the biosphere is wilting — and a single bipedal species with large brains and opposable thumbs is almost entirely responsible for this worsening plight. If humanity continues to prune back the Tree of Life with reckless abandon, we could be forced to confront a global disaster of truly unprecedented proportions. Along these lines, a 2012 article published in Nature and authored by over twenty scientists claims that humanity could be teetering on the brink of a catastrophic, irreversible collapse of the global ecosystem. According to the paper, there could be “tipping points” — also called “critical thresholds” — lurking in the environment that, once crossed, could initiate radical and sudden changes in the biosphere. Thus, an event of this sort could be preceded by little or no warning: everything might look more or less okay, until the ecosystem is suddenly in ruins. We must, moving forward, never forget that just as we’re minds embodied, so too are we bodies environed, meaning that if the environment implodes under the weight of civilization, then civilization itself is doomed. While the threat of nuclear weapons deserves serious attention from political leaders and academics, as the Bulletin correctly observes, it’s even more imperative that we focus on the broader “contextual problems” that could inflate the overall probability of wars and terrorism in the future. Climate change and biodiversity loss are both conflict multipliers of precisely this sort, and each is a contributing factor that’s exacerbating the other. 
If we fail to make these threats a top priority in 2016, the likelihood of nuclear weapons — or some other form of emerging technology, including biotechnology and artificial intelligence — being used in the future will only increase. Perhaps there’s still time to avert the sixth mass extinction or a sudden collapse of the global ecosystem. But time is running out — the doomsday clock is ticking. 64 -One species going extinct triggers the brink. Diner ’94 Brackets 65 -David N. Diner, 1994, Judge Advocate General's Corps of US Army, Military Law Review, Winter, 143 Mil. L. Rev. 161, l/n 66 -In past mass extinction episodes, as many as ninety percent of the existing species perished, and yet the world moved forward, and new species replaced the old. So why should the world be concerned now? The prime reason is the world's survival. Like all animal life, humans live off of other species. At some point, the number of species could decline to the point at which the ecosystem fails, and then humans also would become extinct. No one knows how many species the world needs to support human life, and to find out -- by allowing certain species to become extinct -- would not be sound policy. In addition to food, species offer many direct and indirect benefits to mankind. n68 2. Ecological Value. -- Ecological value is the value that species have in maintaining the environment. Pest, n69 erosion, and flood control are prime benefits certain species provide to man. Plants and animals also provide additional ecological services -- pollution control, n70 oxygen production, sewage treatment, and biodegradation. n71 3. Scientific and Utilitarian Value. -- Scientific value is the use of species for research into the physical processes of the world. n72 Without plants and animals, a large portion of basic scientific research would be impossible. Utilitarian value is the direct utility humans draw from plants and animals. 
n73 Only a fraction of the *172 earth's species have been examined, and mankind may someday desperately need the species that it is exterminating today. To accept that the snail darter, harelip sucker, or Dismal Swamp southeastern shrew n74 could save mankind may be difficult for some. Many, if not most, species are useless to man in a direct utilitarian sense. Nonetheless, they may be critical in an indirect role, because their extirpations could affect a directly useful species negatively. In a closely interconnected ecosystem, the loss of a species affects other species dependent on it. n75 Moreover, as the number of species declines, the effect of each new extinction on the remaining species increases dramatically. n76 4. Biological Diversity. -- The main premise of species preservation is that diversity is better than simplicity. n77 As the current mass extinction has progressed, the world's biological diversity generally has decreased. This trend occurs within ecosystems by reducing the number of species, and within species by reducing the number of individuals. Both trends carry serious future implications. Biologically diverse ecosystems are characterized by a large number of specialist species, filling narrow ecological niches. These ecosystems inherently are more stable than less diverse systems. "The more complex the ecosystem, the more successfully it can resist a stress. . . . like a net, in which each knot is connected to others by several strands, such a fabric can resist collapse better than a simple, unbranched circle of threads -- which if cut anywhere breaks down as a whole." n79 By causing widespread extinctions, humans have artificially simplified many ecosystems. As biologic simplicity increases, so does the risk of ecosystem failure. The spreading Sahara Desert in Africa, and the dustbowl conditions of the 1930s in the United States are relatively mild examples of what might be expected if this trend continues. 
Theoretically, each new animal or plant extinction, with all its dimly perceived and intertwined effects, could cause total ecosystem collapse and human extinction. Each new extinction increases the risk of disaster. Like a mechanic removing, one by one, the rivets from an aircraft's wings, mankind may be edging closer to the abyss. - EntryDate
-
... ... @@ -1,1 +1,0 @@ 1 -2016-10-29 23:40:59.0 - Judge
-
... ... @@ -1,1 +1,0 @@ 1 -Adam Torson - Opponent
-
... ... @@ -1,1 +1,0 @@ 1 -Greenhill SK - ParentRound
-
... ... @@ -1,1 +1,0 @@ 1 -6 - Round
-
... ... @@ -1,1 +1,0 @@ 1 -6 - Team
-
... ... @@ -1,1 +1,0 @@ 1 -West Nelson Aff - Title
-
... ... @@ -1,1 +1,0 @@ 1 -Accidents Aff V2 Modified Advocacy - Tournament
-
... ... @@ -1,1 +1,0 @@ 1 -Meadows
- Caselist.CitesClass[5]
-
- Cites
-
... ... @@ -1,46 +1,0 @@ 1 -Framework 2 -The practices that make critical thinking possible are under assault by militarization – neo-Nazis and a politics of disposability smother critical thought and marginalize the oppressed. Thus, the role of the ballot is to vote for the debater who best combats the militarized state. Giroux 15 3 -Henry Giroux, a former high-school social studies teacher with positions at Boston University, Miami University, and Penn State University; Global TV Network Chair in English and Cultural Studies at McMaster University. He has published more than 50 books and more than 300 academic articles, and is published widely throughout education and cultural studies literature. "The curse of totalitarianism and the challenge of critical pedagogy" http://philosophersforchange.org/2015/10/13/the-curse-of-totalitarianism-and-the-challenge-of-critical-pedagogy/ 4 -The forces of free-market fundamentalism are on the march, ushering in a terrifying horizon of what Hannah Arendt once called “dark times.” Across the globe, the tension between democratic values and market fundamentalism has reached a breaking point.1 The social contract is under assault, neo-Nazism is on the rise, right-wing populism is propelling extremist political candidates and social movements into the forefront of political life, anti-immigrant sentiment is now wrapped in the poisonous logic of nationalism and exceptionalism, racism has become a mark of celebrated audacity and a politics of disposability comes dangerously close to its endgame of extermination for those considered excess. Under such circumstances, it becomes frightfully clear that the conditions for totalitarianism and state violence are still with us, smothering critical thought, social responsibility, the ethical imagination and politics itself. 
As Bill Dixon observes: The totalitarian form is still with us because the all too protean origins of totalitarianism are still with us: loneliness as the normal register of social life, the frenzied lawfulness of ideological certitude, mass poverty and mass homelessness, the routine use of terror as a political instrument, and the ever growing speeds and scales of media, economics, and warfare.2 In the United States, the extreme right in both political parties no longer needs the comfort of a counterfeit ideology in which appeals are made to the common good, human decency and democratic values. On the contrary, power is now concentrated in the hands of relatively few people and corporations while power is global and free from the limited politics of the democratic state. In fact, the state for all intents and purposes has become the corporate state. Dominant power is now all too visible and the policies, practices and wrecking ball it has imposed on society appear to be largely unchecked. Any compromising notion of ideology has been replaced by a discourse of command and certainty backed up by the militarization of local police forces, the surveillance state and all of the resources brought to bear by a culture of fear and a punishing state aligned with the permanent war on terror. Informed judgment has given way to a corporate-controlled media apparatus that celebrates the banality of balance and the spectacle of violence, all the while reinforcing the politics and value systems of the financial elite.3 Following Arendt, a dark cloud of political and ethical ignorance has descended on the United States creating both a crisis of memory and agency.4 Thoughtlessness has become something that now occupies a privileged, if not celebrated, place in the political landscape and the mainstream cultural apparatuses. 
A new kind of infantilism and culture of ignorance now shapes daily life as agency devolves into a kind of anti-intellectual foolishness evident in the babble of banality produced by Fox News, celebrity culture, schools modeled after prisons and politicians who support creationism, argue against climate change and denounce almost any form of reason. Education is no longer viewed as a public good but a private right, just as critical thinking is devalued as a fundamental necessity for creating an engaged and socially responsible populace. Politics has become an extension of war, just as systemic economic uncertainty and state-sponsored violence increasingly find legitimation in the discourses of privatization and demonization, which promote anxiety, moral panics and fear, and undermine any sense of communal responsibility for the well-being of others. Too many people today learn quickly that their fate is solely a matter of individual responsibility, irrespective of wider structural forces. This is a much promoted hypercompetitive ideology with a message that surviving in a society demands reducing social relations to forms of social combat. People today are expected to inhabit a set of relations in which the only obligation is to live for one’s own self-interest and to reduce the responsibilities of citizenship to the demands of a consumer culture. Yet, there is more at work here than a flight from social responsibility, if not politics itself. Also lost is the importance of those social bonds, modes of collective reasoning, public spheres and cultural apparatuses crucial to the formation of a sustainable democratic society. With the return of the Gilded Age and its dream worlds of consumption, privatization and deregulation, both democratic values and social protections are at risk. At the same time, the civic and formative cultures that make such values and protections central to democratic life are in danger of being eliminated altogether. 
As market mentalities and moralities tighten their grip on all aspects of society, democratic institutions and public spheres are being downsized, if not altogether disappearing. As these institutions vanish – from public schools to health-care centers – there is also a serious erosion of the discourses of community, justice, equality, public values and the common good. One consequence is a society stripped of its inspiring and energizing public spheres and the “thick mesh of mutual obligations and social responsibilities to be found in” any viable democracy.5 This grim reality marks a failure in the power of the civic imagination, political will and open democracy.6 It is also part of a politics that strips the social of any democratic ideals and undermines any understanding of higher education as a public good and pedagogy as an empowering practice, a practice that acts directly upon the conditions that bear down on our lives in order to change them when necessary. At a time when the public good is under attack and there seems to be a growing apathy toward the social contract, or any other civic-minded investment in public values and the larger common good, education has to be seen as more than a credential or a pathway to a job. It has to be viewed as crucial to understanding and overcoming the current crisis of agency, politics and historical memory faced by many young people today. One of the challenges facing the current generation of educators and students is the need to reclaim the role that education has historically played in developing critical literacies and civic capacities. There is a need to use education to mobilize students to be critically engaged agents, attentive to addressing important social issues and being alert to the responsibility of deepening and expanding the meaning and practices of a vibrant democracy. At the heart of such a challenge is the question of what education should accomplish in a democracy. 
What work do educators have to do to create the economic, political and ethical conditions necessary to endow young people with the capacities to think, question, doubt, imagine the unimaginable and defend education as essential for inspiring and energizing the people necessary for the existence of a robust democracy? In a world in which there is an increasing abandonment of egalitarian and democratic impulses, what will it take to educate young people to challenge authority and in the words of James Baldwin “rob history of its tyrannical power, and illuminate that darkness, blaze roads through that vast forest, so that we will not, in all our doing, lose sight of its purpose, which is after all, to make the world a more human dwelling place”?7 What role might education and critical pedagogy have in a society in which the social has been individualized, emotional life collapses into the therapeutic and education is relegated to either a private affair or a kind of algorithmic mode of regulation in which everything is reduced to a desired measurable economic outcome? Feedback loops now replace politics and the concept of progress is defined through a narrow culture of metrics, measurement and efficiency.8 In a culture drowning in a new love affair with empiricism and data, that which is not measurable withers. Lost here are the registers of compassion, care for the other, the radical imagination, a democratic vision and a passion for justice. 
In its place emerges what Francisco Goya, in one of his engravings, termed “The Sleep of Reason Produces Monsters.” Goya’s title is richly suggestive, particularly about the role of education and pedagogy in compelling students to be able to recognize, as my colleague David Clark points out, “that an inattentiveness to the never-ending task of critique breeds horrors: the failures of conscience, the wars against thought, and the flirtations with irrationality that lie at the heart of the triumph of every-day aggression, the withering of political life, and the withdrawal into private obsessions.”9 Given the multiple crises that haunt the current historical conjuncture, educators need a new language for addressing the changing contexts and issues facing a world in which there is an unprecedented convergence of resources – financial, cultural, political, economic, scientific, military and technological – that are increasingly used to concentrate powerful and diverse forms of control and domination. Such a language needs to be political without being dogmatic and needs to recognize that pedagogy is always political because it is connected to the struggle over agency. In this instance, making the pedagogical more political means being vigilant about those very “moments in which identities are being produced and groups are being constituted, or objects are being created.”10 At the same time it means educators need to be attentive to those practices in which critical modes of agency and particular identities are being denied. For example, the Tucson Unified School District board not only eliminated the famed Mexican-American studies program, but also banned a number of Chicano and Native American books it deemed dangerous. The ban included Shakespeare’s play The Tempest, and Pedagogy of the Oppressed by the famed Brazilian educator Paulo Freire. 
This act of censorship provides a particularly disturbing case of the war that is being waged in the United States against not only young people marginalized by race and class, but also against the very spaces and pedagogical practices that make critical thinking possible. Such actions suggest the need for faculty to develop forms of critical pedagogy that not only inspire and energize. They should also be able to challenge a growing number of anti-democratic practices and policies while resurrecting a radical democratic project that provides the basis for imagining a life beyond a social order immersed in inequality, degradation of the environment and the elevation of war and militarization to national ideals. Under such circumstances, education becomes more than an obsession with accountability schemes, an audit culture, market values and an unreflective immersion in the crude empiricism of a data-obsessed, market-driven society. It becomes part of a formative culture in which thoughtlessness prevails, providing the foundation for the curse of totalitarianism. At a time of increased repression, it is all the more crucial for educators to reject the notion that higher education is simply a site for training students for the workforce and that the culture of higher education is synonymous with the culture of business. At issue here is the need for educators to recognize the power of education in creating the formative cultures necessary to both challenge the various threats being mobilized against the ideas of justice and democracy while also fighting for those public spheres, ideals, values and policies that offer alternative modes of identity, thinking, social relations and politics. In both conservative and progressive discourses pedagogy is often treated simply as a set of strategies and skills to use in order to teach prespecified subject matter. In this context, pedagogy becomes synonymous with teaching as a technique or the practice of a craft-like skill. 
Any viable notion of critical pedagogy must grasp the limitations of this definition and its endless slavish imitations even when they are claimed as part of a radical discourse or project. In opposition to the instrumental reduction of pedagogy to a method – which has no language for relating the self to public life, social responsibility or the demands of citizenship – critical pedagogy illuminates the relationships among knowledge, authority and power.11 5 -Harms 6 -Qualified immunity doctrine in excessive force cases is a mess – two warrants. 7 -1 The clearly established clause is constantly used in excessive force cases; this creates cyclical police violence – the law never becomes clearly established, allowing police to do whatever they want. Carbado 16 8 -Devon Carbado, Harry Pregerson Professor of Law, University of California, Los Angeles Law School, 2016, "Blue on Black Violence: A Provisional Model of Some Causes" Georgetown Law Journal, http://georgetownlawjournal.org/files/2016/08/carbado-blue-on-black.pdf 9 -a. Qualified Immunity: Perhaps a more fundamental barrier to holding police officers accountable in the civil process is the doctrine of qualified immunity.189 That the purpose of this doctrine is to protect “all but the plainly incompetent or those who knowingly violate the law”190 is already a strong signal that the doctrine functions to protect police officers from liability. To understand the broader scope of the problem, a brief discussion of the doctrine of qualified immunity is necessary. 
Victims of police violence can sue police officers under Section 1983, a civil rights statute that permits plaintiffs to sue governmental officials for violating statutory or constitutional rights.191 In the excessive force context, plaintiffs typically assert that a police officer’s use of force violated the plaintiff’s Fourth Amendment right to be free from unreasonable seizures.192 Police officers can defend against such suits by asserting the defense of qualified immunity.193 Whether an officer prevails on this defense turns on whether that officer can show that (a) his/her conduct did not violate the plaintiff’s constitutional rights, or (b) assuming that his/her conduct did violate a constitutional right, that the right was not clearly established at the time the officer acted.194 With respect to whether the officer’s conduct violated the plaintiff’s constitutional rights, the standard, as in the criminal context, centers on reasonableness: whether a reasonable officer would have believed that the use of force was necessary.195 And, as in the criminal context, juries will often defer to an officer’s claim that he/she employed deadly force because he/she feared for his/her life.196 Moreover, implicit and explicit biases can inform their decision making.197 With respect to the “clearly established” doctrine, there are two problems with the standard. First, courts often avoid deciding the question of whether the officer’s conduct violated the Constitution and rule instead on whether the constitutional right in question was clearly established.198 The Supreme Court has made clear that lower courts are free to proceed in this way,199 making it relatively easy for courts to make the defense of qualified immunity available to a police officer without having to decide whether the officer violated a constitutional right.200 This avoidance compounds the extent to which the law is unsettled. 
And, the greater the uncertainty about the law, the greater the doctrinal space for a police officer to argue that particular rights were not "clearly established" at the time the officer acted.201 In other words, the more courts avoid weighing in on the substantive question of whether police conduct violates the Constitution, the more leeway police officers have to argue that their conduct did not violate a clearly established right. Consider, for example, Stanton v. Sims.202 There, the Court avoided the question of whether an officer's entrance into a yard to effectuate the arrest of a misdemeanant violated the Fourth Amendment, but ruled that the right to avoid such an intrusion was not clearly established.203 Unless and until the Supreme Court expressly rules that, absent exigent circumstances, one has a right to be free from warrantless entry into one's yard, courts will likely grant qualified immunity in cases involving such arrests.204

That allows police to get off in every instance – the "clearly established" standard is hopelessly vague. Carbado 16

Devon Carbado, Harry Pregerson Professor of Law, University of California, Los Angeles School of Law, 2016, "Blue on Black Violence: A Provisional Model of Some Causes," Georgetown Law Journal, http://georgetownlawjournal.org/files/2016/08/carbado-blue-on-black.pdf

A second problem with the "clearly established" doctrine pertains to how courts apply it.
According to the Supreme Court, in applying the "clearly established" standard, the inquiry is whether the right is "sufficiently clear 'that every reasonable official would have understood that what he/she is doing violates that right.'"205 This standard creates rhetorical room for police officers to argue that not "every" reasonable officer would have understood that the right in question was clearly established.206 The standard is also, as Karen Blum observes, "riddled with contradictions and complexities."207 Eleventh Circuit Judge Charles Wilson puts the point this way: The way in which courts frame the question, "was the law clearly established," virtually guarantees the outcome of the qualified immunity inquiry. Courts that permit the general principles enunciated in cases factually distinct from the case at hand to "clearly establish" the law in a particular area will be much more likely to deny qualified immunity to government actors in a variety of contexts. Conversely, those courts that find the law governing a particular area to be clearly established only in the event that a factually identical case can be found, will find that government actors enjoy qualified immunity in nearly every context.208 When one adds the difficulties of the "clearly established" standard to the other dimensions of the qualified immunity doctrine, it becomes clear that the qualified immunity regime erects a significant doctrinal hurdle to holding police officers accountable for acts of violence.

2. Courts apply a double reasonableness standard in excessive force cases – this makes compensation impossible. Pittman 12

Nathan R. Pittman, "Unintentional Levels of Force in § 1983 Excessive Force Claims," William and Mary Law Review, May 2012, 53 Wm. & Mary L. Rev.
2107

The Court's decision to employ an "objective reasonableness" standard in qualified immunity and substantive Fourth Amendment analyses has created a system that many scholars argue has distorted the application of laws designed to vindicate violations of constitutional rights. One of the first commentators to notice this trend was Michael Fayz, who argued after Graham that "when the use of force is unreasonable-in-fact, yet deemed legally reasonable and immune, substantive constitutional protections have been sacrificed to the purposes and powers of the Court."162 This doubling up of reasonableness standards—the potential for reasonable unreasonableness—undermines the ability of a plaintiff to vindicate his or her rights under the Fourth Amendment. Under the modern doctrine of qualified immunity, the full analysis of an excessive force claim would proceed as follows. First, the court would determine whether, under the Fourth Amendment, the defendant's actions were objectively reasonable under the circumstances.163 If the defendant's actions were not objectively reasonable, the court would then determine whether the defendant's belief that his or her objectively unreasonable action was reasonable under the circumstances was itself objectively reasonable.164 For good reason, this scheme has been termed "nonsensical."165 In effect, defendants would have two opportunities to prove that their conduct was objectively reasonable, with the standard of reasonableness becoming less stringent the second time around.

This allows the police to claim ignorance of the law, creating more police violence. Pittman 12

Nathan R. Pittman, "Unintentional Levels of Force in § 1983 Excessive Force Claims," William and Mary Law Review, May 2012, 53 Wm. & Mary L. Rev. 2107

Qualified immunity has distorted other values in the legal system and provided perverse incentives to law enforcement officers.
First, as Evan Mandery argues, qualified immunity "departs from the commonly accepted maxim that all citizens are to be held strictly liable for knowledge of the law."138 That is, law enforcement officers can escape liability by being ignorant of the law. Of course, one might respond to Mandery's criticism by pointing to Harlow's requirement that ignorance is excused only when a reasonable person would not have known.139 Mandery's stronger criticism is that because qualified immunity holds officers to a low standard for their knowledge of the law,140 they have no incentive to learn it and thus provide enforcement more consistent with the Constitution.141 Mandery argues that this creates particular problems in the context of excessive force cases, in which victims of police misconduct are often powerless to mitigate that conduct because the level of care that victims take to comply with the law is often irrelevant.142 In response, Mandery suggests that strict liability might be applied to § 1983 cases, a beneficial move that would remove uncertainty about which rights the statute will vindicate and would fully expunge subjective criteria from liability.143 The public policy justifications of qualified immunity144 would, of course, require that strict liability be packaged with a mandatory indemnification scheme or a respondeat superior theory of liability for municipalities,145 and Mandery admits that such packaging has been explicitly rejected by the Court.146

Qualified immunity in excessive force cases is used to shield police militarization. AELE 7

Nonprofit police training center and law journal, July 2007, "Civil Liability for SWAT Operations," AELE, http://www.aele.org/law/2007LRJUL/2007-07MLJ101.pdf

A S.W.A.T. team or unit gets called into action in response to an extraordinary situation, where it is believed that the normal law enforcement response may be inadequate to deal with the circumstances.
Lawsuits over the use of SWAT often are based on arguments that the utilization of such force was an overreaction to the circumstances faced, that the level of force and display of force presented in fact enhanced the danger, leading to deaths or injuries that might otherwise not have occurred, or that officers on the SWAT team were not adequately trained to deal with the specific circumstances, including hostage negotiations or dealing with mentally disturbed individuals, another circumstance which frequently arises. In one such case, Estate of Bing v. City of Whitehall, No. 05-3889, 456 F.3d 555 (6th Cir. 2006), the court ultimately ruled that police officers on the scene, including S.W.A.T. team members, were entitled to qualified immunity for surrounding the home of a man who had fired shots into the air and ground nearby, entering the home forcibly without a warrant, and using pepper gas and a flashbang in an attempt to flush him out. Assuming that the use of a second flashbang, which burned down the house, was excessive, as the plaintiff argued, the court found that it still did not violate any "clearly established right." Factual disputes about whether the suspect was still armed and was threatening officers at the time they shot and killed him, however, barred qualified immunity for the officers on a claim that the use of deadly force was excessive. The case involved a man in Whitehall, Ohio who fired a gun into the air and into the ground near his home one evening, resulting in a number of witnesses to this phoning the police. Upon arriving on the scene, the officers were informed that the man had retreated into his home. The officers, backed up by a S.W.A.T. team, surrounded the house, attempted to communicate with the man, and subsequently tried to force him outside using pepper gas. Eventually, the S.W.A.T. team invaded the house, and killed the man. During the raid, police used a flashbang device, which also burned the house down.
The decedent's estate and his brother brought a federal civil rights lawsuit under the Fourth and Fourteenth Amendments against the city, the police department, and individual officers. The lawsuit claimed that the police violated the decedent's clearly established rights when they entered the home without a warrant, used excessive force by employing pepper gas and flashbang devices, unreasonably used deadly force when they shot and killed him, and unreasonably destroyed property by burning down the house. The trial court denied qualified immunity to the defendant officers, finding that there were disputed issues of fact that required a trial. A federal appeals court reversed in part. It found that the officers were entitled to qualified immunity for breaking the house's front door, seizing the man inside through an encirclement of his house without a warrant, using the pepper gas and the flashbang devices, entering his home without a warrant, and destroying the house. It upheld the denial of qualified immunity on the excessive force claim. The appeals court reasoned that the officers acted lawfully in effecting a "de facto house arrest" of the man by surrounding the house, despite not obtaining a warrant, because his firing of shots in the neighborhood created a "dangerous emergency." For that same reason, the officers did not need a warrant to enter the house, and the use of pepper gas and a flashbang device as attempts to force the suspect outside were reasonable. The appeals court, assuming for purposes of the appeal, without holding, that the use of a second flashbang device that set the house on fire violated the decedent's constitutional right to be free from excessive force, found that the right, in this context, was "not clearly established." Accordingly, the defendant officers were entitled to qualified immunity on all those claims.

This culminates in a police state – qualified immunity silences protests against police and justifies killings.
Carter 15

Tom Carter, WSWS legal correspondent and lawyer, 11-12-2015, "US Supreme Court expands immunity for killer cops," World Socialist Web Site, https://www.wsws.org/en/articles/2015/11/12/pers-n12.html

With the death toll from police brutality continuing to mount, the US Supreme Court on Monday issued a decision expanding the authoritarian doctrine of "qualified immunity," which shields police officers from legal accountability. When a civil rights case is summarily dismissed by a judge on the grounds of "qualified immunity," the case is legally terminated. It never goes to trial before a jury and is never decided on its constitutional merits. In March of 2010, Texas Department of Public Safety Trooper Chadrin Mullenix climbed onto an overpass with a rifle and, disobeying a direct order from his supervisor, fired six shots at a vehicle that the police were pursuing. Mullenix was not in any danger, and his supervisor had told him to wait until other officers tried to stop the car using spike strips. Four shots struck Israel Leija, Jr., killing him and causing the car, which was going 85 miles per hour, to crash. After the shooting, Mullenix boasted to his supervisor, "How's that for proactive?" The Luna v. Mullenix case was filed by Leija's family members, who claimed that Mullenix used excessive force in violation of the Fourth Amendment, part of the Bill of Rights. The district court that originally heard the case, together with the Fifth Circuit Court of Appeals, denied immunity to Mullenix on the grounds that his conduct violated clearly established law. The Supreme Court intervened to uphold Mullenix's entitlement to immunity—a decision that will set a precedent for the summary dismissal of civil rights lawsuits against police brutality around the country. This is the Supreme Court's response to the ongoing wave of police mayhem and murder. The message is clear: The killings will continue. Do not question the police.
If you disobey the police, you forfeit your life. So far this year, more than 1,000 people have been killed by the police in America. Almost every day, there are new videos posted online showing police shootings, intrusions into homes and cars, asphyxiations, beatings and taserings. Last week, two police officers in Louisiana opened fire on Jeremy Mardis, a six-year-old autistic boy, and his father Chris Few. The boy’s father had his hands up during the shooting and is currently hospitalized with serious injuries. His son succumbed to the police bullets while still buckled into the front seat of the car. The Supreme Court’s decision reflects the fact that in the face of rising popular anger over police killings, the entire political apparatus—including all of the branches of government—is closing ranks behind the police. This includes the establishment media, which has largely remained silent about Monday’s pro-police Supreme Court decision. The police operate with almost total impunity, confident that no matter what they do, they will have the backing of the state. Two weeks ago, a South Carolina grand jury refused to return an indictment against the officer who was caught on video killing 19-year-old Zachary Hammond. This follows the exoneration of the police who killed Michael Brown in Ferguson, Missouri, Eric Garner in New York City and Tamir Rice in Cleveland. The Obama administration’s position regarding the surge of police violence was most clearly and simply articulated by FBI director James Comey in a speech on October 23. “May God protect our cops,” Comey declared. He went on to accuse those who film the police of promoting violent crime. Meanwhile, in virtually every police brutality case that has come before the federal courts, the Obama administration has taken the side of the police. 
On Monday, the Supreme Court went out of its way to cite approvingly an amicus curiae (friend of court) brief filed by the National Association of Police Organizations (NAPO), which defended Mullenix. With this citation, notwithstanding its ostensible role as a neutral arbiter and guarantor of the Constitution, the Supreme Court sent a clear signal as to which side it is on. During the imposition of de facto martial law in Ferguson last year, NAPO issued statements vociferously defending Michael Brown’s killer, labeling demonstrators as “violent outsiders,” and denouncing “the violent idiots on the street chanting ‘time to kill a cop!’” “Qualified immunity” is a reactionary doctrine invented by judges in the later part of the 20th century to shield public officials from lawsuits. As a practical matter, this doctrine allows judges to toss out civil rights cases without a jury trial if, in the judge’s opinion, the official misconduct in question was not “plainly incompetent” or a “knowing violation of clearly established law.” Over recent decades, the doctrine has been stretched to Kafkaesque proportions to shield police officers from accountability. In the landmark case of Tennessee v. Garner (1985), the Supreme Court held that it violates the Constitution to shoot an “unarmed, nondangerous fleeing suspect,” and required an imminent threat of death or serious bodily injury before the police could open fire. But the Supreme Court in its decision on Monday dismissed this language as constituting a “high level of generality” that was not “particular” enough to “clearly establish” any particular constitutional rights. Since cases that are dismissed on the grounds of qualified immunity do not result in decisions on the constitutional issues, this circular pseudo-logic ensures that no rights will ever be “clearly established.” It also ensures that, instead of the democratic procedure of a jury trial, cases involving the police will be decided by judges. 
The Supreme Court issued Monday’s decision without full briefing or oral argument, designating it “per curiam,” i.e., in the name of the court, not any specific judges. Justice Antonin Scalia filed a concurring opinion, displaying his trademark sophistry. According to Scalia, Mullenix did not use “deadly force” within the meaning of the Supreme Court’s prior cases, since he was shooting at a car, not a person. (Four bullets struck Leija, but none of the six shots struck the engine block at which Mullenix was supposedly aiming.) Justice Sonia Sotomayor filed the sole dissent, noting that this decision “renders the protections of the Fourth Amendment hollow,” and sanctions a “shoot first, think later” approach to policing. However, Sotomayor wrote that she would have used a “balancing” analysis instead, in which a “particular government interest” would need to be “balanced” against the use of deadly force. This “balancing” rhetoric mirrors the Obama administration’s justifications for assassination and domestic spying, according to which national security is balanced against democratic rights. The Bill of Rights itself—that old, yellow, forgotten piece of paper—does not make itself contingent on the subjective mental states of police officers, “clearly established law,” or the “balancing” of “government interests.” America confronts a massive social crisis. Decades of endless war and occupations abroad, the degradation of wages and living conditions at home, the enrichment of a tiny layer of financial criminals at the expense of the rest of the society, rampant speculation and corruption at the highest levels—these factors contribute to mounting social tensions and the danger, from the standpoint of the ruling class, of the growth of social opposition. Such opposition can already be seen, in its earliest stages, in the struggle by autoworkers against the sellout contract being imposed by the United Auto Workers union. 
Like the tyrant who proposes to solve the problem of hunger by imposing a hefty fine on everyone who starves, the Supreme Court's decision Monday confirms that the entire social system has nothing to offer by way of a solution to the crisis except more of the same. The abrogation of democratic rights, torture, military commissions, drone assassinations, unlimited surveillance, the lockdown of entire cities, internment camps, beatings, murder, martial law, war—this is how the ruling class plans to deal with the social crisis. Notwithstanding the epidemic of police violence, the flow of unlimited cash and military hardware to police departments from the Department of Homeland Security and the Pentagon continues unabated. The buildup of the police as a militarized occupation force operating outside the law, pumped up and ready to kill, must be seen as a part of preparations by the ruling class for mass repression and dictatorship in response to the growth of working class opposition.

Solvency

Thus the plan: The United States will eliminate the reasonableness and clearly established clauses from qualified immunity doctrine.

The plan is key to cleaning up the semantic jumble of qualified immunity – it functionally eliminates it. That allows increased lawsuits and departmental reform, which change police behavior. Hassel 9

Diana Hassel, J.D., Rutgers, "Excessive Reasonableness," Indiana Law Review, 2009, 43 Ind. L. Rev. 117

The current regime poses at least three questions in resolving qualified immunity in an excessive force case: 1) whether the facts as alleged by the plaintiff establish an unreasonable use of force; 2) whether the unreasonableness of that use of force was clearly established at the time of the defendant's actions; and 3) whether an objectively reasonable official would have known that his actions violated the clearly established right.
The incoherency of this regime becomes most acute when a court attempts to answer the third inquiry – whether the reasonable official would have known that their actions violate a clearly established right.126 Given that it has already determined that the amount of force used was unreasonable, it must now somehow apply another level of reasonableness to the facts. To alleviate that problem, the qualified immunity standard, at least in the excessive force context, should become a purely legal question: does the determination that the defendant's actions violate the Fourth Amendment represent a new development in the law? Rather than three questions, the court will resolve only two: 1) whether the facts as alleged by the plaintiff establish an unreasonable use of force; and 2) whether a new legal standard has been applied by the court. This reformulation would provide critical protection for the defendant from being held responsible for predicting novel developments in the law. This concern, after all, was one of the primary motivating forces behind the adoption of qualified immunity.127 The new standard would also make the qualified immunity question purely a legal one, thus eliminating confusion between the roles of the judge and the jury. It is the second reasonableness inquiry that creates questions of fact in a qualified immunity analysis – a court could address purely as a legal matter whether it is adopting new law while omitting the confusing and unnecessary second inquiry into reasonableness. Of course, determining whether new law has been developed is not a simple task. As Chaim Saiman has pointed out, law created by courts is not framed as the articulation of new black letter rules, but rather by the application of precedent to a particular set of facts.128 Notwithstanding these difficulties, however, certain kinds of decisions could be relatively clearly identified as creating new legal standards.
For example, Graham itself, which announced for the first time that the Fourth Amendment would be the framework in which seizures made with excessive force are analyzed, represented a break with the past and an articulation of new standards. Similarly, an analysis which explicitly repudiates or overrules prior cases would also be a new development in the law. A decision which applied a well established general standard to a new set of facts would likely not be developing new law. Only in those game changing moments when a police officer’s behavior is being evaluated by a genuinely new standard would qualified immunity come into play to protect a police officer caught in between old and new constitutional standards. Since articulations of genuinely new law are rare, the result of such a reformulated qualified immunity standard would be that qualified immunity would rarely be granted in excessive force cases. One result might be that government officials would more often be found liable for unconstitutional acts. This might well have a beneficial impact on the behavior of police officers and the training they receive. More likely, however, is that cases will be resolved on the basis of the Fourth Amendment rather than because of the qualified immunity defense. It is quite possible that defendants would not lose appreciably more § 1983 cases, only that the basis for a defendant’s success would be the requirements of the Fourth Amendment rather than qualified immunity. Requiring that the Fourth Amendment, rather than qualified immunity, do the work of determining which police behaviors should be sanctioned and which excused, will lead to more clarity for the guidance of police officers, and also more open understanding by the public of the range of permissible police behavior. 
The elimination of the obfuscation provided by qualified immunity may make it more possible to have a constructive discussion concerning the appropriate use of police force and the remedies for abuses of that force. Reforming the legal regime to provide a more meaningful deterrent to police violence can start by simplifying and making more coherent the rules applicable to such claims.

A meta-review of the literature proves police are scared of lawsuits – they'll change their behavior. Ferdik 13

Frank V. Ferdik, Department of Criminology and Criminal Justice, University of South Carolina, August 2013, "Perception is Reality: A Qualitative Approach to Understanding Police Officer Views on Civil Liability," International Police Executive Symposium, Geneva Centre for the Democratic Control of Armed Forces, Coginta – For Police Reforms and Community Safety, http://www.coginta.org/uploads/documents/817bd907a32ad935c3d563655f76658580c75497.pdf

Though there is ample research concerning the prevalence, cost and impact of this policy, relatively few studies have investigated the perceptions officers have regarding liability. Of the few, Scogin and Brodsky (1991) found that 9 percent of the officers they surveyed felt that their fear of being held civilly liable reached a point of irrationality. Officers expressed their risk management precautions in terms of "treating people fairly" and "going by the book of procedures...the department provides." The authors concluded by stating that "law enforcement candidates have real concerns about work-related lawsuits" (Scogin and Brodsky, 1991, p. 45). These findings were replicated by Kappeler (2006), who found that 50 percent of 220 police cadets in a statewide training academy were worried about civil liability, and 31 percent thought they worried to excess. Female officers showed less anxiety over litigation, even when controlling for age, race, education, years of experience and job assignment (Kappeler, 2006).
Garrison (1995) surveyed 50 law enforcement officials from state, municipal and university agencies throughout Pennsylvania and found that 28 percent of respondents agreed that "the idea that a police officer can be sued bothers me." Garrison (1995) also found that state police officers were generally more "hostile to the idea of civil liability" and less likely to believe "that it was a deterrent to police misconduct" than were university and municipal police officers. A survey of 658 sworn police officers from 21 agencies across the U.S. found that 15 percent ranked civil liability third among the top ten most serious challenges they face on the job (Stevens, 2000). In a larger study of Cincinnati police officers performed by Hughes (2001), it was found that while most officers had not been sued, they reported knowing a colleague who had been involved in a litigation claim for occupational behavior. Curiously, though 45 percent of the police reported that civil liability impedes effective law enforcement, a greater majority also reported that fear of being sued did not register with them when stopping citizens.

Lawsuits mean either departmental reform occurs or victims are paid – empirics. Feuer 16

Alan Feuer, a staff writer at The New York Times since 1999 who has written about prisons, the Mafia, baseball, steakhouse waiters, pigeon racers, firefighters, bartenders, and single mothers, 8-16-2016, "In Police Misconduct Lawsuits, Potent Incentives Point to a Payout," New York Times, http://www.nytimes.com/2016/08/17/nyregion/police-misconduct-lawsuit-settlements.html?_r=1andregister=google

[Photo caption: Scott Rynecki, who handled the lawsuit involving the death of Akai Gurley, with Mr. Gurley's domestic partner, Kimberly Ballinger, right.]
When lawsuits against the police are settled, like the one announced this week in which New York City agreed to pay $4 million to the family of Akai Gurley, people tend to focus on the amount of money changing hands. Sometimes overlooked are the institutional reforms embedded in the deals, and the difficult decisions made by plaintiffs and their lawyers in trading a full public airing of the facts for the recovery of damages. In many police misconduct cases, the victims and their families are people of limited means for whom a six-figure check could be life-changing. At the same time, lawyers said, those who file, and settle, such suits belong to what might be called a community of the wronged, and often have a strong desire to tell their stories or force the system to change. "Frequently, plaintiffs in these cases are badly damaged and want or even need compensation," said Barry Scheck, a lawyer who helped negotiate the $9 million settlement for Abner Louima, a Haitian immigrant who was sexually assaulted by the police with a broomstick inside a Brooklyn station house in 1997. "But you have to trade that off sometimes with their aspirations to expose what happened, and to find solutions." Mr. Louima's suit, which was filed against the city and its main police union, was a rare example of litigation that produced enormous monetary damages and real alterations to policing policy. When the settlement was reached in 2001, Mr. Louima said that he had dropped his three-year battle because he was convinced that the city and the union had started to improve the ways the Police Department trained, monitored and disciplined its officers. Ultimately, the decision of whether to settle a suit or to air the facts of the case, hoping to both win a judgment and secure reform, is up to the client, said Scott Rynecki, who handled the suit involving Mr.
Gurley, an unarmed man killed two years ago by an officer on patrol in a Brooklyn public housing project. “Our primary job is to get our clients” — in this case, it was Mr. Gurley’s domestic partner, Kimberly Ballinger, and their daughter — “a decent recovery,” Mr. Rynecki said. “If the recovery is fair, we have an obligation not to go forward just to ‘go forward.’” Mr. Rynecki said it was also important to create a public record and push for structural change. As part of his negotiations with the city, he said, he urged officials to improve training at the Police Academy in areas like firearms handling and emergency medical care. “I have made repeated calls for this, both in public and in private, with politicians and on TV,” he said. “It’s a constant mantra. We have the greatest police force in the country, but that doesn’t mean it can’t be improved.” In Mr. Gurley’s case, as in some others, litigation was preceded by an extensive criminal trial which produced a detailed narrative about everything that had happened. Sometimes, the revelatory nature of a criminal proceeding can persuade a plaintiff, like Ms. Ballinger, that she does not need her day in civil court. But sometimes, even a long criminal trial can leave the record incomplete. Howard Hershenhorn, a lawyer who represented the family of Amadou Diallo, a Guinean immigrant who was shot 41 times by the police in 1999, said he “had no choice but to fully litigate the civil case” because the officers who had killed Mr. Diallo were acquitted and the story of his client’s death was never fully told. Working with his partners, Mr. Hershenhorn took numerous depositions during the case’s discovery phase, unearthing information that never emerged fully at the criminal trial. Much of it concerned the Street Crimes Unit, a plainclothes patrol in the Police Department that employed the officers who shot Mr. Diallo and was eventually disbanded.
“We never would have settled the case without assurances from the right people that that would happen,” Mr. Hershenhorn said. “The unit was on its way to being disbanded because of information that we produced in discovery and that, frankly, the city didn’t know.” Since by definition plaintiffs in these cases have suffered the apparent trauma of personal injury or the death of a loved one, there are powerful incentives to take a settled payout and not relive it all at trial. 38 - 39 - 40 -UV 41 -Debate should deal with questions of real-world consequences—ideal theories ignore the concrete nature of the world and legitimize oppression. Curry 14 42 -Tommy J. Curry, Professor of Philosophy @ Texas A&M, “The Cost of a Thing: A Kingian Reformulation of a Living Wage Argument in the 21st Century,” 2014 43 -Despite the pronouncement of debate as an activity and intellectual exercise pointing to the real world consequences of dialogue, thinking, and (personal) politics when addressing issues of racism, sexism, economic disparity, global conflicts, and death, many of the discussions concerning these ongoing challenges to humanity are fixed to a paradigm which sees the adjudication of material disparities and sociological realities as the conquest of one ideal theory over the other. In “Ideal Theory as Ideology,” Charles Mills outlines the problem contemporary theoretical-performance styles in policy debate and value-weighing in Lincoln-Douglas are confronted with in their attempts to get at the concrete problems in our societies. At the outset, Mills concedes that “ideal theory applies to moral theory as a whole (at least to normative ethics as against metaethics); since ethics deals by definition with normative/prescriptive/evaluative issues, it is set against factual/descriptive issues.” At the most general level, the conceptual chasm between what emerges as actual problems in the world (e.g.: racism, sexism, poverty, disease, etc.)
and how we frame such problems theoretically—the assumptions and shared ideologies we depend upon for our problems to be heard and accepted as a worthy “problem” by an audience—this is the most obvious call for an anti-ethical paradigm, since such a paradigm insists on the actual as the basis of what can be considered normatively. Mills, however, describes this chasm as a problem of an ideal-as-descriptive model which argues that for any actual-empirical-observable social phenomenon (P), an ideal of (P) is necessarily a representation of that phenomenon. In the idealization of a social phenomenon (P), one “necessarily has to abstract away from certain features” of (P) that is observed before abstraction occurs. This gap between what is actual (in the world), and what is represented by theories and politics of debaters proposed in rounds threatens any real discussions about the concrete nature of oppression and the racist economic structures which necessitate tangible policies and reorienting changes in our value orientations. 44 -Engagement with morality decreases morality. Posner 98 45 -The Problematics of Moral and Legal Theory, Richard A. Posner [Chief Judge, United States Court of Appeals for the Seventh Circuit; University of Chicago Law School], Harvard Law Review, Vol. 111, No. 7 (May, 1998), pp. 1637-1717 46 -If some moral principle that you read about in a book and that may have appealed to your cognitive faculty collides with your preferred, your self-advantaging, way of life, you have only to adopt an alternative morality or, if you're bold enough, an antimorality (like that of Nietzsche, who famously attributed the morality of "good" people to their will to power) that does not contain the principle; and then you will be free from any burden of guilt. Do you find Kantian strictures against lying irksome? Then read Nyberg. Better yet, identify with one of the great liars of history, Odysseus for example.
The better read you are in philosophy or literature, and the more imaginative and analytically supple you are, the easier you will find it to reweave your tapestry of moral beliefs so that your principles allow you to do what your id tells you to do. Not knowledge, but ignorance, is the ally of morality. The medieval Roman Catholic Church recognized this when it told its priests not to ask parishioners at confession about specific sexually deviant practices, lest the priests give them ideas. To be confident that instruction in moral reason improves people's behavior you would have to agree with Socrates that people are naturally good and do bad things only out of ignorance. Who believes that, and on what evidence? - EntryDate
-
... ... @@ -1,1 +1,0 @@ 1 -2016-12-02 17:08:21.0 - Judge
-
... ... @@ -1,1 +1,0 @@ 1 -Jeff Joseph - Opponent
-
... ... @@ -1,1 +1,0 @@ 1 -Cypress Bay SD - ParentRound
-
... ... @@ -1,1 +1,0 @@ 1 -7 - Round
-
... ... @@ -1,1 +1,0 @@ 1 -1 - Team
-
... ... @@ -1,1 +1,0 @@ 1 -West Nelson Aff - Title
-
... ... @@ -1,1 +1,0 @@ 1 -1AC Alta R1 - Tournament
-
... ... @@ -1,1 +1,0 @@ 1 -Alta
- Caselist.RoundClass[4]
-
- Cites
-
... ... @@ -1,1 +1,0 @@ 1 -2
- Caselist.RoundClass[5]
-
- Cites
-
... ... @@ -1,1 +1,0 @@ 1 -3 - EntryDate
-
... ... @@ -1,1 +1,0 @@ 1 -2016-10-28 20:13:18.0 - Judge
-
... ... @@ -1,1 +1,0 @@ 1 -Ryan Fink - Opponent
-
... ... @@ -1,1 +1,0 @@ 1 -Immaculate MC - Round
-
... ... @@ -1,1 +1,0 @@ 1 -2 - Tournament
-
... ... @@ -1,1 +1,0 @@ 1 -Meadows
- Caselist.RoundClass[6]
-
- Cites
-
... ... @@ -1,1 +1,0 @@ 1 -4 - EntryDate
-
... ... @@ -1,1 +1,0 @@ 1 -2016-10-29 23:40:57.0 - Judge
-
... ... @@ -1,1 +1,0 @@ 1 -Adam Torson - Opponent
-
... ... @@ -1,1 +1,0 @@ 1 -Greenhill SK - Round
-
... ... @@ -1,1 +1,0 @@ 1 -6 - Tournament
-
... ... @@ -1,1 +1,0 @@ 1 -Meadows
- Caselist.RoundClass[7]
-
- Cites
-
... ... @@ -1,1 +1,0 @@ 1 -5 - EntryDate
-
... ... @@ -1,1 +1,0 @@ 1 -2016-12-02 17:08:18.0 - Judge
-
... ... @@ -1,1 +1,0 @@ 1 -Jeff Joseph - Opponent
-
... ... @@ -1,1 +1,0 @@ 1 -Cypress Bay SD - Round
-
... ... @@ -1,1 +1,0 @@ 1 -1 - RoundReport
-
... ... @@ -1,2 +1,0 @@ 1 -1AC Police State 2 -1NC Virtue Ethics NC Plan Flaw Berlant Turn - Tournament
-
... ... @@ -1,1 +1,0 @@ 1 -Alta
- Caselist.RoundClass[2]
-
- EntryDate
-
... ... @@ -1,0 +1,1 @@ 1 +2016-09-18 14:22:31.0 - Judge
-
... ... @@ -1,0 +1,1 @@ 1 +Nigel Ward - Opponent
-
... ... @@ -1,0 +1,1 @@ 1 +Holy Cross TL - Round
-
... ... @@ -1,0 +1,1 @@ 1 +6 - Tournament
-
... ... @@ -1,0 +1,1 @@ 1 +Greenhill
- Caselist.RoundClass[3]
-
- EntryDate
-
... ... @@ -1,0 +1,1 @@ 1 +2016-09-18 14:22:33.0 - Judge
-
... ... @@ -1,0 +1,1 @@ 1 +Nigel Ward - Opponent
-
... ... @@ -1,0 +1,1 @@ 1 +Holy Cross TL - Round
-
... ... @@ -1,0 +1,1 @@ 1 +6 - Tournament
-
... ... @@ -1,0 +1,1 @@ 1 +Greenhill