Saturday, August 31, 2019

Police Brutality Essay

Introduction

Studies have shown that police are more likely to abuse black people than white people, and that this is driven by racial profiling. In the history of police brutality, the term was first used in 1633, when a police officer was described beating a civilian. Police brutality is the abuse of force, usually physical, although officers can also abuse civilians verbally or psychologically; it is committed by federal or state authorities, namely police officers. The history of police brutality has moved in a cycle whose phases are violence, corruption and reform, and this cycle has repeated for many years. Police brutality exists in many countries, not only in the US. African Americans are routinely labelled as "bad" people, which points to inequality within the black community and the wider world. Several forms of misconduct are common in our society: racial profiling, corruption, false arrest and instilling fear in civilians. Many cases have targeted black people in particular, producing unequal justice. The police's mandate is to protect civilians and keep the peace, never to harm a civilian without a proper warrant for arrest, and certainly never to go further and harm a suspect physically, verbally or psychologically for a mistake he or she has made. Police set an example for society, so that people follow them and believe they are doing the right thing; it is therefore deeply disturbing that police engage in racial profiling, treating black people differently from white people. This research paper therefore argues that police are likely to treat black people more harshly than white people, a pattern known as racial profiling.

Police brutality and racism in the US

Police brutality and racism have degraded both the safety of the US and the reputation of the authorities, and there is a significant body of statistics on police brutality. Even though such cases are brought to court, out of 5,986 reports only 33% went through to conviction, and 64% of those resulted in prison sentences. American police officers have killed more people with lethal weapons than terrorists have since the Vietnam War, and at least once a year a person is beaten by a police officer; this is an abuse of authority. Committing a crime is illegal, but police have no right to abuse their powers through physical or verbal actions. Physical abuse causes physical pain, but verbal abuse has proved to be even more harmful to some victims: in one case, a police officer insulted a victim until the victim killed himself out of anger. Words can mean a great deal to a human being, and officers who assume they cannot be charged for insulting someone verbally, as opposed to abusing them physically, can still cause serious harm. According to one study, about 261 police officers are involved in police brutality every year, and only about 27 percent of the victims pursue lawsuits. In many cases, the majority of these victims are African Americans abused by the very law that is supposed to protect them.
The research also shows 382 deaths out of the 5,986 reports. Beyond the statistics, there are a few groups, the elderly, drug addicts, women and the physically weak, whom police take advantage of, brutalising them while instilling fear through threats. For example, officers might threaten that if the victims report anything, their family members will be arrested or treated the same way, and this puts the victims in real fear. This is why some cases are dismissed as fabricated: the victims themselves do not want to admit the truth for fear of being physically or verbally abused again. Overall, if a police officer is found to have abused a victim, his or her police powers may be revoked, but the prison sentence averages only about 14 months. This is unfair to other offenders as well as to the victims, because 14 months is a very short time, and the victims may be abused again once these officers are discharged from duty and/or released from prison.

Racial profiling in the US among police officers

There are many known cases of police brutality against black people involving racial profiling. According to a 2012 article, a black person is killed by a security officer every 28 hours. The same article states that African Americans make up about 13.1% of the nation's population but nearly 40% of the prison population. Black people sell drugs at roughly the same rate as white people, yet they have a higher chance of being arrested for drugs, which is racial profiling, and black offenders also receive longer sentences than white offenders. Moreover, most of the black people killed were unarmed: according to the report, 44% were killed even though there was no sign of a weapon, in 27% of deaths the officer claimed the suspect had a gun but there was no proof, 2% carried small weapons such as knives, large scissors or cutters, and only about 20% had guns or other deadly weapons. Most officers who killed black suspects claim they were afraid and were trying to protect themselves, and so were forced to open fire. These officers open fire when they feel threatened, for example when a suspect runs from them, drives towards them or reaches for something at his waist; they do not confirm whether the suspect actually has a weapon before using deadly force. In one well-known case of an African American being shot by a police officer, the victim was Oscar Grant, aged 22, who was shot by officer Johannes Mehserle. The officer claimed that Grant had a gun, even though Grant had already been subdued by other officers, which makes the shooting unjustified. Other examples of African Americans being assaulted without justification include Rodney King and Sean Bell, among many others. Rodney King was drunk on the night of March 2, 1991. He was speeding on the freeway when police officers attempted to pull him over, but he resisted. Once they managed to get him out of the vehicle, a group of officers tried to subdue him, using a taser as well.
Rodney King was kicked in the head and beaten with nightsticks. Fortunately his injuries were not life-threatening, but he was left with bruises and a facial fracture. Sean Bell was killed by an undercover police team that fired 50 times at the car he and his friends were riding in, outside a strip club where he was holding his bachelor party that night. A police officer had overheard one of the friends talking about getting his gun, so, in order to prevent a shooting, the officers opened fire on the car; Sean Bell died immediately at the scene. The detectives were not found guilty of manslaughter. These are some of the cases African Americans face, and they are why the US should enforce stricter rules against racial profiling.

Actions of police brutality on selected victims through racial profiling

The actions used against black people are harsher than those used against white people, ranging across verbal persuasion, unarmed physical force, force using non-lethal weapons, force using impact weapons and deadly force. There was a case, noted above, in which verbal abuse by a police officer led the victim to commit suicide out of anger; African Americans are often called names such as "nigger" and "negro", and this kind of abuse leaves no visible marks but stays inside the victim. That is why verbal abuse is so damaging. Unarmed physical force causes external damage but also internal harm, because African Americans are treated as a minority and know their race will be discriminated against. In one case, a black man refused to go into the officers' lunchroom and the next thing he knew his head was being smashed through a plate-glass window; had he been white, the police would almost certainly have allowed him to wait outside the lunchroom for the next step of the procedure. Non-lethal weapons are also used more harshly against black people: suspects have been tasered multiple times, causing great pain, and in one recent case a victim was tasered to death. Force using impact weapons has also been an issue for African Americans: in a Walmart in the US, an African American picked up a BB rifle and was simply waving it around; the police arrived and ordered him to put the gun down, not realising it was only a BB rifle. Regardless of that misunderstanding, the police should not have fired a fatal shot at a man holding a BB gun in a store that sells BB guns.

Conclusion

In conclusion, the research shows that police are likely to treat black people more harshly than white people, and that this is caused by racial profiling. From the material above we can conclude that police have been harsh towards African Americans. The US also needs to improve its systems, which ultimately means changing people's mindset. A small thing can become a great problem, especially where racism is concerned: police should not be biased about a crime because of the race of the person who committed it, but should treat all races as one. The police's job is to uphold justice and bring peace to civilians, not to be the ones breaking the law; anything else damages their image in the eyes of the public.
Solutions can be implemented through the federal authorities by ensuring that abuses such as torture and excessive force do not happen, and that officers who break the law are held accountable and brought to justice. In some cases the jury decides that the officer acted within his or her rights, or the officer receives a prison sentence averaging only about 14 months. This encourages police not to fear the consequences, since the sentence is so short, and it signals to new police candidates that assaulting someone will not get them into serious trouble. The federal authorities should also implement measures such as recording officers' actions on camera or having them supervised by a team leader. Street cameras are very important here, because investigators can always go back to the scene and watch how a victim was treated by officers. This would make the US a better place, instead of one that discriminates against minorities such as African Americans; these solutions apply to all races, but especially to African Americans, so that they can live as equals without misjudgment or discrimination through racial profiling.

References

99 Percent Of Police Brutality Complaints Go Uninvestigated In Central New Jersey: Report. 2014. [ONLINE] Available at: http://www.huffingtonpost.com/2014/01/07/police-brutality-new-jersey-report_n_4555166.html [Accessed 26 October 2014].
4 Unarmed Black Men Have Been Killed By Police in the Last Month | Mother Jones. 2014. [ONLINE] Available at: http://www.motherjones.com/politics/2014/08/3-unarmed-black-african-american-men-killed-police [Accessed 26 October 2014].
How Often are Unarmed Black Men Shot Down By Police? 2014. [ONLINE] Available at: http://www.dailykos.com/story/2014/08/24/1324132/-How-Often-are-Unarmed-Black-Men-Shot-Down-By-Police# [Accessed 26 October 2014].
Ferguson police committed human rights violations during Michael Brown protests | Daily Mail Online. 2014. [ONLINE] Available at: http://www.dailymail.co.uk/news/article-2806085/Ferguson-police-committed-human-rights-violations-Michael-Brown-protests-Amnesty-International-claims.html [Accessed 26 October 2014].
Police brutality | Law Teacher. 2014. [ONLINE] Available at: http://www.lawteacher.net/criminology/essays/police-brutality.php [Accessed 26 October 2014].
The Color of Justice – Constitutional Rights Foundation. 2014. [ONLINE] Available at: http://www.crf-usa.org/brown-v-board-50th-anniversary/the-color-of-justice.html [Accessed 26 October 2014].
Presentation to Hearing on Police Brutality & Misconduct – Richie Perez.

Friday, August 30, 2019

Speech About Premarital Pregnancy

Assalamu'alaikum wr. wb. Good day, Ladies and Gentlemen. Pregnancy is the desire of every woman. Do you agree? Why is pregnancy important? Is family support also important? How does it affect economic life? Every family wants children in their life; it is natural and biological. Every pregnancy needs a process, and knowledge about pregnancy is important so that we understand the process better. These days there are many cases of premarital pregnancy. In Indonesia the number of such cases is rising every year, especially among adolescents. So today I will tell you about the factors causing premarital pregnancy. What is premarital pregnancy in Bahasa Indonesia? There are many factors causing premarital pregnancy. Can you mention them one by one? What is your reasoning? Okay, I will tell you: we can look at it from the social aspect, the health aspect and the technological aspect.

The first is the social aspect, which covers social interaction with family, friends and society. Of these interactions, interaction with the family is the most important, because it is through this interaction that a person's morality is formed. Besides that, social interaction with friends and society is also very important: if we cannot interact with people in a good way, negative behaviour develops. The second is the health aspect. Everyone wants a healthy life, yet many people fall into premarital pregnancy because of poor health, both physical and spiritual, and both of these affect our psychological condition. If we cannot take care of our health, it is quite possible for our minds to become weak, and deviant behaviour, free sex for example, arises from this. The last is the technological aspect. Technology changes our lifestyles, and many people no longer care about their surroundings because of it. In Indonesia, technological development is moving very fast, but it is not counterbalanced by an improved quality of human resources. The government should adopt a policy on this technological development; if it does not, the nation's morality will be damaged, which leads to deviant behaviour.

In conclusion, pregnancy requires a clear process, and understanding this is very important to keep ourselves from premarital pregnancy. Because of that, we need to interact in a good way with our family, friends and society, and besides that we should take care of our health, both physical and spiritual.

Thursday, August 29, 2019

Bi-Lingual Education

Bilingual Education

Education is very important. There used to be a time when you didn't have to go to school, when it was only important for men to have an education. Times have really changed. Now it is crucial for everyone in our society to have an education. Survival is the main reason; a cohesive society is another. Our schools today need to keep bilingual education as a teaching tool, not only for the sake of our society but also for the sake of our culture. Bilingual education in our schools is crucial, yet there is still talk of banning the use of foreign languages in the instruction of our young children. We have to work to change that kind of attitude. We have to proceed from the assumption that bilingual education is a sound educational proposition for all children and that it addresses the needs of all the constituencies of education. Now more than ever the words of Thomas Jefferson ring with special meaning. In 1787, in a letter to his nephew, Jefferson said: "Bestow great attention on Spanish and endeavor to acquire an accurate knowledge of it. Our future connections with Spain and Spanish America will render that language a valuable acquisition. The ancient history of that part of America, too, is written in that language" (qtd. in A Relook at Tucson '66).

Hispanic leaders should plan an initiative to help Hispanic youths do better in school. It is a coming-together as a community to deal with a very pressing issue. The organizations should be composed of public officials, students, educators, administrators and business people, and should try to determine the biggest problems facing Latino students in their communities. These groups need to work together to develop a statewide agenda. Hispanic students, according to some studies, lag behind other students in classroom performance, have the highest dropout rate of any ethnic group in the country and, according to federal data, are less likely to pursue higher learning (Tucson '66). We as a society need a school system that prepares our students for higher education if that is their choice, and society needs to work together to change the educational process for Latino students. Consider these numbers, drawn from A Relook at Tucson '66: "Minority groups are being shortchanged by more than 200,000 teaching jobs in the public elementary and secondary schools of the nation. In 1972, the enrollment of the nation's public schools was 44.6 million." As A Relook at Tucson '66 also states, the number of English speakers in the Western Hemisphere is only slightly larger than that of Spanish speakers, and by the year 2000 the number of Spanish speakers will be far greater than the number of English speakers. Statistics indicate that the United States is now one of the major Spanish-American countries. One statistical example: "If the figures on illegal Mexican aliens are correct, that means that every year the United States adds another city the size of Albuquerque and Tucson combined. Or, to put it another way, it adds another state larger in population than Wyoming and Alaska combined" (A Relook at Tucson '66, 14).

The policy of most governments toward bilingualism in the home is, and long has been, one of neglect. A few countries actively encourage it, especially if the second (non-community) language is the more important language in the country or in the world, or if the minority (community) language is the language of a group given special consideration under the law.
Many countries that have recently been colonies, for example, encourage their young people to learn the language of their former mother country, because bilingualism of this type is important in international trade and politics. However, we could fill castles with research and still very little is being done in public schools to improve and enforce bilingual education. We have to use the research being conducted on bilingual education to improve it. Some public schools want to stop bilingual education, saying that it is detrimental to students, but they give no consideration to improving it or to educating themselves on the needs of not just Latino but all children. All bilingual children deserve further discussion of the issues of culture, immigration, ethnicity and adjustment.

Truly bilingual workers, proficient in English and a second language, will be more valuable and marketable as global trade continues to grow. With these facts in mind, some states are launching a visionary effort to develop a dual-language workforce. The idea is to convince local school districts to offer a second language beginning at the pre-kindergarten level and to encourage employers to help adults learn another language. Spanish is an obvious second-language choice for many because of the rapid growth of the Hispanic population in our country. The relationship with Mexico and Latin America will grow stronger if businesses take advantage of their position and opportunity. More than 22,500,000 people in our country already speak Spanish (http://www.docuweb.ca/SiSpain/english/language/worldwid.html), and officials cite a growing demand for more Spanish-speaking professionals. The future workforce would be better positioned to build international connections if most professionals had a second language, and students would gain a better understanding of the world by learning another language. Pursuing a dual-language workforce is a sound idea that will boost the country's economy and personally benefit its individuals. We as a society should encourage local school districts, businesses and civic-minded groups to embrace the effort. We must try to build a society where human diversity is promoted and not destroyed.

The key to program improvement is not finding a program that works for all children and all localities, or finding a program component (such as native-language instruction) that works as some sort of magic bullet, but rather finding a set of program components that works for the children in the community of interest, given the goals and resources of that community. The best bilingual education programs include all of these characteristics: ESL instruction, sheltered subject-matter teaching, and instruction in the first language. Non-English-speaking children initially receive core instruction in the primary language along with ESL instruction. As children grow more proficient in English, they learn subjects using more contextualized language (math and science) in sheltered classes taught in English, and eventually in mainstream classes. In this way, the sheltered classes function as a bridge between instruction in the first language and the mainstream. At advanced levels, the only subjects taught in the first language are those demanding the most abstract use of language (social studies and language arts). Once full mainstreaming is complete, advanced first-language development is available as an option.
Gradual exit plans such as these avoid the problems associated with exiting children too early (before the English they encounter is comprehensible) and provide instruction in the first language where it is most needed. These plans also allow children to enjoy the advantages of advanced first-language development. A common argument against bilingual education is the observation that many people have succeeded without it. This has certainly happened. In these cases, however, the successful person got plenty of comprehensible input in the second language, and in many cases had a de facto bilingual education program. For example, Rodriguez (1982) and de la Peña (1991) are often cited as counter-evidence to bilingual education. Rodriguez (1982) tells us that he succeeded in school without a special program and acquired a very high level of English literacy. He had two crucial advantages, however, that most limited-English-proficient (LEP) children do not have. First, he grew up in an English-speaking neighborhood in Sacramento, California, and thus got a great deal of informal comprehensible input from classmates. Many LEP children today encounter English only at school; they live in neighborhoods where Spanish prevails. In addition, Rodriguez became a voracious reader, which helped him acquire academic language. Most LEP children have little access to books.

Random assignment to treatment and control groups, as in medical experiments, is the highest-quality research design because it increases confidence that any differences between the groups after a period of treatment can be attributed to that treatment. The results from the five studies in which subjects were randomly assigned to bilingual and control programs favor bilingual education even more strongly. The estimated benefit of bilingual programs on all test scores in English, according to these studies with random assignment, is 0.26 of a standard deviation. The positive effect on reading scores is 0.41 of a standard deviation among the studies with random assignment, and the improvement in scores measured in Spanish is 0.92 of a standard deviation. All of these estimated benefits of bilingual education from studies with random assignment are extremely unlikely to have been produced by chance. The fact that the studies of bilingual programs with random assignment, the highest-quality research design, show even stronger results greatly increases confidence in the conclusion that bilingual education positively affects educational attainment. In sum, the NRC report finds that, on average, bilingual education programs are more effective than English-only programs. However, many other important factors influence student outcomes. There is much more work left for the schools to do if we are to enable LEP students to achieve at high academic levels. Improvement would have to focus on teachers, teaching, academic content and standards, accountability, school-wide leadership, program integration, parent involvement, and effective use of the native language to assure high-level and meaningful learning for all students from the time they enter school. Proposition 227 removes an important tool, use of the native language, from the hands of educators; it would only make the challenges of school improvement even more difficult. A society with no education cannot compete in the modern world.
We as a society need to fight to keep bilingual education as a teaching tool in the school system.

Wednesday, August 28, 2019

Reflection Paper (Earth Science) Essay Example | Topics and Well Written Essays - 500 words

I found the discussions about tectonic plates fascinating. I especially liked the parts about the shifting of the tectonic plates and the natural disasters that result. I would like to know more about these phenomena, and about how to avoid the injuries and catastrophic damage that occur when these shifts happen. How will global warming affect the next generation? With the increasing use of fossil fuels worldwide, I think the effects of global warming are the most troublesome and threatening to my generation and the next. Global warming is affecting temperatures, the biology and botany of the planet, air quality, water levels, plant life and populations around the world. This increasingly dangerous trend is contributing to widespread emergencies of drought and famine, flooding and polar ice melt. Glaciers are melting every day, the rainforest is shrinking, the beaches are eroding, and the rains are increasing, with monsoons and hurricanes. The wind currents are producing terrible tornadoes, all because of the global warming effect. Most disturbing of all is the rapidly increasing death toll caused by mosquitoes that breed in pools of stagnant water. In Africa last year alone, over 1 million people died of dengue fever, and it is expected that dengue fever, carried by a mosquito, will be the world's next epidemic.

Tuesday, August 27, 2019

An International Relations - The Soviet Union's Invasion Case Essay

Despite the harshness of its land and its multi-ethnic society, Afghanistan is one of the most historically attractive lands for conquest. It has been invaded by armies from Persia, Greece, and Macedonia under Alexander the Great, by Arab hordes, the Mongols and other warriors from Central Asia. In the 19th century, Great Britain and Russia competed for control of Afghanistan, with Britain successfully invading it in 1839-42 and again in 1878. But it was in late 1979 that a significant event occurred and various countries' interests in Afghanistan, some overt and some covert, took hold. The situation began on December 27, 1979, with Russia's invasion of Afghanistan. This set off a chain reaction that would involve dozens of other countries, both American and Russian Cold War proxies. Afghanistan would effectively become Russia's Vietnam, but Russia would pay a higher price than the United States did. Throughout the war, from 1979 to 1989, countries such as the USA, Great Britain, Saudi Arabia, Iran, Pakistan, China and Egypt were drawn into the conflict.

Monday, August 26, 2019

Special Education Essay Example | Topics and Well Written Essays - 3500 words

The term low-incidence disability is used to refer to these individuals, because such disabilities occur in less than 1% of the general population (Horner, Albin, Todd & Sprague, 2006). The number of students with such disabilities is accordingly small, but the important consideration for these students is the support required to help them participate in the community and to live a decent life similar to that of other citizens. Such students will need support for mobility, communication, self-care and learning (Horner, et al., 2006). While these students may have the capacity to learn, they must have lifelong support as well. Because legislation provides for the education of all students, these individuals must be educated to the level of their abilities (No Child Left Behind Act of 2001). There is a need to give them an opportunity to function without the stereotypes that exist about their potential. Students who can learn language-acquisition skills can be taught the standards required in the functional academics standards. There are three levels of language for students with severe disabilities: pre-symbolic, early symbolic and expanded symbolic (Horner, et al., 2006). Because some students are not able to respond to words and pictures, there is also a stage called non-symbolic. Non-symbolic communication can involve the use of technology to attract a student's attention. Daily routines can be established to interest students in books on tape, artwork, and writing and drawing centers. One way to teach students is with picture "reading." Picture communication boards are available as a means for students to indicate their needs and wants. Picture boards can be individualized so that specific students use their own boards to show teachers what they want to do or what they need. This gives the teacher direct information about the student's needs, rather than having to guess at the problem at hand. Because of their exceptionalities, students with low-incidence disabilities are usually educated in a setting designed for their support; without this special setting, these students may not receive the appropriate educational program. Many school districts have created facilities to provide educational opportunities for students between the ages of 16 and 21, and certified teachers must serve this population. Without appropriate training, many special education teachers find it difficult to teach such students, and opportunities for the students to master objectives are limited by their disabilities. Sustained effort on the part of teachers results in good achievement for the students in the classroom. The problem for teachers is to find ways to modify their teaching techniques and students' behaviors to produce learning. There must be a strictly defined plan for every student in this group through the required Individualized Educational Plan (IEP), and there must be some methodology for intervening to produce mastery of standards. Students in this group are often difficult to teach and to control, and sometimes their undesirable behavior prevents them from learning. The primary problem

Sunday, August 25, 2019

Operations Management Issues and Proposed Solutions Term Paper

The illustrated lack of communication and cooperation between the networks and applications teams is a recipe for disaster, in terms of failures to coordinate vital information on the performance of new products off the production line. There is an increased need for integration within work domains because of the highly technical content of the subject, which is interlinked and interconnected through IT (Brocke 198). A failure by one department translates into a failure by the whole production team, because all departments depend on one another for success. This requires proper process monitoring and management to ensure comprehensive process management, basing these decisions on calculations of economic effects (Brocke 282). This should be done in an interactive manner, ensuring that not only the employees but also the managers and stakeholders are involved, to guarantee a shared attitude towards the company's goals. Perspective: the proper and effective use of IT in operations management faces numerous challenges which, if not addressed sufficiently, erode the quality of operations. This concern has been voiced by quality assurance departments within and outside the company, because the effect of the breakdown in communication has been reflected in the quality of the services and products produced by the two departments mentioned above. The nature of CAG Inc.'s business operations requires excellent logistical support to ensure its services and products are efficiently delivered to customers. Process management in the company depends on the success of its IT in terms of application and utilization. Evaluation of the company's communication capabilities and their... This essay stresses that changing the organizational structure of the company, especially in production, from vertical integration to horizontal integration will create both new opportunities and challenges. In the context of this paper, changing from a vertical structure to horizontal integration will help remove communication barriers within the organization: horizontal organization empowers employees to make their own decisions, and collaboration occurs seamlessly. This paper concludes that advances in information technology have created more complexity and, with it, more complications in defining costs and risks. CAG claims to be a technology company that employs a high-calibre, technically skilled workforce. One of the proposed strategies is to build the system-monitoring product in house, with resources from both the information systems and technology groups. In order to proceed in this direction, the management needs to understand specific business processes and take into account strategic goals, external partners, and required systems support, all of which deserve thorough investigation. They also need to evaluate common business factors, such as project and business validation, before choosing the right solution approach. There are some inherent advantages to developing an in-house product, such as leveraging the skilled workforce internal to the company; in-house products also tend to be cheaper because the IT department is a sunk cost.

Investment Theory, Rational and Irrational Essay

For example, a man may instantly fall in love with a woman and propose to marry her, moved solely by her physical beauty; but the same man would not invest in a company solely because he was impressed by its rich and luxurious office premises. He would certainly make further enquiries before deciding to take any step. In economics, and when making profit-and-loss decisions in general, we see people at their rational best. Nonetheless, human beings are still good old Homo sapiens, and the much-anticipated rise of Homo economicus never really took place. We make mistakes, we come under the sway of our emotions, we give in to momentary whims often enough and later come to regret them just as often. There are differences from person to person, of course: some of us are more intelligent, practical, cool-headed and experienced in arriving at decisions, while many others are not as rational and practical. All in all, though, a significant degree of irrationality and inconsistency has been found at play when people make economic decisions. A hybrid branch of economics and psychology called behavioural finance has evolved to study the element of irrationality in decision making; it endeavours to better understand and explain how emotions and cognitive errors influence people when they are making investment-related or other kinds of monetary decisions. In fact, behavioural economics consists of theories and empirical investigations into human responses to risk, and as such its insights are relevant to any field where decision making is involved and a significant element of risk is present. A basic, almost commonsensical, finding in this field is that people tend to be more risk-averse than generally thought. In 1979, Daniel Kahneman and Amos Tversky propounded their "Prospect Theory", studying human behaviour in relation to risk. In essence, they found that, contrary to the dictates of logic taken for granted in the standard expected-utility theory of neoclassical economics, people place different weights on gains and losses and on different ranges of probability. In simple terms, individuals are generally much more distressed by prospective losses than they are pleased by equivalent gains. To give this rather subjective tendency a more concrete measure, some economists have concluded that the difference is almost twofold: people perceive a loss as roughly twice as painful as the pleasure derived from an equivalent gain. But there is an interesting twist to this observation. It has been found that, faced with a sure gain, individuals become risk-averse, while faced with a sure loss they become more willing to take risk. For example, offered a choice between winning 10 for certain and winning 20 or nothing with a 50% chance each, most people choose the former, even though the two options have the same expected value of 10. In a real-life situation, faced with a sure gain of 10, people become risk-averse and are less likely to go for 20 with only a 50%

Saturday, August 24, 2019

Factors affecting learners' behaviour in Gauteng Special schools Essay

The challenges of intellectual disabilities, especially in their early stages, have become a priority because of the initial intangibility of the problems. Indeed, the multifaceted and multilateral aspects of intellectual disability are not only complex by nature; they also require socio-psychological interventions to understand and interpret. In addition, the behavioural problems of pupils with intellectual disabilities are an important issue that needs to be looked at from the wider perspective of social development. While the special schools are doing commendable work for these learners, the increasingly worsening behaviour of pupils with developmental disabilities has become a major concern. Special teaching methods to promote learning are needed, as are behaviour plans both to monitor and assess behaviour and to develop coping strategies for both teacher and student so that learning can occur. Thus, the research focuses on identifying factors that affect learners' behaviour in the special schools, especially in Gauteng, South Africa.

2. Keywords: severe intellectual disability, special school, inclusive education, support system, learning disability, social model of disability.

3. Background
Intellectual disability can be described as a 'learning difficulty that is characterized by limitations in various skill areas. These may include limitations in self-care, daily living, social interaction, judgment and self-direction' (IHC Inc., Philosophy and Policy, 1996, p. 5). Some forms of intellectual disability, such as severe autism, become evident in early childhood; other forms take longer, and may manifest at school age (Notbohm, 2005, preface). In contemporary times, the concept of disability has moved beyond the constraints of medical terminology and has embraced a socially relevant stance, keeping the needs of the learners as the main objective of all policies and plans. Terzi (2004) believes that the social model is a powerful and important reminder for people at large to face issues of inclusion vis-a-vis persons with disabilities; inclusion of the disadvantaged population, he believes, is a fundamental as well as a moral issue. The World Health Organization (2005) reports that people with disabilities are important contributors to society, and that allocating resources to their rehabilitation would be an investment. Hence, measures that support their integration into mainstream society become highly pertinent issues within the development agendas of nations. The inclusion of disabled students can be broadly described as efforts to increase the participation of children with disability in the school by expanding course curricula to incorporate their needs (Booth & Ainscow, 1998). Thus, inclusion ensures that students with special needs are provided with opportunities to receive an education and become capable of contributing to society as a whole. In inclusion, students with special needs are integrated into the mainstream school without much change to the school environment (Minto, 2007; Mittler, 2000). The special schools in South Africa have shown great determination to promote education among children and adults with disabilities. There have been significant reforms in the education system with the National Education Policy Act of 1996 and the South

Friday, August 23, 2019

Are equity markets efficient Assignment Example | Topics and Well Written Essays - 750 words

Therefore, allocative efficiency is determined by using a very complicated economic model [2]. The financial literature has also suggested that, apart from other factors in global and local markets, operational and informational efficiency have a very essential role in shaping market allocative efficiency. For instance, if some investors realise that certain dominant investors in the market hold essential information on the market trend, then the possibility of their demanding a higher rate of return on assets is relatively high. The liquidity of asset prices also has a considerable role in shaping allocative efficiency. Based on the available information, it is fair to state that the existing microstructure finance literature does not answer the specific question of the nature and profitability of the market. Consequently, equity markets are in most cases inefficient. Moreover, the level of market efficiency depends on the degree of operational and informational efficiency. The allocation of funds to any project depends on the available information regarding the productivity and worth of the project or investment. Very few investors are interested in projects that offer limited rewards on their investment. Moreover, dominant investors in the modern market control and manage the operation and productivity of specific markets, and this dominance by prominent investors increases the rate of inequity in the modern market. In an inequitable market, most decisions are formulated and implemented by individuals who have a personal interest in the market [3].

Operational efficiency
Operational efficiency is the evaluation of the cost incurred in transferring funds from savers to investors; it is therefore used to describe the overall transaction cost in the financial sector. In an ideal market, the transaction cost should reflect the marginal cost of offering services to market participants [4]. Moreover, the management and execution of operational efficiency is in most cases based on the liquidity of a specific market. However, the modern market has proved to be inefficient due to the lack of mechanisms that would allow investors to transact business of a reasonable size without paying a huge transaction cost. Researchers and financial theorists have also claimed that sophisticated investors and entrepreneurs invest in markets with many liquidity-based investors in order to hide their trades. This means that the level of informational efficiency is associated with the level of operational efficiency: the amount of information available about prices in the market determines the level of liquidity in the market. The association of the amount of resources in the market with its liquidity level explains the level of inequity in modern market efficiency [5].

Informational efficiency
The asset market is presumed to be informationally efficient if asset prices have fully incorporated the required information on fundamental values. The efficiency of markets is therefore defined by the price information available to market participants. However, informational efficiency is to some extent weakened by the inclusion of past prices in current prices. The incorporation of past prices into new prices rules out the use of technical trading rules to make excess returns [6]. A market is in "semi-strong form" efficiency

Thursday, August 22, 2019

Ethics - Morality Essay Example for Free

5. FAIRNESS. Ethical executives strive to be fair and just in all dealings. They do not exercise power arbitrarily, nor do they use overreaching or indecent means to gain or maintain any advantage, nor take undue advantage of another's mistakes or difficulties. Ethical executives manifest a commitment to justice, the equal treatment of individuals, and tolerance for and acceptance of diversity. They are open-minded, willing to admit they are wrong and, where appropriate, to change their positions and beliefs.

A person who is caring exhibits the following behaviors:
* Expresses gratitude to others
* Forgives others
* Helps people in need
* Is compassionate

A person who is fair exhibits the following behaviors:
* Is open-minded and listens to others
* Takes turns and shares
* Does not lay the blame on others needlessly
* Is equitable and impartial

A person who is trustworthy exhibits the following behaviors:
* Acts with integrity
* Is honest and does not deceive
* Keeps his/her promises
* Is consistent
* Is loyal to those that are not present
* Is reliable
* Is credible
* Has a good reputation

FAIRNESS. Fairness is a tricky concept. Disagreeing parties tend to maintain that there is only one fair position: their own. But while some situations and decisions are clearly unfair, fairness usually refers to a range of morally justifiable outcomes rather than the discovery of one fair answer.
Process: A fair person uses open and unbiased processes for gathering and evaluating the information necessary to make decisions. Fair people do not wait for the truth to come to them; they seek out relevant information and conflicting perspectives before making important decisions.
Impartiality: Decisions should be unbiased, without favouritism or prejudice.
Equity: It is important not to take advantage of the weakness, disadvantage or ignorance of others. Fairness requires that an individual, company or society correct mistakes promptly and voluntarily.

5. CARING. Caring is the heart of ethics. It is scarcely possible to be truly ethical and not be genuinely concerned with the welfare of others, because ethics is ultimately about our responsibilities toward other people. Sometimes we must hurt those we care for, and some decisions, while quite ethical, do cause pain; but one should consciously cause no more harm than is reasonably necessary.
Charity: Generosity toward others or toward humanity.
Cheerfulness: The quality of being cheerful and dispelling gloom.
Generosity: Liberality in giving or willingness to give.
Helpfulness: The property of providing useful assistance, or friendliness evidenced by a kindly and helpful disposition.

PERSONAL RESPONSIBILITY. Another basic customer right involves our taking personal honesty and responsibility for the products and services that we offer. There is probably no issue that will more seriously affect our reputation than a failure of responsibility. Many ethical disasters have started out as small problems that mushroomed. Especially in service businesses, where the "products" are delivered by individuals to other individuals, personal responsibility is a critical issue.

Wednesday, August 21, 2019

Different Power Factor Correction Engineering Essay

Different Power Factor Correction Engineering Essay Different power-factor correction methods are reviewed, as well as the back ground to the power-factor. Problem is arising in modern electrical distribution systems due to the connection of rapidly increasing numbers of non-linear electronic loads. The basic principles of harmonic generation and limitation in power systems are first discussed. The main part presents a critical review of commonly used power-factor correction techniques that have been identified in a literature review, and highlights the advantages and disadvantages of these techniques. After the analysis of methods and their working principles, the development of the most promising systems such as the boost-type PFC converters is considered. Finally, a project plan is proposed for the next phase of the dissertation work. This will involve investigating the operation, dynamic control and performance of the most promising systems by conducting a theoretical study and setting up and running a number of simulation models using the MATLAB/SIMULINK software tools. Key-words: Power factor correction, harmonic mitigation, PFC converters Contents List of Abbreviations and Principle Symbols Abbreviations: AC Alternating Current APF Active Power Filter CCM Continuous Conduction Mode DC Direct Current DCM Discontinuous Conduction Mode DF Distortion Factor FFT Fast-Fourier analysis IGBT Insulated Gate Bipolar Transistor PF Power Factor PFC Power Factor Correction PWM Pulse Width Modulation RMS Root Mean Square THD Total Harmonic Distortion TDD Total Demand Distortion Principle Symbols: Power Factor Distortion Factor Displacement Factor h Harmonic contents RMS value of the line-current fundamental component RMS value of the line-current harmonic components Total RMS value of the line-current 1. Introduction It is now clearly visible from power systems journals and general literatures that power-factor correction is now an important research topic in the power systems area. As non-linear power electronic systems are increasingly being connected to power systems in greater quantities as well as capacities for such applications as power quality control, adjustable speed drives, uninterruptible power supplies, renewable energy-source interfacing, and so on [1] [2]. The power quality regulators of those systems are highly concerned now, because some of their drawbacks, such as harmonics generation and reduced power factor can spoil their advantages [3]. Power electronic systems are effective because their high efficiency and rapidly adjustable output. However, when processing and controlling the input electric energy suitable for users [4], power electronic systems often operate at a low power-factor, and that may cause serious problems to power system operators by reducing distribution comp onent RMS current capacity and to other users on the same network by distorting the sinusoidal supply voltage seen by other user connected at the same point of common coupling as a heavy electronic or power electronic load.. Of all power line disturbances, harmonics are probably the most serious one for power users because they exist under steady state conditions. This literature review considers harmonic generation prediction of power electronic systems and examines the effectiveness of harmonic mitigation methods. The boost-type power factor correction converters will be taken as the core power factor correction method for future research. 
The existing publications arising from research in this area and their conclusions have set a good foundation for this report. Results in this report will be based on a theoretical study and simulation studies using software MATLAB/SIMULINK power-factor correction system models which be developed. 2. Background This section provides discussion on the fundamental principles of power-factor correction, including definitions of power-factor terms and a consideration of the common standards which affect how harmonics controlled in power system. Also, the harmonic generation prediction of ideal power electronic systems is discussed at the end of this section 2.1 Important definitions and objective of power-factor correction The power factor (PF) is the ratio of the real power to the apparent power [5] and gives a measure of AC supply utilization on how efficient that the energy is supplied and can be converted into effective work output. The definition of power factor is as shown below: (2.1) In the definition, the value of the power factor is always between 0 and 1, and can be either inductive or capacitive. That means average power is always lower than apparent power. The reason is harmonic components and phase-displacement angle,. Hence, the power factor equivalent can be described as below: (2.2) is termed the (current) distortion factor (DF) and represent the harmonic components in the current and relative to wave shape [6]. DF is defined as the ratio of the fundamental current component to the RMS current value [4]. is termed the displacement factor and defined as the current and voltage waveform phase angle [6]. Displacement factor has unity value for in-phase current and voltage. The increase of displacement angle will cause larger reactive current in the power system [4]. Hence, the objective of power-factor correction is to decrease the current distortion or harmonic content and increase the displacement factor or bring the current in phase with the voltage. The closer power factor is to the unity value, the higher efficiency and lower energy loss. And the power system will operate at a lower supply voltage. Another commonly used index for measuring the harmonic content of a waveform applied for current distortion level is total harmonic distortion, THD. THD is the distortion current as a percentage of the fundamental current. The equation of THD is given by: or (2.3) In AC supply utilizations, power factor,, can be expressed in terms of THD and the displacement factor: (2.4) With these equations, it is easy to see that high THD leads to low power factor and even damaging of the power network. THD and power factor will be used together in the following work as important index in measuring performance of the harmonic mitigation techniques. 2.2 Effects and limitation of harmonic distortion on power system In any power conversion process, to get high efficiency and low power loss are important for two reasons: the cost of the wasted energy and the difficulty in removing the heat generated due to dissipated energy [4]. The performance of power output efficient is defined by several factors. The power factor and harmonic distortion are the most important ones. 
References [8] [9] show the main issues of harmonics within the power system include the possibility of them exciting series and parallel resonances which cause a further increase of harmonic levels, low efficiency caused in generation, transmission, and utilization of electric energy, increasing thermal losses in the electrical components and shortening their useful life and causing malfunction of motors and other components in the power system. Those effects can be divided into three general categories: Thermal stress, Insulation stress and Load disruption [10]. Those represent effects on increasing equipment losses and thermal losses, increased value of current drawn from the power system and insulation stress and failure to action and malfunction of some electrical devices and systems. The IEEE Standard 519-1992 recommended harmonic current limits with an additional factor, TDD. This is very same as THD except the distortion factor is expressed by load current instead of fundamental current magnitude [11]. Hence, the equation of TDD is given by: (2.5) Therefore, IEEE Standard 519-1992 limitation for harmonic current in power system expressed with TDD is shown below: Maximum harmonic current distortion in percent of Individual harmonic order (Odd harmonics) TDD 4.0 2.0 1.5 0.6 0.3 5.0 20 7.0 3.5 2.5 1.0 0.5 8.0 50 10.0 4.5 4.0 1.5 0.7 12.0 100 12.0 5.5 5.0 2.0 1.0 15.0 >1000 15.0 7.0 6.0 2.5 1.4 20.0 Even harmonics are limited to 25% of the odd harmonics limits above. Table 2.1 IEEE 519-1992 Standard for harmonic current limits [12]. Also, there are limitations for power system harmonic voltage and power factor regulation, like IEC 61000-3-2 standard. The methods for power factor correction should not cause disturbances for other aspects of performance. 2.3 Harmonics generation in power electronic systems Power electronic systems may naturally operate at low power-factor due to large harmonic generation and phase shifting in controlled devices like controlled rectifiers. Understanding characteristics of the harmonic current is essential for harmonic mitigation research. Based on the form on the two sides, converters can be divided into four categories [4] including: 1. AC to DC (rectifier) 2. DC to AC (inverter) 3. DC to DC 4. AC to AC Power electronic systems always draw high quality of low frequency harmonic current from the utility and hence cause problems for other users. Take an ideal single-phase diode bridge rectifier as example, the total harmonic distortion can be up to 48.43% [4] and the 3rd harmonic current can be as large as one third of the fundamental current. If a non-linear load is considered, the displacement factor will fall down from unity value and cause a decrease of power factor. This is surely over the harmonic standards limitation and needs to be corrected. Theoretically, Rectifiers and choppers output DC and draw a fundamental AC source current and large low frequency harmonic content. On the other hand, inverters output low frequency AC and supply fundamental current and harmonic content usually at higher frequency. Harmonic contents can be reduced by harmonic mitigation techniques and hence increase power factor. Take Fourier analysis result diagram of single-phase diode bridge rectifier and PWM control Buck converter as example. (a) (b) Figure 2.1 Fourier analysis diagram for input current of (a) single-phase diode bridge rectifier and (b) PWM control Buck converter. 
2.4 Software tools for harmonic mitigation evaluation To filtering harmonic current in the power system, the frequency of harmonic contents is essential. However, in practice, the harmonic frequency is not absolutely equal to the theoretical value and that makes analysis of harmonic frequencies very difficult. The reason is stray inductance and capacitance in the system and reverse recovery time and forward voltage drop of non-ideal devices [1]. To analyze harmonic contents, appropriate software can be helpful. In this project, the software chosen to help analyzing harmonic current drawn by power electronic systems is MATLAB/SIMULINK. Taking the three-phase diode bridge rectifier as an example, a simulation model can be established as shown below. In the model, a three-phase 50Hz AC power supply is used for a resistive load and most devices are not ideal. The model is followed by the diagram of input current waveform and frequency spectrum of AC input current. Values of each order harmonic content and total THD are given by Fast-Fourier (FFT) analysis in powergui analysis tools. With the help of Fourier analysis, the performance of harmonic mitigation techniques can be evaluated and compared quickly. Figure 2.2 Simulation model for three-phase diode bridge rectifier. Figure 2.3 Waveform of rectifier input current (phase A). Figure 2.4 Frequency spectra of AC input current of three-phase rectifier. 3. Power Factor Correction Techniques After tens of years developing and improving, various types of power factor correction techniques or harmonic mitigation techniques can be chosen to solve power factor problem. Those techniques can be divided into five categories [11] [13] as shown below: 1. Passive filters Passive filters can improve power factor with low cost and reduce high frequency harmonics effectively. However, they are always in large size and cannot vary flexibly with system changes [4] [14]. If tuning reactors are not used, parallel resonance may occur in operation [15]. 2. Active filters Active filters improve power factor and provide stable output even under varying supply condition, and reduce harmonics in the output current effectively and efficiently [4] [16]. These, however, always requires much higher costs and the harmonic currents they injected may flow into other system components [13] [14]. 3. Hybrid systems Hybrid active filters combine active and passive filters together in various forms [17]. Hence they can reduce initial and running costs and improve performance of the filter [11] [13]. Smaller filter inductor, smaller dimension, light weight and better filter performance hybrid system take advantages of both passive and active filters [18]. However, the complexity of operation is the main drawback of hybrid systems. 4. Phase multiplication Increasing the pulse number of power converters can raise the lowest harmonic order generated by the converter [2]. Typically, 6-pulse converter has the lowest harmonic order of 5 [1]. When rising pulse number to 12, the lowest harmonic order can increase to 11. As value of harmonic current are ideally proportional to fundamental current value [4], the amount distortion of the power system can de reduce to a low level. On the other hand, the effectiveness of this technique is based on balanced load [13] which rarely happens in practice. 5. PWM PWM converters have much better performance compared to traditional converters like diode rectifiers and square-wave control inverters [4]. 
As a control strategy improvement, PWM harmonic mitigation technique can even used with some devices for traditional converters and hence get broad application prospect [11]. However, the topology complexity and difficult on designing controllers [19] makes the use of PWM is limited. The objective of these techniques is to make the input current nearly a pure sinusoidal waveform and hence to improve the power factor in electrical supply system. All these five techniques are discussed separately in the following work. 3.1 Passive filters Passive filters have widely been used to absorb harmonics generated by the power electronic systems, primarily due to their simplicity, low cost and high efficiency [20]. Passive filters are always consists inductors, capacitors and damping resistors [21]. The objective of the passive filter is to stop the flow of the harmonic current from disturbing power system, either by preventing them with the usage of series filters or diverting them to a shunt path [9] [11]. That is the different between series filter and shunt filter, too. Series filters can be tuned LC system or only a single inductor in the system. Parallel inductance and capacitance are tuned to provide low impedance for fundamental frequency current and high impedance for a selected frequency current, always high level harmonic current. The series tuned filters are simple and reliable to use. The circuit configuration can be shown as below. Figure 3.1 Series LC tuned filter. The series tuned filters are always used as input filter for power electronic systems. However, a big drawback limits the using. If the series tuned filter is used in a VSI system as the input filter for the inverter, several order harmonic current need to be filtered, 5th, 7th, and so on. Each order harmonic current required an individual filter, and hence the size of the system can be intolerable. On the other hand, shunt filter have much more types including shunt-tuned filter, double-band pass filter and 1st, 2nd and 3rd -order damped filters. Also, broadband filters are good solution for filtering wide range of harmonics [22]. The circuit configurations of these widely used passive filters are like shown below. (a) (b) (c) (d) Figure 3.2 Typical harmonic filters: (a) Single-tuned filter (b) Double tuned filter (c) High-pass parallel filter (d) C-type high-pass filter [5] [27]. A few single tuned filters cope with large level harmonic contents and a high-pass (2nd order) filter filtering high frequency harmonics is the typical model for shunt passive filters and can get better characteristic than series filters [24]. Take the single tuned filter as example, single-tuned filter also called the band-pass filter as only a selected frequency of current can pass in low impedance. The tuning frequency of the single-tuned filter could be: (3.1) And at this frequency, the impedance of the filter is: (3.2) where s is the Laplace operator, L represents value of inductance and C represents the capacitance value. However, mostly passive filters can only filtering 30% of harmonic current in the power system [23] and can not match IEEE 519-1992 standard well. Even the broadband filter, which can filter a range of harmonic contents and reduce system THD to approximately 10%, the resonance caused by the filter and the big size of inductor and capacitor still limit the usage of the filter. So we can get the list of advantages and disadvantages for passive filters shown in table 3.1. 
Advantages Disadvantages Effectively for filtering high frequency harmonics Low availability for low frequency harmonic filtering Very low cost and reliable Bulky devices and inflexible devices parameters Simple structure Individual branch is necessary for each dominant harmonics in the system High probability resonance Table 3.1 List of passive filter performances [4] [14] [25] [29]. 3.2 Active power filters (APF) The basic idea of an active filter is to compensate current or voltage disturbance so as to reduce the reactive power electronic systems drawn from the power system [23]. The active filters using in power system are not the same as what we use in electronic circuits. The active filters conventional means combined operational amplifiers and passive components like inductors and capacitors, and always been used in electronic circuits operating under low voltage. That is the beginning of the active compensation applications and came out earlier than active filters using in power systems. The active filters which are used in power system for active power compensation and harmonic compensation are always called Active Power Filter (APF) [30]. The active in APF means the filters are act as power sources or generators and provide compensation currents which have opposite phase angle with the harmonic currents in power system [30]. Similarity between electronic circuit active filters and pow er system active filters are the requirement of external power supply. The active filters which are talked in the following parts are all means APF. With the active power filters, the compensation for reactive power and for harmonic current can de done at the same time, hence efficiency on harmonic compensation and also dynamic response are all be improved [23]. The trend of active power filters began in 1970s and was introduced by Mr. Akagi. The incentive for active filters is the inductor is not appropriate to use under high frequency, so the trend is to replace the inductor with active components. As the harmonic contents in the power system various frequently, fast response of active filters required a good control strategy to make active filters smarter and faster. But more complex devices and sophisticated control strategy are required, that all makes active filters more expensive and hard to use [26]. Active filters can also be classified by converter type as shunt-type active filters and series-type active filters. The diagrams of two basic types of active filters are shown below. The other way to classify active filters is the phase number of filters which will be discussed later. (a) (b) Figure 3.3 Diagrams of (a) Shunt-type active filter and (b) Series-type active filter [11] [28]. Series active filters are good at compensate voltage harmonics and capacitive, voltage-source loads. When applied to an inductive or current-source load, a low impedance parallel branch is necessary. Similarly, shunt active filters are always used with inductive, current-source loads and high current distortion conditions. Sometimes over current condition occurs with the use of shunt-type active filters [31]. Typical working principle of the active power filter is: 1. Detection. The sensor detects the waveform of the instantaneous load current and feedback to the controller, which is typically a digital processing block. 2. Analysis. Load current is always high distortion current including fundamental current and many orders of harmonic current. 
The processor must distinguish the fundamental current with the harmonic currents and give out the information including frequency, value, and phase angle of harmonic contents, so as to control the power source inverter providing opposite phase current of harmonic current. 3. Compensation. The power source inverter draws current from individual DC voltage supply and converting to required current to cancel harmonic currents. Like the diagram shown below. Figure 3.4 Diagram of compensation characteristics [31]. Hence, we can draw a conclusion of advantages and disadvantages of active power filters shown in the table below. Advantages Disadvantages High compensation efficiency and high ability on harmonic compensation Low reliability with sophisticated control system and devices Small size components Difficult to construct a large rated current source with a rapid current Fast action on harmonic current variation makes good dynamic response High initial costs and running costs No resonance causing Complex control strategy and controllers are necessary Suitable for widely supply and load conditions, like unbalanced power supply Table 3.2 List of active power filter performances [13] [22] [30] [31]. 3.3 Hybrid systems Hybrid filters comes from the idea to combine the advantages of both passive filters and active filters together hence to get brilliant performance on harmonic mitigation [17]. Combine passive filters and active filters can significantly reduce costs and improve the compensation characteristics in the power system. Also, various types of hybrid systems of passive and active filters can get better performance than only passive or active filters. Like the reference [18] and [20], small rating active power filter and passive filter connected in serial or shunt type. Smaller filter inductor, smaller dimension, light weight and better filter performance hybrid system take advantages of both passive and active filters [18]. However, as the basement of the hybrid power filters are always active power filters, the initial costs and control complexity is still big disadvantages of hybrid systems. 3.4 Phase multiplication The purpose of phase multiplication is to increase the pulse number of the converter and hence to increase the harmonic order and frequency [4]. The low frequency harmonics can be mitigated effectively and phase multiplication technique does not cause serious resonance and other bad effects on power system performances [13]. The practical application of phase multiplication technique, the multipulse converters, have the ability to draw low distortion current from power source and generate DC current with low level ripple [32]. Typically, 6-pulse converter has the lowest harmonic order of 5 [1]. When rising pulse number to 12, the lowest harmonic order can increase to 11. As value of harmonic current are ideally proportional to fundamental current value [4], the amount distortion of the power system can de reduce to a low level. Also, the multipulse thyristor converters can output various value current by controlling the thyristor fairing angle () [32]. The drawbacks of phase multiplication technique are mostly the contradiction between the cost and output characteristic. If controlled output is required, the multipulse converter should contain at least 12 switching devices and that can be a big amount of costs. On the other hand, multipulse converter only use diodes may operate on low efficiency [11]. 
3.5 PWM PWM (Pulse Width Modulation) is a modern control technique for power electronic systems. PWM converters have much better performance compared to traditional converters like diode rectifiers and square-wave control inverters [4]. Like the phase multiplication technique, PWM control can raise the frequency of harmonic contents of current so as to reduce the effect caused by harmonics. Also, converters using PWM control can have high efficiency and small size. With all these advantages, PWM control absorbed great concern in modern power conversion systems. However, the topology complexity and difficult on designing controllers [19] makes the use of PWM is limited. 3.6 Power factor correction converter Power factor correction (PFC) converter is a typical active power factor correction method. As a mature technique for power factor correction, PFC converters have been widely used in power electronic systems to achieve high power factor (PF) and low harmonic distortion [33]. PFC convener forces the input current follow the input voltage, which makes the input current drawn from power supply nearly in a unity power factor [34]. The Boost-type PFC converters are the most used topology which have many advantages, such as low level ripple in the input current, high power factor, small size and simple circuit structure [35]. A typical circuit diagram of Boost-type PFC converter is as shown below from reference [36]. Figure 3.5 Typical circuit diagram of Boost-type PFC converter [36]. As we seen in the diagram before, conventional PFC converter consists two main stages [33] [37]: Power factor correction stage. This stage is combined with a diode rectifier and a DC/DC converter and used to correct power factor of the input current drawn from the power system. The most used type of chopper is Boost chopper. Also, the new Buck and Cuk type PFC converters are increasingly being used now. The switching working principle can be divided into two types, DCM and CCM. 2. DC/DC converter The chopper here is used to convert the power output voltage and current match the users demand. Since choppers only drawn low distortion power from supply, the typical filter on the utilization end is always a passive filter. This is the working principle for conventional PFC converters, the two-stage DCM/CCM Boost-type PFC converter. However, this type of PFC converter has some disadvantages and need to be improved [33]-[39]: 1. Stage number Individual control system and switching devices are required for each stage of PFC converter, hence increasing the costs of the whole system and cause some other problems, such as power density, transmission efficiency and control response [38]. Also, the design of control system can be a challenge. A new one-stage PFC converter topology has been introduced to power factor correction research area. The circuit diagram is as shown in figure 3.6 [36]. The combination of the power factor correction converter and the forward converter may bring many advantages point as below [36]: 1. High power factor correction performance 2. Reduced value of ripple in the DC output 3. Low initial cost and running cost 4. High efficiency and easy control system And so on. Figure 3.6 Circuit diagram of single stage PFC converter [36]. 2. Converter type Like shown in figure 3.6, Buck converter is increasingly being used in PFC converters. Also, Cuk converter and other type of choppers are becoming good choice for PFC converters [36]-[39]. 
The Buck type PFC converter was rarely used since its high input current distortion. However, with the characteristic improving of the Buck type PFC converter, it can reach good performance with specific dual mode duty cycle control scheme [36]. The main advantage of Buck type PFC converter is easy to reduce the stage number to one stage. 3. Devices and control strategy One of the most important aims in the design of power electronic systems is the reduction of the size of the passive devices, since it allows increase on the power density and the reduction in the initial and running cost. As inductor and capacitor are still using in the PFC converter, the reduction of them can be very important [33] [37]. However, the improvement of devices must base on the developing of the control strategy [37]. With a good detect and control system, the size of the inductor and capacitor can be reduced while the harmonic content can still meet the requirement [33]. The further analysis and improvement of PFC converter based on this literature review will be an important work in the last stage of project. 4. Conclusion This literature review provides a critical study on power factor issues and power factor correction techniques. A theoretical review of power factor definitions and harmonic generation by power electronic systems are presented at the beginning of the paper. The performance of five basic types of harmonic mitigation techniques has been discussed with the support of many previous research publication and their results. The PFC converter is chosen as the promising system for power factor correction after the analysis and comparison. The simulation model establishment and simulation comparison of power factor correction techniques will be important works for the next period of the project. Also, design rules and guidance of PFC converters will be designed in the next period, too.

Tuesday, August 20, 2019

Monitoring Therapeutic Drugs: Strategies

Monitoring Therapeutic Drugs: Strategies This article provides an introduction into some of the current techniques and assays utilised in Therapeutic Drug Monitoring (TDM) TDM is a multi disciplinary function that measures specific drugs at intervals to ensure a constant therapeutic concentration in a patient blood stream. The selection of an analytical technique for TDM involves a choice between immunoassay and chromatography technique. Once the methodology has been chosen, there are also numerous options available within these categories including FPIA, EMIT, KIMS, HPLC and nephelometric immunoassay. An overview of each method is given and its processing of drugs. The future outlook in the methodology involved in TDM is also explored and discussed. INTRODUCTION Therapeutic drug monitoring (TDM) is a multi disciplinary function that measures specific drugs at selected intervals to ensure a constant therapeutic concentration in a patient blood stream. (Ju-Seop Kang Min Hoe Lee) The response to most drug concentrations is therapeutic, sub-therapeutic or toxic and the main objective of TDM is to optimize the response so the serum drug concentration is retained within the therapeutic range. When the clinical effect can be easily measured such as heart rate or blood pressure, adjusting the dose according to the response is adequate (D.J. Birkett et al). The practice of TDM is required if the drug meets the following criteria: Has a narrow therapeutic range If the level of drug in the plasma is directly proportional to the adverse toxic If there is appropriate applications and systems available for the management of therapeutic drugs. If the drug effect cannot be assessed by clinically observing the patient (Suthakaran and C.Adithan) A list of commonly monitored drugs is given in table 1. The advances in TDM have been assisted by the availability of immunoassay and chromatographic methods linked to detection methods. Both techniques meet the systemic requirements of sensitivity, precision and accuracy. Within both methods are many numerous options and will be further explored in this title. Ideally the analytical method chosen should distinguish between drug molecules and substances of similar composition, detect minute quantities, be easy to adapt within the laboratory and be unaffected by other drugs administrated. An overview of the current analytical techniques and future trends in TDM is emphasised in this title and its role in laboratory medicine. NEPHLEOMETRIC IMMUNOASSAY AND its USE IN TDM Immunoassays play a critical role in the monitoring of therapeutic drugs and a range of techniques in which the immunoassay can be existed exist. Nephleometric immunoassays are widely used for TDM and are based on the principle of hapten drug inhibition of immunoprecipitation. The precipitation is measured using nephelometric principles that measure the degree of light scattering produced. In some cases Turbidmetry principles can be applied to measure precipitation via the amount of transverse light. In nephleometric immunoassays, if the drug molecule is a monovalent antigenic substance, a soluble immunocomplex is formed. However if the drug molecule is a multivalent antigenic substance, whereby two drug moieties are conjugated to a carrier protein, the conjugate reacts with the antibody to form an insoluble complex. The insoluble complex may compose of numerous antigens and antibodies, thus scattering the light. Therefore nephleometry of turbidmetry techniques are required to measure the reaction. 
In respect to this principle precipitation inhibition of a drug can be measured. The test sample (serum) is introduced to a fixed quantity of polyhaptenic antigen and anti drug antibody. The serum drug antigen competes with polyhaptenic antigen for binding to the anti drug antibody. Any free drug present in the sample inhibits the precipitation between the antibody and polyhaptenic antigen. Therefore the drug concentration ids indirectly proportional to the formation of precipitate whi ch is quantified by a nephelometer. The more polyhaptenic antigen present, the more precipitate is formed until the maximum is encountered. Further addition of antigen causes a reduction in the amount of precipitate formed due to antigen excess. The use of nephelometric immunoassay for TDM is termed competitive due to the competitive binding for the sites on the antibody by the antigen. It also distinguishes the drug assay system from the conventional nephleometric immunoassay for proteins. Variations of this assay exist including: The use of saliva or CSF may be used as an alternative to serum. Both alternative matrixes contain less light scattering molecules and so a larger volume of sample is used in order to compensate. Turbidmetric methods may also be applied to quantitative immunoprecipation . turbidmetric analysis is preformed at a lower wavelength and similarly detects immunoprecipation like nephelometric techniques. End point analysis of immunoprecipitaion is commonly employed, however rate analysis is also applicable. Addition of formaldehyde blocks further precipitation and is utilised in end point analysis. Agglutination inhibition immunoassay can also be detected by nephelometric immunoassay systems in which the drug or hapten is directly linked onto the surface of the particle and is generally suitable for low serum drug concentration while precipitation inhibition detects concentration above 1ug/ml If homologus and heterologus drug concentrations are utilized for antibody and polyhaptenic antigen preparations, sensitivity and specificity may be increased. Polyclonal and monoclonal antibodies may be employed in this assay. The use of monoclonal antibodies removes any interference caused by antibody cross reactivity. Choosing a hybrid cell with the most desirable antibody is difficult and therefore is most likely to be less sensitive than the use of polyclonal antibodies Overall the nephelometric immunoassay is an excellent assay system for TDM. Advantages over other assay systems include its simplicity, speed and low cost. It is a homogenous method that requires no separation steps or isotopes. Only two reagents are required in limited amounts as if the antibody to antigen ratio is not optimum, the sensitivity is decreased. This is due to the formation of less precipitate in the absence of drug. In the presence of a drug, inhibition is less efficient. The sensitivity of the assay depends on antibody hapten binding, however it yields high specificity. Therefore nephelometric precipitation inhibition immunoassays are a novel technique in the clinical practice of TDM. (Takaski Nishikawa Vol 1, 1984) FLUORESCENCE POLARIZATION IMMUNOASSAY AND its USE IN TDM Fluorescence polarization immunoassay(FPIA) is a widely used 2 step homogenous assay that is conducted in the solution phase and is based on a rise in fluorescence polarization due to the binding of the fluorescent labelled antigen with antibody. 
The first step of the immunoassay involves the incubation of the serum sample with none labelled anti drug antibody. If the patient sample contains drug molecules, immune complexes will form between antibody and antigen. The second stage of this assay involves the addition of a flourscein labelled antigen (tracer) into the mixture(.Jacqueline Stanley 2002) The purpose of the flourscein tracer is to bind on any available sites on the drug specific antibody for detection purposes. If the first stage occurred in which the anti drug antibody formed a complex with the drug from the sample, less or no antigen binding sites will be available for the tracer to bind to. Consequently a higher proportion of the flourscein tracer is unbound in the solut ion. If the sample contains no drug an antigen, Step 1 does not occur and the anti drug antibodies will bind the flourscein antigen tracer. In this assay the degree of polarization is indirectly proportional to the concentration of drug present. (: Chris Maragos 2009) Fluorescence polarization is calculated to determine the concentration of drug present. Fluorscein labelled molecules rotate when they interact with polarised light. Larger complexes rotate less then smaller complexes and therefore remain in the light path. When the large immune complex is labelled with a fluorescent tracer, it is easily detected once present in the light path. If no drug was present in the sample, the availability of binding sites on the antibody entices the fluorscein tracer to bind, restricting its motion resulting in a higher degree of polarisation, Thus it is easy to identify that polarization is indirectly proportional to the concentration of drug present. The benefit of utilising FPIA in TDM includes the elimination of processed to separate bound and free labels, an indicator that this assay is time efficient. An unique feature of this assay is that the label used is a flurophore and the analytical signal involves the measurement of the fluorescent polarizatio n. ( Jacqueline Stanley 2002) A standard curve is constructed to determine the concentration of drug present and is easily reproducible due to the stability of the reagents utilized and the simplicity of the method. However FPIA has some limitations and is prone to interference from light scattering and endogenous fluorescent tracers in the samples. To help overcome these limitations variations on the technique is employed including: Use of a long wavelength label The fluorscein tracers utilized produce adequate signals, however light scattering events can interfere with these signals. The use of a long wavelength label permits extended fluorescence relaxation times which may be more sensitive for the detection of high molecular weight antigens on drugs. Use of CE-LIF The use of capillary electrophoresis with laser induced fluorescence detection enhances the sensitivity of this method. This competitive FPIA separates free and antibody bound tracers and utilizes LIFP as a detection system.( David S. Smith Sergei A 2008) Overall FPIA has proven to be a time and cost effective, accurate and sensitive technique in TDM and remains one of the most promising methods in this clinical field. ENZYME MULTIPLIED IMMUNOASSAY TECHNIQUE AND its USE IN TDM Enzyme Multiplied Immunoassay Technique (EMIT) is an advanced version of the general immunoassay technique utilising an enzyme as a marker. 
EMIT is a 2 stage assay that qualitatively detects the presence of drugs in urines and quantitatively detects the presence of drugs in serum.( David S. Smith Sergei A )Both the competitive and non-competitive forms of this assay are homogenous binding based that rapidly analyze microgram quantities of drug in a sample. in the competitive assay, the patient sample is incubated with anti drug antibodies. Antibody antigen reactions occur if there is any drug present in the sample. The number of unbound sites indirectly correlates with the drug concentration present. The second step involves the addition of an enzyme labelled specific drug which will bind to available binding sites on the antibody inactivating the enzyme. A enzyme widely used in EMIT assays is Glucose 6 Phosphate Dehydrogenase which primarily oxidises the substrate added (Glucose 6 Phosphate). The co-factor NAD+ is also reduced to NADH by the active enzyme. Any enzyme drug conjugate that is unbound remains active, therefore only in this case , can the oxidation of NAD+ to NADH occur. An increase in absorbance photometrically @ 340nm correlates with the amount of NADH produced. (Jacqueline Stanley 2002) A non competitive format of this assay also exists, where by drug specific antibodies are added in excess to the sample resulting in antigen antibody interactions if the drug is present. A fixed amount of enzyme drug conjugate is then added which occupy any unbound sites present on the antibody. The active enzyme that is unbound oxidised NAD+ to NADH indicating presence of free enzyme conjugate and subsequently drug molecules present. (chemistry.hull.ac.uk/) EMIT technology is becoming increasingly popular as a method to monitor therapeutic drug levels. Drugs monitored using this technique includes anti asthmatic drugs, anti epileptic drugs and cardio active drugs. Radioimmunoassay work on the same principle as competitive EMIT with the exception of the use of a radio isotope as a marker. Gamma radiation is emitted from the marker leading to a high level of sensitivity and specificity. As it uses radio isotopes it is not the most cost effective in todays modern environment. MICROPARTICLE IMMUNOASSAY AND its USE IN TDM Microparticle agglutination technology uses latex microparticles and plays a leading role in TDM in the quantitative measurement of carbarbapenzaine, phenytoin, theophylline and phenybarbital. Kinetic movement of microparticles in solution (KIMS) is a homogenous assay and is based on the principle of competitive binding between microparticles via covalent coupling. When free drug exists in the patient sample, it will bind to the antibody present. As a result the microparticle antigen complex fail to bind with the antibody and the formation of a particle aggregate does not occur. Micro particles in solution fail to scatter light causing a low absorbance reading. If the patient sample is negative for the drug, the micro particle drug complex binds to the antibodies. The complex that is formed upon binding blocks the transmitted light and causes light scattering resulting in increasing absorbance readings. Hence the degree of light scattering is inversely related to the concentration of drug present. Light scattering spectroscopy improves the sensitivity and quantitation of particle based immunoassays, thus making KIMS a highly sensitive and accurate technique in TDM. Its popularity has developed throughout the years for many reasons. Reagents required for this assay are in expensive and have high stability. 
KIMS is a universal assay and can be preformed on a variety of analyzers. The assay has minimal interference as a change of absorbance is measured as a function of time while absorbance readings of interfering substances do not alter with time.( Frederick P. Smith, Sotiris A. Athanaselis) CHROMATOGRAPHY AND its USE IN TDM For many years liquid chromatography has been linked to detection systems and its application in TDM is becoming incredibility popular. Liquid chromatography was initially employed in response to difficulties arising in Gas Chromatography (G.C) due to heat instability and non specific adsorption on surfaces. High Performance Liquid chromatography is the main chromatography technique utilized for TDM. Thin Layer Chromatography (T.L.C) and Gas Chromatography are other alternatives, however have limitations that suppress their use in TDM. A derivatization step must be performed for highly polar and thermo liable drugs for G.C to be successful. TLC has a poor detection limit and is unable to detect low concentration of drug present. HPLC has revolutionized the monitoring of TDM with rapid speed and sensitivity of analysis and can separate a wider variety of drugs compared to GC and TLC. For this reason, HPLC is considered the most widely adaptable chromatographic technique when coupled w ith UV detection and Mass Spectrophotometry for TDM.( Phyllis R. Brown, Eli Grushka) BASIC PRINCIPLES IN HPLC HPLC is a separation technique performed in the mobile phase in which a sample is broken down into its basic constituents. HPLC is a separation technique that employs distribution differences of a compound over a stationary and mobile phase. The stationary phase is composed of a thin layer created on the surface of fine particles and the mobile phase flows over the fine particles while carrying the sample. Each component in the analyse moves through the column at a different speed depending on solubility in the phases and on the molecule size. As a result the sample components move at different paces over the stationary phase becoming separated from one another. Drugs that are localised in the mobile phase migrate faster as to those that are situated in the stationary phase. The drug molecules are eluted off the column by gradient elution. Gradient elution refers to the steady change of the eluent composition and strength over the run of the column. As the drug molecules elute of HPL C is linked to a detection system to detect the quantity of drug present in the sample. Detection systems include mass spectrophotometry and UV detection. (Mahmoud A. Alabdalla Journal of Clinical Forensic Medicine) DETECTION SYSTEMS USED IN HPLC FOR TDM Detection of HPLC with a diode array ultraviolet detector has proved to be a sustainable application system in the identification after HPLC analysis. The use of UV detection allows the online possession the compounds UV spectra. These detection system absorb light in the range of 180-350nm. UV light transmitted passes through a sensor and from that to the photoelectric cell. This output is modified to appear on the potentiometric recorder. By placing a monochromatoer between and light source and the cell, a specific wavelength is created for the detection , thus improving the detectors specificity. A wide band light source can also be used as an alternative method. In this case the light from the cell is optically dispersed and allowed to fall on the diode array.( Mahmoud A. 
Alabdalla Journal of Clinical Forensic Medicine) HPLC can also be coupled to a mass spectrophotometer as a detection method. Mass spectrophotometry (MS) elucidates the chemical structure of a drug. Sensitivity of this technique is observed as it can detect low drug concentration in a sample. Specificity of this method can be futher enhanced by Tandem mass spectrophotometric analysis. This involves multiple steps of mass spectrophotometry. This is accomplished by separating individual mass spectrometer elements in space or by separating MS phases in time. (Franck Saint-Marcoux et al) FUTURE TRENDS IN TDM METHODOLOGY AGILENTS 1200 HPLC MICRO CHIP Agilents 1200 HPLC micro chip technology combines microfliudics with an easy use interface that confines the HPLC procedure tot his dynamic chip. The micro chip technology integrates analytical columns, micro cuvette connections and a metal coated electro spray tip into the chip to function as a regular HPLC analyzer. The compact chip reduces peak dispersion for a complete sensitive and precise technique. The microchip comes complete with an integrated LC system into sample enrichment and separation column. The operation of the chip is well defined and manageable upon insertion into the Agilent interface which mounts onto the mass spectrophotometer. The built in auto sampler loads the samples and the sample is moves into the trapping column by the mobile phase. Gradient flow from the pump moves the sample from the trapped column to the separation column. The drug is separated the same as the convention methods however reduced peak dispersion does produce better separation efficiency than the conventional method. This form of technology is currently in use in the United States but has not developed outside of the U.S(http://www.agilent.com) PHYZIOTYPE SYSTEM This is the latest application on the market for the treatment and monitoring of drugs associated with metabolic disorders. The PhyzioType system utilizes DNA markers from several genes coupled with biostatisical knowledge to predict a patients risk of developing adverse drug reactions. (Kristen k. Reynolds Roland Valdes) AMPLICHIP CYP450 TEST The Amplichip CYP450 Test is a new technology that has revolutionised the TDM of anti psychotic drugs. This test has been approved by the FDA in 2006 but is not currently in use in laboratories in Ireland. This test is used for the analysis of CYP2D6 and CYP2C19 genes, both of which have an influence in drug metabolism. The function of this test is to identify a patient genotype so their phenotype is calculated. Based on the patient phenotype, a clinician determines the type of therapeutic strategy he/she will commence (Kristen k. Reynolds Roland Valdes) DISCUSSION This paper illustrates the increasing role of immunoassay and chromatography techniques in the clinical laboratory routine monitoring of therapeutic drugs. Before an analytical technique is introduced into TDM it must meet the requirements of sensitivity, accuracy and specificity needed for most TDM applications. The methodology of TDM in todays clinical setting revolves around the use of immunoassays and chromatography techniques. A range of immunoassays was discussed revolving around their principle and advantages and limitations. The majority of immunoassays utilised in the TDM are homogenous based for rapid analysis and efficient turn around time for drug monitoring. Most immunoassays involved in TDM are based on the same principle of competitive binding for antibody. 
The factor that distinguishes each immunoassay is the detection methods used. Detection methods discussed in this reviewed include nephelometric techniques, flourscein labels, enzyme labels and the use of micro part icles. Each method relies on different detection principles as discussed, however characteristics common to all methods include accuracy, sensitivity and specificity. The methodologies discussed also are time and cost efficient, both essential in laboratory assays. Chromatographic techniques are also discussed with HPLC providing the most impact to TDM. Gas and thin layer chromatography are other chromatographic techniques, however neither can be utilised in TDM due to the limitations both techniques hold against TDM. . HPLC is a rapid sensitive method for the quantitation of drugs in a sample and for this reason is the most widely adaptable chromatographic technique applied in TDM. Like all chromatographic techniques drugs are separated based on the interaction of the drug with the stationary phase which determines the elution time. Detection methods primarily used are UV detection and mass spectrophotometry The final thought on this overview of TDM was an insight into the future of its methodology and applications .Future and approved methods are discussed given a brief outline on each. The constant development of methodologies and techniques in this area of TDM are ongoing constantly keeping the area of TDM one of the most fastest and interesting in clinical medicine. Literature Review: The Impact Of Legalized Abortion Literature Review: The Impact Of Legalized Abortion The publication of the controversial paper on legalised abortion and its affect on the rate of crime by Levitt and Donohue (2001) has resulted in widespread condemnation from a variety of sources, for example, Joseph Scheidler, executive direction of the Pro-Life Action league, described the paper as so fraught with stupidity that I hardly know where to start refuting it Crime fell sharply in the United States in the 1990s, in all categories of crime and all parts of the nation. Homicide rates plunged 43 percent from the peak in 1991 to 2001, reaching the lowest levels in 35 years. The Federal Bureau of Investigations (FBI) violent and property crime indexes fell 34 and 29 percent, respectively, over that same period. (Levitt, 2004) In his journal The impact of Legalized abortion on crime Levitt attempts to offer evidence that the legalization of abortion in 1973 was the chief contributor to the recent crime reductions of the 1990s. Levitts hypothesis is that legalized abortion may lead to reduced crime either through reductions in cohort sizes or through lower per capita offending rates for affected cohorts. The smaller cohort that results from abortion legalization means that when that cohort reaches the late teens and twenties, there will be fewer young males in their peak crime years, and thus less crime. He argues that the decision in Roe v Wade constitutes an abrupt legal development that can possibly have an abrupt influence 15-20 years later when the cohorts born in the wake of liberalized abortion would start reaching their peak crime years. In essence, Levitt puts forward the theory that unwanted children are more likely to become troubled adolescents, prone to crime and drug use, than wanted children are. As abortion is legalized, a whole generation of unwanted births are averted leading to a drop in crime two decades later when this phantom generation would have grown up. 
To back up this point, Levit t makes use of a platform from previous work such as (Levine et al 1996) and (Comanor and Phillips 1999) who suggest that women who have abortions are those most likely to give birth to children who would engage in criminal activity. He also builds on earlier work from (Loeber and Stouthamer-Loeber 1986) who concludes that an adverse family environment is strongly linked to future criminality. Although keen not to be encroach into the moral and ethical implications of abortion, Levitt, through mainly empirical evidence is able to back up his hypothesis by concluding that a negative relationship between abortion and crime does in fact exist, showing that an increase of 100 abortions per 1000 live births reduces a cohorts crime by roughly 10 per cent and states in his conclusion that legalized abortion is a primary explanation for the large drops in crime seen in the 1990s. One of the criticisms that can be levied against this study is its failure to take into consideration the effect other factors may have had in influencing crime rates during the 1980s and 1990s, such as the crack wave. Accounting for this factor, the abortion effect may have been mitigated slightly. Also Levitts empirical work failed to take into account the greater number of abortions by African Americans who he distinguishes as the race which commit the most amount of violent crime, and his evidence fails to identify whether the drop in crime was due to there being a relative drop in the number of African Americans. The list of possible explanations for the sudden and sharp decrease in crime during the 1990s doesnt stop at Levitts abortion/crime theory and Levitt himself in his 2004 paper identifies three other factors that have played a critical role. The first is the rising prison population that was seen over the same time period, and (Kuziemko and Levitt, 2003) attribute this to a sharp rise in incarceration for drug related offences, increased parole revocation and longer sentences handed out for those convicted of crimes, although there is the possibility of a substitution effect taking place where punishment increases for one crime, potential criminals may choose to commit alternative crimes instead. There are two ways that increasing the number of person incarcerated could have an influence on crime rates. Physically removing offenders from the community will mean the avoidance of any future crime they may plausibly commit during the time they are in prison known as the incapacitation affect. Also there is the deterrence effect, through raising the expected punishment cost potential criminals will be less inclined to commit a crime. As criminals face bounded rationality, expected utility gained from crime will have an effect on the amount of time spent devoted to crime. (Becker, 1968). A study conducted by Spelman (2000) examined the affect the incarceration rate would have on the rate of crime and finds the relationship to have an elasticity measure of -0.4 which means that an increase in the levels of incarceration of one percent will lead to a drop in crime of 0.4%. In Economic models of crime such as Becker (1968), improvements in the legitimate labor market make crime less attractive as the return earned from legitimate work increases. 
Using this model, the sustained economic growth that was seen in the 1990s (Real GDP per capita grew by almost 30% between 1991 and 2001 and unemployment over the same period fell from 6.8 to 4.8 percent) could be seen as a contributing factor to the drop in crime witnessed and many scholars (such as) have come to that conclusion. However, the improved macroeconomic performance of the 90s is more likely to be relevant in terms of crimes that have financial gains such as burglary and auto theft and does not explain the sharp decrease seen in homicide rates. Also, the large increase in crime seen in the 1960s coincided with a decade of improving economic growth, further corroborating the weak link between macroeconomics and crime (Levitt, 2004). One other explanation for the drop in crime and the most commonly cited reason can be seen in the growing use of police innovation, and an adoption of community policing. The idea stemmed from the broken window theory, which argues that minor nuisances, if left unchecked, turn into major nuisances (Freakonomics) The main problem with the policing explanation is that innovative police practices had been implemented after the crime rate had already began declining, and perhaps more importantly, the rate of crime dropped in cities that had not experienced any major changes in policing (Ouimet, 2004).