Thursday, October 31, 2019

What were the major foreign policy issues of the 1950s? Justify your response - Essay

The Truman Doctrine was created by President Truman in 1947. Although this policy predates the 1950s, its substance helped create the Containment Policy. The Truman Doctrine "provided an ideological shield to permit U.S. aid to pro-capitalist, and presumably anti-communist, nations" (Bacevich, 2007: 8). This allowed the U.S. to become partially involved in Vietnam in 1950 and later to escalate to outright war. The Marshall Plan was created in 1947. This policy also predates the 1950s, but it helped post-WWII countries rebuild their economies in an effort to stop communism from spreading farther into Western Europe. Billions of American dollars were spent on economic support to help countries return their economies to prewar levels. It also served to unify the Western European countries and the United States as allies. The Americans offered Marshall Plan money to the Soviets as well, knowing they would not accept, and as expected the Soviets refused. This refusal created the division between Western and Eastern Europe (Hook and Spanier, 2006: 59). The Iron Curtain descended across Europe, but Western Europe was economically stable thanks to the Marshall Plan. The Truman Doctrine and the Marshall Plan shared the goal of stopping communism, and these measures were the foundation for the Containment Policy. The Containment Policy was a policy of stopping communism at all costs. If this meant setting up a puppet government under U.S. influence, so be it. The main goal was to fight communism, not to consider what was best for the local population. The U.S. could not imagine a world of peaceful coexistence with the Soviets; according to the U.S. government, communism was a threat to the foundation of democracy.
The Truman Doctrine and the Marshall Plan had stopped the flow of communism, but when China fell

Tuesday, October 29, 2019

Assignment - Brain Research Example

In determining if any links exist between brain function and learning ability, researchers have long hoped to be able to recommend certain curricular changes to help better reach students based upon their unique brain functions (Sousa & Tomlinson, 2011). The intent of this brief paper, therefore, is to identify recent discoveries in the field of neuroscience, apply those to the learning process and differentiation, and develop teaching strategies to accommodate this new information.

Recent Discoveries in Neuroscience

Recent years have brought some interesting new innovations in the field of neuroscience. In 2012, for example, researchers were able to begin isolating certain thoughts as they occurred in the brain. It is now possible to identify specific words and thoughts in the brain, isolate them, and observe up to two different conscious thoughts at the same time. This is a form of multitasking, and neuroscience now makes it possible to learn how this process occurs in the human brain, as well as in certain animals closely related to us, such as the chimpanzee. This thought process is similar to a computer's, which creates even more possibilities of using neuroscience for technological, as well as educational, advancements in the future. These new discoveries, as mentioned, are having great impact on technology. In 2012, by way of example, neuroscience helped a man who had been in a vegetative state for more than 12 years communicate once again. The individual was trapped inside his own body, in pain, but unable to communicate or even move. Through the brain mapping techniques discussed above, he was able to communicate with doctors for the first time in 12 years, providing great hope that neuroscience will bring brain mapping to the forefront of communication in the future.
The Learning Process and Differentiation

Let us consider individuals with dyslexia. In years past, it was often difficult to diagnose such individuals; they were simply considered low-performing readers. As such, these individuals were often frustrated because they could not determine for themselves why they could perform well in most other areas of life, yet reading was so difficult. By bringing neuroscience into the equation, however, the learning process was enhanced, as educators were able to determine what exactly was taking place in people whose brains simply reversed the letters in certain words. Once this was determined, it became possible to work within the disorder, developing strategies of differentiation, all the while enabling the individual to begin to read effectively and limiting their frustration (Goswami, 2006, p. 408). Neuroscience has truly revolutionized the learning process and the way that differentiation is utilized within the classroom. Differentiation has long been used in many classrooms as a way of providing all students in the class with the same material, but delivered in multiple ways. Educational theorists over the years have discovered that some students process information, construct ideas, and make sense of new concepts in different ways. Some of these methods have long been a mystery to many teachers, yet differentiation was utilized in an attempt to reach as many

Sunday, October 27, 2019

Preferential Trade Agreements (PTAs) Effect on Exchange Rate

Brent J. Sackett

Referee Report 3: Copelovitch, M. S., Pevehouse, J. C. (2013). Ties that Bind? Preferential Trade Agreements and Exchange Rate Policy Choice. International Studies Quarterly, 57(2): 385-399

Summary

This paper assesses the effect of preferential trade agreements (PTAs) on exchange rate policies. When a country joins a PTA, the government's ability to employ trade protection is constrained. This increases incentives to maintain fiscal and monetary autonomy in order to manipulate its domestic political economy. One way to do this is by implementing a flexible exchange rate policy. The authors argue that a PTA with a nation's "base" country (the country to which it has traditionally fixed its currency, or a country with which it has extensive trade ties) makes a country less likely to adopt a fixed exchange rate. In addition, the paper argues that countries that have signed a base PTA will also tend to maintain an undervalued exchange rate. Using an original data set of 99 countries from 1975 to 2004, the authors find empirical support for their argument.

Evaluation

My overall impression of this article is positive. In fact, I would say this article will be excellent after a few methodological problems are corrected. The paper clearly identifies a research question and provides an important insight that expands our understanding of exchange rate policy. However, I will present some comments and recommendations for improvement.

Comment 1 (Theory and Causal Mechanism)

In general, the theory and hypotheses are clearly presented and easy to understand. However, one part of the theoretical link between PTAs and exchange rate policy is missing and should be discussed more thoroughly. This may simply be a matter of terminology, or it may indicate a missing link in the causal chain.
The authors assert that "PTAs generally commit members to more extensive free trade" (2). This seems to indicate the causal mechanism behind the story: PTAs tie the hands of governments that want to employ trade protection, so they resort to exchange rate policy instead of tariffs or other means. However, PTAs are not all the same in the way they constrain behavior regarding trade protection (Baccini, Dür, Elsig and Milewicz, 2011; Kucik, 2012). While the authors note substantial cross-national variation in PTA participation, the discussion of variation in the PTAs themselves is inadequate. PTAs are not homogeneous; they vary substantially. Baccini et al. and Kucik both explain that variation in PTA design and implementation goes far beyond simple "free-trade" protections to include intellectual property, investment, enforcement, and even significantly differing tariff levels and exemptions. Is the paper's theory based on free-trade commitments generally or PTAs specifically? In footnote 9 on page 4, the authors state that GATT/WTO membership had no influence on exchange rate choice, even though in theory it should constrain trade policy choice in the same way a PTA does. This leads to some confusion about the causal mechanism that needs to be clarified. What exactly is the causal mechanism within PTA participation, and why does it fail in other commitments to free trade? In addition, I would like to know if the large variation in PTA design affects the causal mechanism. These questions need to be answered to clarify the argument. I have a second concern regarding the assumptions behind the theory. For the causal mechanism to work, the nation must feel pressure to comply with the trade restrictions in the PTA. Otherwise, there is no incentive to use exchange rate policy to circumvent the PTA. However, other research has shown that compliance with international agreements is not straightforward and the intention to comply cannot be assumed (Simmons, 1998).
Some nations may join PTAs with no intention of complying at all. Others may sign a PTA because they already intended to behave in accordance with the free trade commitments anyway. In either case, the causal mechanism of the paper is undermined. If Simmons and others are correct, a PTA may not provide the restraint the authors assume it does. Although a thorough discussion of compliance is not necessary, I would like to see it mentioned at least briefly. Both of these comments lead to some concerns about the data.

Comment 2 (Data)

I have two comments regarding the data. The first is a concern about potential measurement error that follows from my questions about the causal mechanism. The primary explanatory variable, BasePTA, uses the PTA dataset based on Mansfield et al. (2007). However, the data include significant heterogeneity in the likely causal mechanism (free trade commitments) that is not measured properly. Kucik notes that "At one end of the design spectrum, roughly 25% of all PTAs grant their members full discretion over the use of escape clauses, imposing very few if any regulations relating to the enforcement of the contract's flexibility system. At the other end, no less than 27% of PTAs place strict limits on (or entirely forbid) the use of flexibility" (2012, 97). If this is true, a highly flexible PTA may actually be similar to an observation without a PTA at all. A more refined measurement of the causal mechanism than simple PTA participation may be needed. My second concern regarding the data is related to selection effects. Countries do not join PTAs randomly. For example, democracies are more likely to participate in PTAs (Mansfield, Milner, and Rosendorff, 2002). In addition, there may be other unobserved reasons that individual countries decide to enter into PTAs, especially with their base country.
I would like to see a more detailed discussion regarding selection effects and perhaps a statistical method to test for them, such as a Heckman model.

Comment 3 (Methodology)

Two problems with endogeneity in the models need to be addressed. One of the primary dependent variables, Undervaluation, is calculated using GDP per capita (5) to control for the fact that non-tradable goods tend to be cheaper in poorer countries. This is problematic when GDP per capita is also used as an explanatory variable in models 3 and 4, as shown in Table 4. A model using the same variable on both sides of the equation potentially causes problems. This is especially problematic considering the limitations of the other variable capturing the concept of undervaluation, REER. According to the authors, REER fails to capture the concept at all: it "...does not actually indicate whether a currency is over- or undervalued..." (5). It only measures changes in the exchange rate relative to the baseline year. The variable Undervaluation was added to correct this shortcoming, but is hampered by endogeneity. The combination of these two factors may be why the findings about exchange rate levels are not definitive. Another form of endogeneity sneaks into the authors' model. Beaulieu, Cox, and Saiegh (2012) illustrate that GDP per capita and regime type are endogenous. High levels of GDP per capita may simply be an indication of long-term democratic government. When both variables are included in models predicting exchange rate policy, the resulting coefficients may be incorrect. The models reported in Tables 2-4 include both GDP per capita (log) as well as democracy (POLITY2) and result in inconsistent levels of statistical significance for both variables. This endogeneity should be addressed using a proxy or other methods. I also have a minor concern with omitted variable bias. Bernhard, et al.
(2002) emphasize that exchange rate policy and Central Bank Independence (CBI) cannot be studied in isolation. They have potentially overlapping effects, and measurements of both need to be included in a model explaining monetary policy. I recommend incorporating an additional variable that measures CBI. My final concern with methodology has to do with the operationalization of the concept of democratic institutions. The authors briefly note that domestic political institutions influence exchange rate policy. Specifically, the nature of the electoral process and interest group influence can result in variations in exchange rate policy (for example, Moore and Mukherjee 2006; Mukherjee, Bagozzi, and Joo 2014). In addition, Bearce (2014) shows that democracies manipulate exchange rate policy to appease domestic groups without regard to PTAs. To control for this, the authors use the Polity2 variable and two export composition variables. However, the composite measurement of democracy fails to account for the variation in political institutions (such as parliamentary systems) that has been found to influence exchange rate policy. In addition, the variables Mfg Exports and Ag Exports fail to account for an interest group's ability to influence policy. To fully control for democratic institutions, the authors need to identify the relevant democratic institutions and use a variable to capture those institutions. The Polity2 composite is inadequate.

Comment 4 (Discussion and Implications)

My first comment about the discussion is positive. I think the model extension to capture the interaction effects between BasePTA and Base Trade is excellent and insightful. In particular, Figure 1 is very well done and clearly illustrates this effect. However, the rest of the discussion of the findings is overshadowed by the data and methodological problems.
In particular, the comment about the "noisy" (12) nature of the findings regarding exchange rate levels seems like a cop-out. I would rather see the methodology strengthened instead of excuses (although, to be fair, exchange rate levels are indeed noisy).

Smaller issues

The general structure of the paper is solid and the writing is clear, but I have some comments regarding minor issues that could improve the impact of this paper.

Comment 1 (Primary Dependent Variable discussion)

I am concerned by the comment that the potential measurements of the dependent variable (Exchange Rate Regime) differ in methodology and yield "... quite different classifications across countries and over time" (5). This initially raised a red flag. Valid and reliable measurement of this variable is essential to properly test the hypothesis. I recommend rewording this and explaining from the start why this variation exists and why it does not threaten the model.

Comment 2 (Inflation Variable discussion)

The inflation variable (6) uses two sources to account for missing observations (World Bank and IMF). I am concerned that the measurement methodologies may not be exactly the same and could introduce bias when the observations are combined. A brief sentence or two covering the compatibility of the two sources would eliminate this concern.

Recommendation to the editor

Revisions required: This paper will make a strong contribution to the literature with some revisions. My biggest concern has to do with the causal mechanism and how the concept is captured in the primary explanatory variable. Explaining this in more detail and addressing the other issues will make this paper ready for publication.

References

Beaulieu, E., Cox, G. and Saiegh, S. (2012). Sovereign Debt and Regime Type: Reconsidering the Democratic Advantage. International Organization, 66(4): 709-738.

Baccini, Leonardo, Andreas Dür, Manfred Elsig and Karolina Milewicz (2011).
â€Å"The Design of Preferential Trade Agreements: A New Dataset in the Making†, WTO Staff Working Paper ERSD-2011-10 Bearce, David (2014). A Political Explanation for Exchange-Rate Regime Gaps. The Journal of Politics, 76(1): 58–72 Bernhard, William, J. Lawrence Broz, and William Roberts Clark (2002). The Political Economy of Monetary Institutions. International Organization, 5: 693-723 J Lawrence Broz and Seth Werfel (2014). Exchange Rates and Industry Demands for Trade Protection. International Organization, 68(02):393–416 Kucik, Jeffrey (2012). The Domestic Politics of Institutional Design: Producer Preferences over Trade Agreement Rules. Economics Politics 24(2):95–118 Mansfield, Edward, Helen Milner, and Jon Pevehouse. (2007). Vetoing Co-operation: The Impact of Veto Players on Preferential Trade Agreements. British Journal of Political Science 37: 403–432. Mansfield, Edward, Helen Milner, and Peter Rosendorff (2002). Why Democracies Cooperate More: Electoral Control and International Trade Agreements International Organization, 56(3): 477-513 Moore, Will and Bumba Mukherjee (2006). Coalition Government Formation and Foreign Exchange Markets: Theory and Evidence from Europe. International Studies Quarterly, 50(1):93–118 Mukherjee, Bumba, Benjamin Bagozzi, and Minhyung Joo (2014). Foreign Currency Liabilities, Party Systems and Exchange Rate Overvaluation. IPES Conference Paper 1–44 Simmons, Beth (1998) Compliance with International Agreements. Annual Review of Political Science 1:75-93

Friday, October 25, 2019

SWOT, PEST, Product Lifecycle, Boston Matrix and the Ansoff Matrix: Marketing Models Analysis

Marketing strategies/models

In this objective I will be analysing the different marketing models and evaluating their reliability. The marketing models I will evaluate are SWOT and PEST analysis, the product life cycle, the Boston Matrix and the Ansoff Matrix.

SWOT and PEST analysis

In the previous objective, I analysed the SWOT and PEST of Cadbury. These enabled me to gain insight into the external and internal influences that may arise, which may either be beneficial or cause problems for the launch of my product.

Product life cycle

The product life cycle shows the sales of a product over time. To be able to market a product, Cadbury must be aware of the product life cycle of its products. The cycle can be demonstrated as below:

Introduction: Following planning and development, the product is introduced onto the market. This stage includes characteristics such as: low initial sales, due to limited knowledge and no consumer loyalty; heavy promotion to build brand image and consumer confidence; losses (low profits at best) due to heavy development and promotion costs; and limited distribution levels, but high stockholding for the manufacturer.

Growth: At this stage, consumer knowledge and loyalty have grown, and the company increases sales and begins to make profits. There may be a growing number of competitors who may introduce similar products or adapt their price and promotion policies.

Maturity: The maturity phase is where profits and sales reach their peak. Profits are being maximised, but the firm has to fight to defend its market position. Sales are maintained by promotion, customer loyalty and product differentiation through alterations such as new packaging. At the end of this stage, the market becomes saturated.

Decline: This stage is where total sales fall for the company. To make up for this, the company may reduce prices, cutting into its profit margin.
This is the end of the product and its life cycle. The table below shows examples of where some of Cadbury's products lie in the product life cycle.

Stage - Example
Introduction - Snaps
Growth - Under 99 calorie range (Dairy Milk)
Maturity - Dairy Milk, Twirl, Flake
Decline - Fuse

The table shows that most of Cadbury's products ... ... to get new people to try the product and existing customers to buy more. The company should therefore use market expansion. In the decline stage, the company should try to re-launch the product, which would mean using product or market expansion. Market penetration could be used if a successful product was being re-launched to increase the company's market share, but this would not work if the product were a dog. The marketing models can be influenced by other factors and research. Cadbury's competitors may affect the company's use of the Ansoff Matrix. The model is used to analyse the strategic direction of a product; if a product was placed in market expansion, which is a medium-risk strategy, and competitors also released a similar product in this section, there would be a higher-risk strategy, which would affect the product's performance and position in both the Boston Matrix and the product life cycle. My questionnaire told me there was a gap in the market for my product, and my SWOT analysis reinforced this. This tells me that my product should do well as a question mark, in the introduction stage of the product life cycle and as product expansion.
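The Boston Matrix classification used above ("question mark", "dog", and so on) is just a two-way split on market growth and relative market share. A minimal sketch (Python; the cutoff values and example numbers are my own illustrative assumptions, not Cadbury data):

```python
def boston_matrix(market_growth, relative_share,
                  growth_cutoff=0.10, share_cutoff=1.0):
    """Classify a product into a Boston Matrix quadrant.
    Cutoffs are illustrative: 10% market growth, and a relative market
    share of 1.0 (i.e. share equal to the largest rival's)."""
    if market_growth >= growth_cutoff:
        return "Star" if relative_share >= share_cutoff else "Question mark"
    return "Cash cow" if relative_share >= share_cutoff else "Dog"

# Placements matching the essay's life-cycle table, with made-up numbers:
print(boston_matrix(0.15, 0.3))   # a new launch -> "Question mark"
print(boston_matrix(0.02, 2.5))   # Dairy Milk at maturity -> "Cash cow"
print(boston_matrix(0.01, 0.2))   # declining Fuse -> "Dog"
```

This makes explicit why a re-launched "dog" resists market penetration: both of its inputs sit below the cutoffs that define the stronger quadrants.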

Thursday, October 24, 2019

Energy Requirements in Post-Combustion Capture - Environmental Sciences Essay

Recently there has been increased interest in carbon capture technologies. A number of factors drive this increased awareness. There is growing acceptance that significant reductions in CO2 emissions are required to avoid seriously affecting the global climate, and these reductions are unlikely to be achieved through reductions in global energy demand alone. Capturing CO2 before it enters the atmosphere therefore becomes a viable option for reducing emissions. Post-combustion CO2 capture (PCC) is a promising technology with the potential to significantly reduce CO2 emissions from large point sources such as power plants. The main advantage that post-combustion capture has over other capture methods is that existing power plants can be retrofitted with the technology, allowing a more immediate reduction in carbon emissions than is possible with the alternatives. This is an important consideration, as the typical lifetime of a coal-fired power plant is 25 years, which means that only PCC can effectively address emissions from most of the world's currently operating power stations. However, PCC incurs higher energy penalties than pre-combustion capture technologies, and because there are not yet sufficient financial and legislative penalties for CO2 emissions, PCC has yet to be demonstrated at full scale; these energy costs can therefore only be quantified on a theoretical basis. Coal holds the largest share of worldwide electric power production by a wide margin, accounting for 40% of world energy supply in 2008, a figure expected to decrease only slightly to 37% by 2035 [1]. Because of coal's dominance of the energy production sector and the higher carbon emissions associated with burning coal, we will concentrate on the energy efficiencies associated with applying PCC to these plants.
Modern coal-fired power plants operate by burning pulverised coal. The coal is mixed with air and burned in a boiler. The steam generated is used to turn a turbine generator, and the waste combustion gases are released to the atmosphere. These gases consist chiefly of nitrogen, plus water and CO2. Additional products, depending on the purity of the coal used, can include sulphur dioxide and nitrogen oxides. A typical pulverised coal power plant emits about 743 g/kWh of CO2 [2]. As CO2 typically accounts for only 12.5-12.8% of the total flue gas volume, separating it from the other constituents is not a simple undertaking and requires energy input to achieve.

Minimum Energy Requirement

The thermodynamic minimum specific energy requirement for CO2 capture is shown in the figure below. If an average feed gas mole fraction of 12% is taken, then roughly 20% additional energy is required in order to achieve 100% CO2 separation.

Figure: Minimum specific energy requirement for separation as a function of mole fraction in the feed gas, for different fractional removals (T = 313 K) [3].

In addition to being separated from the rest of the flue gases, the CO2 also needs to be compressed from atmospheric pressure to pressures of typically 15 MPa, which are more conducive to post-combustion storage or transportation. The minimum energy requirement to achieve compression from 0.1 MPa at a temperature of 313 K to 15 MPa is 0.068 kWh/kg CO2. The next figure shows the minimum energy requirement for separation both with and without compression, assuming a gas mole fraction of 12%. If we take the Siemens system for PCC as a benchmark, it removes 90% of the CO2 from the flue gases [4]; this represents a theoretical minimum energy requirement of 0.114 kWh/kg CO2.
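As a rough cross-check on these figures, the minimum separation work can be estimated from the ideal-gas Gibbs energy of mixing. The sketch below (Python; the two-component flue-gas simplification and the function name are my own assumptions) reproduces the quoted numbers for a 12% CO2 feed:

```python
from math import log

R = 8.314          # J/(mol K)
T = 313.0          # K, the temperature used in the essay's figures
M_CO2 = 0.044      # kg/mol

def min_separation_work(y=0.12, f=0.9):
    """Ideal-gas minimum work (kWh per kg CO2 captured) to remove a
    fraction f of the CO2 from a flue gas with CO2 mole fraction y.
    W_min = G(products) - G(feed), with G_mix = n*R*T*sum(x*ln x)."""
    def g_mix(n, x):                     # mixing Gibbs energy, binary stream
        if x in (0.0, 1.0):
            return 0.0
        return n * R * T * (x * log(x) + (1 - x) * log(1 - x))
    g_feed = g_mix(1.0, y)               # 1 mol of feed gas
    n_rem = 1.0 - f * y                  # CO2-depleted outlet stream
    y_rem = (1 - f) * y / n_rem
    w_joule = g_mix(n_rem, y_rem) - g_feed    # pure CO2 stream has G_mix = 0
    return w_joule / (f * y * M_CO2) / 3.6e6  # J -> kWh, per kg CO2 captured

w_sep = min_separation_work()            # ~0.046 kWh/kg CO2 for 90% removal
w_total = w_sep + 0.068                  # add the quoted compression minimum
print(round(w_sep, 3), round(w_total, 3))   # -> 0.046 0.114
```

Adding the 0.068 kWh/kg compression minimum to the roughly 0.046 kWh/kg separation term recovers the 0.114 kWh/kg quoted for 90% removal.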
Figure: Minimum specific energy requirement for CO2 capture and compression (12% molar flue gas concentration) as a function of fractional CO2 removal: separation only, and separation with compression to 15 MPa [3].

CO2 Absorption Process

A number of different methods are being developed to separate CO2 from the other flue gases. Absorption processes currently appear to be the leading technology, so they will be the focus of this discussion. The figure below shows a typical schematic of a post-combustion CO2 absorption process. First, the flue gases are passed through a cooler, which is required to reduce ammonia release in the absorber and to decrease the volume of the flue gases. A fan is then required to pump the gas through the absorber, which contains the chemical absorbents. The absorbent material, which now contains the chemically bound CO2, is pumped to the desorber via a lean-rich heat exchanger. The desorber regenerates the chemical absorbent using an increase in temperature (370-410 K) and pressures between 1 and 2 bara. Heat is also supplied to the reboiler to maintain regeneration conditions for the chemical absorbent, which means the process incurs an additional energy penalty: the heat is required to produce the steam that acts as a stripping agent to separate the CO2 from the chemical absorbent. The steam is recovered and fed back into the stripper, while the highly pure CO2 gas (>99% purity) leaves the compressor. The absorbent chemical, now with the CO2 removed, is fed back into the absorber [3].

Figure: Schematic of a typical post-combustion capture process [5].

Clearly this process involves a serious energy penalty, as the additional process steps add much greater losses to the system than the theoretical minimum energy requirements calculated earlier. The table below shows the significant plant efficiency penalty that is the cost of the carbon capture process.
This efficiency drop is due to increasing resource consumption per unit of electricity produced and increases in cooling water consumption per unit of electricity produced.

Table: Values for net pulverised coal power plant efficiencies with and without CCS [6].

Power plant and capture system type | Net efficiency without CCS | Net efficiency with CCS | Penalty: additional energy input per net kWh | Penalty: decrease in net kWh for fixed input
Existing subcritical PC, post-combustion capture | 33% | 23% | 43% | 30%
New supercritical PC, post-combustion capture | 40% | 31% | 29% | 23%

This decrease in efficiency means that more fuel is required to produce the same amount of electricity as before the PCC process was added. From the table it can be seen that newer, more efficient plants suffer lower energy penalties when PCC is applied: the existing subcritical pulverised coal plant incurs a 43% increase in energy input per kWh of output, compared with 29% for a new supercritical pulverised coal plant. Thermal energy requirements are the most significant factor in the increased energy demand, and they are the main challenge facing attempts to reduce these losses.

Thermal Energy Requirements

Chemical absorption is commonly used in industry to remove gases and impurities from high-value products such as hydrogen or methane. The issue that arises in applying this technology to the power generation sector is that it results in much larger reductions in efficiency: while removing H2S from hydrogen, for example, may take only 2.5% [2] of the energy content of the hydrogen, the loss is much larger in power generation, as shown above.

Binding Energy Requirement

The heat required to break the bond between the CO2 and the absorbent is an important factor to take into consideration.
The binding energy can be reduced by the use of amines, as they can possess a lower binding energy for CO2.

Absorbent | Heat of absorption (GJ/tonne CO2)
MEA-H2O | 1.92
DGA-H2O | 1.91
DIPA-H2O | 1.67
DEA-H2O | 1.63
AMP-H2O | 1.52
MDEA-H2O | 1.34
TEA-H2O | 1.08
Water | 0.39
Table: Typical heats of absorption for common liquid absorbents [7].

The Table shows the heats of absorption of the most commonly used liquid absorbents. MEA-H2O possesses the highest binding energy to CO2. If this value could be reduced, the amount of energy required to separate the CO2 from the absorbent could be significantly decreased. Future developments in chemical absorbents could see the introduction of bicarbonate formation, which has been shown to have the lowest binding energy of any chemical absorbent [3], leading to a significant decrease in the energy penalties incurred by the system.

Heating of Absorbent in Desorber

The energy consumed in heating the absorbent in the stripper can be reduced by lowering the heat exchanger approach temperature and by decreasing the volume of solvent flowing through the desorber. This can be achieved through the use of second-generation sterically hindered amines, which have the potential to double the molar capacity of the absorbent. This could cut the energy demand from 1.2 GJ/tonne CO2 to 0.8 GJ/tonne CO2, i.e. to two thirds of the first-generation requirement. Further improvements in these areas could eventually lead to the 0.08 GJ/tonne CO2 predicted for fourth-generation amines and approach temperatures [3].

Reflux Ratio

Stripping steam in the desorber has to drive the CO2 through the desorption process and supply the heat demand of the desorber as a whole; it releases this heat when condensed, and the heat is then lost to the cooling water. Typically the reflux ratio achieved, expressed as tonnes H2O per tonne CO2, is 0.7.
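Heats of absorption quoted per tonne of CO2 can be compared with binding energies quoted per kmol of CO2 by converting through the molar mass of CO2 (44.01 kg/kmol). A small sketch using values from the text (the helper names are mine):

```python
M_CO2 = 44.01  # kg of CO2 per kmol

def gj_per_tonne_to_mj_per_kmol(h):
    # 1 GJ/tonne = 1 MJ/kg, so multiply by kg/kmol to get MJ/kmol
    return h * M_CO2

def mj_per_kmol_to_gj_per_tonne(h):
    return h / M_CO2

# MEA-H2O, the strongest-binding absorbent listed, at 1.92 GJ/tonne:
print(gj_per_tonne_to_mj_per_kmol(1.92))   # ~84.5 MJ/kmol CO2
# and back from the ~80 MJ/kmol quoted for first-generation absorbents:
print(mj_per_kmol_to_gj_per_tonne(80))     # ~1.82 GJ/tonne
```

The near-agreement between MEA's ~85 MJ/kmol and the ~80 MJ/kmol first-generation binding energy quoted below is consistent with MEA being the reference first-generation absorbent.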
The reflux ratio can be improved through the use of absorbents that possess a higher CO2-to-H2O ratio at the desorber exit, with a ratio of 0.1 seen as possible for fourth-generation absorbents.

Total Thermal Energy Requirement Reductions

The Table shows how these factors could decrease the thermal energy requirement as new generations of chemical absorbents are introduced. Reductions in total thermal energy requirement of up to 80% may be possible if these technologies can be implemented.

Process generation status | G1 | G2 | G3 | G4
Binding energy (MJ/kmol CO2) | 80 | 70 | 55 | 30
Desorber approach temperature (K) | 15 | 10 | 5 | 3
Solvent flow (m3/tonne CO2) | 20 | 10 | 8 | 4
Reflux ratio (tonnes H2O/tonne CO2) | 0.7 | 0.6 | 0.4 | 0.1
Total thermal energy requirement (GJ/tonne CO2) | 4.56 | 3.31 | 2.29 | 0.95
Table: Possible thermal energy requirement improvements [3].

Power Requirements

Power is required to drive a number of aspects of the PCC process:
- Fan power, determined by the required flow rate and the percentage of CO2 removal sought.
- Liquid absorbent pump power, affected by the degree of absorbent regeneration and other such factors.
- Compression power, which depends on the CO2 properties and the level of compression required.

The current-generation power demand is 0.154 MWh/tonne CO2, with the outlook for power savings outlined in the Table.

Process generation status | G1 | G2 | G3 | G4
Total power (MWh/tonne CO2) | 0.154 | 0.138 | 0.122 | 0.105
Table: Possible power requirement improvements [3].

Conclusion

While interest and investment in PCC research have increased in recent times, the process is still at a very early stage of development, and at the moment the energy costs involved in applying this technology to coal-fired power plants make it extremely inefficient and economically impracticable. The Table shows that in all cases PCC can lead to enormous decreases in the amount of CO2 emanating from coal-fired power plants.
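The total thermal energy requirements quoted above (4.56 GJ/tonne for G1 down to 0.95 GJ/tonne for G4) can be roughly rebuilt from their three contributions: binding energy, sensible heating of the solvent over the approach temperature, and latent heat lost with the reflux water. The sketch below assumes water-like solvent properties (density 1000 kg/m3, heat capacity 4.0 kJ/kg·K, latent heat 2.26 GJ/tonne); these property values are my assumptions, not from the source.

```python
M_CO2 = 44.01   # kg of CO2 per kmol
RHO = 1000.0    # kg/m3, assumed water-like solvent density
CP = 4.0        # kJ/(kg*K), assumed solvent heat capacity
H_VAP = 2.26    # GJ/tonne, latent heat of the steam condensed as reflux

def thermal_requirement(binding_mj_kmol, approach_K, solvent_m3, reflux_t):
    """Approximate total thermal energy demand (GJ per tonne CO2)."""
    binding = binding_mj_kmol / M_CO2                     # MJ/kg = GJ/tonne
    sensible = solvent_m3 * RHO * CP * approach_K / 1e6   # kJ -> GJ
    reflux = reflux_t * H_VAP                             # steam heat lost
    return binding + sensible + reflux

# Generations G1..G4, rows taken from the table above:
for gen, row in {"G1": (80, 15, 20, 0.7), "G2": (70, 10, 10, 0.6),
                 "G3": (55, 5, 8, 0.4), "G4": (30, 3, 4, 0.1)}.items():
    print(gen, round(thermal_requirement(*row), 2))
# G1 gives ~4.60 GJ/tonne vs 4.56 quoted; G4 gives ~0.96 vs 0.95
```

Under these assumed properties the reconstruction lands within about 2% of the quoted totals, which suggests the table was built from essentially this accounting.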
However, first-generation PCC technologies lead to a 40% decrease in plant efficiency, resulting in a 65% increase in coal consumption to produce the same amount of electricity.

PCC generation status | G1 | G2 | G3 | G4
Efficiency with no capture (%) | 35 | 41 | 46 | 50
CO2 emission, no capture (tonnes CO2/MWh) | 0.928 | 0.792 | 0.706 | 0.650
Efficiency with 90% capture (%) | 21.2 | 31.6 | 39.7 | 45.8
CO2 emission, with capture (tonnes CO2/MWh) | 0.153 | 0.103 | 0.082 | 0.071
Increase in coal use due to capture (%) | 65 | 30 | 16 | 9
Table: Overall outlook for PCC [3].

Because these technologies are at a very early stage of development, there is huge scope for efficiency improvements in both the thermal energy and the power required by the process. It is seen as an achievable goal that, as the technology develops, PCC could result in as little as a 4.2% decrease in overall plant efficiency and a 9% increase in coal consumption. These reductions are key to the future of PCC technology: if the process is not economically viable, it will never be adopted.
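The with-capture rows of the outlook table follow from the no-capture values: the coal use increase is the ratio of the two efficiencies, and the residual emission is the uncaptured 10% scaled up by the extra fuel burned. A sketch (the function name is mine):

```python
def with_capture(eta0, eta_ccs, emission0, capture=0.90):
    """Residual emission (tonnes CO2/MWh) and relative increase in coal
    use, from the efficiencies and emission rate without capture."""
    coal_increase = eta0 / eta_ccs - 1.0                  # extra fuel per MWh
    emission = emission0 * (eta0 / eta_ccs) * (1.0 - capture)
    return emission, coal_increase

# G1 from the table: efficiency 35% -> 21.2%, 0.928 t CO2/MWh uncaptured
em, extra = with_capture(0.35, 0.212, 0.928)
print(round(em, 3), round(extra * 100, 1))   # ~0.153 t/MWh, ~65% more coal
# G4: 50% -> 45.8%, 0.650 t CO2/MWh uncaptured
em, extra = with_capture(0.50, 0.458, 0.650)
print(round(em, 3), round(extra * 100, 1))   # ~0.071 t/MWh, ~9% more coal
```

Both computed rows match the quoted table values, confirming the simple fuel-scaling model behind them.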

Wednesday, October 23, 2019

“Night” by Elie Wiesel Essay

Elie Wiesel, a famed author and survivor of the Holocaust, stated quite simply that anyone who witnessed a crime and did nothing to stop it is just as guilty as the one committing it. Elie Wiesel learned a lot about man’s nature by surviving the Holocaust, but his statement that a bystander is just as guilty as the actual criminal is wrong. People are responsible for their own actions, and it is not fair to blame someone for a crime they did not commit, whether they could have done something to stop it or not. During the Holocaust over 6 million people were persecuted, but there were many more silent bystanders who were unable to do anything because they feared for their lives. It is human nature to look after your own wellbeing and that of those closest to you, and many people felt that if they tried to do something to stop the persecution of the Jews it would endanger them in one way or another. In some cases somebody can witness a horrible atrocity but have no power to stop it. Elie wrote in his book about how he and his fellow Jews were forced to watch the hanging of a young and innocent child by the S.S. The Jews who witnessed the hanging of the boy were all silent bystanders who, according to Elie, should be punished in the same manner as the executioner. This shows how wrong Elie’s judgment is. The Jews were unable to do anything to help the boy for fear of their own lives; people cannot be blamed for their most fundamental and primitive instinct, which is self-preservation. Elie Wiesel experienced a lot of pain and suffering during the Holocaust, but the silent bystanders cannot be punished the same way the actual criminal is, no matter what the circumstances are. If Elie truly believes that a silent bystander is just as guilty as a criminal, then that would mean he is guilty of hanging a young, innocent boy and deserves to be killed or sent to prison.
Although it’s easy to see where Elie’s statement is coming from and why he chose to make it, it is clear that he made his statement more out of emotion than actual logic. I disagree with his judgment because silent bystanders do not always have the power to stop or intervene with the crime without endangering themselves.