Thursday, November 28, 2019

Male Or Female Essays - Abortion, Gender Studies, Law,

Male Or Female? Overall, the rights and status of women have improved considerably in the last century; however, gender equality has recently been threatened within the last decade. Blatantly sexist laws and practices are slowly being eliminated while social perceptions of women's roles continue to stagnate and even degrade back to traditional ideals. It is these social perceptions that challenge the evolution of women as equal on all levels. In this study, I will argue that subtle and blatant sexism continues to exist throughout educational, economic, professional and legal arenas. Women who carefully follow their expected roles may never recognize sexism as an oppressive force in their life. I find many parallels between women's experiences in the nineties with Betty Friedan's, in her essay: The Way We Were - 1949. She dealt with a society that expected women to fulfill certain roles. Those roles completely disregarded the needs of educated and motivated business women and scientific women. Actually, the subtle message that society gave was that the educated woman was actually selfish and evil. I remember in particular the searing effect on me, who once intended to be a psychologist, of a story in McCall's in December 1949 called A Weekend with Daddy. A little girl who lives a lonely life with her mother, divorced, an intellectual know-it-all psychologist, goes to the country to spend a weekend with her father and his new wife, who is wholesome, happy, and a good cook and gardener. And there is love and laughter and growing flowers and hot clams and a gourmet cheese omelet and square dancing, and she doesn't want to go home. But, pitying her poor mother typing away all by herself in the lonesome apartment, she keeps her guilty secret that from now on she will be living for the moments when she can escape to that dream home in the country where they know what life is all about. (See Endnote #1) I have often consulted my grandparents about their experiences, and I find their historical perspective enlightening. My grandmother was pregnant with her third child in 1949. Her work experience included: interior design and modeling women's clothes for the Sears catalog. I asked her to read the Friedan essay and let me know if she felt as moved as I was, and to share with me her experiences of sexism. Her immediate reaction was to point out that Betty Friedan was a college educated woman and she had certain goals that never interested me. My grandmother, though growing up during a time when women had few social rights, said she didn't experience oppressive sexism in her life. However, when she describes her life accomplishments, I feel she has spent most of her life fulfilling the expected roles of women instead of pursuing goals that were mostly reserved for men. Unknowingly, her life was controlled by traditional, sexist values prevalent in her time and still prevalent in the nin eties. Twenty-four years after the above article from McCall's magazine was written, the Supreme Court decided whether women should have a right to an abortion in Roe v. Wade (410 U.S. 113 (1973)). I believe the decision was made in favor of women's rights mostly because the court made a progressive decision to consider the woman as a human who may be motivated by other things in life than just being a mother. Justice Blackmun delivered the following opinion: Maternity, or additional offspring, may force upon the woman a distressful life and future. Psychological harm may be imminent. 
Mental and physical health may be taxed by child care. There is also a distress, for all concerned, associated with the unwanted child, and there is the problem of bringing a child into a family already unable, psychologically and otherwise, to care for it. In other cases, as in this one, the additional difficulties and continuing stigma of unwed motherhood may be involved. (See Endnote #2) I feel the court decision of Roe v. Wade would not have been made in 1949. Even in 1973, it was a progressive decision. The problem of abortion has existed for the entire history of this country (and beyond), but had never been addressed because discussing these issues was not socially acceptable. A

Sunday, November 24, 2019

Difference Between Chemistry and Chemical Engineering

Difference Between Chemistry and Chemical Engineering Although there is overlap between chemistry and chemical engineering, the courses you take, degrees, and jobs are quite different. Heres a look at what chemists and chemical engineers study and what they do. Chemistry vs Chemical Engineering in a Nutshell The big difference between chemistry and chemical engineering has to do with originality and scale. Chemists are more likely to develop novel materials and processes, while chemical engineers are more likely to take these materials and processes and upscale them to make them larger or more efficient. Chemistry Chemists initially obtain bachelor degrees in science or arts, depending on the school. Many chemists pursue advanced degrees (masters or doctorate) in specialized areas. Chemists take courses in all major branches of chemistry, general physics, math through calculus and possibly differential equations, and may take courses in computer science or programming. Chemists typically take core courses in the humanities, too. Bachelor degree chemists usually work in labs. They may contribute to RD or perform sample analysis. Masters degree chemists do the same type of work, plus they may supervise research. Doctoral chemists do and direct research or they may teach chemistry at the college or graduate level. Most chemists pursue advanced degrees and may intern with a company before joining it. Its much more difficult to get a good chemistry position with a bachelors degree than with the specialized training and experience accumulated during graduate study. Chemical Engineering Most chemical engineers go with a bachelors degree in chemical engineering. Masters degree a popular, while doctorates are rare compared with chemistry. Chemical engineers take a test to become licensed engineers. After obtaining enough experience, they may continue to become professional engineers (P.E.). Chemical engineers take most of the chemistry courses studied by chemists, plus engineering courses and additional math. The added math courses include differential equations, linear algebra, and statistics. Common engineering courses are  fluid dynamics, mass transfer, reactor design, thermodynamics, and process design. Engineers may take fewer core courses, but commonly pick up ethics, economics, and business classes. Chemical engineers work on RD teams, process engineering at a plant, project engineering, or management. Similar jobs are performed at the entry and graduate level, although masters degree engineers often find themselves in management. Many start new companies. Job Outlook for Chemists and Chemical Engineers There are numerous job opportunities for both chemists and chemical engineers. In fact, many companies hire both types of professionals. Chemists are the kings of lab analysis. They examine samples, develop new materials and processes, develop computer models and simulations,  and often teach. Chemical engineers are the masters of industrial processes and plants. Although they may work in a lab, youll also find chemical engineers in the field, on computers, and in the boardroom. Both jobs offer opportunities for advancement, although chemical engineers have an edge because of their broader training and certifications. Chemists often pick up postdoctoral or other training to expand their opportunities.

Thursday, November 21, 2019

Rebuttal essay Example | Topics and Well Written Essays - 750 words

Rebuttal - Essay Example Affirmative action in essence means giving preferential treatment to diverse groups in society either for academic or employment purposes. These policies are targeted to combat discrimination that has plagued American societies for centuries. Without a doubt, America is referred as a â€Å"melting pot† as many individuals come from diverse backgrounds. While affirmative action is a bold attempt to atone the sins of past decades, affirmative action needs to be eliminated since it leads to preferential treatment, lowers standards for performance, and leads to reverse discrimination. The author argues that affirmative action insist that diversity in college will produce a more nurturing environment. Although that may be true to a certain degree, it does not diminish the fact that a diverse classroom is derived from different opinions, not necessarily from a race context. It is wrong to assume that diverse classroom will promote more ideas since race has little to do with innovation. Some students that are not admitted based upon a merit status struggle to excel in their posts. For instance, an individual who gets accepted to systems analyst position at IBM, will continue to struggle if he does not understand the basics of management information systems. If that particular individual is not capable of handling the complicated tasks expected from him, then clearly he should not have been employed. The worst part is the fact that it has divided the country deeply in the issue as the flaws of this legislation are so deep that it gives an advantage to minorities. The whole notion of reverse discrimination is a huge flaw in the author’s logic because the author insists it opens new doors for opportunities. However, the author fails to address that it leads to preferential treatment. Imagine a scenario in which a Caucasian and a minority both apply for a high-qualification job. For the sake of the

Wednesday, November 20, 2019

Basel Accord Essay Example | Topics and Well Written Essays - 1000 words

Basel Accord - Essay Example The paper tells that the Basel Committee on Bank Supervision (BCBS) was originally established in the 1970s to tackle the new challenges of banking across international boundaries. It became apparent that the failings and collapse of one country's banks was now being felt in other countries all over the world. It was obvious that intervention and prevention was necessary. In the 1980s, the United States Congress, pushed domestic regulatory agencies to set and enforce standards, including a fixed proportion of capital a bank must hold, or capital adequacy. This is how the Basel Accords began. The accords have been adapted and expanded in attempts to meet needs and to speak to aspects that previous version of the accords may not have addressed sufficiently. In order to understand the Basel Accords better it is useful to review them individually in order to better compare and contrast the variations. The BCBS determined that bank capital would be organized into 2 separate tiers. Tier 1f ocuses on the higher-quality capital, those that represents items of the lowest priority of repayment and easiest to absorb when lost. Most of Tier 1 involves â€Å"core† capital, or common equity, which arises from actual ownership in the bank, like common stock, undivided profits, and surplus monies. Tier 2, also called supplementary capital, include certain reserves, and term debt. The capital under Tier 2 can be divided into 2 more sublevels; the upper focuses on maintaining characteristics of being continuous, like preferred capital, and equity. The lower level, is the least costly for banks to issue because it pertains to debts with a time of maturity of at least 10 years.(Eubanks, 2010) Basel I was the first attempt made to establish a standard of regulating international banking and it came under a great deal of criticism. Opponents felt that the Basel I Accord approach to â€Å"risk-weighing assets.† They claimed that this system is too broad and lacks the fin ite specialization to address all of the unique risks that apply to the differing assets held by the bank. As a response the BCBS released a revision to the accord called the â€Å"International Convergence of Capital Measurement and Capital Standards: Revised Framework,† which is, also, known as Basel II.(Larson, 2011) Basel II Basel II differs from Basel I in a distinct way. It introduced a section of â€Å"Pillars,† which intended to rectify the capital adequacy issues with Basel I. Pillar 1, specifically, deals with the procedures of calculating the required capital within banking organizations. This accord will determine risk potential based upon the totality of their credit risk, market risks, and operational risks. Pillar 2, ideally, was placed to increase, both, accountability and transparency with the banking system. Pillar 3 works to require banking institutions to disclose risk exposures, allowing for better assessment of the needed safety to help create a

Monday, November 18, 2019

Evidence week three Essay Example | Topics and Well Written Essays - 500 words

Evidence week three - Essay Example ant facts that can be used to resolve the conflicts that regularly arise between nurses and pregnant mothers will be identified and used in the nursing field to create a harmonious coexistence. Presumable, nurses regularly perceive mothers diagnosed with diabetes with discernment (Eadara et al., 2010). Using the PICOT format my question will be: Question: How do expectant mothers diagnosed with diabetes relate to nurses and how do they perceive reporting their blood sugar levels to their healthcare givers within the first 6 months? (Eadara et al., 2010). The question is both qualitative and quantitative. It will use a mixed research design to enable a proper analysis of data and/or information and uncover the real facts underlying GDM and Preeclampsia (Eadara et al., 2010). Using a mixed design will also allow me to reference data appropriately and eventually derive knowledge that can contribute to the growth of evidence-based knowledge in nursing. This is a great case to refer to from the nursing examination strategies and methodologies utilized within the course contemplate by the people. This is among the greatest obscurities being in the nursing field and profession in the entire world since it is a nearly related viable action done by the attendant specialists as a careful investigation. According to Houser, the fall and damage cases have been on the ascent and essentially interfaced to the nature of nursing mind in the locale specified. Houser states that the examination is about the reason for fall and harm cases and its answer (Sudbury et al., 2007). The name of the examination database is called "Fall and Injury Prevention". This is pointed at securing what reasons individuals to fall and be presented to damages and what sort of individuals are obliged to give answer for these cases. It has been secured that numerous individuals get wounds from tumbling down on the floor or on articles and need fitting medical caretak er administer to their damages. A

Friday, November 15, 2019

The Biological Effects Of Ionizing Radiation Biology Essay

The Biological Effects Of Ionizing Radiation Biology Essay The biological effects of ionizing radiation are determined by both the radiation dose and the radiation quality ionization density. To understand the radiation protection concerns associated with different types of ionizing radiation, knowledge of both the extent of exposure and consequent macroscopic dose absorbed gray value, as well as the microscopic dose distribution of the radiation modality is required. The definitions of these variables are discussed below but in general to advance the knowledge of the biological effects of different radiation types one needs to know the dose absorbed, the radiation quality and effectiveness of a particular radiation type to induce biological damage. In this study the biological effect of high energy neutrons is compared to that of a reference radiation type 60Co ÃŽÂ ³-rays for a cohort of donors, mostly radiation workers. Comparisons are made at different dose levels in blood cells from each donor to ascertain the relative biological effectiveness of the test radiation modality against that of a recognized reference radiation (Hall, 2005). Such studies are essential to determine the radiation quality for high energy neutron sources applicable to practises in radiation protection. In some nuclear medicine applications radionuclides are used to treat malignant disease. For this the use of short lived alpha particle emitters or other radiation modalities that deliver high ionization densities in cells, are particularly attractive. This as the cellular response in relation to inherent radiosensitivity of the effected cells is thought to be more consistent compared to the use of radionuclides that emit radiation with a lower ionization density e.g. ÃŽÂ ²-particles. The relative biological effectiveness of the high energy neutrons used in this study is followed as a function of the inherent radiosensitivity of different individuals. This allows the identification of cell populations that are relatively sensitive or relatively resistant to radiation. As such research material is available to investigate cellular response too Auger electrons. The latter is known to induce biological damage akin to that of alpha particles. A short description of the physical and biological variables applicable to this study is summarised below. Ionizing Radiation The term ionizing radiation refers to both charged particles (e.g., electrons or protons) and uncharged particles (e.g., photons or neutrons) that can impart enough energy to atoms and molecules to cause ionizations in that medium, or to initiate nuclear or elementary-particle transformations that in turn result in ionization or the production of ionizing radiation. Ionization produced by particles is the process by which one or more electrons are liberated in collisions of the particles with atoms or molecules (The International Commission on Radiation Units and Measurements [ICRU] Report 85, 2011). Interaction of Ionizing Radiation with Matter Ionizing radiation is not restricted to ionization events alone. Several physical and chemical effects in matter such as: heat generation, atomic displacements, excitation of atoms and molecules, destruction of chemical bonds and nuclear reactions may occur. The effects of ionizing radiation on matter depend on the type and energy of radiation, the target, and the irradiation conditions. 
Radiation can be categorized in terms of how it induces ionizations: Directly ionizing radiation, consist of charged particles such as electrons, protons and alpha particles. Indirectly ionizing radiation consists of neutral particles and/or electromagnetic radiation such as neutrons and photons (ÃŽÂ ³-rays and X-rays). Ionising radiation interacts with matter by: Interaction with the electron cloud of the atom, or by Interaction with the nucleus of the atom. Types of ionizing radiation linked to this study ÃŽÂ ³-rays Ionizing photons (ÃŽÂ ³- and X-rays) are indirectly ionising radiation. These wave like particles have zero rest mass and carry no electrical charge. Low energy (E>2m0c2) may be absorbed by atomic nuclei and initiate nuclear reactions (Cember, 1969). The charged electrons emitted from the atoms, produce the excitation and ionisation events in the absorbing medium. Neutrons Neutrons, similar to ionizing photons are indirectly ionizing radiations; however, these particles do have a rest mass. There is negligible interaction between neutrons and the electron cloud of atoms since neutrons do not have a net electrical charge (Henry, 1969). The principle interactions occur through direct collisions with atomic nuclei during elastic scattering events. In this process, ionisation is produced by charged particles such as recoil nuclei and nuclear reaction products. The production of secondary ionising photons will result in the release of energetic electrons. In turn these charged particles can deposit energy at a considerable distance from the interaction sites (Pizzarello, 1982). Auger electrons Auger electron emission is an atomic-, not a nuclear process. In this process an electron is ejected from an orbital shell of the atom. A preceding event, e.g. electron capture (EC) or internal conversion (IC) leaves the atom with a vacant state in its electron configuration. An electron from a higher energy shell will drop into the vacant state and the energy difference will be emitted as a characteristic x-ray (Cember, 1969). The energy of the x-ray (Ex-ray) being the difference in energy (E) between the two electron shells L and K. Ex-ray = EL -EK Alternatively, the energy may be transferred to an electron of an outer shell, causing it to be ejected from the atom (Fig. 1). The emitted electron is known as an Auger electron and similarly to the x-ray has an energy: EAuger = EΆ -EB where: EΆ = the energy of inner-shell vacancy energy of outer-shell vacancy EB = binding energy of emitted (Auger) electron Auger emission is favoured for, low-Z materials where electron binding energies are small. Auger electrons have low kinetic energies; hence travel only a very short range in the absorbing medium (Cember, 1969). File:Auger Process.svg Fig. 1: Schematic representation of the Auger electron emission process, where an orbital electron is ejected following an ionization event. Dosimetric Quantities Several dosimetric quantities have been defined to quantify energy deposition in a medium when ionizing radiation passes through it. Radiation fields are well described by physical quantities such as particle fluence or air kerma free in air are used. However these quantities do not relate to the effects of exposure on biological systems (International Commission on Radiological Protection [ICRP] Publication 103, 2007). 
The absorbed dose, D, is the basic physical quantity used in radiobiology, radiology and radiation protection that quantifies energy deposition by any type of radiation in any absorbing material. The International System of Units (SI) of absorbed dose is joule per kilogram (J.kg-1) and is termed the gray (Gy). Absorbed dose, D, is defined as the quotient of mean energy, dÃŽÂ µ, imparted by ionising radiation in a volume element and the mass, dm, of the matter in that volume (Cember, 1969). The absorbed dose quantifies the energy imparted per unit mass absorbing medium, but does not relate this value to radiation damage induced in cells or tissue. The radiation weighted dose (HT) is used as a measure of the biological effect for a specific radiation quality on cells or tissue. It is calculated from equation where DT,R is the mean absorbed dose in a tissue T due to radiation of type R and wR is the corresponding dimensionless radiation weighting factor. The unit of radiation weighted dose is J.kg-1 and is termed the sievert (Sv). Radiation weighting factors are recommended by the International Commission on Radiological Protection (International Commission on Radiological Protection [ICRP] Publication 103, 2007) and are derived from studies on the effect of the micro-deposition of radiation energy in tissue and on its carcinogenic potential. Linear Energy Transfer (LET) Ionizing radiation deposits energy in the form of ionizations along the track of the ionizing particle. The spatial distribution of these ionization events is related to the radiation type. The term linear energy transfer (LET) relates to the rate at which secondary charged particles deposit energy in the absorbing medium per unit distance (keV/ µm). LET is a realistic measure of radiation quality (Duncan, 1977). The LET (L) of charged particles in a medium is defined as the quotient of dE/dl where dE is the average energy locally imparted to the medium by a charged particle of specified energy in traversing a distance dl (Pizzarello, 1982). For high energy photons (x- and ÃŽÂ ³-rays), fast electrons are ejected when energetic photons interact with the absorbing medium. The primary ionization events along the track of the ionizing particle are well separated. This type of sparsely ionizing radiation is termed low-LET radiation. The LET of a 60Cobalt teletherapy source (1.3325 and 1.1732 MeV) is in the range of 0.24 keV/ µm (Vral et al., 1994). Neutrons cause the emission of recoil protons, alpha particles and heavy nuclear fragments during scattering events. These emitted charged particles interact more readily with the absorbing medium and cause densely spaced ionizing events along its track. The p66(Be) neutron beam used in this study has an ionization density of 20 keV/ µm and hence regarded as high-LET radiation. Auger electrons travel very short distances in the absorbing medium due to their low kinetic energies. All the energy of these particles is liberated in small volumes over short track lengths. Ionization densities are therefore very high, up to 40 keV/ µm this is comparable to high-LET alpha particles (Godu et al., 1994). Relative Biological Effectiveness (RBE) The degree of damage caused by ionizing radiation depends firstly on the absorbed dose and secondly on the ionization density or quality of radiation. Variances in the biological effects of different radiation qualities can be described in terms of the relative biological effectiveness (RBE). 
RBE defines the magnitude of biological response for a certain radiation quality compared to a distinct reference radiation. It is expressed in terms of the ratio (Quoc, 2009): Megavoltage X-rays or 60Co ÃŽÂ ³-rays are commonly employed as the reference radiation since these are standard therapeutic sources of radiation. Thus for an identical dose neutrons the biological effect observed would be greater, compared to 60Co ÃŽÂ ³-rays. The fundamental difference between these radiation modalities is in the spatial orientation or micro deposition of energy. Furthermore, RBE varies as a function of the dose applied increase in RBE is noted for a decrease in dose. By evaluating dose response curves (Fig 2), it is evident that the shoulder of the neutron curve is much shallower (smaller ÃŽÂ ²-value) compared to the reference radiation curve. Therefore changes in RBE are prominent over low dose ranges (Hall, 2005). Fig 2: Dose response curves based on the linear quadratic model demonstrate differences in RBE as a function of dose. Through evaluation of the biological effect curves it is apparent that the RBE for a specific radiation quality may vary. This is characterized by the type of tissue or cells being investigated, dose and dose rates applied oxygenation status of the tissue, energy of radiation and the phase of the cell cycle and inherent radiosensitivity of cells. The RBE increases with a decrease in dose, to reach a maximum RBE denoted RBEM this is calculated from the ratio of the initial slope of the dose response curves for both radiation modalities. RBE LET relationship For a given absorbed dose, differences in the biological response for several cell lines, exposed to different radiation qualities have been demonstrated (Slabbert et al., 1996). Cells exposed to a specified dose low LET radiation do not exhibit the same biological endpoint than those exposed to same dose high LET radiation. This since with low LET radiation a substantial amount of damage may be repaired because the energy density imparted to each ionization site is relatively low. The predominant mode of interaction for this radiation type is indirect through chemical attack from radiolysis of water. As the LET increases, for a specific dose, fewer sites are damaged but the sites that are located along the track of the ionizing particle are severely damaged because more energy is imparted. Thus the probability of direct interaction between the particle track and the target molecule increases with an increase in LET. The RBE of radiation can be correlated with the estimates of LET values. However, as the LET increases, exceeding 10keV/ µm it is no longer possible to assign a single value for the RBE. Beyond this LET, the shape of the cell survival curve changes markedly in the shoulder region compared to low-LET. Since RBE is a measure of the biological effect produced, comparison of the low-LET and high-LET curves will reveal that RBE increases with decreasing dose (Hall, 2005). The average separation in ionizing events at LET of about 100 keV/ÃŽÂ ¼m is equal to the width of deoxyribonucleic acid (DNA) double strand molecule (Fig. 3). Further increase in LET results in decreased RBE since ionization events occur at smaller intervals than DNA molecule strand separation (Fig. 3) and this energy imparted does not contribute to DNA damage. Fig 3: Average spatial distribution of ionizing events for different LET values in relation to the DNA double helix structure (Hall 2005). 
Cellular Radiosensitivity Tissue radiosensitivity models In 1906 the radiobiologists Bergonie and Tribondeau established a rule for tissue radiosensitivity. They studied the relative radiosensitivities of cells and from this could predict which type of cells would be more radiosensitive (Hall, 2005). Bergonie and Tribondeau realized that cells were most sensitive to radiation when they are: Rapidly dividing (high mitotic activity). Cells with a long dividing future. Cells of an unspecialised type. The law of Bergonie and Tribondeau was later adapted by Ancel and Vitemberger; they concluded that radiation damage is dependent on two factors: the biological stress on the cell. the conditions to which the cell is exposed pre and post irradiation. Cell division causes biological stress thus cells with a short doubling time express radiation damage at an earlier stage than slowly dividing cells. Undifferentiated rapidly dividing cells therefore are most radiosensitive (Hall, 2005). A comprehensive system of classification was proposed by Rubin and Casarett, cell populations were grouped into 4 categories based on the reproduction kinetics: Vegetative intermitotic cells were defined as rapidly dividing undifferentiated cells. These cells usually have a short life cycle. For example: erythroblasts and intestinal crypt cells and are very radiosensitive. Differentiating intermitotic cells are characterized as actively dividing cells with some level of differentiation. Examples include: meylocytes and midlevel cells in maturing cell lines these cells are radiosensitive. Reverting postmitotic cells are regarded as to not divide regularly and generally long lived. Liver cells is an example of this cell type, these cell types exhibit a degree radioresistance. Fixed postmitotic cells do not divide. Cells beloning to this classification are regarded to be highly differentiated and highly specialized in both morphology and function. These cells are replaced by differentiating cells in the cell maturation lines and are regarded as the most radioresistant cell types. Nerve and muscle cells are prime examples (Hall, 2005). Michalowski proposed a type of classification which divides tissues into hierarchical (H-type) and flexible (F-type) populations. Within this classification cells are grouped in 3 distinct categories: Stem cells, that continuously divide and reproduce to give rise to both new stem cells and cells that eventually give rise to mature functional cells. Maturing cells arising from stem cells and through progressive division eventually differentiate into an end-stage mature functional cell. Mature adult functional cells that do not divide Examples of H-type populations include the bone marrow, intestinal epithelium and epidermis; these cells are capable of unlimited proliferation. In F-type populations the adult cells can under certain circumstance be induced to undergo division and reproduce another adult cell. Examples include; liver parenchymal cells and thyroid cells. The two types represent the extremes in cell populations. It should be noted that most tissue populations exist between the extremes, these exhibit characteristics of both types where mature cells are able to divide a limited number of times. The sensitivity to radiation can be attributed to the length of the life cycle and the reproductive potential of the critical cell line within that tissue (Hall, 2005). 
Cell cycle dependent radiosensitivity As cells progress through the cell cycle various physical and biochemical changes occur (Fig. 4). These changes influence the response of cells to ionizing radiation. Variations in radiosensitivity for several cell types at different stages of the cell cycle has been documented (Hall, 2005). Following the law of Bergonie and Tribondeau that cells with high mitotic activity are most radiosensitive, it was found that cells in the mitotic phase (M-phase) of the cell cycle are most sensitive. Late stage gap 2 (G2) phase cells are also very sensitive with gap 1 (G1) phase being more radioresistant and synthesis (S phase) cells the most resistant (Domon, 1980). Fig. 4: Cell cycle of proliferating cells representing the different phases leading up to cell division. The G0 resting phase for cells that do not actively proliferate has been included since T-lymphocytes naturally occur in this phase (Hall, 2005). Nonproliferating cells, generally cells that are fully differentiated, may enter the rest phase G0 from G1 and remain inactive for long periods of time. Peripheral T-lymphocytes seldom replicate naturally and remain in G0 indefinately. Lymphocyte Radiosensitivity The hematopoietic system is very sensitive to radiation. Differential blood analyses are routinely employed as a measure of radiation exposure. This measurement is based on the sensitivity of stem cells and the changes observed in the constituents of peripheral blood due to variations in transit time from stem cell to functioning cell (Hall, 2005). It has been shown that lymphocytes, although they are resting cells (G0 phase) which do not actively proliferate nor do have a long dividing future hence do not meet the criteria of a radiosensitive cell type as described above are of the most radio sensitive cells. The reasons for their acute sensitivity cannot be explained (Hall, 2005). Furthermore two distinct subpopulations T-lymphocytes with respect to radiosensitivity were found in peripheral blood. The small T-lymphocyte which is extremely radiosensitive and disappears almost completely from the peripheral blood at doses of 500 mGy (Kataoka, 1974, Knox, 1982 and Hall, 2005). Cytogenetic expression of ionizing radiation induced damage The primary target in radiotherapy is the double helix deoxyribonucleic acid (DNA) molecule (Rothkam et al. 2009). This macro molecule contains the genetic code critical to the development and functioning of most living organisms. The DNA molecule consists of two strands held together by hydrogen bonds between the bases. Each strand is made up of four types of nucleotides. A nucleotide consists of a five-carbon sugar (deoxyribose), a phosphate group and a nitrogen containing base. The nitrogen containing bases are adenine, guanine, thymine or cytosine. Base pairing between two nucleotide strands is universally constant with adenine pairing with thymine and guanine with cytosine (Fig. 5). This attribute permits effective single strand break repair since the opposite strand is used as a template during the repair process. The base sequence within a nucleotide strand differs; the arrangement of bases defines the genetic code. The double helix DNA molecule is wound up on histones and bou nd together by proteins to form nucleosomes. This structure is folded and coiled repeatedly to become a chromosome. Fig. 5: The double helix structure of a DNA molecule consists of two neucleotide strands held together by hydrogen bonds between the bases. 
Figure modified from http://evolution.berkeley.edu/evolibrary/article/history_22 by P Beukes. Ionizing radiation can either interact directly or indirectly with the DNA strand. When an ionization event occurs in close proximity to the DNA molecule direct ionization can denature the strand. Ionization events that occur within the medium surrounding the DNA produce free radicals such as hydrogen peroxide through radiolysis of water. Damage induced by ionizing radiation to the DNA include base damage (BD), single strand breaks (SSB), abasic sites (AS), DNA-protein cross-links (DPC), and double strand breaks (DSB) (Fig. 6). Fig. 6: Examples of several radiation induced DNA lesions. Figure modified from Best B (9) by P Beukes. Low-LET radiation primarily causes numerous single strand breaks, through direct and indirect interaction (Hall, 2005). Single strand breaks are of lesser biological importance since these are readily repaired by using the opposite strand as a template. High-LET radiation damage is dominated by direct interactions with the DNA molecule. Densely ionizing radiation has a greater probability to induce irreparable or lethal double strand breaks since energy deposition occurs in discrete tracks (Hall, 2005). The number of tracks will be fewer but more densely packed compared to low-LET radiation of equivalent doses. Several techniques to quantify chromosomal damage and chromatid breaks have been established. These range from isolating DNA and passing it through a porous substrate or gel (Hall, 2005) by applying an external potential difference too advanced techniques of visually observing and numerating chromosomal aberrations of interphase cells. Cytogenetic chromosome aberration assays of peripheral blood T-lymphocytes to assess radiation damage include but are not limited to: premature chromosome condensation (PCC) assay, metaphase spread dicentric and ring chromosome aberration assay (DCA), metaphase spread fluorescence in situ hybridisation (FISH) translocation assay and cytokinesis blocked micronuclei (CBMN) assay (Fig. 7). Fig. 7: Different cytogenetic assays on peripheral T-lymphocytes for use in biological dosimetry. Figure modified from Cytogenetic Dosimetry IAEA, 2011. PCC occurs when an interphase cell is fused with a mitotic cell. The fusion causes the interphase cell to produce condensed chromosomes prematurely. Chromosomal aberrations can thus be analysed immediately following irradiation without the need for mitogen stimulation or cell culturing. Numeration of dicentrics in metaphase spreads has been used with great success to assess radiation damage in cells since the 1960s (Vral et al, 2010). The incidence of these aberrations follows a linear quadratic function with respect to the dose. Unstable aberrations like dicentrics or centric rings are lethal to the cell hence not passed on to daughter cells (Hall, 2005). In contrast translocations are stable aberrations; these are not lethal to the cell and passed on to daughter cells. Examination of translocations thus provides a long term history of exposure. Although the abovementioned techniques are very accurate and well described, the complexity and time consuming nature of the assays has stimulated the development of automated methods of measuring chromosomal damage. Micronuclei (MN) formation in peripheral blood T-lymphocytes lends itself to automation, since the outcome of radiation insult is visually not too complex with limited variables. 
DNA damage incurred from ionizing radiation or chemical clastogens induce the formation of acentric chromosome fragments and to a small extent malsegregation of whole chromosomes. Acentric chromosome fragments and whole chromosomes that are unable to engage with the mitotic spindle lag behind at anaphase (Cytogenetic Dosimetry IAEA, 2011). Micronuclei originate from these acentric chromosome fragments or whole chromosomes which are excluded from the main nuclei during the metaphase/anaphase transition of mitosis. The lagging chromosome fragment or whole chromosome forms a small separate nucleus visible in the cytoplasm of the cell. Image recognition software can thus be employed to quantify radiation damage by applying classifiers that describe cell size, staining intensity, cell separation, aspect ratio and cell characteristics when numerating MN frequency in BN cells. The classifiers are fully customizable depending on cell size, staining technique or cell type that will be used. Rationale for this study The principal objective of this study is to define RBE variations for high-LET radiation with respect to radiosensitivity. Specifically this is done for very high energy neutrons and Auger electrons. In general the response of different cell types vary much more to treatment with low-LET radiation compared to high-LET radiation (Broerse et al. 1978). Radiosensitivity differences have been demonstrated for different cancer cell lines (Slabbert et al. 1996) as well as various clonogenic mammalian cells (Hall, 2005) exposed to both high and low-LET radiation. In general there is an expectation and in certain cases some experimental evidence to support less variations in radiosensitivities of cells to high-LET radiation. Furthermore the ranking in the relative radiosensitivity of cell types changed for neutron treatments compared to exposure to X-rays (Broerse et al. 1978). To quantify the radiation risk of individuals exposed to cosmic rays or mixed radiation fields of neutrons and ÃŽÂ ³-rays, several experiments were conducted to ascertain biological damage induced by neutron beams of various energies (Nolte et al., 2007). Clonogenic survival data (Hall, 2005), dicentric chromosome aberrations (Heimers 1994) and micronuclei formation (Slabbert et. al 2010) have been followed. Chromosome aberration frequencies have been quantified and this represent radiation risk to neutron energies ranging from 36 keV up to 14.6 MeV (Schmid et al. 2003). To complement these studies additional measurements have been made for blood cells exposed to 60 MeV and 192 MeV quasi monoenergetic neutron beams (Nolte et al. 2007). Comparisons of RBE values obtained in these studies are shown in figure 1. Significant changes in the maximum relative biological effectiveness (RBEM) of these neutron sources are demonstrated as a function of neutron energy, with a maximum value of 90 at 0.4 MeV. RBEM drop to  ±15 for neutron energies higher than 10 MeV and it appears that the RBEM remain constant up to 200 MeV. The RBEM value of 47 -113 reported by Heimers et al. (1999) is not consistent with these observations. Fig. 1: RBEM values for neutrons of different energies after Nolte et al. (2007) The data shown in Fig. 1 was obtained by using the blood of a single donor. This to ensure consistency in the biological response for different neutron energies used in different radiation facilities in different parts of the world. 
Keeping the donor constant has the advantage that only a single data set for the reference radiation was needed. These measurements were done over several years. In all these studies, dicentric chromosome aberrations were followed. As informative as these investigations may be, it is doubtful if RBE values obtained from blood samples from a single donor are indeed representative for the wider population to state radiation weighting factors. It is unclear if RBE values for high energy neutrons will vary when measured with cells with different inherent radiosensitivities. Warenius et al. (1994) demonstrated that the RBE of a 62.5 MeV neutron beam increases with increase in radioresistance to 6 MV X-rays. Similarly Slabbert et al (1996) using a 29 MeV p(66)/Be neutron with an average energy of 29 MeV, noted a statistically significant increase in the RBE values for cell types with increased radioresistance to 60Co ÃŽÂ ³-rays. Although these investigators used 11 different cell types, few of these were indeed radioresistant to 60Co ÃŽÂ ³-rays. Close inspection of the data shows that the relationship between neutron RBE and radioresistance to photons disappear when the cell type with the highest resistance to ÃŽÂ ³-rays (Gurney melanoma) is removed from the data set Slabbert et al. 1996). In a follow up study the authors failed to demonstrate the relationship for a p(66)/Be neutron beam but such a relationship was demonstrated for a d14/Be neutron beam (Slabbert et al. 2000). It therefore appears that the relationship for RBE and radioresistance is dependent on the selection of cells used in the study as well as the neutron energy. Using lymphocytes Vral et al. (1994) demonstrated a clear reduction RBEM values for 5,5 MeV neutrons with an increase in the ÃŽÂ ±-values of dose effect curves obtained for 60Co ÃŽÂ ³-rays. This for lymphocytes obtained from six healthy donors. Using only four donors Slabbert et al. (2010) also demonstrated a relationship between RBEM neutrons and radiosensitivity to 60Co y rays. In the latter case the RBEM values are lower as can be expected since these investigators used a higher energy neutron source. Although a significant relationship between these parameters has been demonstrated by the investigators, the cohort of 4 donors in the study is very small. In fact 2 out of the 4 donors have different RBEM values but appear to have the same radiosensitivity. A study using larger number of donors with blood cells exposed to high energy neutrons is clearly needed. This in particular too verify the findings above indicating a different wR for donors of different sensitivity. The studies of RBE variations with neutron energy by Schmid et al., (2003), Nolte et al. (2005) and were conducted dicentric formations observed in metaphase spreads. It is known that more than six months were used to analyse the data for different doses for blood cells obtained from a single donor exposed to a single neutron energy. It follows that some method of automation to assist the radiobiological evaluation of cellular radiation damage is needed to quantify wR values as a function of radiosensitivity. Recently a semi-automated image analysis system, Metafer 4, this holds promise to test numerous donors for micronuclei formations Study to include more participants hence Metaferà ¢Ã¢â€š ¬Ã‚ ¦.

Wednesday, November 13, 2019

Myth- Aliki, The Gods And Goddesses Of Olympics :: essays research papers

Myth- Aliki, The Gods and Goddesses of Olympics History 106-05 Nov. 27, 1996 Eng. 265-01 Oct. 1, 1996 Prof Janice Antczak Myth- Aliki , The Gods and Goddesses of Olympics , Harper Collins Publishers , 1994 . After reading The Gods and Goddesses of Olympus , my first reaction was that it was a wonderful and fascinating example of how Greek mythology explains the theories about life , death , and the wonders of nature . Although I enjoyed the book , I also wondered if it was a little too confusing to a young child , since many long Greek names were used and many characters interacting together became too complicated and involved. The story began with the creation of the earth , sky, all living things, and with the birth of the Gods and Goddesses that reigned on Mount Olympus . The author also took each of the twelve gods and goddesses and individually summarized their personality and duties and their purpose and connection to the world . The author who also illustrated the book , used brilliant and vibrant colors and also portrayed the personality visually by scenes and images that clearly showed the emotional side of the gods . This myth contained some violence , sinister and inappropriate behavior among the gods and cruel and even frightening illustrations that I thought might be too overwhelming for a young impressionable mind . An example of this would be when " Cronus married his sister Rhea , and they had many children . But Cronus was afraid that one of them might overthrow him just as he had overthrown his father . So as each child was born , he swallowed it ." Although Cronus eventually "throws up" the unharmed children in the end, I feel the initial reaction might be more lasting , as well as the fact that Cronus married his sister , which is an unacceptable taboo in society . There were other strong images conveyed , both verbally and visually dealing with death , jealousy , deceit , and deformities of man and beast . Although I enjoyed this book , I felt it should be read to an older audience that will not be negatively impressed by some parts of the story . Tall Tale : Kellog Steven , Sally Ann Thunder , Ann Whirlwind Crockett , Morrow Junior Books , 1995 In this tale , author Steven Kellog depicts the incredible story of a girl named Sally Ann Thunder Ann Whirlwind who has an amazing amount of strength, vitality and agility and who sets off for the frontier at age eight .

Sunday, November 10, 2019

Video games do not cause violence

Video games do not cause violence BY jur525 Video Games: Beneficial or Cause Violence Do modern video games contribute to the increasing level of Violence that is all around us? Can we really attribute the shootings and bombings we see on the news to the increased violence and realism in video games ? These are the questions reporters should be asking. Instead the first question out of their mouths If the suspect Is an adolescent will most certainly be ‘ Was he addicted to playing violent video games ? ‘ If the answer is yes, they look no farther.They should Investigate the suspect's background to see If he was violent before he started playing games. Then look at the studies done on violent video games and they will find that 98. 7% of all teens regardless of gender have played violent games(Klrsh, Steven J. ). What Is truly remarkable is that less than one percent of all teens go out and commit a violent act. Therefore this will prove that it is the violence in the susp ect not the violence in the game that has led to so many deaths. The findings of many studies prove it is society not video games who has let the gamer down.It is society who has given them a ‘bad wrap'. The media and most continued to drop as the sales of violent video games has climbed dramatically. Researchers have considered role playing games a double edge sword. They say they are excellent teaching tools but the violent ones teach violent conditioning. This is NOT so! According to Christopher Ferguson of Setson University and the independent researcher Cheryl Olson in her study published in Springer's Journal of Youth and Adolescences it is exactly the opposite.Their research found that playing games actually had a very calming effect on the outh's with attention deficit symptoms and helped to reduce their aggressive and bullying behavior. They also stated that video games could be helpful when used to distract and relax children and adolescence during painful medical pr ocedures. If you decide to do research on your own be careful. Some studies have carefully rigged the results to only give the answers they want to prove correct.They recorded only the data the showed they were right, that the games can and often does lead to aggressive and violent behavior. If their findings did not show the prime reason being aggressive onditioning and that being exposed to this violence in the games was the prominent reason for the increase it was left out. You also need to check the background of the adolescents they Did they carefully choose only those adolescents that already had a tendency told behavior even before they played the games.Both these facts could greatly change the outcome of the study. When you are listening to the media take into account they are giving attention grabbing headlines and if portraying video games in a bad light gets them the audience hey crave they will say or do anything to achieve their goal. The media loves to make outrageous claims that video games either â€Å"inspired† or â€Å"trained† the suspect to commit these violent acts.They use the rigged studies as their backup even calling them in to act as experts on the subject. For instance Guy Porter and Vladan Starcevic who claim that â€Å"while playing video games outwardly appears to be an innocuous activity, the limited data available suggest playing video games may be related to aggressive and/or antisocial behavior. 
So next time you hear about a video game â€Å"causing† an adolescent to â€Å"commit† violence remember look up your facts yourself.You may see that while some psychologists want to include video game addiction in the Diagnostic and Statistical Manual for mental disorders most doctors do not agree. Most can see that video games do have considerable potential to enhance the lives of all adolescents. I believe that violent video games directly cause aggression. I also believe gaming is a very social activity which when given the chance can greatly improve adolescent's lives. Gaming is becoming a common way for the adolescent to communicate with the outside world.The gamer could very well have friends all over the world as I do. Society and the Media need to deal with the violent aggression in the adolescents that have shown a predisposition for such violence. That until this is done the real cause of all the shootings won't be addressed and handled accordingly. Video Games Do Not Cause Violence Video games have come a long way since they were first introduced in 1967. In addition to the impressive improvement in graphics, the increase of the violent content has become quite the hot topic amongst parents and politicians alike. The most popular aspect is whether or not violent video games inhibit aggressive behavior. Early research that suggested there was a link between the two has been deemed problematic. However, in recent studies â€Å"research has not found that children who play violent video games are more violent than other kids, nor harmed in any other identifiable fashion.† (Ferguson, 2011) Violent video games do not lead to violence in society because they improve other skills, many other factors heavily contribute to making society violent, and they are a tool for social interaction. Of course the most obvious skill that video games improve would be visual skills. Video games allow gamers to be more attune to their surroundings and â€Å"greatly enhance th e ability to effectively distribute attention over space and time, as well as the number of items that can be attended.† (Achtman, 2008)A press release from the American Psychological Association declares â€Å"Playing video games, including violent shooter games, may boost children’s learning, health, and social skills. † Playing simulated war games such as â€Å"Call of Duty: improves spatial navigation, reasoning, memory, and perception. These types of games also inhibit enhanced abilities to problem solve. Without a doubt, the contents of today’s media are constantly on display for any man, woman, or child to see.Specifically, television, bringing the violent filled news and movies to any home with an open outlet. The homicide rate has doubled after television was introduced in the U. S. (Faria, 2013) Exposure to this form of media and the glorification of violent behavior on television has a great influence on society. Another factor to consider when reviewing the violence in society is the biological factors and environment one is exposed to. Heritage and temperament, and parental rejection-acceptance are considered common underlying causes of anger and aggression.(Blake & Harmin, 2007)Additionally, peer pressure can try to influence behaviors or try to influence thinking or values. Peer pressure can leave children with an enormous amount of stress which can lead to aggressive and violent behavior. At the same time, video games are a great tool for social interaction. 
Games such as â€Å"World of Warcraft† connect players to a virtual world where literally millions of other gamers are playing at the same time. These games allow you to talk to these other players via chat, either text or voice, and sometimes both.For that reason, gamers develop friendships online and as a result, keep away from drugs and other negative activities presented to them outside of the game, Also, the social aspects of the game promote teamwork and cooperation. (Frostling-Henningsson, 2009) Furthermore, the opportunity to strike up conversation presents itself in a less stressful manner so gamers are able to improve their social skills in this form as well. Besides that, with players from all around the world, English is a second tongue to many gamers.Chatting online with other gamers presents the opportunity to practice their language skills and pick up on â€Å"slang† words to help them become more fluent in their speech. To that end, video games do not cause violence in society because they improve cognitive skills, other media forms are more correlated with aggression, and they are a building block in social connectedness. Psychological studies purporting to show a connection between exposure to violent video games and aggression do not prove that any connection is evident.Any effects that may be displayed are indistinguishable from the effects of other types of media. (Brown, 2011) Video games have transformed the way generations can learn and have opened up a whole new world to the socially isolated. Moreover, video games have simply become society’s scapegoat when it comes to placing the blame for violence and aggressive behavior. When tragedy strikes, people want answers and will join forces in placing the blame. In the end, video games are merely a form of entertainment, as they were intended to be.

Friday, November 8, 2019

Stable Isotope Analysis in Archaeology

Stable Isotope Analysis in Archaeology

Stable isotope analysis is a scientific technique used by archaeologists and other scholars to collect information from an animal's bones to identify the photosynthesis process of the plants it consumed during its lifetime. That information is enormously useful in a wide range of applications, from determining the dietary habits of ancient hominid ancestors to tracing the agricultural origins of seized cocaine and illegally poached rhinoceros horn.

What are Stable Isotopes?

All of the earth and its atmosphere is made up of atoms of different elements, such as oxygen, carbon, and nitrogen. Each of these elements has several forms, based on atomic weight (the number of protons and neutrons in each atom). For example, 99 percent of all carbon in our atmosphere exists in the form called Carbon-12; the remaining one percent is made up of two slightly different forms of carbon, called Carbon-13 and Carbon-14. Carbon-12 (abbreviated 12C) has an atomic weight of 12, from its 6 protons and 6 neutrons (its 6 electrons don't add anything measurable to the atomic weight). Carbon-13 (13C) still has 6 protons and 6 electrons, but it has 7 neutrons. Carbon-14 (14C) has 6 protons and 8 neutrons, which is too heavy to hold together in a stable way, so it emits energy to get rid of the excess; that is why scientists call it radioactive. All three forms react in exactly the same way: combine carbon with oxygen and you always get carbon dioxide, no matter how many neutrons there are. The 12C and 13C forms are stable; that is to say, they don't change over time. Carbon-14, on the other hand, is not stable but decays at a known rate. Because of that, we can use the amount of 14C remaining relative to stable carbon to calculate radiocarbon dates, but that's another issue entirely.

Inheriting Constant Ratios

The ratio of Carbon-12 to Carbon-13 is essentially constant in the earth's atmosphere: roughly one hundred 12C atoms for every 13C atom. During photosynthesis, plants absorb carbon atoms from the earth's atmosphere, water, and soil, and store them in the cells of their leaves, fruits, nuts, and roots. But the ratio of the forms of carbon is altered as part of the photosynthesis process, and it is altered differently in different climatic regions. Plants that live in regions with lots of sun and little water have relatively fewer 12C atoms in their cells (compared to 13C) than plants that live in forests or wetlands. Scientists categorize plants by the version of photosynthesis they use into groups called C3, C4, and CAM.

Are You What You Have Eaten?

The ratio of 12C to 13C is hardwired into the plant's cells, and, here's the best part, as those cells pass up the food chain (i.e., the roots, leaves, and fruit are eaten by animals and humans), the ratio remains virtually unchanged as it is in turn stored in the bones, teeth, and hair of the animals and humans. In other words, if you can determine the ratio of 12C to 13C stored in an animal's bones, you can figure out whether the plants it ate used C4, C3, or CAM processes, and therefore what the plants' environment was like. Put another way, assuming you eat locally, where you live is hardwired into your bones by what you eat. That measurement is accomplished by mass spectrometer analysis.
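To make the ratio arithmetic concrete, here is a minimal sketch in Python of how researchers in this field commonly report such measurements. It uses delta (d13C) notation and the VPDB reference ratio, which are standard conventions not spelled out in this article; the measured value and the C3/C4 boundary figures are approximate textbook ranges invented here for illustration, not data from any study discussed above.

# A minimal sketch of delta-13C arithmetic (assumed values noted below).
# Delta notation expresses a sample's 13C/12C ratio relative to a standard:
#   d13C (per mil) = (R_sample / R_standard - 1) * 1000

# Vienna Pee Dee Belemnite (VPDB), the conventional 13C/12C reference ratio.
R_VPDB = 0.011237  # approximate accepted value

def delta13c(ratio_13c_12c: float) -> float:
    """Convert a measured 13C/12C ratio to per-mil delta notation."""
    return (ratio_13c_12c / R_VPDB - 1) * 1000

def classify_photosynthesis(d13c: float) -> str:
    """Rough plant classification from d13C; the boundaries are
    approximate literature ranges, not exact cutoffs."""
    if d13c > -17:
        return "C4 (e.g., maize, sorghum; typically about -10 to -14 per mil)"
    if d13c < -20:
        return "C3 (e.g., trees, wheat, rice; typically about -22 to -30 per mil)"
    return "ambiguous: mixed diet or CAM plants"

# Example with a hypothetical measured ratio from plant tissue.
measured = 0.010950
d = delta13c(measured)
print(f"d13C = {d:.1f} per mil -> {classify_photosynthesis(d)}")

Running this with the hypothetical ratio above yields a d13C near -25.5 per mil, squarely in the C3 range, which is the kind of inference the mass spectrometer results support.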
Carbon is not, by a long shot, the only element used by stable isotope researchers. Currently, researchers are measuring the ratios of stable isotopes of oxygen, nitrogen, strontium, hydrogen, sulfur, lead, and many other elements that are processed by plants and animals. That research has produced a remarkable diversity of human and animal dietary information.

Earliest Studies

The first archaeological application of stable isotope research came in the 1970s, from South African archaeologist Nikolaas van der Merwe, who was excavating at the African Iron Age site of Kgopolwe 3, one of several sites in the Transvaal Lowveld of South Africa collectively called Phalaborwa. Van der Merwe found a male human skeleton in an ash heap that did not look like the other burials from the village. The skeleton was morphologically different from the other inhabitants of Phalaborwa, and the man had been buried in a completely different manner from the typical villager. He looked like a Khoisan, yet Khoisan should not have been at Phalaborwa, whose inhabitants were ancestral Sotho tribesmen. Van der Merwe and his colleagues J. C. Vogel and Philip Rightmire decided to look at the chemical signature in his bones, and the initial results suggested that the man was a sorghum farmer from a Khoisan village who had somehow died at Kgopolwe 3.

Applying Stable Isotopes in Archaeology

The technique and results of the Phalaborwa study were discussed at a seminar at SUNY Binghamton, where van der Merwe was teaching. At the time, archaeologists there were investigating Late Woodland burials, and together they decided it would be interesting to see whether the addition of maize (American corn, a subtropical C4 domesticate) to the diet would be identifiable in people who formerly had access only to C3 plants; it was. That study, published in 1977, became the first archaeological study to apply stable isotope analysis. The researchers compared the stable carbon isotope ratios (13C/12C) in the collagen of human ribs from an Archaic (2500-2000 BCE) and an Early Woodland (400-100 BCE) archaeological site in New York (i.e., before corn arrived in the region) with the 13C/12C ratios in ribs from a Late Woodland (ca. 1000-1300 CE) site and a Historic Period site (after corn arrived) in the same area. They were able to show that the chemical signatures in the ribs indicated that maize was not present in the early periods but had become a staple food by the Late Woodland. Based on this demonstration and the available evidence for the distribution of stable carbon isotopes in nature, Vogel and van der Merwe suggested that the technique could be used to detect maize agriculture in the Woodlands and tropical forests of the Americas; to determine the importance of marine foods in the diets of coastal communities; to document changes in vegetation cover over time in savannas on the basis of browsing/grazing ratios of mixed-feeding herbivores; and possibly to determine origins in forensic investigations.
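The logic of that 1977 comparison lends itself to a small worked example. The sketch below is a hypothetical re-creation, not the study's actual data: the sample values are invented, and the baseline figures (human collagen near -21 per mil on a pure C3 diet, shifting toward roughly -7.5 per mil with heavy maize consumption) are commonly cited approximations rather than numbers from this article.

# Illustrative re-creation of the Vogel/van der Merwe reasoning.
# All numbers below are invented for demonstration, not the 1977 data.
from statistics import mean

# Hypothetical collagen d13C values (per mil) for ribs from each period.
samples = {
    "Archaic (pre-maize)":        [-21.2, -20.8, -21.5, -20.9],
    "Early Woodland (pre-maize)": [-21.0, -20.6, -21.3],
    "Late Woodland (post-maize)": [-14.2, -13.5, -15.1, -14.8],
}

# Collagen from a pure C3 diet sits near -21 per mil; a pure C4 (maize)
# diet pushes it toward roughly -7.5 per mil. Mean values clearly less
# negative than about -19 suggest meaningful maize consumption.
MAIZE_FLAG = -19.0

for period, values in samples.items():
    avg = mean(values)
    verdict = "maize signal" if avg > MAIZE_FLAG else "C3 diet"
    print(f"{period}: mean d13C = {avg:.1f} per mil -> {verdict}")

With these invented numbers, the Archaic and Early Woodland means sit near the C3 baseline while the Late Woodland mean shifts well above the flag, mirroring the published finding that maize had become a staple by the later period.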
New Applications of Stable Isotope Research

Since 1977, applications of stable isotope analysis have exploded in number and breadth. Researchers now use the stable isotope ratios of the light elements hydrogen, carbon, nitrogen, oxygen, and sulfur in human and animal bone (collagen and apatite), tooth enamel, and hair, as well as in pottery residues baked onto the surface or absorbed into the ceramic wall, to determine diets and water sources. Light stable isotope ratios (usually of carbon and nitrogen) have been used to investigate such dietary components as marine creatures (e.g., seals, fish, and shellfish), various domesticated plants such as maize and millet, cattle dairying (milk residues in pottery), and mother's milk (age of weaning, detected in the tooth row). Dietary studies have been done on hominins from the present day back to our ancient ancestors Homo habilis and the australopithecines.

Other isotopic research has focused on determining the geographic origins of things. Various stable isotope ratios in combination, sometimes including the isotopes of heavy elements such as strontium and lead, have been used to determine whether the residents of ancient cities were immigrants or born locally; to trace the origins of poached ivory and rhino horn in order to break up smuggling rings; and to determine the agricultural origins of cocaine, heroin, and the cotton fiber used to make counterfeit $100 bills.

Another example of isotopic fractionation with a useful application involves rain, which contains the stable hydrogen isotopes 1H and 2H (deuterium) and the oxygen isotopes 16O and 18O. Water evaporates in large quantities at the equator, and the water vapor disperses to the north and south. As the H2O falls back to earth, the heavy isotopes rain out first. By the time it falls as snow at the poles, the moisture is severely depleted in the heavy isotopes of hydrogen and oxygen. The global distribution of these isotopes in rain (and in tap water) can be mapped, and the origins of consumers can be determined by isotopic analysis of their hair.

Sources and Recent Studies

Grant, Jennifer. Of Hunting and Herding: Isotopic Evidence in Wild and Domesticated Camelids from the Southern Argentine Puna (2120-420 Years BP). Journal of Archaeological Science: Reports 11 (2017): 29-37. Print.
Iglesias, Carlos, et al. Stable Isotope Analysis Confirms Substantial Differences between Subtropical and Temperate Shallow Lake Food Webs. Hydrobiologia 784.1 (2017): 111-23. Print.
Katzenberg, M. Anne, and Andrea L. Waters-Rist. Stable Isotope Analysis: A Tool for Studying Past Diet, Demography, and Life History. Biological Anthropology of the Human Skeleton. Eds. M. Anne Katzenberg and Anne L. Grauer. 3rd ed. New York: John Wiley & Sons, Inc., 2019. 467-504. Print.
Price, T. Douglas, et al. Isotopic Provenancing of the Salme Ship Burials in Pre-Viking Age Estonia. Antiquity 90.352 (2016): 1022-37. Print.
Sealy, J. C., and N. J. van der Merwe. On Approaches to Dietary Reconstruction in the Western Cape: Are You What You Have Eaten? A Reply to Parkington. Journal of Archaeological Science 19.4 (1992): 459-66. Print.
Somerville, Andrew D., et al. Diet and Gender in the Tiwanaku Colonies: Stable Isotope Analysis of Human Bone Collagen and Apatite from Moquegua, Peru. American Journal of Physical Anthropology 158.3 (2015): 408-22. Print.
Sugiyama, Nawa, Andrew D. Somerville, and Margaret J. Schoeninger. Stable Isotopes and Zooarchaeology at Teotihuacan, Mexico Reveal Earliest Evidence of Wild Carnivore Management in Mesoamerica. PLoS ONE 10.9 (2015): e0135635. Print.
Vogel, J. C., and Nikolaas J. van der Merwe. Isotopic Evidence for Early Maize Cultivation in New York State. American Antiquity 42.2 (1977): 238-42. Print.

Wednesday, November 6, 2019

5 Lucrative and Rewarding Trucking Jobs to Consider

5 Lucrative and Rewarding Trucking Jobs to Consider

OTR trucking can be a thankless job: long hours, lots of time away from home, constant tedium, and the ever-present need for vigilance where safety is concerned. Given how difficult it can be, and how high the entry-level standards are, it should come as no surprise that many jobs go unfilled every year. What you probably didn't realize is that truck drivers make great money. If you're independent, a conscientious driver, and don't mind the lone-wolf lifestyle, trucking might be a good career move for you. The high demand means high pay and job security. The schedules can be flexible, you can live almost anywhere you want, and your view will always change by the mile. Not to mention, trucking companies usually offer great benefits. And that's just your normal, run-of-the-mill trucking job. Here are 5 specialized trucking jobs that offer even higher pay, just to give you something to aspire to.

Oversized Load
Heavy loads and double-wides get reflected in your paycheck. You'll have to go through special training and licensing for these positions, but the benefits and pay are more than worth it.

Liquid Hauling
Driving a truck full of hazardous liquids, gases, or chemicals requires an enormous amount of skill and expertise. The more of each you have, the more likely you are to get top compensation.

Ice Road
This is one of the hardest, scariest jobs out there, but you can work just a few months each year and make six figures. Of course, you will also have to be exceptionally talented at driving on ice roads in the Arctic Circle, through extreme cold (-40 degrees) and frequent white-outs and storms.

Mining
The mining industry has trucking jobs available driving dump trucks to and from mine sites. These are some of the highest-paying jobs in that industry. Even as a contractor, you could make $100k a year.

Interstate
Interstate truck driving is a bit less glamorous, and certainly less dangerous, than some of the options above. But it still requires you to drive hard (and safely) to meet deadlines over enormous distances. And the pay is still comparatively very high!

Monday, November 4, 2019

Logistics & Physical Distribution Assignment Example | Topics and Well Written Essays - 2000 words

Logistics & Physical Distribution - Assignment Example The ERP software used and the planning and control mechanisms provide a comprehensive outlook on the company's entire supply chain process. Virtual companies today do not necessarily need their own handling and storage mechanisms. Beyond that, they need not have a direct customer-vendor relationship. The owners of such companies have little role to play in the production and sales process; all they require are customers and brand association. This is achieved by means of sophisticated internet and web-based technology, which brings the customer and the brand together. The owner of such a virtual company collects money from sales and transfers the manufacturer's share to the manufacturer (Donat, 2003). The paper is aimed at an analysis of the supply chain and its processes in the context of a web-based business engaged in selling consumer electronics. After providing a brief outline of the company's supply chain, the paper studies the reasons behind choosing this supply chain pattern and its management and reporting structure. The paper concludes with a comprehensive evaluation of the control and planning mechanism in product logistics and distribution activity. The supply chain designed for the purpose of this study is based on a web-based product-selling service, where the company is a website that sells electronic items sourced directly from the manufacturer. This retail-distribution-based supply chain is engaged in the business of real-time buying and virtual selling of electronic products, primarily catering to the consumer electronics industry. E-commerce is catching on quickly in the global context and offers ease and convenience in shopping for branded and standardised products that do not require a hands-on shopping experience. The e-commerce business aims to provide convenience, efficiency, speed, and ease of shopping to the customer.
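To make the money flow just described concrete, here is a minimal sketch of a virtual retailer's order handling, assuming a simple fixed revenue split between the website owner and the manufacturer. The class names, margin, and prices are hypothetical illustrations, not details from the assignment.

# A toy model of the virtual-retailer flow described above.
# The names, margin, and prices are hypothetical, chosen for illustration.
from dataclasses import dataclass

@dataclass
class Order:
    product: str
    sale_price: float  # what the customer pays the website

class VirtualRetailer:
    def __init__(self, retail_margin: float = 0.20):
        # The retailer keeps a share of each sale; the remainder is
        # transferred to the manufacturer, who ships the product directly.
        self.retail_margin = retail_margin

    def process(self, order: Order) -> dict:
        manufacturer_share = order.sale_price * (1 - self.retail_margin)
        return {
            "forwarded_to_manufacturer": order.product,  # manufacturer fulfils
            "customer_charged": order.sale_price,
            "paid_to_manufacturer": round(manufacturer_share, 2),
            "retained_by_retailer": round(order.sale_price - manufacturer_share, 2),
        }

shop = VirtualRetailer()
print(shop.process(Order("wireless headphones", 99.00)))

The point of the sketch is simply that the website owner never touches inventory: the order is passed through, the payment is split, and fulfilment stays with the manufacturer, which is the arrangement the paragraph above attributes to virtual companies.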

Friday, November 1, 2019

Rehabilitation Essay Example | Topics and Well Written Essays - 750 words

Rehabilitation - Essay Example This paper also explores the roles that the general environment plays in the success of rehabilitation treatments, including the roles that family, friends, and society at large play in rehabilitation (NIH; Mayo Clinic Staff; McLellan et al.; WebMD; National Library of Medicine; World Health Organization).

II. Discussion

There is an element of different forms of dependency and illness, such as drug dependence and alcohol dependence, that is chronic and not easily treatable, which implies that in many cases the focus and commitment of patients undergoing rehabilitation play a role in treatment outcomes. That these dependencies and illnesses are chronic and long-term also implies that those being treated must match the interest and dedication of those offering help. This may be where society, friends, and family fall short, because of the costs and the emotional and psychological consistency required for patients to get better over time. It may also be why treatments sometimes fail, as evidenced by the relapses observed in the medical and academic literature. In relapse cases, some patients get better for a time but sooner or later go back to old habits, whether those are dependencies or psychological and emotional dysfunctions. Relapses may be partly due to a lack of dedication on the part of the patients. This is recognized to such an extent that relapses are included as an expected component of rehabilitation programs, and they are considered in all-inclusive treatment protocols that take into account the willingness of patients to be treated. Recognizing that relapses are common and that patient attitudes factor into the success or failure of treatment is also an admission that dependencies, emotional and physical traumas, and other conditions requiring rehabilitation are complex, and that many factors must be considered in devising rehabilitation treatments and protocols that work. The reality of relapse points to human factors, patient attitudes, and patient inclinations as underlying factors that affect treatment outcomes (NIH; Mayo Clinic Staff; McLellan et al.; WebMD). To be sure, there are aspects of various illnesses, such as drug dependence and the emotional and psychological traumas experienced by soldiers returning from war, that are physiological, and in a way those who seek rehabilitation are those who admit that they do not have total control of their will. They easily succumb to the temptations of their addictions, for instance, or lack the willpower to escape the psychological and emotional traumas that haunt them, in the case of soldiers returning from war. This is recognized, and the literature suggests that science and medicine have progressed over the years to provide medications and other related interventions that treat physical dependencies and allow patients to get past the physical aspects of their conditions. On the other hand, even with some effective drugs and treatments, the literature also suggests that treatment success rates remain inconsistent and variable, again taking us back to the question of just how much effect the individual willpower of patients has on the success of treatment.