Sunday, March 31, 2019
Research On Initial Public Offering And Underpricing Finance Essay

Initial public offerings (IPOs) of firms are widely underpriced. IPO underpricing is measured as the percentage difference between the offer price and the closing price of the first trading day, usually appearing as a positive initial return when shares are newly traded. IPO underpricing amounts to selling shares at a discount in the IPO. The discount requires the issuer to leave money on the table to settle with investors, which incurs a wealth loss for the issuer (Camp, Comer and How, 2006). Numerous theories have therefore been established to explain this discounted sale in IPOs, generally categorized into four branches: asymmetric information, institutional reasons, control considerations, and behavioral approaches (Ljungqvist, 2007). Among these, asymmetric information theory has been the most studied direction over the past 40 years. Nevertheless, studies on the institutional and behavioral aspects have been heating up recently, especially in shedding light on emerging IPO markets, which lack efficient institutional support and exhibit over-speculative behavior.

Evidence of underpricing

The IPO underpricing phenomenon was first academically documented in the 1970s (Stoll and Curley, 1970; Reilly, 1973; Logue, 1973; Ibbotson, 1975). Early findings (focused exclusively on the US market) indicate that underpricing is influenced by particular periods (Ibbotson and Jaffe, 1975) and particular industries, usually the natural resource (oil and gas) industry (Ritter, 1984).
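The underpricing measure defined above (the percentage difference between the offer price and the first-day closing price) can be sketched in a few lines of Python; the prices and share count below are hypothetical, not drawn from any cited study.

```python
def initial_return(offer_price: float, first_day_close: float) -> float:
    """Underpricing measured as the first-day return: (close - offer) / offer."""
    return (first_day_close - offer_price) / offer_price

# Hypothetical IPO: offered at 10.00, closes its first trading day at 11.90.
r = initial_return(10.00, 11.90)
print(f"{r:.1%}")  # 19.0% -- a positive initial return means the issue was underpriced

# "Money left on the table" for a hypothetical issue of 5 million shares:
shares = 5_000_000
print((11.90 - 10.00) * shares)  # 9500000.0
```

A positive initial return is the gain captured by first-day buyers; the money-left-on-the-table figure is the corresponding wealth loss borne by the issuer.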
However, these findings were challenged by Smith (1986), who claimed that underpricing occurred throughout the entire 1960s-1980s period, rather than being concentrated in particular periods, and that underpricing exists across all industries with returns exceeding 15%. Recent studies are more convincing, with longer time periods and larger sample observations. Loughran and Ritter (2004) document that this underpricing discount has averaged around 19% in the US since the 1960s. Nevertheless, the underpricing level (i.e. the average first-day return) tends to fluctuate: 21% in the 1960s, 12% in the 1970s, 16% in the 1980s, 15% in 1990-1998, exploding to more than 65% in the 1999-2000 internet bubble period, and falling back to 12% in 2001-2008 (reference).

Empirical studies have extended the scope of research from the US to the whole world. Underpricing is internationally documented, and the level is exceptionally high in emerging markets. According to (reference)'s research: China (1990-1996, 226 IPOs, 388%); US (1960-1996, 13,308 IPOs, 15.8%); Japan (1970-1996, 975 IPOs, 24%). (Reference) provides wider research: France 3-14%; Australia 11-30%; Taiwan 30-47%; Greece 48-64%; Brazil 74-78.5%; China 127-950%.

Due to its short history and strong government-control characteristics, the Chinese IPO market draws research interest. The average initial return of IPOs in China during 1999-2002 was 3.3 times the average emerging-market initial return (excluding China) and 6.9 times that of developed countries (reference).

Study                    Sample size   Sample period   Initial return (%)
Mok and Hui (1998)       87            1990-1993       289.20
Datar and Mao (1998)     226           1990-1996       388.00
Su and Fleisher (1999)   308           1990-1995       948.59
Chen et al. (2000)       277           1992-1995       350.47
Liu and Li (2000)        781           1991-1999       139.40
Chi and Padgett (2002)   668           1996-2000       129.16
Su (2003)                587           1994-1999       119.38
Chan et al. (2003)       570           1993-1998       175.40
Chan et al. (2003)       286           1999-2000       104.70
Wang (2005)              747           1994-1999       271.90
Kimbro (2005)            691           1995-2002       132.00
Li (2006)                314           1999-2001       134.62

Asymmetric information theory

The cornerstone of this theory is that information is asymmetric among the parties (issuer, underwriter, and investor) in an IPO. Chambers and Dimson (2009) showed that the level of trust among investors, issuers, and underwriters plays a crucial role in the level of IPO underpricing over time in the UK. Asymmetric information leads to ex ante uncertainty among the parties, and higher ex ante uncertainty results in higher underpricing. Ritter (1984) raised the changing risk composition hypothesis, which assumes that riskier IPOs will be underpriced more than less risky IPOs. Beatty and Ritter (1986) then extended Rock's (1986) asymmetric information model (the winner's curse) by introducing the ex ante uncertainty attached to an IPO's market-clearing price. The ex ante uncertainty among investors over the value of the firm determines the underpricing level of the IPO (Loughran and Ritter, 2004). The level of underpricing increases with the degree of ex ante uncertainty about the value of the firm (Beatty and Ritter, 1986; Ljungqvist, 2007). Firms with more uncertainty about growth opportunities have higher levels of underpricing than other firms on average (Ritter, 1984; Beatty and Zajac, 1994; Welbourne and Cyr, 1999). Within the scope of asymmetric information theory, there are three models: winner's curse, principal-agent, and signaling. The winner's curse assumes informed investors hold better information; the principal-agent model argues underwriters hold better information; the signaling model emphasizes the better information retained by issuers.

The winner's curse model is based on asymmetric information between informed and uninformed investors (Rock, 1986). This model assumes informed investors have better information about the new firm's prospects than the issuer and its underwriters.
Uninformed investors would obtain only unattractive IPO firms' shares, because informed investors have already picked up the attractive firms' shares using their better information. That is to say, uninformed investors would expect only negative returns. Consequently, uninformed investors are willing to participate only if new-issue offer prices are low enough to compensate them for expected losses on less attractive issues (Rock, 1986; Ritter and Welch, 2002). Under this assumption, issuers or underwriters have to underprice their IPO shares, i.e. sell them at a discount, to attract these uninformed investors. Underpricing is thus seen as compensation to uninformed investors (Beatty and Ritter, 1986). Underwriters intend to underprice IPO shares in order to keep uninformed investors in the market and make the offering successful; underwriters can use underpricing to obtain full subscription. Moreover, Loughran and Ritter (2002) argue that the winner's curse is no longer the dominant explanation of IPO underpricing. The winner's curse problem and dynamic information acquisition were the main explanations in the 1980s US IPO market; in the 1990s US, analyst coverage and side payments to CEOs and venture capitalists (the spinning hypothesis) became the main reasons.

Welch (1992) claims that underpricing is caused by the cascades effect in the IPO market. This cascades effect arises from the asymmetric information between informed and uninformed investors. Underpricing generates information momentum, which results in a higher market-clearing price at the end of the lockup period (the time between the share-offer day and the listing day), when insiders (first buyers) typically start to sell some of their shares. These first buyers' behavior influences the following buyers' perception of the value of the shares.
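Rock's winner's-curse logic above can be illustrated with a toy Monte Carlo sketch. All parameters here are invented for illustration (informed demand rations the uninformed investor to a 50% allocation on good issues; first-day returns have 15% volatility); the point is only that an uninformed investor who bids on everything earns a below-average allocation-weighted return unless issues are underpriced on average.

```python
import random

random.seed(1)

def uninformed_return(avg_underpricing: float, n_issues: int = 20_000) -> float:
    """Allocation-weighted average return of an uninformed investor who bids on
    every issue. Hypothetical rule: informed investors also subscribe when the
    issue is good (positive first-day return), so the uninformed investor is
    rationed to a 50% allocation on good issues but receives 100% of bad ones."""
    total_alloc, total_gain = 0.0, 0.0
    for _ in range(n_issues):
        ret = avg_underpricing + random.gauss(0, 0.15)  # issue's first-day return
        alloc = 0.5 if ret > 0 else 1.0                 # rationed on good issues
        total_alloc += alloc
        total_gain += alloc * ret
    return total_gain / total_alloc

# With no average underpricing the uninformed investor loses money on average;
# a sufficient discount restores a non-negative expected return.
print(f"{uninformed_return(0.00):+.3f}")
print(f"{uninformed_return(0.05):+.3f}")
```

The first figure comes out negative and the second slightly positive, matching Rock's conclusion that a discount is needed to keep uninformed investors in the market.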
Since there is selling pressure when the lockup period ends and analyst coverage starts, the market price can still remain at a high level on the first trading day, thus producing a significant underpricing level (Bradley et al., 2003; Ofek and Richardson, 2003; Bradley, Jordan, Roten, and Yi, 2001; Brav and Gompers, 2003; Field and Hanka, 2001).

The principal-agent model focuses on asymmetric information between underwriters and issuers (Baron and Holmstrom, 1980; Baron, 1982). Baron (1982) assumes that the underwriter is better informed about demand conditions than the issuer, leading to a principal-agent problem. In this model, the function and role of underwriters are the main subject of study. Underwriters want to underprice IPOs (Baron and Holmstrom, 1980; Baron, 1982; Loughran and Ritter, 2002/2004; Ljungqvist and Wilhelm, 2003). First, the underwriter has to underprice in order to sell all the shares, i.e. underwriters use underpricing to obtain full subscription and make the IPO successful. There are uninformed investors with money to invest in the market, and underwriters convince issuers into underpricing to prevent these uninformed investors from leaving the IPO market. Underpricing also induces underwriters to put forth the correct level of effort (Baron, 1982). The underwriter has to balance a trade-off in the principal-agent problem: on one side, underpricing incurs a wealth loss for the issuer and reduces commission revenue for underwriters; on the other side, Beatty and Ritter (1986) argue that, as repeat players, underwriters have an incentive to ensure that new issues are underpriced by enough, lest they lose underwriting commissions (especially from uninformed investors) in the future. Empirical studies (Nanda and Yun, 1997; Dunbar, 2000) claim that underwriters subsequently lose IPO market share if they either underprice or overprice too much.
However, the principal-agent model is challenged by Muscarella and Vetsuypens (1989), who show that the underpricing phenomenon still exists when an underwriter (investment bank) takes itself public, a setting in which there is no principal-agent problem.

Second, underpricing can produce over-subscription in an IPO, which gives the underwriter discretion in allotting IPO shares. Underwriters can decide to whom to allocate shares when there is excess demand. In this case, the underwriter's discretion acts like a favor exchange with its clients: the underwriter wants to retain buy-side clients and therefore allocates underpriced IPOs to them. Recurrent institutional investors get the IPO shares and enjoy a positive initial return (Loughran and Ritter, 2002). Underwriters have an incentive to underprice IPOs if they receive commission business in return for leaving money on the table. Underpricing can build loyalty between the underwriter and its clients, which in turn facilitates the underwriter's sale of subsequent IPOs and seasoned offerings. For example, in the late 1990s, IPOs were allocated to investors largely on the basis of past and future commission business on other trades (Reuter, 2004).

Third, the spinning effect induces underwriters to underprice. The spinning explanation holds that issuers are willing to hire underwriters with a history of underpricing because issuers receive side-payments. Spinning may be used by the underwriter to acquire IPO deals and influence IPO pricing, but it can also be used as part of a long-run business strategy with a given issuer to attract future underwriting mandates. The side-payments of spinning make issuers reluctant to change their original underwriter for subsequent offerings (Dunbar, 2000; Krigman, Shaw and Womack, 2001; Burch, Nanda and Warther, 2005; Ljungqvist, Marston and Wilhelm, 2006/2009). The spinning effect was first documented by Siconolfi (1997) in a Wall Street Journal article.
Specifically, underwriters set up personal brokerage accounts for venture capitalists and the executives of issuing firms in order to allocate hot IPOs to them (Siconolfi, 1997). Hot IPOs are shares that are underpriced and will gain a large positive initial return in the aftermarket, which increases the personal wealth of the managers of issuing firms (Loughran and Ritter, 2002). The use of hot IPOs to bribe issuers created an incentive for issuers to seek out underwriters willing to offer hot IPOs through underpricing, rather than to avoid such underwriters. Allocating hot IPOs to the issuers and their friends (through friends-and-family accounts) allowed underwriters to underprice even more, i.e. to sell at a friendly price (a larger discount) (Fulghieri and Spiegel, 1993; Loughran and Ritter, 2002; Ljungqvist and Wilhelm, 2003). Underwriters may be more inclined to give favorable allocations of shares to favored investors (friends, family, executives, etc.) and unfavorable allocations to non-favored, non-connected investors. The latter then require higher underpricing to participate in the IPO market. The outcome of this model is due not to ex ante uncertainty but to arbitrary allocation of shares by underwriters. Furthermore, this discretion is not mitigated by a strong institutional framework: during the late 1990s and early 2000s, spinning was a widespread practice in the US, despite the country having one of the strongest investor protection regimes at the time (Liu and Ritter, 2009).

The signaling model, first introduced by Leland and Pyle in 1977, assumes the issuer itself knows its prospects best (possesses better information). Underpricing is a signal that the firm is good (Allen and Faulhaber, 1989; Grinblatt and Hwang, 1989; Welch, 1989). If the issuer possesses the best information about its true value, a high-quality firm can use underpricing as a means to distinguish itself from low-quality companies.
These firms with the most favorable prospects find it optimal to signal their type by underpricing their initial issue of shares, and investors know that only the best firms can recoup the cost of this signal from subsequent issues. In short, a partial offering of shares is made initially, information is then revealed, and more shares are subsequently sold. In contrast, low-quality companies tend to price fully (Bergstrom, Nilsson and Wahlberg, 2006).

Hiring a reputable underwriter with influential analysts should mitigate ex ante uncertainty and thus lower the underpricing level. However, empirical study shows that the more market power an underwriter has (usually via a strong, influential and bullish analyst team), the greater the underpricing (Hoberg, 2007). Hiring a prestigious underwriter (Booth and Smith, 1986; Carter and Manaster, 1990; Michaely and Shaw, 1994) or a reputable auditor (Titman and Trueman, 1986) is seen as a specific means of reducing ex ante uncertainty. Carter and Manaster (1990) and Carter et al. (1998) argue that IPOs taken public by prestigious underwriters benefit from superior certification. The choice of underwriter implicitly indicates the quality of the IPO, because the underwriter's reputation provides a certain guarantee of the issuer's value, which mitigates ex ante uncertainty and should therefore reduce underpricing. Nevertheless, empirical evidence shows a mixed result: there is a negative relation between underwriter prestige and underpricing in the 1980s, but a positive relation in the 1990s (Beatty and Welch, 1996; Cooney, Singh, Carter, and Dark, 2001).

Issuers want to hire reputable underwriters not only because doing so can reduce ex ante uncertainty, but also for the influential and bullish analyst coverage that reputable underwriters provide (Dunbar, 2000; Clarke, Dunbar and Kahle, 2001; Krigman, Shaw and Womack, 2001).
Analyst coverage is crucial to the discovery of the true value of the firm, and especially to its subsequent share offerings. Ljungqvist, Jenkinson and Wilhelm (2003) show that influential analysts can bring in business for underwriters (investment banks). Prestigious investment banks also tend to recruit analysts who make optimistic forecasts (Hong and Kubik, 2003). Although analyst coverage is expensive for underwriters (the largest US investment banks each spent close to $1 billion per year on equity research in 2000, for example) (Rynecki, 2002), these costs are partly covered by the underwriting fees charged to issuers. Because of this information-production cost, many firms prefer a later IPO: firms that go public first generate an analyst-coverage advantage (more information revelation) for other firms in the same industry waiting to go public (i.e. a free-rider effect). In this case, underwriters compensate the earlier firms for this information cost through underpricing to investors (Benveniste, Busaba, and Wilhelm, 2002; Benveniste et al., 2003). Moreover, issuers are reluctant to change underwriter for a seasoned equity offering (SEO) if the underwriter provided analyst coverage and the underpricing effect was significant in the IPO; Cliff and Denis (2004) demonstrated this with a sample of 1,050 US IPO firms during 1993-2000.

When initially offering shares, the issuer places more emphasis on the advertising effect brought by the underwriter's analyst coverage than on the level of underpricing itself. Empirical studies (Cliff and Denis, 2004; Dunbar, 2000; Clarke et al., 2007) illustrate that many US issuers accepted underpricing in the 1990s because they focused more on choosing an underwriter with an influential analyst than on getting a high offer price. The underlying principle is that underpricing can attract investors' attention to the firm, even though issuers otherwise have an incentive to reduce underpricing when modeling their optimal behavior.
Firms can gain advertising benefits from underpricing, which creates favorable conditions for subsequent offerings (Habib and Ljungqvist, 2001). A high-quality firm is underpriced (sells shares at a discount) at the initial offering in order to attract market attention through the analyst coverage that follows. Broad and efficient analyst coverage mitigates the asymmetric information among investors and demonstrates the high quality of the firm; the resulting recognition of the firm's true value among investors helps the firm sell its subsequent seasoned offering shares at a higher price (i.e. recoup the loss from underpricing the initial offering). This process is called the partial adjustment phenomenon (Hanley, 1993). About a third of all IPO issuers between 1977 and 1982 had reissued equity by 1986, the typical amount being at least three times the initial offering (Welch, 1989). Analyst coverage matters because it relates to the predicted future value of the issuer. Moreover, the development of the internet and cable television has extended the influence of analyst coverage on share prices. In this way, the aftermarket share price rises, which further provides the opportunity for the issuer to set a higher price for its seasoned offering.

Behavioral finance: speculative bubble theory

After the internet bubble collapsed in the US in early 2000, academic focus shifted to behavioral finance. Asymmetric information theory is based on the efficient market hypothesis: ex ante uncertainty makes firm valuation difficult for investors, so issuer and underwriter set a higher underpricing level to attract investors. Underpricing is seen as a deliberate selling strategy for an IPO; once the shares are listed in the secondary market, the price should return to its fair value. Asymmetric information theory predicts lower underpricing if information is distributed more homogeneously across investors (Michaely and Shaw, 1994).
However, it is challenged by the heterogeneous expectations hypothesis in the stock market (Miller, 1977), which argues that this deliberate underpricing strategy (selling at a discount) disrupts market efficiency (Loughran et al., 1994). According to Miller (1977), two assumptions hold in the market: heterogeneous expectations and restrictions on short selling. Optimistic investors buy and hold shares, whereas pessimistic investors cannot participate in the trade since short selling is restricted. Consequently, the share price reflects only the opinion of the optimistic investors, and is thus overvalued relative to its fair value.

Aggarwal and Rivoli (1990) raised the speculative bubble theory, arguing that IPO underpricing is caused by faddish behavior on the part of investors. This theory holds that a speculative environment in the secondary market raises the market price on the first trading day, producing a severe underpricing phenomenon. The speculative bubble theory relates to Ibbotson's observation that underpricing is cyclical, which dates back to the 1970s. Ibbotson and Jaffe (1975) found that the level of underpricing fluctuates across time periods. One explanation for the variation may be the existence of hot and cold IPO markets (Ibbotson et al., 2001). In a hot IPO market, the average level of underpricing is large and the number of firms going public increases; afterwards the rate of firms going public remains high, but the level of underpricing decreases. The following cold period starts with fewer firms going public and very low underpricing or even overpricing. There is strong empirical evidence for this recurrent pattern, but its existence has not yet been sufficiently explained theoretically (Ibbotson and Ritter, 1995).

Aggarwal (2000) provides empirical evidence of a positive relationship between the underpricing level and the market index.
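Miller's argument above, that with heterogeneous expectations and no short selling the price reflects only the optimists, can be sketched numerically. The valuation distribution and the "top quarter" cutoff below are invented purely for illustration.

```python
import statistics

# Hypothetical dispersion of investor valuations for one IPO share.
valuations = [8.0, 9.0, 9.5, 10.0, 10.5, 11.0, 12.0, 13.0]

fair_value = statistics.mean(valuations)  # price if all opinions could trade
# With short selling restricted, pessimists simply abstain instead of selling
# short, so the marginal buyer comes from the optimistic tail -- here taken
# (arbitrarily) as the top quarter of valuations.
optimists = sorted(valuations)[-len(valuations) // 4:]
market_price = min(optimists)  # the marginal optimist sets the price

print(fair_value)    # 10.375
print(market_price)  # 12.0 -- overvalued relative to the mean opinion
```

The gap between the two printed numbers is the overvaluation Miller attributes to the short-sale constraint: the pessimists' opinions never reach the price.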
The faddish investor hypothesis claims that in a hot market, over-optimistic (irrational) investors overprice the IPO. This means the high initial return of an IPO is not caused solely by deliberate pre-IPO underpricing, but also by overpricing from optimistic investors in the secondary market.

On one side, the large number of irrational investors is the root of high initial returns in IPOs, because irrational investors determine the transaction price in the secondary market (Ljungqvist, Nanda and Singh, 2003). Ljungqvist and Nanda (2002) treat personal investors as irrational, whereas the issuer, underwriter and institutional investors are treated as rational. Ljungqvist and Wilhelm (2003) show that personal investors hold over-optimistic expectations of stock returns in a hot market; these personal investors are typical noise traders in the IPO market, who prefer to make investment decisions based on the past initial returns of previous IPOs. DeLong et al. (1990) describe the influence of noise traders on the share price. These noise traders in the IPO market are typical positive-feedback traders: when recent initial returns in the IPO market are high, they purchase new issues, and these purchases increase the demand for subsequent IPOs, raising those IPOs' initial returns in turn.

On the other side, it is believed that the imbalance of demand and supply in the IPO primary market causes or intensifies the speculative environment in the secondary market (Aggarwal, 2000). The imbalance between demand and supply creates speculative opportunity. The underlying reason for this imbalance is that the IPO mechanism is not market-oriented in some countries but controlled by the government (China, for example) (Su, 2004). IPO supply in the primary market is therefore inadequate because of government control.
When new issues are over-subscribed, the irrational investors (speculators) who were constrained in the primary market are released into the secondary market. Meanwhile, due to the restriction on short selling (in China, for instance), investors can only make money when the price rises. Therefore, investors push up the price on the first trading day, which causes a severe underpricing level.

Legal framework theory

Legal framework theory can explain the different underpricing levels across countries. The legal framework has a significant impact on ex ante uncertainty in the IPO market. Ex ante uncertainty caused by regulatory constraints, wealth redistribution, and market incompleteness leads to the IPO underpricing phenomenon (Mauer and Senbet, 1992). Differences in the legal frameworks of various countries explain the degree of ex ante uncertainty and the decisions made by investors in the market (La Porta et al., 1997/1998/2002). Cross-country differences in the legal framework affect ownership structure (La Porta et al., 2002), ownership effectiveness (Heugens et al., 2009), capital structure (De Jong et al., 2008), asset structure (Claessens and Laeven, 2003), dividend policy (La Porta et al., 2000), corporate governance (La Porta et al., 2000; Mitton, 2002) and corporate valuation (La Porta et al., 2002). Legal frameworks help to reduce uncertainty by creating a stable foundation in which subsequent human interactions can be grounded (North, 1994; Peng, 2009; Van Essen et al., 2009). First, the legal framework affects the issuing firm's value: it can influence the ex ante uncertainty about firm value in much the same way as ex ante firm-specific risk at the time of the IPO.
Firms operating in a legal environment with poor protection of intellectual property rights are unwilling to invest in intangible assets (research and development capability, or branding, for example), leading to lower firm growth and thus lower firm value.

Second, the legal framework affects investors' decisions. Stronger investor protection can reduce investment risk (for example, lower asset volatility, lower systematic risk, lower stock volatility, and higher risk-adjusted return as measured by the Sharpe and Treynor indices) (Chung et al., 2007; Hail and Leuz, 2006; Chiou et al., 2010). In countries with weaker legal protection, investors are more uncertain about realizing a return on their investment (Shleifer and Vishny, 1997). Lower levels of legal protection for investors create more uncertainty with respect to post-IPO strategies and managerial decisions that may negatively affect firm value (Claessens and Laeven, 2003). In a country with a weaker legal framework, managers or dominant shareholders have more opportunities to transfer profits or assets out of the firm at the expense of minority shareholders. A weaker legal framework provides opportunities to damage firm value through transfer pricing, asset stripping and investor dilution (Cheung et al., 2009; Berkman et al., 2009). This increased probability of ex post expropriation by management or dominant shareholders increases the ex ante uncertainty at the time of the IPO (Johnson et al., 2000). The higher the expropriation risk, the more the offer needs to be underpriced to compensate for this ex ante uncertainty. There are conflicts between dominant shareholders and outside shareholders, because outside shareholders require higher risk premiums (a higher cost of capital) caused by the weak legal framework (Himmelberg et al., 2004; Giannetti and Simonov, 2006; Albuquerue and Wang, 2008).
Although it is argued that issuers can independently improve their level of minority investor protection by listing on a foreign stock exchange with higher standards of investor protection (i.e. cross-listing), it is doubtful that they can fully compensate for the lack of an adequate legal framework at the country level (Black, 2001; Reese and Weisbach, 2002; Roosenboom and van Dijk, 2009).

Third, underpricing may avoid potential legal liability, another explanation provided by Tinic (1988). It is claimed that underpricing reduces the probability of lawsuits if the firm subsequently does poorly in the aftermarket, because the investor is the direct recipient of the benefit from underpricing (Milgrom and Roberts, 1986; Tinic, 1988). Underwriters are unwilling to price these offerings at a high level, lest concern about lawsuits damage their reputation if the shares eventually drop in price in the aftermarket. The argument is based on the idea that unsophisticated and uninformed investors bid the price up to unreasonable levels, and underwriters were unwilling to price IPOs at the market price determined by these noise traders.

Ownership control theory

Ownership control theory holds that an IPO is expected to bring in new shareholders, who dilute the control power of the original shareholders (managers); issuers therefore have less motive to bargain for a higher offer price, resulting in underpricing. Ljungqvist and Wilhelm (2003) explain how this ownership fragmentation produces underpricing through the realignment-of-incentives hypothesis. Logically, issuing firms whose insiders hold a large proportion of shares have an incentive to argue for a higher offer price and thus reduce the underpricing level (Barry, 1989; Habib and Ljungqvist, 2001; Bradley and Jordan, 2002; Ljungqvist and Wilhelm, 2003).
Moreover, the excess demand for shares caused by underpricing enables managers to allocate small blocks of shares to many dispersed small investors. The original managers' control power is thereby protected, since they remain the dominant shareholders; in other words, underpricing can give managers power over control (Brennan and Franks, 1997; Boulton et al., 2007). However, the ownership control theory is challenged: other mechanisms for retaining control, such as takeover defenses and non-voting stock, are more effective, because underpricing cannot prevent outside investors from accumulating larger stakes once trading begins in the aftermarket (Ljungqvist, 2007).

Issue mechanisms

Fixed price: the offer price is a predetermined price.
Bookbuilding: the underwriter sets the final offer price by consulting with investors.
Auction: the offer price is the lowest price at which the final share is bid.
Hybrid: bookbuilding + fixed price, or auction + fixed price.

Bookbuilding, under which the underwriter has discretion over share allocation, can induce investors to reveal their information through their indications of interest, which reduces information asymmetry and thus lowers underpricing (Benveniste and Spindt, 1989; Benveniste and Wilhelm, 1990/1997; Sherman and Titman, 2002; Ritter and Welch, 2002; Gondat-Larralde and James, 2008). On one side, underwriters tend to allocate IPOs to investors who provide information about their demand (i.e. the price discovery process); price discovery eliminates the winner's curse problem, thus reducing the underpricing level. On the other side, bookbuilding gives the underwriter discretion over share allocation (so-called rationing allocation): after collecting investors' indications of interest, the underwriter allocates no (or only a few) shares to any investor who bid conservatively. This rationing of share allocation can reduce the underpricing level.
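The rationing logic described above, in which conservative bidders receive little or no allocation, can be sketched as a toy bookbuilding allocator. The bid data, price levels, and fill-in-price-order rule below are all hypothetical simplifications, not a description of any real bookbuilding process.

```python
def allocate(bids, shares_available):
    """Toy bookbuilding allocator: fill the book from the most aggressive bid
    downward; the marginal bidder gets whatever remains, and anyone who bid
    below the clearing level gets nothing."""
    # bids: list of (investor, limit_price, quantity) tuples
    book = sorted(bids, key=lambda b: b[1], reverse=True)
    filled, allocations = 0, {}
    for investor, price, qty in book:
        take = min(qty, shares_available - filled)
        allocations[investor] = take
        filled += take
        if filled == shares_available:
            break
    # Investors never reached (i.e. rationed out entirely) receive zero.
    return {inv: allocations.get(inv, 0) for inv, _, _ in bids}

bids = [("fund_A", 12.0, 600), ("fund_B", 11.5, 500), ("fund_C", 10.0, 400)]
print(allocate(bids, 1000))  # {'fund_A': 600, 'fund_B': 400, 'fund_C': 0}
```

The conservative bidder `fund_C` is shut out entirely, which is exactly the penalty that gives investors an incentive to bid their true valuations.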
Koh and Walter (1989) found that the likelihood of receiving an allocation under this mechanism was negatively related to the degree of underpricing, and that average initial returns fell substantially, from 27% to 1%, when adjusted for rationing allocation in a Singapore case study. Levis (1990) and Keloharju (1993) report that the rationing share allocation mechanism reduced initial returns in the UK and Finland respectively. Aggarwal, Prabhala, and Puri (2002) also find that institutional investors earn greater returns on their IPO allocations than retail investors do, largely under the bookbuilding mechanism, because they are allocated more shares in those IPOs that are most likely to appreciate in price.

However, imposing constraints on the underwriter's allocation discretion can interfere with the efficiency of bookbuilding. The quality of bookbuilding in many European and Asian countries is damaged by restrictions on its use, leading to higher underpricing (Ljungqvist et al., 2003). Requiring that a certain fraction of the shares be allocated to retail investors, as is common in parts of Europe and Asia, reduces underwriters' ability to target allocations at the most aggressive (institutional) bidders and so may force them to rely more on price than on allocations to reward truth-telling. Moreover, empirical study indicates that bookbuilding in countries outside the US only reduces the level of underpricing when used in combination with US investment banks (underwriters) and targeted at US investors. Although the operation of the different issuing me
The Science of Toxicology

Introduction to Toxicology
The science of toxicology draws on biology, chemistry, and medicine, and is concerned with the study of the harmful effects of chemicals on living organisms. It also studies the harmful effects of chemical, biological and physical agents in biological systems and establishes the extent of damage in living organisms. The relationship between a given dose and its effects on the exposed organism is of very high significance in toxicology. Variables that influence chemical toxicity include the dosage, the probable route of exposure, species, age, sex and environment.

A toxicologist is a scientist or medical professional who specializes in the study of the symptoms, mechanisms, treatments and detection of venoms and toxins, especially in cases of poisoning. To work as a toxicologist one should obtain a degree in toxicology or a related subject such as biochemistry or the life sciences.

The main branches of toxicology are:

Forensic toxicology
It is the use of toxicology and other disciplines such as pharmacology and chemistry (analytical chemistry and clinical chemistry) to aid medical or legal investigation of death due to poisoning and drug use. The principal concern of forensic toxicology is not the legal outcome of the toxicological investigation or the technology used, but rather the obtaining and interpreting of the evidence and results. A toxicological analysis can now be applied to many different kinds of samples. A forensic toxicologist must carefully consider the context of an investigation, particularly any physical symptoms recorded and any evidence collected at the scene of the crime that helps narrow the search, such as any available chemical powders and/or trace residue.
Armed with this information and samples with which to work, the forensic toxicologist determines which toxic substances are present, in what concentrations, and the probable effects of those chemicals on the person.

In vitro toxicity
It is the scientific analysis of the effects of toxic chemical substances on cultured bacteria or mammalian cells. These methods are used primarily to identify hazardous chemicals and to verify the lack of certain toxic properties in the early stages of development of potentially useful new substances such as therapeutic drugs, agrochemicals, food processing aids and additives, and other useful substances. In vitro assays for xenobiotic toxicity are carefully considered by major government organizations (e.g. EPA, NTP, FDA) to better assess human risks. There are major activities in using in vitro systems to advance understanding of toxic activities, and in the use of human cells, tissues and organs to define human-specific toxic effects.

Environmental toxicology
It is a multidisciplinary field of science concerned with the study of the harmful effects of various chemical, biological and physical agents on living organisms. Ecotoxicology is a subdiscipline of environmental toxicology concerned with studying the harmful effects of toxicants at the population and ecosystem levels.

Medical toxicology
It is a medical subfield focusing on the diagnosis, management and prevention of adverse health effects such as poisoning and other complications from medications, occupational toxicants, environmental toxicants, and/or various other biological agents.
Medical toxicologists are involved in the assessment and treatment of poisoning, harmful drug reactions, overdoses and substance abuse. Medical toxicology practitioners are physicians whose primary specialization is generally in emergency medicine, occupational medicine or pediatrics.

Ecotoxicology
It is the study of the effects of toxic chemicals on biological organisms at the population, community and ecosystem levels. Ecotoxicology is a multidisciplinary field which combines toxicology and ecology. The ultimate aim of this approach is to be able to predict the effects of pollution so that efficient and effective action to prevent or remediate any adverse effect can be identified. In ecosystems that are already affected by pollution, ecotoxicological studies can inform as to the best course of action to restore the ecosystem efficiently and effectively. Ecotoxicology differs from environmental toxicology in that it integrates the effects of stressors across all levels of biological organization, from the molecular level to whole communities and ecosystems, whereas environmental toxicology focuses on effects at the level of the individual and below.

Entomotoxicology
It is the analysis of toxins in arthropods that feed on carrion. Using arthropods found on a corpse or at a crime scene, investigators can determine whether toxins or poisons were present in the body at the time of death. This technique is a major advancement in forensics; previously, such determinations were impossible in the case of severely decomposed bodies devoid of intact tissue and body fluids. Ongoing research into the effects of toxins on arthropod development has also allowed better estimation of postmortem intervals.

Forensic entomology is the application and study of insect and other arthropod biology in criminal matters.
It also involves the application of the study of arthropods, such as insects, arachnids, centipedes, millipedes and crustaceans, to criminal or legal proceedings. It is mainly associated with death investigations; however, it may also be used to detect drugs and poisons, determine the location of an incident, and establish the presence and timing of wounds. Forensic entomology can thus be further broken down into three subfields: urban, stored-product, and medico-legal/medico-criminal entomology.

Toxinology
It is the specialized field of science that deals mainly with animal, plant, and microbial toxins. It has been defined as the scientific discipline dealing with microbial toxins, plant toxins, and animal venoms. This involves more than just the chemistry and mode of action of toxins. It deals with the workings of venom, the poison-producing organisms, the structure and function of venom glands, the use of the venom or poison, and the ecological role of these compounds. Toxinology has also been further defined as the science of toxic substances produced by or stored in living organisms, their properties, and their biological significance for the organisms involved.

Clinical toxinology
Within toxinology there is also a subgroup, clinical toxinologists, who study the medical effects in humans of exposure to toxins in animal venoms or plant poisons. This includes problems such as snakebite envenoming, presently considered to affect more than 2.5 million patients each year, with more than 100,000 deaths. Clinical toxinology does not yet have specialist status within the field of medicine, unlike fields such as surgery and radiology; however, training courses in clinical toxinology exist.

Sample Preparation
Sample preparation is often the first step in an analysis; the result of this step can affect the rest of the analytical process.
To get accurate results, a sample should be representative, consistent, homogeneous, and suitable for chromatography column injection or other assay.

The main steps in sample preparation are:
Sample identification
Sample, reagent and standard pipetting
Sample extraction
Output to analyzer format

Preparative steps:
Removal of soluble protein: precipitation, filtration
Extraction: single-step liquid-liquid extraction; multiple-step liquid-liquid extraction (back-extraction); solid-phase extraction
Chemical modification: derivatization to increase the volatility of the sample; chemical or enzymatic hydrolysis of glucuronides
Concentration: evaporation
Cell lysis or tissue homogenization

Sample Characterization
There are many chromatographic assays (GC, GC/MS, HPLC, TLC, LC/MS/MS) that are used for the characterization and toxicological analysis of a sample. To understand them, it is best to break them down into their modular components/steps:
Sample preparation
Separation (the actual chromatography)
Detection (UV/Vis spectrometry, fluorescence spectrometry, mass spectrometry)

Chromatographic Components
Sample loading
The mobile phase during separation
The stationary phase during separation

Separation of the individual molecules in the sample is always based on their relative affinity for the mobile phase versus the stationary phase. Because some molecules have higher affinity for the stationary phase, they pass through the column more slowly than others and, therefore, are separated.

Separation of Different Molecules by Chromatography
After the injection, all molecules start out overlapping. Due to their varying relative affinity for the stationary phase versus the mobile phase, individual molecules then begin to separate. As the different molecules elute off the column, they are detected as resolved peaks.

Relative Retention Times
During the separation, the absolute rates/times of movement of the molecules are not always reproducible. For example, a column can get dirty, decreasing the amount of stationary phase available for interaction with molecules. This can be compared to shortening the length of the column. However, it affects the rates of all molecules in the same way; therefore, their relative rates/times are highly reproducible. The relative retention time (RRT) is defined as the signal detection time for an individual peak divided by the detection time for a known internal standard. RRTs are characteristic and reproducible identifiers of individual molecules.

Quantification of Drug Concentrations
Peak area generally correlates with the amount of drug loaded onto the column and with the original drug concentration. But there can be sample-to-sample variations due to extraction efficiency, loading volumes, detection efficiency, etc. Again, the internal standard is used to correct for variations. Similar to the relative retention time, a relative peak intensity is defined and related to drug concentration. Unlike the relative retention time, a given variation in peak area is not always similar for all molecules; thus, the internal standard is chosen to be chemically similar to the analyte of interest to best correct for variations.
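The two calculations described above, relative retention time and quantification against a linear calibration curve built from relative peak intensities, can be sketched as follows. All retention times, concentrations and intensities here are hypothetical, invented purely to illustrate the arithmetic:

```python
# Hypothetical sketch of the two calculations described in the text.
# RRT: detection time of an analyte peak divided by the detection time
# of a known internal standard in the same run.

def relative_retention_time(analyte_time: float, internal_std_time: float) -> float:
    return analyte_time / internal_std_time

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = m*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return m, my - m * mx

# Hypothetical calibrators: known concentrations (ng/mL) versus measured
# relative peak intensity (analyte area / internal standard area).
conc = [50.0, 100.0, 200.0, 400.0]
rel_intensity = [0.26, 0.52, 1.04, 2.08]
slope, intercept = fit_line(conc, rel_intensity)

# Invert the calibration line for an unknown sample with intensity 0.78:
unknown = (0.78 - intercept) / slope
print(round(unknown, 1))   # 150.0 (ng/mL)
```

The fit-then-invert step is exactly the role of the calibration curve: calibrators with known analyte amounts define the line, and the unknown's relative peak intensity is read back through it.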
However, adequate similarity is not easy to predict or establish.

Protocol for Quantification of Analyte Concentration Based Upon a Calibration Curve
A known quantity of an internal standard is first added to every sample (including controls and calibrators) before any other preparative step.
All samples are then prepared through identical preparative steps, separated by a chromatographic method and quantitatively detected.
The relative peak intensities are measured for a series of calibrators with a fixed amount of internal standard and varying amounts of a known analyte.
These relative peak intensities are fit to an equation, generally linear, to define a calibration curve.
The relative peak intensities of unknown samples are then calculated and related to the calibration curve to estimate the concentration of the analyte (drug) in the original clinical sample.

Some Characterization Techniques

Affinity Chromatography
Affinity chromatography is used for separating biochemical mixtures based on a highly specific interaction between conjugates, such as that between antigens and antibodies, enzymes and substrates, or receptors and ligands.

Principle
Here, the stationary phase used is typically a gel matrix, often of agarose. Generally, the starting material is an undefined heterogeneous mixture of molecules in solution, for example growth medium or blood serum. The molecule of interest has a well-defined property that can be exploited during the affinity purification process. The process can thus be seen as one of entrapment, with the target molecule becoming trapped on the solid or stationary phase or medium. The molecules of the mobile phase do not become trapped, as they do not possess this property. The stationary phase is then removed from the mixture and washed, and the target molecule is released from entrapment in a process known as elution.
The most common use of affinity chromatography is for the purification of recombinant proteins. Affinity chromatography has use in a number of applications, including nucleic acid purification, protein purification from blood, and protein purification from cell-free extracts.

Thin-layer chromatography (TLC)
It is a chromatography technique used to separate non-volatile and stable mixtures. Thin-layer chromatography is performed on a sheet of glass, plastic, or aluminum foil coated with a thin layer of adsorbent material, such as silica gel, cellulose or aluminum oxide. This layer is known as the stationary phase. After the sample is applied to the plate, a solvent or solvent mixture (known as the mobile phase) is drawn up the plate via capillary action. Because different analytes ascend the TLC plate at different rates, separation is achieved. TLC can monitor the progress of a reaction, determine the purity of substances, or identify the compounds present in a given mixture. Some examples are analyzing fatty acids, detecting pesticides, herbicides and/or insecticides in food and water, analyzing ceramides, analyzing the color composition of fibers in forensic toxicology, identifying medicinal plants and their constituents, and assaying the radiochemical purity of radiopharmaceuticals. A number of enhancements to the original method have been made to increase the resolution achieved with TLC, to automate the different steps, and to allow more accurate quantitative analysis. This is called HPTLC, or high-performance TLC.

Summary of Major Learning Points
Modular nature of chromatography: assays are divided into three steps - sample preparation, sample component separation and analyte detection.
The separation steps consist of sample loading, preparing a mobile phase and a stationary phase.
Importance of an internal standard for calculating the relative retention times for component separation.
Calculation of relative peak areas and generation of a calibration curve for the quantification of drug concentrations in the original clinical sample.
Analytical specificity is provided by: sample preparation techniques; separation during chromatography (RRT); the method chosen for detection.
Saturday, March 30, 2019
Financial Accounting Standards Board Framework Analysis

Introduction
The accounting conceptual framework has been criticized for not providing an adequate basis for standard setting. This inadequacy is evidenced by the FASB's standards becoming more and more rule-based. Nevertheless, no empirical evidence has been gathered to verify the criticisms of the conceptual framework. We analyzed the five qualitative characteristics of accounting information from the conceptual framework in conjunction with an individual's intention to use/rely on financial statements. Using structural equation modeling, we found that only one qualitative characteristic, reliability, affected a person's intention to use financial statements. Additionally, it appears that the greatest factor influencing whether an individual relies on financial statements is their familiarity with accounting. Based on our findings, it appears that the conceptual framework not only needs to be altered, but also needs to be changed to help develop principle-based accounting standards that are useful to all people, regardless of their background.

The Financial Accounting Standards Board (FASB) has been criticized for not requiring firms to report information that is interpretable and useful for financial statement users (CICA, 1980). The FASB's conceptual framework is the core from which all accounting standards are derived. Therefore, the accounting conceptual framework must embody a set of qualitative characteristics that ensure financial reporting provides users of financial statements with adequate information for decision making. The U.S. financial accounting conceptual framework was established between the late 1970s and early 1980s.
Statement of Financial Accounting Concepts (SFAC) No. 2 (1980) indicates that there are five main qualitative characteristics of accounting information: understandability, relevance, reliability, comparability, and consistency.

Nature and Purpose of the Conceptual Framework
The conceptual framework was formed with the intention of providing the backbone for principle-based accounting standards (Nobes, 2005). However, the Securities and Exchange Commission (SEC) has recently criticized the accounting standard-setting board for becoming overly rules-based, which paves the way for the structuring of transactions in the company's favor (SEC 108(d)). Critics of the framework have stressed that the move toward rule-based standards is a consequence of inadequacies in the accounting conceptual foundation. Nobes (2005) argues that the need for rule-based accounting standards is a direct result of the FASB trying to force a fit between standards and a conceptual framework that is not fully developed. A consistent and strong conceptual framework is vital for the development of principle-based accounting standards and the progression toward convergence in international accounting standards.

However, researchers are unaware of any empirical evidence that supports the criticisms of the current conceptual framework. Additionally, none of the critics have looked at the conceptual framework from the most important viewpoint, the user's perspective. Therefore, the purpose of this paper is to empirically analyze the adequacy of the conceptual framework, from a user's perspective, in relation to an individual's reliance on financial statements for decision making. We developed a survey instrument to analyze an individual's intention to rely on financial statements using Ajzen's (1991) Theory of Planned Behavior.
We found that the reliability characteristic of the conceptual framework represented the only significant dimension of a person's attitude affecting their intention to rely on financial statements, though the understandability characteristic approached significance. Within the context of the theory of planned behavior, social pressure was not a significant influence on the intention to use/rely on financial statements, yet familiarity with accounting was found to significantly influence intention.

The conceptual framework and potential financial statement users' intentions can be analyzed within the context of Ajzen's (1991) Theory of Planned Behavior. Ajzen (1991) indicates that empirical evidence suggests we can determine an individual's intention to perform a behavior by analyzing their attitude, subjective norms, and perceived behavioral control. Within this perspective, we adapted Ajzen's (1991) theory of planned behavior to an individual's propensity to rely on accounting financial statements.

The purpose of this study was to provide an empirical analysis of the criticism against the FASB's conceptual framework. Our overall results suggest that the current conceptual framework does not adequately align the objectives of financial reporting with the users of financial statements. Nevertheless, our findings have some interesting implications for the conceptual framework and future standard setting. Reliability is the only qualitative characteristic that has a positive, statistically significant relationship with intention. The accounting profession is facing a choice between reliability and relevance in financial reporting, as there is an inherent trade-off between the two (Paton and Littleton, 1940; Vatter, 1947). Reliable information possesses the characteristics of objectivity and verifiability, which are associated with historical cost accounting.
Relevance, on the other hand, pertains to any information that will influence the user's financial decision. Often the most pertinent information is current or prospective in nature. Thus, we cannot have accounting information that maximizes both relevance and reliability, because relevant information is not always verifiable. We would have expected to see relevance as a significant factor in users' intention to use financial statements, since recent accounting standards have moved toward fair value accounting measures, which are considered more relevant than reliable (Ciesielski and Weirich, 2006). However, our results show that reliability is the significant factor. The current accounting curriculum could be the cause of our results, since it is rooted in Paton and Littleton's historical cost approach, which focuses on the reliability of information.

In the context of the Theory of Planned Behavior, we found familiarity to be a statistically significant factor in an individual's intention to use financial statements. Thus, as an individual becomes more familiar with financial statements, he or she is more likely to have the intention to use or rely on them when making decisions. An ANOVA analysis provides further support for this, as it indicates that intention to use or rely on financial statements differs significantly between accounting majors and non-accounting majors. This provides evidence that accounting could be becoming too demanding for individuals who are not proficient in accounting to understand.

It appears that the movement toward rule-based accounting standards could be a contributing cause of this disparity in intention. That is, the accounting standards have become so technical in their execution that the average reader of accounting can no longer discern the main objective of each financial statement element.
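An ANOVA comparison of this kind can be sketched in a few lines. The group scores below are hypothetical Likert-scale (1-7) responses invented for illustration, not the study's data, and the function implements a plain one-way ANOVA F statistic:

```python
# Minimal one-way ANOVA sketch (hypothetical data, not the study's results).

def one_way_anova_f(*groups):
    """F statistic: between-group mean square over within-group mean square."""
    all_obs = [x for g in groups for x in g]
    grand_mean = sum(all_obs) / len(all_obs)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)
    df_between = len(groups) - 1
    df_within = len(all_obs) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical intention-to-rely scores for the two groups of majors:
accounting = [6, 7, 6, 5, 7, 6]
non_accounting = [4, 3, 5, 4, 3, 4]
f_stat = one_way_anova_f(accounting, non_accounting)
print(round(f_stat, 2))   # 28.82 -> a large F, the group means differ markedly
```

With two groups this F statistic is equivalent to the square of a two-sample t statistic; a large value relative to the F distribution's critical value indicates a significant difference in mean intention between the groups.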
This finding is troubling for accounting, since it contradicts the primary objective of accounting, which is to provide useful accounting information for decision making. Accounting information should be useful for all people who want to use it, rather than only being useful to those who understand it. Additionally, under no circumstances should accounting information provide an advantage to individuals who happen to be experts within the field. Accounting should be a tool and not a barrier.

At present, the accounting profession is grappling with a problem it has identified as the need for a conceptual framework of accounting. This framework has been painstakingly developed over centuries, and it is now the profession's task to fine-tune the existing conceptual framework because of the need for continual development due to changing conditions. This conceptual framework has never been laid out in explicit terms; consequently, it is continually overlooked. A conceptual framework has been described as a constitution, a coherent system of interrelated objectives and fundamentals that can lead to consistent standards and that prescribes the nature, function, and limits of financial accounting and financial statements.

For many accountants, the conceptual framework project is difficult to come to grips with because the subject matter is abstract and accountants are accustomed to dealing with specific problems. In resolving those problems, accountants may unconsciously rely on their own conceptual frameworks, but CPAs have not previously been called on to spell out their frameworks in systematic, cohesive fashion so that others can understand and evaluate them. It is essential that a framework be expressly established so that the FASB and those evaluating its standards are basing their judgments on the same set of objectives and concepts.
An expressly established framework is also essential for preparers and auditors to make decisions about accounting issues that are not specifically covered by FASB standards or other authoritative literature.

It is considered that if the conceptual framework makes sense and leads to relevant information, and if financial statement users make the necessary effort to fully understand it, their confidence in financial statements and their ability to use them effectively will be enhanced. No one who supports the establishment of a conceptual framework should be laboring under the illusion that such a framework will automatically lead to a single definitive answer to each specific financial accounting problem. A conceptual framework can only provide guidance in identifying the relevant factors to be considered by standard setters, managers and auditors in making the judgments that are inevitable in financial reporting decisions.

A Classical Model of Accounting: The Framework Expanded
Historically, the particularized information which constituted the emergence of accounting was embedded in a framework for the control of human behavior. With the advent of exchange replacing a self-supporting society, and with exchange ultimately producing a private economy, accounting derived its second, and in modern times considered its most important, function as a planning instrument. The classical model simply states that behavioral patterns do exist in the structural development of accounting; that is, given a stimulus there will be a response which is a direct reaction (an expected reaction) to that stimulus. One can relate this model to the classical model in economics, in which supply and demand for a commodity react in an expected manner due to a change in price. Figure 3 is a geometric illustration of the classical model.
The special features of the model are:
(a) Stimulus (S) = Demand; Response (R) = Supply
(b) Equilibrium (E): Stimulus = Response
(c) Environmental Condition (EC) = Price
(d) Accounting Concept (AC) = Product

A Test of the Validity of the Model
If the classical model does exist in accounting, the historical observations (see Table I) should bear testimony to its existence. The evidence to support this model is purely historical. However, no parallel should be drawn between this thesis (stimulus/response) and Toynbee's (1946, 88) line of inquiry: "Can we say that the stimulus towards civilization grows positively stronger in proportion as the environment grows more difficult?" Consequently, the criticism directed at his work should not be considered even remotely applicable to this inquiry (Walsh 1951, 164-169). On the other hand, only in the extreme can the accusation directed at Kuhn (1962) be directed here, that the conceptual framework (classical model of accounting) as presented may subsume too many possibilities under a single formula (Buchner 1966, 137). More appropriately, this study is undertaken along the lines suggested by Einthoven (1973, 21): "Accounting has passed through many stages. These phases have been largely the responses to economic and social environments. Accounting has adapted itself in the past fairly well to the changing demands of society. Therefore, the history of commerce, industry and government is reflected to a large extent in the history of accounting."

What is of paramount importance is to realize that accounting, if it is to play a useful and effective role in society, must not pursue independent goals. It must continue to serve the objectives of its economic environment. The historical record in this connection is truly encouraging. Although accounting generally has responded to the needs of its surroundings, at times it has appeared to be out of touch with them.
The purpose of this line of inquiry is to put into perspective concepts which have emerged out of certain historical events. (In this treatise, accounting concepts are considered to be coupled with accounting measurement and communication processes; thus, whenever the term concept is used herein, it is to be understood that accounting measurement and communication processes are subsumed under this heading.) These concepts collectively constitute, or at least suggest, a conceptual framework of accounting.

The classical model is postulated as follows: For any given environmental state, there is a given response function which maximizes the prevailing socio-economic objective function. This response function cannot precede the environmental stimulus but is predicated upon it; when such a response function is suboptimal, the then-existing objective function will not be maximized. In a dysfunctional state, a state in which the environmental stimulus is at a low level (a level below pre-existing environmental stimuli), disequilibrium would ensue. In any given environment, the warranted response may be greater or less than the natural or actual response. When environmental stimuli cease to evoke response, the socio-economic climate will be characterized by stagnation as the least negative impact of disequilibrium conditions, and by decline when such environmental stimuli are countercyclical.

Stage 1: In this period (1901 to 1920) the environmental stimulus was corporate policy of retaining a high proportion of earnings (Grant 1967, 196-197; Kuznets 1951, 31; Mills 1935, 361, 386-387). This period is the beginning of corporate capitalism. The term corporate capitalism is used because it emphasizes the role in capital formation which corporations have ascribed to themselves. Hoarding of funds by corporations has reduced the role and importance of the primary equity securities market. The resource allocation process has been usurped by corporations (Donaldson 1961, 51-52, 56-63).
The implication of such a condition is accentuated in the following statement: "It is the capital markets rather than intermediate or consumer markets that have been absorbed into the base of the new type of corporation." (Rumelt 1974, 153). The hard empirical evidence of this condition was revealed by several tests of the Lintner dividend model, which maintains that dividends are a function of profit and are adjusted to meet investment requirements (Kuh 1962, 48; Meyer and Kuh 1959, 191; Brittain 1966, 195; Dhrymes and Kurz 1967, 447). Given the new role assumed by the corporation in capital formation, the investment community (the investing public) became concerned with the accounting measurement process.

The accounting response was verifiability (auditing) to demonstrate the soundness of the discipline. The validity of existing measurements had to be verified to satisfy investors and creditors. The Companies Act 1907 required the filing of an audited annual balance sheet with the Registrar of Companies (Freer 1977, 18; Edey and Panitpakdi 1956, 373; Chatfield 1956, 118). Thus, auditing became firmly established. The function of auditing measurements is a process of replication of prior accounting. Accounting is differentiated from other scientific disciplines in this aspect of replication. Replication is a necessary condition in scientific disciplines; however, replication is generally undertaken in rare instances. In accounting, on the other hand, replication is undertaken very frequently: for specified experiments (business operations), at the completion of the experiment's (operating) cycle. These experiments (business operations) cover one year; at the end of the year, the experiments are reconstructed on a sampling basis. Auditing is the process by which replication of accounting measurements is undertaken.
Publicly held and some privately held corporations are required to furnish audited financial statements which cover their business activities on an annual basis.

Stage 2: This period (1921 to 1970) witnessed the continuation of corporate retention policy. This condition shifted the emphasis of the investor to the securities market in the hope of capital gains, because of the limited return on investment in the form of dividends. Indubitably, investors' concern shifted to market appreciation through stock price changes reflecting the earnings potential of the underlying securities (Brown 1971, 36-37, 40-41, and 44-51). With the securities market valuation of a company's share (equity) inextricably linked to earnings per share, the emphasis was placed on the dynamics of accounting as reflected in the income statement. The Companies Acts of 1928 and 1929 explicitly reflect this accounting response by requiring an income statement as a fundamental part of a set of financial statements (Freer 1977, 18) (Chatfield 1974, 118). Although an audit of such a statement was not explicitly stipulated, it was implied. The accounting response of this period is the extension of accounting disclosure (Chatfield 1974, 118) (Blough 1974, 4-17). The Wall Street Crash of 1929 and the concomitant market failures constitute the environmental stimulus. In the U.S.A., the Securities Act of 1933 and then the Securities and Exchange Act of 1934 were enacted, providing for a significant involvement of the government in accounting.

Stage 3: This period is characterized by the social awareness that business as well as government must be held socially accountable for their actions.
Business can transfer certain costs to other segments of society, thus business benefits at the expense of society; and government can not only squander hard-earned dollars but, through its policies, adversely affect the welfare of various segments of society. This awareness is epitomized in the thesis posited by Mobley (1970, 763): "The technology of an economic system imposes a structure on its society which not only determines its economic activities but also influences its social well-being. Therefore, a measure limited to economic consequences is inadequate as an appraisal of the cause-effect relationships of the total system; it neglects the social effects." The environmental stimulus of corporate social responsibility evoked the accounting response of socio-economic accounting, a further extension of accounting disclosure. The term socio-economic accounting gained prominence in 1970, when Mobley broadly defined it as the ordering, measuring and analysis of the social and economic consequences of governmental and entrepreneurial behavior. Accounting disclosure was to be expanded beyond its existing boundaries: beyond the normal economic consequences, to include social consequences as well as economic effects which are not presently considered (Mobley 1970, 762). Approaches to dealing with the problems of this extension of the general information requirement are being attempted. It has been demonstrated that the accounting framework is capable of generating the extended disclosures on management for public scrutiny and evaluation (Charnes, Colantoni, Cooper, and Kortanek 1972) (Aiken, Blackett, and Isaacs 1975). However, many measurement problems have been exposed in this search for means to satisfy the general information requirement of this new environmental stimulus (Estes 1972, 284) (Francis 1973).
Welfare economics, as a discipline, has always been concerned with the social consequences of governmental and entrepreneurial actions, but the measurement and communication problems are, and always have been, those of the discipline of accounting (Linowes 1968; 1973).

The Conceptual Framework: A Continuing Process
Presented above, the stimulus/response framework, exhibiting structural adequacy, internal consistency and instrumental practicality, has demonstrated, unequivocally, its effectiveness over the centuries. The systemic information of financial accounting is the connective tissue of time in a financial perspective. The systemic information of managerial accounting is non-connective, but rather reflects events in a decision-making perspective. This can be best illustrated in the table below. (Draw a table)

The process of concept-formation is a special type of learning. "The formation takes time and requires a variety of stimuli and reinforcements. The process is never fully determinate, for even when the concept is well formed, it can suffer neglect or inhibition, and it can be revived by further reinforcement or modified by new stimulation" (emphasis added) (Meredith 1966, 79-80). A body of concepts and interlocking measurement and communication processes (types of information: stocks and flows; constraints on information; permissible values and methods of measurement; media of communication, quantitative and qualitative) has been developed over the centuries. This set of concepts and interlocking measurement and communication processes has emerged as responses to specific stimuli at specific points in time to satisfy specific information needs. It is this body of concepts and interlocking measurement and communication processes, subject to amplification and modification, that constitutes the conceptual framework of accounting.
Possibly, with other modifications or amplifications deemed necessary, the conceptual framework as presented above can serve as an expressly established framework to enable preparers and auditors to make decisions, which would conform and be upheld, about accounting issues that are not specifically covered by FASB standards or authoritative literature. A conceptual framework is necessary because, in the first place, to be useful, standard setting should build on and relate to an established body of concepts and objectives. A soundly developed conceptual framework should enable the FASB to issue more useful and consistent standards over time. A coherent set of standards and rules should be the result, because they would be built upon the same foundation. The framework should increase financial statement users' understanding of and confidence in financial reporting, and it should enhance comparability among companies' financial statements. Secondly, new and emerging practical problems should be more quickly solved by reference to an existing framework of basic theory. It is difficult, if not impossible, for the FASB to prescribe the proper accounting treatment quickly for situations like this. Practicing accountants, however, must resolve such problems on a day-to-day basis. Through the exercise of good judgment and with the help of a universally accepted conceptual framework, practitioners can dismiss certain alternatives quickly and then focus on an acceptable treatment. Over the years, numerous organizations, committees, and interested individuals developed and published their own conceptual frameworks, but no single framework was universally accepted and relied on in practice. Recognizing the need for a generally accepted framework, the FASB in 1976 began work to develop a conceptual framework that would be a basis for setting accounting standards and for resolving financial reporting controversies.
The FASB has issued six Statements of Financial Accounting Concepts that relate to financial reporting for business enterprises. They are:
_ SFAC No. 1, Objectives of Financial Reporting by Business Enterprises, presents the goals and purposes of accounting.
_ SFAC No. 2, Qualitative Characteristics of Accounting Information, examines the characteristics that make accounting information useful.
_ SFAC No. 3, Elements of Financial Statements of Business Enterprises, provides definitions of items in financial statements, such as assets, liabilities, revenues, and expenses.
_ SFAC No. 5, Recognition and Measurement in Financial Statements of Business Enterprises, sets forth fundamental recognition and measurement criteria and guidance on what information should be formally incorporated into financial statements and when.
_ SFAC No. 6, Elements of Financial Statements, replaces SFAC No. 3 and expands its scope to include not-for-profit organizations.
_ SFAC No. 7, Using Cash Flow Information and Present Value in Accounting Measurements, provides a framework for using expected future cash flows and present values as a basis for measurement.

At the first level, the objectives identify the goals and purposes of accounting. Ideally, accounting standards developed according to a conceptual framework will result in accounting reports that are more useful. At the second level are the qualitative characteristics that make accounting information useful and the elements of financial statements (assets, liabilities, and so on). At the third level are the measurement and recognition concepts used in establishing and applying accounting standards. These concepts include assumptions, principles, and constraints that describe the present reporting environment.

First Level: Basic Objectives
As we discussed in Chapter 1, the objectives of financial reporting are to provide information that is: (1)
useful to those making investment and credit decisions who have a reasonable understanding of business and economic activities; (2) helpful to present and potential investors, creditors, and other users in assessing the amounts, timing, and uncertainty of future cash flows; and (3) about economic resources, the claims to those resources, and the changes in them. The objectives therefore begin with a broad concern about information that is useful to investor and creditor decisions. That concern narrows to the investors' and creditors' interest in the prospect of receiving cash from their investments in or loans to business enterprises. Finally, the objectives focus on the financial statements that provide information useful in the assessment of prospective cash flows to the business enterprise. This approach is referred to as decision usefulness. It has been said that the golden rule is the central message in many religions and the rest is elaboration. Similarly, decision usefulness is the message of the conceptual framework and the rest is elaboration. In providing information to users of financial statements, general-purpose financial statements are prepared. These statements provide the most useful information possible at minimal cost to various user groups. Underlying these objectives is the belief that users need reasonable knowledge of business and financial accounting matters to understand the information contained in financial statements. This point is important. It means that in the preparation of financial statements, a level of reasonable competence on the part of users can be assumed. This has an impact on the way and the extent to which information is reported.

Second Level: Fundamental Concepts
The objectives of the first level are concerned with the goals and purposes of accounting. Later, we will discuss the ways these goals and purposes are implemented in the third level.
Between these two levels it is necessary to provide certain conceptual building blocks that explain the qualitative characteristics of accounting information and define the elements of financial statements. These conceptual building blocks form a bridge between the why of accounting (the objectives) and the how of accounting (recognition and measurement).

Qualitative Characteristics of Accounting Information
Choosing an acceptable accounting method, the amount and types of information to be disclosed, and the format in which information should be presented involves determining which alternative provides the most useful information for decision-making purposes (decision usefulness). The FASB has identified the qualitative characteristics of accounting information that distinguish better (more useful) information from inferior (less useful) information for decision-making purposes. In addition, the FASB has identified certain constraints (cost-benefit and materiality) as part of the conceptual framework. These are discussed later in the chapter. The characteristics may be viewed as a hierarchy.

Decision Makers (Users) and Understandability
Decision makers vary widely in the types of decisions they make, how they make decisions, the information they already possess or can obtain from other sources, and their ability to process the information. For information to be useful there must be a connection (linkage) between these users and the decisions they make. This link, understandability, is the quality of information that permits reasonably informed users to perceive its significance. To illustrate the importance of this linkage, assume that IBM Corp. issues a three-month earnings report (interim report) that shows interim earnings way down. This report provides relevant and reliable information for decision-making purposes. Some users, upon reading the report, decide to sell their stock. Other users do not understand the report's content and significance.
They are surprised when IBM declares a smaller year-end dividend and the value of the stock declines. Thus, although the information presented was highly relevant and reliable, it was useless to those who did not understand it.

Primary Qualities: Relevance and Reliability
Relevance and reliability are the two primary qualities that make accounting information useful for decision making. As stated in FASB Concepts Statement No. 2, the qualities that distinguish better (more useful) information from inferior (less useful) information are primarily the qualities of relevance and reliability, with some other characteristics that those qualities imply.

Relevance
To be relevant, accounting information must be capable of making a difference in a decision. If certain information has no bearing on a decision, it is irrelevant to that decision. Relevant information helps users make predictions about the ultimate outcome of past, present, and future events; that is, it has predictive value. Relevant information also helps users confirm or correct prior expectations; it has feedback value. For example, when UPS (United Parcel Service) issues an interim report, this information is considered relevant because it provides a basis for forecasting annual earnings and provides feedback on past performance. For information to be relevant, it must also be available to decision makers before it loses its capacity to influence their decisions. Thus timeliness is a primary ingredient. If UPS did not report its interim results until six months after the end of the period, the information would be much less useful for decision-making purposes. For information to be relevant, it should have predictive or feedback value, and it must be presented on a timely basis.

Reliability
Accounting information is reliable to the extent that it is verifiable, is a faithful representation, and is reasonably free of error and bias. Reliability is a necessity for individuals who have neither the time nor the
Friday, March 29, 2019
The Battle Of Iwo Jima
During World War II, on February 19, 1945, the United States of America and the Empire of Japan began fighting for Iwo Jima, a tiny island approximately 660 miles away from Japan. Codenamed Operation Detachment by the United States, the battle lasted for 35 days, ending on March 26, 1945, and it remains the largest battle in Marine Corps history, with some 75,144 men being deployed to fight (Frank). The Battle of Iwo Jima also marked the first time that American casualties were higher than Japanese casualties in an amphibious assault. American casualties reached 24,733 while Japanese casualties were a little over 21,570 (Frank; Naval History). This number was due to the leadership of the Japanese during the battle. The general in command of the Japanese forces at Iwo Jima was Lieutenant General Tadamichi Kuribayashi. During the battle for Iwo Jima, Lieutenant General Kuribayashi would show that he was one of Japan's finest generals. In preparation for the upcoming battle, Lieutenant General Kuribayashi chose to concentrate his defense on the northern two-thirds of Iwo Jima, rather than on the beaches where the United States would land troops (Frank). Kuribayashi knew that Japan would not beat the United States, simply because of the number of soldiers the United States would send. Knowing this, Kuribayashi decided not to focus his efforts on the southern beaches and lose quickly to a superior American force, but instead to create strong defensive positions on the rest of the island to increase the number of American casualties.
It was Kuribayashi's belief that if his forces could inflict enough American casualties, the United States would not be compelled to invade Japan, fearing that they would lose too many soldiers. In the Pacific Campaign of World War II, the United States used a strategy called island hopping, where the United States would attack a Japanese-controlled island, capture it, and then repeat the process until they got to Japan. This was the United States' strategy to defeat Japan, and the island of Iwo Jima was the next island to be captured. Iwo Jima was also strategically important because of the airfields placed on it (Burrell). Iwo Jima was close enough to Japan that the United States could use its airfields to attack Japan through the air with B-29 bombers. This was the main reason why Japan defended the island so heavily. While the island of Iwo Jima was important to the defense of mainland Japan, it was of little offensive importance, because by this time Japan's strategy was strictly based on the defense of the mainland. One Japanese staff officer described Iwo Jima's offensive relevance as such: "Our first-line Army and Naval air forces had been exhausted in the recent Philippines Operation. The foresight to restore our air forces, bringing their combined number to 3,000 planes, could materialize only by March or April, and even then, mainly because the types of airplanes and their performance proved to be impractical for operations extending beyond a 550-mile radius, we could not use them for operations in the Bonin Islands area" (Burrell). Before the actual land invasion began, the United States bombed the southern part of Iwo Jima for three days before landing its troops. This is where American intelligence significantly failed in two ways.
It underestimated Kuribayashi's forces by at least a third, and it completely missed Kuribayashi's intent to make his last stand at the north end of the island instead of facing the Americans head on at the south end. These errors caused the three-day bombardment, the heaviest of the war, to be misdirected at the southern landing beaches instead of the northern side of the island, where the majority of Kuribayashi's forces would be. When the land invasion did begin, American forces were met with no resistance from the Japanese. Instead of attacking the landing forces head on, the Japanese waited for the Americans to advance onto the beach, then attacked them as they closed in towards the Japanese positions. Not only did the ambush cause a great number of initial American casualties, it was also difficult for the Marines to fight back due to the terrain of the beach. Instead of being made of sand, the beach was full of volcanic ash, which made it hard for the landing forces to dig into the ground and defend themselves. One Marine described it as "trying to fight in a bin of loose wheat" (Frank). American forces were eventually able to break the Japanese line, and on February 23, 1945, the southern end of Iwo Jima was captured by American forces. As the United States pushed forward, they were met with heavy resistance from the Japanese, who were well fortified and prepared to face the enemy. The further north the United States went, the harder it became for them to fight. The Japanese had dug many bunkers into the terrain and were successful at using ambush tactics against the Marines, which only made their advance more difficult. As the battle continued, the Marines adapted to fighting the Japanese on rough terrain, and with their superior forces drove the Japanese back until they could retreat no more.
The Marines fought for a long and tiring 35 days until, on March 26, 1945, the island was officially declared secure by American forces. In addition to being a historic battle of World War II, the Battle of Iwo Jima has also had a significant effect on American culture. You can see traces of the battle in many art forms and popular media in America. The Raising of the Flag on Iwo Jima, a photograph taken by American photographer Joe Rosenthal, depicts five Marines and a Navy corpsman raising the flag on Mount Suribachi, at the southern end of Iwo Jima, on February 23, 1945. The photograph became a symbol of American patriotism during World War II, and the picture was even commemorated on a postage stamp. You can also see the battle depicted in a movie directed by Clint Eastwood called Letters from Iwo Jima. In the movie, Clint Eastwood shows the Battle of Iwo Jima from the Japanese side, depicting what Japanese soldiers experienced as the battle was fought. The movie won an Academy Award for best sound editing and was nominated for three more awards for its depiction of the historic battle. In conclusion, the Battle of Iwo Jima was one of the most important battles on the Pacific front of World War II. By successfully capturing the island of Iwo Jima, the United States acquired the airfields on the island. With these airfields under United States control, B-29 bombers would be able to use the island to launch aerial assaults towards Japan and to use it as a fueling station closer to Japan. The battle also showed the United States how far the Japanese were willing to go to defend their homeland. Out of the initial force of more than 20,000, only 1,083 Japanese soldiers were captured alive (Frank).
This showed the United States that Japanese soldiers were willing to fight to the death to defend their home, and that if the United States was planning on invading Japan, the number of casualties would have been catastrophic.
The Global Problem Of Violence Against Women Criminology Essay
Violence against women continues to be a global problem. It does not discriminate by race, culture, education, age or class. A person's home, while considered a safe haven by many people, is also a place that endangers lives and breeds various forms of violence carried out against women. In many instances, women's rights are violated in the domestic environment by people (mostly males) who are or have been entrusted with power and/or intimacy by the women in the household. These people are found in the roles of husbands, fathers/stepfathers, uncles, brothers and other relatives. Today, various international organizations have pushed for the protection of women against violence. The human rights framework has led to the creation of certain international legal mechanisms that would aid in the protection of women against violence. However, how effective are these mechanisms? Whose responsibility is it, in terms of combating domestic violence against women? These are some of the questions that this essay will explore.

II. Introduction
It is said that the home is a place where people are supposed to feel a sense of belonging, stability and security, and where people are guaranteed to receive emotional and physical well-being in the presence of loving and caring relationships (Hart & Ben-Yoseph 2005).
However, for many, home has become a place of terror and violence, where instead of living in a peaceful and loving environment, people live every day in fear and abuse at the hands of somebody close to them or somebody they even trust (Khan 2000, in Innocenti Digest 2000: 1). Despite various evidence that domestic violence affects many women, across cultural backgrounds, ethnicities and geographic locations, the issue only surfaced as a significant international human rights agenda in the early 1980s (Craven 2003: 1). However, in recent years, there is said to be a greater understanding of the causes and effects of domestic violence (Khan 2000, in Innocenti Digest 2000: 1). Moreover, along with the issue's growing significance, various organizations at the international and regional levels which were concerned with women's rights grew and started to pave the way for a new era in human rights (Craven 2003: 1). Some of the conventions which were products of the global consensus on domestic violence were the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW), the Rights of the Child and the Platform for Action (Khan 2000, in Innocenti Digest 2000: 1). In Australia, the three specific conventions ratified by the government are the Convention on the Elimination of All Forms of Racial Discrimination; the Convention Against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment; and CEDAW, which was adopted by the United Nations General Assembly in the 1970s (Craven 2003: 2). However, since the ratification of these conventions, health, welfare and legal professionals in Australia have been facing a great challenge in figuring out how to formulate programs and policies in accordance with the newly conceptualized international law on gender or domestic violence (Craven 2003: 1).
Progress has been slow, not only for Australia but also for other nations that adopted the international conventions, because the process of identifying effective strategies and approaches to address domestic violence is still in the course of definition (Khan 2000, in Innocenti Digest 2000: 1). Fifteen years ago, the first national statistics on the incidence and prevalence of domestic violence in Australia were published by the Australian Bureau of Statistics (1996), wherein they were able to analyse a sample of 6,300 Australian women and found that 42% of women who had been in a previous relationship reported having experienced violence from their partners. In addition, it was found that half of the women who reported incidents of violence with their current partners sustained more than one type of assault, ranging from bruises, cuts and scratches to stabbing, gunshots or other types of injuries (ABS 1996: 55, cited in Mulroney 2003: 1-2). In more recent years, a study conducted by Access Economics (2004) found that in 2002 to 2003, an estimated 408,100 Australians became victims of domestic violence, of whom 87% were women (Access Economics 2004: 1). Furthermore, a study conducted by Virueda & Payne (2010) through the Australian Institute of Criminology found that in 2007 to 2008, most homicides that occurred within that time period were domestic homicides, where the victim usually shared a domestic relationship with the offender.
According to Virueda & Payne (2010), most of the domestic homicides committed during the period of their study were classified as intimate partner homicides, which comprised 60% of their subjects. This goes to show that even with the advent of the covenants and conventions which catered to discussing and formulating policies to prevent or resolve cases involving incidents of domestic violence, there is yet much work and transformation to do before we can say that the world is finally ready and able to put a full stop to domestic violence and abuse.

III. Background
History of International Human Rights Law
Tracing the history of human rights would take us back to the time of the conception of the Ten Commandments, the Code of Hammurabi and the Rights of Athenian Citizens (Weissbrodt & de la Vega 2007: 14). The earliest efforts to defend people from abuses such as arbitrary killing, torture, discrimination, starvation and forced eviction came from the belief that individuals have immutable rights as human beings (Weissbrodt & de la Vega 2007: 1), and thus they deserve to be protected against any form of abuse. In more recent periods, the effort to identify and defend human rights was said to be an outcome of the violence and refugee problems during wars (Weissbrodt & de la Vega 2007: 14), more specifically after the tragedies which occurred in the Second World War (Cazen 2003). In retrospect, during the rise of the nation-states in the seventeenth century, classical international law favoured state sovereignty and did not accept the concept of human rights, for it was believed that the nation-state was a good in itself and was more than an instrument to promote welfare and protection among citizens (Weissbrodt & de la Vega 2007: 15).
However, during the eighteenth to nineteenth centuries, international law began focusing on previously isolated fields such as the protection of aliens, the protection of minorities, human rights guarantees in national constitutions and laws, the abolition of slavery, the protection of victims of armed conflict, self-determination, labor rights and women's rights. It is believed by some that the formation of the United Nations in 1945 was proof of our modern endeavour to protect human rights (Weissbrodt & de la Vega 2007: 3). According to Weissbrodt and de la Vega (2007), the most important sources of international law are treaties and customs, for they are said to have binding legal effect between the states that signed those agreements (Weissbrodt & de la Vega 2007: 3). Moreover, it was regarded that the most important treaty was the United Nations Charter, which was the basis for the establishment of the United Nations (Weissbrodt & de la Vega 2007: 3). About 188 nations around the world signed the United Nations Charter, which vowed to form an international alliance with the common goal of upholding the rights of humans and encouraging peace and cooperation among nations (Cazen 2003). Three years later, in 1948, the Universal Declaration of Human Rights was established, and it set out the international standards for human rights (Cazen 2003). With regard to women's rights, it was said that the efforts to abolish slavery in the nineteenth century awakened the concern for women's rights during that time; thus the international struggle for women's rights began, way back in 1848 with the Seneca Falls Convention and in 1904 with the International Woman Suffrage Alliance (Weissbrodt & de la Vega 2007: 17).

Domestic Violence: Definition, Causes and Prevalence in Australia
Definition
What is domestic violence? What are its causes, and how does it affect the lives of women who are victims of such a dilemma? These are some of the questions which we will address further in this essay.
Domestic violence, as defined in Article 1 of the UN Declaration of 1993 (as cited in Westendorp & Wolleswinkel: 37), is:
"Any act of gender-based violence that results in, or is likely to result in, physical, sexual or psychological harm or suffering to women, including threats of such acts, coercion or arbitrary deprivation of liberty, whether occurring in public or private life."
In further detail, the Declaration on the Elimination of Violence against Women (1993) states that any form of domestic violence may occur in three areas:
(1) In the family, where violence may be in the form of battering, sexual abuse of female children in the household, dowry-related violence, marital rape, female genital mutilation, other traditional practices harmful to women, non-spousal violence and violence related to exploitation;
(2) In the general community, where violence may include rape, sexual abuse, sexual harassment and intimidation at work, in educational institutions and elsewhere, trafficking in women and forced prostitution;
(3) In the state, wherever it occurs, where violence is perpetrated or condoned.
Furthermore, according to Laing and Bobic (2002), Australian literature recognizes that domestic violence, whether defined as domestic or family, may include a range of violent behaviours, from physical, sexual, verbal, psychological and emotional abuse to social isolation and forms of financial abuse (Laing & Bobic 2002: 14, as cited in Access Economics 2004: 3).

Prevalence
The Victorian Government recognizes that women are at greater risk of family violence, sexual assault, harassment and stalking than men (Western Region Network Against Family Violence 2003: 16). In addition, the Victorian Government also contends that women are more likely to experience violence in the home than in public places, especially at the hands of their previous or current partners, and most especially, the cycle of violence occurs in
the context of an ongoing power asymmetry and inequality between men and women in society (Victorian Government 2002: 20, as cited in Western Region Network Against Family Violence 2003: 16). Over time, various studies have been conducted to describe the prevalence of domestic violence in Australia. As mentioned in the previous paragraphs of this essay, the first breakthrough in gathering large-scale statistical data on incidences of domestic violence was the Australian Bureau of Statistics' Women's Safety Survey in 1996, which gathered 6,300 respondents. According to the results of the survey, one out of twelve Australian women who were married or in a de facto relationship had experienced some form of violence from their current partners (Interbreur 2001). The ABS Women's Safety Survey also found that more women experienced violence from their previous or current partners than from any other person, whether a stranger or a male known to them (Western Region Network Against Family Violence 2003: 18). In 2005, the ABS Personal Safety Survey found that, during the 12-month period prior to the survey, 38% of respondents reported having experienced assault by a male perpetrator, particularly their previous or current intimate partners (Parliamentary Library 2009). Moreover, in more recent data, a study conducted by Virueda and Payne in 2010 found that in most domestic homicides occurring in the year 2007 to 2008, the victim shared a domestic relationship with the offender, and 60% of these incidents were classified as intimate partner homicides. Now the question arises: why do men victimize women through abusive behaviour?

Causes
Women who are victims of domestic violence share no common factor. Abuse may occur to anyone, regardless of socioeconomic status or racial and cultural background (Better Health Channel 2011).
However, women who are young, Indigenous, have a disability, or live in rural areas were found to be at greater risk of domestic violence (Better Health Channel 2011). Furthermore, the Domestic Violence Resource Centre Victoria, through the Better Health Channel (2011), identified some of the prevalent causes or reasons why some men inflict violence and abuse on women: domestic violence may be caused by a high regard for masculinity or a firm patriarchal mindset among some males, and abusers often tend to blame their acts of violence on intoxication (alcohol), on other people, or on other circumstances. However, as the Victorian Government has stated, domestic violence may have roots in the existing power imbalance and continuing patriarchal mindset of society.

IV. Discussion

International Law and Violence against Women: The Mechanisms and their Effectiveness
As discussed earlier, the UN Declaration of 1993 set out the international standard for protecting the rights of individuals. However, although the UN Charter had affirmed the supposed equality between women and men, its gender-blindness often resulted in cases of structural discrimination against women, and women's rights were still neglected (Westendorp & Wolleswinkel 2005: 20). At that time, international human rights law was limited to protecting only the public, political, legal and social sphere, and did not include the private sphere of the home and family (Westendorp & Wolleswinkel 2005: 20). In effect, using international human rights law as a framework to look into women's rights entailed certain methods and mechanisms to determine the obligations of governments to protect the human rights of women, and to hold governments accountable if they fail to meet those obligations (Westendorp & Wolleswinkel 2005: 20).
For instance, the UN Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW), one of the mechanisms used to address the issue, required all nation states that ratified the convention to take all appropriate measures to eliminate discrimination against women by any person, organization or enterprise (Westendorp & Wolleswinkel 2005: 21). Furthermore, in 1993, the UN finally declared violence against women a human rights violation requiring urgent attention (Westendorp & Wolleswinkel 2005: 22). After the declaration, more conventions and mechanisms were created. Further successful mechanisms, outcomes of the continued lobbying of women from different NGOs, were the Beijing Declaration and Platform for Action in 1995 and the Rome Statute in 1998, which included offences such as rape, war crimes and other crimes against humanity (Westendorp & Wolleswinkel 2005: 23).

Protecting the Rights of Women: Who is Responsible?
Australia's commitment to the ratification of CEDAW, or the Women's Convention, was, as Cazen (2003) observed, not as smooth and easy as one might expect. A common problem for states when ratifying mechanisms or policies of international human rights law concerning women's rights is how to assimilate those international policies into their domestic policies without confusion. There were many reservations from some nations when CEDAW was introduced by the UN, and these reservations primarily resulted in some nations weakening their commitment to the alliance for upholding women's rights; by entering reservations, they reduced their obligations to change their domestic policies. In Australia, the Commonwealth recognizes that it is the role of the government to address domestic violence, and it has therefore created many committees and organizations to deal with issues and incidences of domestic violence.
As far back as 1986, the Commonwealth of Australia commenced its role in addressing the issue of domestic violence, followed by its efforts to establish the Office of the Status of Women (OSW). From then on, the Commonwealth has helped keep accurate, factual records of incidences of domestic violence in Australia by sponsoring a series of surveys from 1987 up to 2005 (Parliamentary Library 2009). However, the role of the Commonwealth is limited to spearheading standard approaches to policy and legislative reform in the states and territories of Australia; each state, then, is responsible for enforcing and implementing policies concerning domestic violence (Parliamentary Library 2009). Policy development in Australia has gone through a long process of reform and implementation. During those 20 years of policy making and development, the government was able to focus on tertiary levels of intervention in domestic violence by providing sympathetic, victim-centred care after an assault (Parliamentary Library 2009). These tertiary interventions exist in the form of violence reporting, law reform, provision of refuges, health care, accommodation, and domestic violence services.

V. Conclusions and Recommendations
The majority of the Australian literature reviewed for the purpose of this essay reported that domestic violence, and any form of abuse occurring in the context of home and family, is regarded as one of the most under-reported criminal offences across the states of Australia.
Reviewing the figures from the earliest ABS Women's Safety Survey in 1996 to the homicide reports of the Australian Institute of Criminology in 2010, we see that even with the government's efforts to establish committees and legislative reforms to address and prevent domestic violence, those efforts appear to have had little effect on the total elimination of incidences of domestic violence. Although domestic violence against women has been specifically defined by the UN, the law was found to be limited in addressing all forms of abuse, in that some types of violence, such as economic deprivation, excessive possessiveness or jealousy, and enforced isolation, were found not to be directly remediable through legal measures (Alexander & Seddon 2002). Furthermore, throughout the review of related literature for this essay, it was also found that policy making was not the only obstacle to progress in eliminating violence against women; there were also underlying problems that limited the effectiveness of the international law mechanisms. One of those is the existing power imbalance and the patriarchal mindset of societies, and most specifically the very high regard for masculinity among male offenders. Another is that some societies whose customs and traditions often place women at a disadvantage tend to react defensively against the imposition of international law mechanisms on their domestic legislation.
Thus, throughout the world, there may still be states guilty of condoning violence against women, arguing that it is part of their customs and traditions. On a positive note, the Commonwealth of Australia has been consistent in its commitment to the battle against violence against women, creating committees and funding surveys to monitor the current state of the issue in Australia. However, these efforts may also come to nothing, since most victims of abuse are reluctant to report the abuse to authorities. As we can see, a chain reaction exists among perpetrators, victims, and legislative reform: perpetrators continue to uphold the patriarchal mindset while the victims remain silent about the abuse, and the government then has difficulty formulating policies for prevention and action against the crime while it also has difficulty obtaining accurate data on the real prevalence of domestic violence in Australia. Based on these conclusions, it is safe to recommend a massive effort towards educating people about domestic violence. This may help modify the existing resentment or indifference towards policies intended to prevent or resolve cases of domestic violence. Education, or knowing more about the issue, may enlighten people and in time modify their behaviour and beliefs about domestic violence. It is also important to make victims feel that they have the law to protect them, so that when they come forward and report incidences of abuse, they will be assured of their safety and their lives can become normal again. When victims finally feel that it is safe and acceptable to admit they are victims of abuse, accurate data can be acquired and the government will see the real prevalence of the issue.
As for the legal framework, there is still a long way to go before every policy with regard to violence against women is finally in place, but the best course would be to focus on safety measures, such as the tertiary interventions provided by the Commonwealth, and to keep pushing for reform.