Monday, January 27, 2020

Non Financial Performance Measures: Summary and Analysis

Total quality management and non-financial performance measures
In order to answer the first, fundamental question about non-financial performance measures: why should companies use non-financial reporting, it is necessary to look at the relationship between market value and book value. The market value of a company reflects investors’ perception of the company’s present and future value, as manifested in stock prices. The book value, on the other hand, reflects the value of the company as reported in the official balance sheet: assets less liabilities, or net assets. Book value thus represents, in a way, the official company value and is reported to shareholders and the financial community.
The market and book values of companies were still very close at the end of the 1970s. The picture has, however, changed dramatically, and one estimate based on the current level of stock market valuations suggests that book value now represents, on average, only around one quarter of market value (Dutta and Reicheistein, 2005). Other data indicate an even more dramatic shift in companies with valuable brands or a reputation for high quality or technical expertise: for individual companies the estimated book-value portion is around 9% for Microsoft, around 5% for SAP, and around 7% for Coca-Cola (Daum, 2002). The ratio of book value to market value is often so small that the relevance of the balance sheet to modern companies has frequently been questioned.
It is, of course, crucial to understand the gap between market and book values, as the market value comes from intangible assets, such as the customer, human resource, partner and brand assets. In order to understand the gap, there is an obvious need for relevant and reliable information on these intangible assets, and non-financial performance measures aim to provide such information to stakeholders and, in particular, to present and future investors. A lack of reliable and relevant information on intangible assets implies there is no basis for non-financial reporting, which in turn implies that market values will change over time in a less well-founded way. There is always a certain level of volatility on the stock markets, but the increasing relative importance of intangible assets over the last few decades, combined with a persistent lack of reliable and relevant information on these assets and no systematic non-financial reporting, is expected to create increasing volatility. This is clearly seen, for example, in the trends on the NYSE over the past three decades (Kristensen and Westlund, 2003).
It is therefore to be expected that a lack of non-financial reporting will produce a significant amount of unnecessary volatility, as is clearly demonstrated by the price development of technology stocks (Kristensen and Westlund, 2003). The IT ‘bubble’, for example, was to a significant extent built up by a lack of proper information on, and analysis of, intangible assets in these companies, owing to the absence of non-financial performance measurement and the resulting overvaluation of intangible technology assets, such as AOL’s telecommunications distribution networks at the time of the AOL-Time Warner merger (The Economist, 2002). This clearly demonstrates a malfunctioning of the capital markets, which destroys value in both the short term and the long term.
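To make the book-to-market figures above concrete, the gap can be expressed as a simple ratio: book value (net assets) divided by market capitalisation, with the remainder attributable to intangibles. The short Python sketch below illustrates the arithmetic; the company names and figures are hypothetical placeholders, not the actual balance-sheet data behind the estimates cited above.

# Illustrative arithmetic only: the figures below are hypothetical placeholders,
# not actual company data.
def book_value_share(market_cap: float, net_assets: float) -> float:
    """Return book value (net assets) as a fraction of market capitalisation."""
    return net_assets / market_cap

companies = {
    # name: (market capitalisation, net assets), same currency unit
    "ExampleTech": (500_000, 45_000),
    "ExampleSoft": (200_000, 10_000),
    "ExampleBrand": (150_000, 10_500),
}

for name, (market_cap, net_assets) in companies.items():
    share = book_value_share(market_cap, net_assets)
    print(f"{name}: book value covers {share:.0%} of market value; "
          f"intangibles account for the remaining {1 - share:.0%}")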
As such, the main purpose of non-financial performance measurement is to provide market investors and analysts with information to verify the present and expected future value of a company. Ultimately, the process of verifying the market value at a certain time then becomes more fact based, reducing unnecessary price volatility. In order to achieve this accurately, the key predictors of a company’s future financial performance: revenue, profits and market share, are crucial. Most recent research identifies these predictors as being primarily intangible, non-financial assets, which explains why market value today is basically determined by intangible assets (Kristensen and Westlund, 2003). In particular, indicators related to the customer asset: the size and ‘quality’ of the customer base, the human capital, the brand assets, the value of corporate citizenship, and the firm’s product quality and expertise, will dominate. If such an indicator is a reasonably stable, strong and sustainable predictor of future financial performance, it should be called a ‘Value Driver’ (Kristensen and Westlund, 2003). Non-financial reporting aims at disclosing information on value drivers, which must be operationalised and transparent and, ideally, verifiable according to new accounting standards in order to become true non-financial performance measures.
Total quality management (TQM) practices have been implemented by firms interested in enhancing their survival prospects by including quality and continuous improvement in their strategic priorities. As such, they often have to be measured using both financial and non-financial measures, as the expertise and cultural aspects of the TQM process are often difficult to measure by purely quantitative, financial means. One of the key measures of the success of TQM is the balanced scorecard (BSC) approach, which appraises four key dimensions of firm performance: customers, financial, internal business processes, and learning and growth. A key advantage is that whereas TQM does not explicitly consider employee satisfaction in its search for continuous improvement, the BSC does (Hoque, 2003). Therefore, by adopting a BSC, a firm that has adopted TQM can overcome this oversight, which in turn should increase employee satisfaction and subsequently firm performance. Indeed, in the modern business context, employee satisfaction is key to firm performance, and so the BSC is an important non-financial performance measure.
TQM’s relentless pursuit of quality demands that firms identify all non-value-adding waste in the manufacturing process and implement procedures to eliminate, or at least reduce, such activities. This implies better production planning to limit over-production and excessive inventory, and improved product and plant design to eliminate wasteful movement and handling (Smith, 1997). Substandard items must be eliminated and a changed, customer-focused attitude reinforced which adopts ‘the next person on the production line is my customer’ approach (Hoque, 2003). The cost of quality is a potentially important component of management accounting systems which may facilitate the implementation of total quality management, despite being difficult to measure in absolute financial terms. The costs of prevention, appraisal and failure are all aspects of the cost of quality, and it is often necessary to use non-financial performance measures to assess them.
Prevention costs include the costs of plant, product and process planning, preventive maintenance, training and the implementation of statistical process control systems, while appraisal costs include the costs of inspection and testing of both incoming and outgoing materials, and the cost of maintaining and administering appraisal systems and equipment; both of these can be measured financially. Failure costs, however, include at the internal level the financial costs of scrap, rework, redesign and the safety stocks necessary to provide a buffer against such failure, and at the external level the losses associated with customers, goodwill and reputation, all of which require non-financial performance measurements. Analysis of the costs of external failure is increasingly becoming the focus of attention in this area, reflecting the current trend towards increasing customer orientation of management accounting (Smith, 1997). Quality considerations also extend beyond the difficult ‘cost of quality’ question, and non-financial reporting is useful in providing measures of other aspects of quality, such as the quality of purchased components, equipment failure and maintenance efforts.
As a result, it is necessary for TQM practitioners to consider the relationship between the types of targets or benchmarks used in the two main contrasting performance improvement strategies: continuous improvement and radical change. Johnston et al. (2001) hypothesised that the process of target setting and the reward structures adopted would differ between the two strategies, proposing that organisations involved in continuous improvement of a process will base their performance targets on past performance and internal benchmarking, arrived at through consultation and with a mixture of financial and non-financial measurements of targets. For processes involving radical change, by contrast, targets will be based on external benchmarks imposed by senior management, with purely financial targets and financial rewards for their achievement. However, research showed that financial measurement and reward strategies predominated in both improvement strategies, implying that the potential benefits of adopting process changes are being constrained by considering only the financial side.
Academic research and other research activities among accounting organisations on intangible assets have so far mainly focused on creating awareness of the significant importance of intangibles for future financial performance; to a lesser extent, research has dealt with the serious information deficiencies related to intangible assets (Hothorn et al, 2005). In particular, the research focus seems to have been on studying the dramatic shift in production functions and asset composition of the economy, rather than the underlying problems associated with measuring intangible assets, especially in cases such as Enron, where derivatives were grossly overvalued (Wilson et al, 2003). This research involves a multitude of activities, including the macroeconomic theory of growth as well as empirical studies of individual companies. The growing importance of intangible assets has already been demonstrated, and one rationale behind this development is the fact that annual United States investments in intangible assets are of approximately the same magnitude as investments in physical assets: approximately $1.2 trillion (Lev, 2001).
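To illustrate how the cost-of-quality categories above might be tracked alongside their non-financial counterparts, here is a minimal Python sketch. The category structure follows the prevention/appraisal/failure split described above, but the field names and all figures are invented for illustration; external failure is deliberately represented by non-financial indicators rather than a money figure.

# A minimal cost-of-quality summary, assuming illustrative figures only.
from dataclasses import dataclass

@dataclass
class QualityCosts:
    prevention: float        # planning, preventive maintenance, training, SPC systems
    appraisal: float         # inspection and testing, appraisal systems and equipment
    internal_failure: float  # scrap, rework, redesign, buffer safety stock
    # External failure (lost goodwill, reputation) is hard to cost reliably,
    # so it is tracked here through non-financial indicators instead.
    complaints_per_1000_orders: float
    warranty_return_rate: float

    def measurable_total(self) -> float:
        """Sum of the categories that can be expressed financially."""
        return self.prevention + self.appraisal + self.internal_failure

coq = QualityCosts(
    prevention=120_000,
    appraisal=80_000,
    internal_failure=150_000,
    complaints_per_1000_orders=4.2,
    warranty_return_rate=0.013,
)

print(f"Financially measurable cost of quality: {coq.measurable_total():,.0f}")
print(f"External-failure indicators: {coq.complaints_per_1000_orders} complaints per 1,000 orders, "
      f"{coq.warranty_return_rate:.1%} warranty returns")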
These activities have clearly led to a rather general acceptance that traditional, financial, accounting-based information systems fail to provide investors and policy makers with insights into the impact of intangibles on the economy. This is of increasing importance, given that the volatility of stock prices is becoming abnormally high, and this has many serious consequences, including systematic inefficiency in managerial decisions. The recommendations so far from researchers seem to centre on voluntary disclosure of information on intangibles, and indeed some companies now report externally on various aspects of intangible assets, but this happens in a very non-standardised way and seems to be of limited value for investors’ decisions. Consultants, in particular those linked to the accounting industry, are empirically analysing the causes and consequences of investments in intangible assets, but this research provides only marginal guidelines for identifying best practice in non-financial reporting (Kristensen and Westlund, 2003). Different suggestions have been presented by researchers to identify a new research agenda for better understanding and managing intangible assets, with Lev (2001), for example, suggesting a focus on research related to organisational structures. The obviously incorrect valuation of intangible assets in the cases of Enron and WorldCom (The Economist, 2002) shows that another focus of research must be to identify best practice methodology to measure intangible assets, and to measure the main intangible value drivers of future financial performance.
Indeed, the sustainability of non-financial reporting is completely dependent on how it is accepted by the stakeholders of the business community: investors, analysts, customers, boards, management, employees, the accounting profession, etc. This, in turn, depends on the ability and willingness of accountants to provide a formal verification of the process used to generate the information, as well as of the information itself. Finally, this in turn depends on the quality of the information: “Companies must start by first identifying their true value drivers, both financial and non-financial, within the context of their business model, and by ensuring they have defined the right metrics as well as the measurement methodologies and systems to capture the right information for internal management” (PriceWaterhouseCoopers, 2001).
Recognising that the treatment of non-financial performance is a key current issue in accountancy, accounting associations have already identified a number of criteria and principles to secure and describe the quality of non-financial information. This process, however, appears far from finalised; in particular it lacks a focus on the statistical characteristics of the information, and there is also a need for further operationalisation and transparency of the quality principles (Lev, 2001). As a general principle, any verification process should verify that non-financial reporting includes the right choice of information, has the necessary degree of relevance, and that the information provided has a reasonable level of reliability. If these three requirements are not sufficiently fulfilled, there is unlikely to be a sustainable future for non-financial performance measurement, in TQM or any other business aspect. Further to this, information that says nothing, or very little, about future financial performance should not be included in non-financial reporting.
All the included information must manifest so-called Value Drivers (Kristensen and Westlund, 2003), and such value drivers should either be directly linked to future financial data or be indirectly linked through a direct value driver. Thus, relevance should be defined by the existence of verified links to future financial numbers; however, this raises a number of pertinent questions to be answered by the accounting profession. Chiefly, the profession would need to decide which financial criteria should primarily be considered to secure relevance, and which future time period is of interest to investors. For the moment, it is probably worthwhile to take a very broad scope here, as this would mean that any financial information of interest could be used and, in addition, the future time period is defined in a very generic way. Of course, it is much more difficult to verify links to financial numbers if the lead time is substantial, so care should be taken that the data will have a recognised financial impact within a reasonably short period of time. Indeed, whatever financial criterion and time period is chosen, it is crucial to be able to verify a sufficiently strong and stable likely future financial impact from the non-financial data. In order to measure this better, and more accountably, such impacts should be statistically significant according to a standard statistical test written into the accounting standards. The question that remains is whether impacts should also exceed a certain financial level, as well as a statistical level, in order to qualify as a significant non-financial value driver. In this context, it is also difficult to decide whether these qualifying criteria should involve the extent to which a value driver explains likely variation in the future financial criterion. There are many potential principles to be found in information theory and statistics that might be used here, such as direct explanatory power (Kristensen and Westlund, 2003), but unfortunately the requirement levels necessary to use these principles are not easy to determine and could be open to abuse.
In summary, in almost all modern industries the book value of a company does not reflect its actual market value, due to the increasing importance of branding, technology, knowledge and reputation. Whilst the market and book values were still very close at the end of the 1970s, the picture has since changed dramatically, with estimates stating that book value now represents just one quarter of market value. As a result, it is reasonable to conclude that the measurement of intangible, non-financial factors is now roughly three times as important to investors as the measurement of financially measured, tangible assets. As the market value comes from intangible assets, such as the customer, human resource, partner and brand assets, understanding the gap requires relevant and reliable information on these intangible assets, which is best provided by non-financial performance measures. In the context of TQM, a large portion of the process improvements achieved by TQM initiatives will not have a definite financial effect; rather, they will improve a product’s attractiveness to customers or improve the efficiency of a firm’s processes. As a result, their primary impact will be difficult to capture with financial measures, and so non-financial performance measures will be most relevant.
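As a sketch of the statistical verification of a candidate value driver discussed above, the following Python snippet regresses a future financial outcome on a non-financial indicator and checks whether the link is statistically significant. The satisfaction scores, revenue-growth figures and the 5% threshold are all illustrative assumptions, not values prescribed by any accounting standard.

# A minimal sketch, assuming quarterly data: a candidate non-financial indicator
# (a customer satisfaction index) and next-quarter revenue growth. All numbers are invented.
from scipy import stats

satisfaction_index = [62, 65, 63, 68, 71, 70, 74, 76, 75, 79, 81, 83]
revenue_growth_next_quarter = [0.8, 1.1, 0.9, 1.4, 1.6, 1.5, 1.9, 2.1, 2.0, 2.4, 2.6, 2.8]  # percent

result = stats.linregress(satisfaction_index, revenue_growth_next_quarter)

ALPHA = 0.05  # an illustrative significance threshold; a standard would have to fix one
if result.pvalue < ALPHA:
    print(f"Candidate value driver: slope={result.slope:.3f}, "
          f"R^2={result.rvalue ** 2:.2f}, p={result.pvalue:.4f}")
else:
    print("No verified link to future financial performance at this threshold")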
From this, it follows that the accountancy profession needs a new reporting system, and also needs to define best practice for measuring these non-financial performance measures, in order to reflect the true value of initiatives such as TQM. Such a system has a number of requirements, including causality, standardisation, relevance (a verified link to financial results) and reliability. The prevailing opinion appears to be that it is time for new reporting systems to be introduced and implemented, as the discrepancy between the importance of intangibles and the ability to account for these types of assets constitutes a growing challenge for companies, investors and society in general. The relevant parties, including academics, managers, accountants, practitioners and auditors, should therefore come together and formulate a new charter for the future reporting of non-financial performance measures.
References:
Daum, J. H. (2002) Intangible Assets or the Art to Create Value. Wiley.
Dutta, S. and Reicheistein, S. (2005) Stock Price, Earnings, and Book Value in Managerial Performance Measures. Accounting Review, Vol. 80, Issue 4, p. 1069.
Hoque, Z. (2003) Total Quality Management and the Balanced Scorecard Approach: A Critical Analysis of their Potential Relationships and Directions for Research. Critical Perspectives on Accounting, Vol. 14, Issue 5, p. 553.
Hothorn, T., Leisch, F., Zeileis, A. and Hornik, K. (2005) The Design and Analysis of Benchmark Experiments. Journal of Computational and Graphical Statistics, Vol. 14, Issue 3, p. 675.
Johnston, R., Fitzgerald, L., Markou, E. and Brignall, S. (2001) Target setting for evolutionary and revolutionary process change. International Journal of Operations & Production Management, Vol. 21, Issue 11, p. 1387.
Kristensen, K. and Westlund, A. H. (2003) Valid and reliable measurements for sustainable non-financial reporting. Total Quality Management and Business Excellence, Vol. 14, Issue 2, p. 161.
Lev, B. (2001) Intangibles: Management, Measurements and Reporting. Brookings Institution Press.
PriceWaterhouseCoopers (2002) Value Reporting Forecast 2002: Bringing Information out into the Open.
Smith, M. (1997) Putting NFIs to work in a balanced scorecard environment. Management Accounting: Magazine for Chartered Management Accountants, Vol. 75, Issue 3, p. 32.
The Economist (2002) A steal? Vol. 365, Issue 8296, p. 57.
Wilson, A., Key, K. G. and Clark, R. L. (2003) Enron: An In-Depth Analysis of the Hedging Schemes. Journal of Applied Business Research, Vol. 19, Issue 4, p. 15.

Sunday, January 19, 2020

Cell bio lab report Essay

Purpose: In this experiment we compared the hemagglutination reaction of a control Con A solution at 2 mg/ml in Con A buffer with the hemagglutination reaction of our own purified Con A sample, previously diluted to 2 mg/ml in Con A buffer. The purpose of this lab was to determine the strength of the reaction by performing serial dilutions on both the Con A sample and the control Con A sample, and to determine through observation whether or not the addition of galactose or mannose inhibits this reaction. I hypothesised that the Con A + galactose solutions would show partial agglutination and partial no agglutination, and that the Con A + mannose solutions would show no agglutination at all.
Results: Rows A and B had half agglutination and half no agglutination, while row C had no agglutination throughout. Row D had half agglutination and half partial agglutination, while row E had four columns with agglutination and eight columns with no agglutination. Row F had complete agglutination throughout.
Con A reaction plate (Row/Column and observed reaction):
A1-A6 (Control): Agglutination / inhibited
A7-A12 (Control): No agglutination / not inhibited
B1-B6 (Con A + galactose): Agglutination / inhibited
B7-B12 (Con A + galactose): No agglutination / not inhibited
C1-C12 (Con A + mannose): No agglutination / not inhibited
D1-D5 (Sample): Agglutination / inhibited
D6-D12 (Sample): Partial agglutination / inhibited
E1-E4 (Con A + galactose): Agglutination / inhibited
E5-E12 (Con A + galactose): No agglutination / not inhibited
F1-F12 (Con A + mannose): No agglutination / not inhibited
G1-G12 ((-) Control): Partial agglutination / inhibited
H1-H12 (RBCs): Partial agglutination / inhibited
Discussion: My hypothesis was supported: the Con A + galactose solutions did show partial agglutination and partial no agglutination, and the Con A + mannose solutions showed no agglutination at all. In the Con A + galactose wells where agglutination occurred, the red cells aggregated with the lectin and sedimented as a uniform layer that covered the whole bottom of the well, including the slopes, whereas in the Con A + mannose wells, where agglutination failed to take place, the cells covered only part of the bottom. We found through this experiment that the control Con A sample and our Con A sample had very similar strengths of reaction.
Conclusion: In this experiment we used serial dilutions on a 96-well plate to determine the strength of each reaction for both the Con A sample and the control sample. We found that the addition of galactose or mannose partially or completely inhibited the reaction for both our Con A sample and the control Con A sample.
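For reference, the nominal Con A concentration in each column of a dilution row can be tabulated with a short Python sketch. It assumes a two-fold serial dilution starting from the 2 mg/ml stock; the two-fold step is an assumption for illustration, since the report does not state the dilution factor used.

# A minimal sketch, assuming a two-fold serial dilution across 12 wells of a plate row.
# The 2 mg/ml starting concentration is from the report; the two-fold step is assumed.
def serial_dilution(start_mg_per_ml, dilution_factor, n_wells):
    """Return the nominal concentration in each successive well."""
    concentrations = []
    current = start_mg_per_ml
    for _ in range(n_wells):
        concentrations.append(current)
        current /= dilution_factor
    return concentrations

for column, conc in enumerate(serial_dilution(2.0, 2.0, 12), start=1):
    print(f"Column {column:2d}: {conc:.4f} mg/ml Con A")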

Friday, January 10, 2020

Mobile television Essay

Mobile television is television watched on a small handheld or mobile device. It includes pay TV services delivered via mobile phone networks or received free-to-air via terrestrial television stations. Regular broadcast standards or special mobile TV transmission formats can be used. Additional features include downloading TV programs and podcasts from the internet and the ability to store programming for later viewing. According to the Harvard Business Review, the growing adoption of smartphones allowed users to watch as much mobile video in just three days of the 2010 Winter Olympics as they watched throughout the entire 2008 Summer Olympics – an increase of 564%.[1]
History
The first pocket-sized mobile television was sold to the public by Clive Sinclair in January 1977. It was called the Microvision, or the MTV-1. It had a 2-inch CRT screen and was also the first television which could pick up signals in multiple countries. It measured 102×159×41 mm and was sold for less than £100 in the UK and for around $400 in the US. The project took over ten years to develop and was funded by around £1.6 million in British Government grants.[2][3]
Mobile TV is one of the features provided by many 3G phones. In 2002, South Korea became the first country in the world to have a commercial mobile TV CDMA IS95-C network, and mobile TV over 3G (CDMA2000 1X EVDO) also became available that same year. In 2005, South Korea also became the first country in the world to have DMB-based mobile TV when it started satellite DMB (S-DMB) and terrestrial DMB (T-DMB) services on May 1 and December 1, respectively. Today, South Korea and Japan are at the forefront of this developing sector.[4] Mobile TV services were launched by the operator CSL in Hong Kong in March 2006, on the 3G network.[5] BT in the United Kingdom was among the first companies outside South Korea to launch mobile TV, in September 2006, although the service was abandoned less than a year later.[6] The same happened to “MFD Mobiles Fernsehen Deutschland”, who launched their DMB-based service in June 2006 in Germany and stopped it in April 2008.[7] Also in June 2006, mobile operator 3 in Italy (part of Hutchison Whampoa) launched their mobile TV service, but unlike their counterpart in Germany this was based on DVB-H.[8] Sprint started offering the service in February 2006 and was the first US carrier to do so; in the US, Verizon Wireless and, more recently, AT&T also offer the service. In South Korea, mobile TV is largely divided into satellite DMB (S-DMB) and terrestrial DMB (T-DMB). Although S-DMB initially had more content, T-DMB has gained much wider popularity because it is free and included as a feature in most mobile handsets sold in the country today.
Challenges
Mobile TV usage can be divided into three classes:
• Fixed – Watched while not moving, possibly moved when not being watched
• Nomadic – Watched while moving slowly (e.g. walking)
• Mobile – Watched when moving quickly (e.g. in a car)
Each of these poses different challenges.
Device Manufacturer’s challenges
• Power consumption – Continuous receipt, decoding, and display of video requires continuous power, and cannot benefit from all of the types of optimizations that are used to reduce power consumption for data and voice services.
• Memory – To support the large buffer requirements of mobile TV. Currently available memory capabilities will not be suited for long hours of mobile TV viewing. Furthermore, potential future applications like peer-to-peer video sharing on mobile phones and consumer broadcasting would add further to the memory requirements. Existing P2P algorithms will not be sufficient for mobile devices, necessitating the advent of mobile P2P algorithms. One start-up claims patentability of its mobile P2P technology, but it has not yet drawn attention from device manufacturers.
• Display – Larger and higher-resolution displays are necessary for a good viewing experience.
• Processing power – Significantly more processor performance is required for mobile TV than for the UI and simple applications such as browsers and messaging.
Content Provider’s challenges
• Mobile TV specific content – Mobisodes: mobile episodes of popular shows which are relatively short (3 to 5 minutes), to suit the likely viewing habits of the mobile TV user.
Digital TV: North America
As of January 2012, there were 120 stations in the United States broadcasting using the ATSC-M/H “Mobile DTV” standard – a mobile and handheld enhancement to the HDTV standard that improves handling of multipath interference while mobile.[9] The defunct MediaFLO used COFDM broadcast on UHF TV channel 55. Like satellite TV, it was encrypted and controlled by conditional access (provided via the cellular network). It required a subscription for each mobile device, and was limited to the AT&T Mobility or Verizon Wireless networks.
Broadcast mobile DTV development
While MediaFLO used the TV spectrum and MobiTV used cell phone networks,[10] “mobile DTV” (ATSC-M/H) uses the digital TV spectrum. At the National Association of Broadcasters (NAB) show in April 2007 in Las Vegas, the ATSC and 8VSB methods for delivering mobile DTV were shown. A-VSB (Advanced VSB), from Samsung and Rohde & Schwarz, was shown at the previous year’s show. In 2007, LG, whose Zenith Electronics came up with 8VSB, introduced (with Harris Group) its Mobile-Pedestrian-Handheld (MPH) system. As the broadcast networks began making their content available online, mobile DTV meant stations would have another way to compete. Sinclair Broadcast Group tested A-VSB in fall 2006, and its KVCW and KVMY were participating in the mobile DTV product demonstrations at the NAB show. A-VSB had worked in buses at the 2007 Consumer Electronics Show. ION Media Networks started a test station on channel 38, which was to be used for digital LPTV, to use for a single-frequency network (SFN). In some areas, more than one TV transmitter would be needed to cover all areas. Mobile DTV could have been used at that time because it would not affect HDTV reception. A single standard, however, had to be developed.[11] At the Consumer Electronics Show in January 2009, the first prototype devices from LG and other manufacturers were demonstrated, including receivers for cars from Kenwood, Visteon and Delphi. It was announced that 63 stations in 22 markets would debut the service in 2009. Gannett Broadcasting president David Lougee pointed out that many of those attending the inauguration of Barack Obama would likely hear him but not see him; had the new technology been in place, this would not have been a problem.[12] In April 2009, the Open Mobile Video Coalition, made up of over 800 broadcast stations, selected four test stations: Gannett’s WATL and ION’s WPXA-TV in Atlanta, and Fisher Communications’ KOMO-TV and Belo’s KONG-TV in Seattle. WPXA had begun mobile DTV broadcasting on April 1.
The others would start in May.[13] Later in 2009, ION said it was making HDTV, standard-definition and mobile DTV streams available through its affiliates in New York City and Washington, D.C. The “triple-play” concept was part of an effort to create a mobile DTV standard. At the time, only those with prototype receivers could pick up the streams. ION Chairman and CEO Brandon Burgess said mobile DTV lets stations “think beyond the living room and bring live television and real time information to consumers wherever they may be.”[14] The Advanced Television Systems Committee started work on mobile DTV standards in May 2007, and manufacturers and sellers worked quickly to make the new technology a reality. The OMVC persuaded LG and Samsung to work together starting in May 2008 so that differing systems (possibly a self-destructing format war) would not delay or kill the technology. Early in July 2009, the ATSC Technology and Standards Group approved the ATSC-M/H standard for mobile DTV, which all members green-lighted on October 15. The public could be using the new devices by 2010, though watching TV on cell phones seemed unlikely in the near future since telephone manufacturers did not yet include that capability. The technology was expected to be used for polls and even voting.[15][16] By the end of the year, the ATSC and the Consumer Electronics Association began identifying products meeting the standard with “MDTV”.[17] Paul Karpowicz, NAB Television Board chairman and president of Meredith Broadcast Group, said: “This milestone ushers in the new era of digital television broadcasting, giving local TV stations and networks new opportunities to reach viewers on the go. This will introduce the power of local broadcasting to a new generation of viewers and provide all-important emergency alert, local news and other programming to consumers across the nation.”[16] Later in July, the first multi-station tests began in Washington, D.C., while single stations in New York City and Raleigh, North Carolina already offered mobile DTV. The OMVC chose Atlanta’s WATL and Seattle’s KONG as “model stations” where product testing could take place. 70 stations in 28 media markets planned streams by the end of 2009. The Washington test would involve WPXW-TV, WUSA, WDCA, WRC-TV, WHUT-TV, WNUV in Baltimore, and WNVT, part of MHz Networks, a multicasting service. All of the stations would have two or more channels each, with “electronic service guide and alert data” among the services. 20 sellers of equipment would use these stations to test against the existing standard, but testing of the final standard would come later, and tests by the public would happen in 2010, when many more devices would be ready. Manufacturing large numbers of the devices could not take place without the final standard; LG, however, began mass-producing chips in June. ION technology vice president Brett Jenkins said, “We’re really at a stage like the initial launch of DTV back in 1998.
There are almost going to be more transmitters transmitting mobile than receive devices on the market, and that’s probably what you’ll see for the next six to nine months.” Devices would eventually include USB dongles, netbooks, portable DVD players and in-car displays.[18] White House officials and members of Congress saw the triple-play concept in an ION demonstration on July 28, 2009 in conjunction with the OMVC.[19][20] Another demonstration took place October 16, 2009 with journalists, industry executives and broadcasters riding around Washington, D.C. in a bus with prototype devices. Included were those who would be testing the devices in the Washington and Baltimore markets in January 2010.[21] On August 7, 2009, BlackBerry service began on six TV stations: WISH-TV in Indianapolis; WAVY-TV in Hampton Roads, Virginia; KRQE in Albuquerque, New Mexico; WANE-TV in Fort Wayne, Indiana; WALA-TV in Mobile, Alabama; and KXAN-TV in Austin, Texas. 27 other stations would eventually offer the service, and LIN TV, which developed the BlackBerry service, had an iPhone application planned.[20] By October, 30 stations were airing mobile DTV signals, and that number was expected to be 50 by year-end. In the same month, FCC chair Julius Genachowski announced efforts to increase the amount of spectrum available to wireless services.[16] Also in August, WTVE and Axcera began testing a single-frequency network (SFN) with multiple transmitters using the new mobile standard. The RNN affiliate in Reading, Pennsylvania had used this concept since 2007.[22] Richard Mertz of Cavell, Mertz & Associates said VHF would not work as well for mobile DTV because a 15-inch antenna or some other solution would be required, although he had heard from people who had no problems. An amplified antenna or higher power for the transmitting station would likely be needed, as well as repeater stations where terrain is a problem.[23] Lougee, whose company planned testing in its 19 markets in 2010, said the chip designs in the new devices made targeted advertising possible.[21] In December 2009, Concept Enterprises introduced the first mobile DTV tuner for automobiles. Unlike earlier units, this one would provide a clear picture without pixelation in a fast-moving vehicle, using an LG M/H chip and a one-inch roof-mounted antenna. No subscription will be required.[24] Also in December, the Consumer Electronics Association hosted a “plugfest” in Washington, D.C. to allow manufacturers to test various devices. More than 15 companies, and engineers from different countries, tested four transmission systems, 12 receiver systems, and four software types.[17][25] On December 1, News Corp. chairman Rupert Murdoch said mobile DTV would be important to the future of all journalism, and he planned to offer TV and possibly newspaper content in this way.[26] At the January 2010 Consumer Electronics Show, NAB head Gordon H. Smith disputed the idea that broadcasting’s days were numbered, calling mobile DTV proof that over-the-air television would continue to be popular. He said people would use cell phones and other devices to watch, and broadcast technology would be the best way to do this.
Wireless broadband, which some wanted to replace broadcasting, would not be able to handle the demand for video services.[27] ION’s Burgess showed off one of the first iPhones capable of receiving mobile DTV, while ION’s Jenkins showed an LG Maze and a Valups Tivit; the latter sends signals to the iPod Touch and will soon work with the Google Nexus.[28] Sinclair Broadcast Group director of advanced technology Mark Aitken said the mobile DTV concept of multiple transmitters would help free up spectrum for wireless broadband in rural areas but not large cities. He also explained to the FCC that mobile DTV was the best method for sending out live video to those using cell phones and similar devices.[29] The OMVC’s Mobile DTV Consumer Showcase began May 3, 2010 and lasted all summer. Nine stations planned to distribute 20 programs, including local and network shows as well as cable programs, to Samsung Moment phones. Dell Netbooks and Valups Tivits also received programming.[30] On September 23, 2010, Media General began its first MDTV service at WCMH-TV in Columbus, Ohio and had plans to do the same a month later at WFLA-TV in the Tampa Bay, Florida area and five to seven more stations in its portfolio.[31] On November 19, 2010, a joint venture of 12 major broadcasters, known as the Mobile Content Venture, announced plans to upgrade TV stations in 20 markets representing 40 percent of the United States population to deliver live video to portable devices by the end of 2011.[32] Brian Lawlor, a Scripps TV senior vice president, said that, in September 2011, Scripps stations would offer an “app” allowing people with an iPhone or iPad to see emergency information (e.g. weather bulletins) in the event of a power outage.[33] In 2012, a number of stations planned to conduct tests of the Mobile Emergency Alert System (M-EAS), a system to deliver emergency information via mobile DTV.[34] In January 2012, the MCV announced that MetroPCS would offer MCV’s Dyle mobile DTV service. Samsung planned an Android phone capable of receiving this service late in 2012.[35] At the end of 2012, Dyle was in 35 markets and capable of reaching 55 percent of viewers.[36] At the NAB show in April 2012, MCV announced that 17 additional television stations would launch mobile DTV, bringing the total to 92, covering more than 55% of US homes. Included are stations in three new markets: Austin, Texas; Boston, Massachusetts; and Dayton, Ohio.[37] In September 2012, WRAL-TV announced the rollout of a Mobile Emergency Alert System based around mobile digital television technology.[38] A process called Syncbak uses cell phones rather than TV spectrum.[39]
References
[1] Looking for TV Genius? | Red Bee Media (http://www.tvgenius.net/blog/2011/01/31/4-ways-smartphones-save-tv/)
[2] Clive’s achievements (http://www.sinclair-research.co.uk/about-srl.php), Sinclair Research
[3] Video and TV gear (http://www.retrothing.com/video_tv/index.html), Retrothing.com
[4] NYTimes.com via Yahoo! Finance: Mobile TV Spreading in Europe and to the U.S. (http://biz.yahoo.com/nytimes/080506/1194771946810.html?.v=18), May 6, 2008
[5] 3G UK: The service is based on Golden Dynamic Enterprises Ltd.’s (http://www.3g.co.uk/PR/March2006/2732.htm) “VOIR Portal” (http://findarticles.com/p/articles/mi_m0EIN/is_2006_Dec_4/ai_n16881105) and follows the 3GPP standard 3G-324M. The same service was also deployed to the Philippines in 2007.
[6] ZDNet: BT ditches mobile TV service (http://news.zdnet.co.uk/communications/0,1000000085,39288247,00.htm), 26 July 2007
[7] Broadband TV News: MFD hands back German T-DMB licence (http://www.broadbandtvnews.com/?p=4682), May 1, 2008
[8] The Register: DVB-H rockets ahead in Italy (http://www.theregister.co.uk/2006/07/28/dvbh_success_in_italy/), 28 July 2006
[9] OMVC announces sizable growth in number of MDTV stations at CES | RF content from Broadcast Engineering (http://broadcastengineering.com/RF/OMVC-mobile-DTV-presence-announces-growth-CES-01192012/index.html)
[10] Thompson, Mark (2010-06-03). “mobile tv cell phone networks” (http://mobitv.com/technology/managed-service-platform). Broadcasting & Cable. Retrieved 2010-06-03.
[11] Dickson, Glen (2007-04-14). “NAB: Mobile DTV Hits the Strip” (http://www.broadcastingcable.com/article/108538-NAB_Mobile_DTV_Hits_the_Strip.php). Broadcasting & Cable. Retrieved 2009-07-21.
[12] Dickson, Glen (2009-01-11). “CES: Broadcasters’ Mobile DTV Moment” (http://www.broadcastingcable.com/article/161893-CES_Broadcasters_Mobile_DTV_Moment.php?rssid=20102&q=broadcasters+mobile+dtv+moment). Broadcasting & Cable. Retrieved 2009-12-03.
[13] Dickson, Glen (2009-04-20). “NAB 2009: Broadcasters Set Mobile DTV Test Markets” (http://www.broadcastingcable.com/article/209447-NAB_2009_Broadcasters_Set_Mobile_DTV_Test_Markets.php?rssid=20068&q=broadcasters+set+mobile+dtv+test+markets). Broadcasting & Cable. Retrieved 2009-12-17.
[14] Dickson, Glen (2009-06-29). “ION Broadcasts Mobile DTV in N.Y., D.C.: Hails Its Digital TV ‘Triple Play’” (http://www.broadcastingcable.com/article/307120-ION_Broadcasts_Mobile_DTV_in_N_Y_D_C_.php?rssid=20068&q=digital+tv). Broadcasting & Cable. Retrieved 2009-07-02.
[15] Dickson, Glen (2009-07-06). “ATSC-M/H voted to proposed standard status” (http://www.broadcastingcable.com/article/307463-Mobile_DTV_is_Almost_Official.php?rssid=20065&q=digital+tv). Broadcasting & Cable. Retrieved 2009-07-08.
[16] Dickson, Glen (2009-10-16). “Mobile DTV Standard Approved” (http://www.broadcastingcable.com/article/358341-Mobile_DTV_Standard_Approved.php?rssid=20292&q=digital+tv). Broadcasting & Cable. Retrieved 2009-10-16.
[17] Dickson, Glen (2009-12-16). “ATSC Launches Certification Program For Mobile DTV” (http://www.broadcastingcable.com/article/440764-ATSC_Launches_Certification_Program_For_Mobile_DTV.php?rssid=20102&q=digital+tv). Broadcasting & Cable. Retrieved 2009-12-17.
[18] Dickson, Glen (2009-07-13). “Special Report: Mobile DTV Heats Up” (http://www.broadcastingcable.com/article/314792-Special_Report_Mobile_DTV_Heats_Up.php). Broadcasting & Cable. Retrieved 2009-07-15.
[19] Dickson, Glen (2009-07-22). “ION, OMVC Organize DTV Showcase in D.C.” (http://www.broadcastingcable.com/article/316065-ION_OMVC_Organize_DTV_Showcase_in_D_C_.php?rssid=20068&q=digital+tv). Broadcasting & Cable. Retrieved 2009-07-22.
[20] Eggerton, John (2009-08-07). “LIN TV Develops Blackberry App For Mobile TV Service” (http://www.broadcastingcable.com/article/326796-LIN_TV_Develops_Blackberry_App_For_Mobile_TV_Service.php?q=digital+tv). Broadcasting & Cable. Retrieved 2009-08-11.
[21] Eggerton, John (2009-10-16). “OMVC Does Mobile DTV Tour” (http://www.broadcastingcable.com/article/358415-OMVC_Does_Mobile_DTV_Tour.php?rssid=20103&q=digital+tv). Broadcasting & Cable. Retrieved 2009-10-23.
[22] Dickson, Glen (2009-12-18). “WTVE Tests SFN For Mobile DTV” (http://www.broadcastingcable.com/article/441031-WTVE_Tests_SFN_For_Mobile_DTV.php?rssid=20065&q=digital+tv). Broadcasting & Cable. Retrieved 2010-01-13.
[23] Jessell, Harry A. (2009-09-24). “Digital VHF Needs A Power Boost” (http://www.tvnewscheck.com/articles/2009/09/24/daily.2/). TVNewsCheck. Retrieved 2009-10-15.
[24] Gilroy, Amy (2009-11-09). “First Mobile DTV Car Tuner At $499” (http://www.twice.com/article/388144-First_Mobile_DTV_Car_Tuner_At_499.php/). TWICE. Retrieved 2009-11-10.
[25] Dickson, Glen (2009-12-02). “Mobile DTV Picks Up Speed” (http://www.broadcastingcable.com/article/394993-Mobile_DTV_Picks_Up_Speed.php?rssid=20068&q=digital+tv). Broadcasting & Cable. Retrieved 2009-12-03.
[26] Eggerton, John (2009-12-01). “Murdoch Says Mobile TV Is Key to Future” (http://www.broadcastingcable.com/article/391233-Murdoch_Says_Mobile_TV_Is_Key_to_Future.php?rssid=20070&q=digital+tv). Broadcasting & Cable. Retrieved 2009-12-03.
[27] Dickson, Glen (2010-01-07). “CES 2010: Broadcasters Tout Mobile DTV Progress” (http://www.broadcastingcable.com/article/442953-CES_2010_Broadcasters_Tout_Mobile_DTV_Progress.php?rssid=20068&q=digital+tv). Broadcasting & Cable. Retrieved 2010-01-13.
[28] Dickson, Glen (2010-01-09). “NAB Shows Off New Spectrum Applications” (http://www.broadcastingcable.com/article/443352-NAB_Shows_Off_New_Spectrum_Applications.php?rssid=20068&q=digital+tv). Broadcasting & Cable. Retrieved 2010-01-13.

Thursday, January 2, 2020

Determining the Strength of Acids and Bases

Strong electrolytes are completely dissociated into ions in water: the acid or base molecule does not exist in aqueous solution, only its ions. Weak electrolytes are incompletely dissociated. Here are definitions and examples of strong and weak acids and strong and weak bases.
Strong Acids
Strong acids completely dissociate in water, forming H+ and an anion. There are six strong acids; the others are considered to be weak acids. You should commit the strong acids to memory:
HCl: hydrochloric acid
HNO3: nitric acid
H2SO4: sulfuric acid
HBr: hydrobromic acid
HI: hydroiodic acid
HClO4: perchloric acid
If the acid is 100 percent dissociated in solutions of 1.0 M or less, it is called strong. Sulfuric acid is considered strong only in its first dissociation step; 100 percent dissociation isn’t true as solutions become more concentrated.
H2SO4 → H+ + HSO4-
Weak Acids
A weak acid only partially dissociates in water to give H+ and the anion. Examples of weak acids include hydrofluoric acid, HF, and acetic acid, CH3COOH. Weak acids include:
Molecules that contain an ionizable proton. A molecule with a formula starting with H usually is an acid.
Organic acids containing one or more carboxyl groups, -COOH. The H is ionizable.
Anions with an ionizable proton (e.g., HSO4- → H+ + SO42-).
Cations, including transition metal cations and heavy metal cations with high charge.
NH4+, which dissociates into NH3 + H+.
Strong Bases
Strong bases dissociate 100 percent into the cation and OH- (hydroxide ion). The hydroxides of the Group I and Group II metals usually are considered to be strong bases:
LiOH: lithium hydroxide
NaOH: sodium hydroxide
KOH: potassium hydroxide
RbOH: rubidium hydroxide
CsOH: cesium hydroxide
*Ca(OH)2: calcium hydroxide
*Sr(OH)2: strontium hydroxide
*Ba(OH)2: barium hydroxide
* These bases completely dissociate in solutions of 0.01 M or less. The other bases make solutions of 1.0 M and are 100 percent dissociated at that concentration. There are other strong bases than those listed, but they are not often encountered.
Weak Bases
Examples of weak bases include ammonia, NH3, and diethylamine, (CH3CH2)2NH. Like weak acids, weak bases do not completely dissociate in aqueous solution. Most weak bases are anions of weak acids. Weak bases do not furnish OH- ions by dissociation; instead, they react with water to generate OH- ions.
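To see what complete versus partial dissociation means in practice, the short Python sketch below computes the pH of a strong acid, assumed to dissociate fully, and of a weak acid, whose hydrogen ion concentration is found from its Ka via the usual equilibrium quadratic. The 0.10 M concentrations and the Ka of acetic acid (about 1.8 x 10^-5) are standard textbook example values, not figures taken from the text above.

# A minimal sketch comparing a strong acid (complete dissociation) with a weak acid
# (partial dissociation governed by Ka). Example values: 0.10 M solutions, Ka ~ 1.8e-5.
import math

def ph_strong_acid(concentration):
    # Strong monoprotic acid: [H+] equals the analytical concentration.
    return -math.log10(concentration)

def ph_weak_acid(concentration, ka):
    # Weak acid HA <=> H+ + A-: solve x^2 / (C - x) = Ka for x = [H+].
    x = (-ka + math.sqrt(ka ** 2 + 4 * ka * concentration)) / 2
    return -math.log10(x)

print(f"0.10 M HCl (strong):        pH = {ph_strong_acid(0.10):.2f}")          # about 1.0
print(f"0.10 M acetic acid (weak):  pH = {ph_weak_acid(0.10, 1.8e-5):.2f}")    # about 2.9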