Closed captioning

From Wikipedia, the free encyclopedia

Jack Foley created the "CC in a TV" symbol while senior graphic designer at WGBH.

Closed captioning is a term describing several systems developed to display text on a television or video screen to provide additional or interpretive information to viewers who wish to access it. Closed captions typically display a transcription of the audio portion of a program as it occurs (either verbatim or in edited form), sometimes including non-speech elements.

Terminology

The term "closed" in closed captioning indicates that not all viewers see the captions—only those who choose to decode or activate them. This distinguishes from "open captions" (sometimes called "burned-in" or "hardcoded" captions), which are visible to all viewers.

Most of the world does not distinguish captions from subtitles. In the United States and Canada, these terms do have different meanings, however: "subtitles" assume the viewer can hear but cannot understand the language or accent, or the speech is not entirely clear, so they only transcribe dialogue and some on-screen text. "Captions" aim to describe to the hearing-impaired all significant audio content—spoken dialogue and non-speech information such as the identity of speakers and, occasionally, their manner of speaking—along with music or sound effects using words or symbols.

The United Kingdom, Ireland, and most other countries do not distinguish between subtitles and closed captions and use "subtitles" as the general term; the equivalent of "captioning" is usually referred to as "subtitles for the hard of hearing". Their presence is indicated by on-screen notation such as "Subtitles", or previously "Subtitles 888" (the latter referring to the conventional teletext page used for captions).

Application

Most commonly, closed captions are used by deaf or hard of hearing individuals to assist comprehension. They can also be used as a tool by those learning to read, learning to speak a non-native language, or in an environment where the audio is difficult to hear or is intentionally muted. Captions can also be used by viewers who simply wish to read a transcript along with the program audio.

In the United States, the National Captioning Institute noted that English-as-a-second-language (ESL) learners were the largest group buying decoders in the late 1980s and early 1990s, before built-in decoders became a standard feature of US television sets. This suggested that the largest audience of closed captioning was people whose native language was not English. In the United Kingdom, of 7.5 million people using TV subtitles (closed captioning), 6 million have no hearing impairment.[1]

Closed captions are also used in public environments, such as bars and restaurants, where patrons may not be able to hear over the background noise, or where multiple televisions are displaying different programs.[2][3][4]

Some television sets can be set to automatically turn captioning on when the volume is muted.

Television and video

For live programs, the spoken words in the television program's soundtrack are transcribed by a human operator (a Speech-to-Text Reporter) using a stenotype or stenomask machine, whose phonetic output is instantly translated into text by a computer and displayed on the screen. This technique was developed in the 1970s as an initiative of the BBC's Ceefax teletext service.[5] In collaboration with the BBC, a university student took on the research project of writing the first phonetics-to-text conversion program for this purpose. The captions of live broadcasts, such as news bulletins, sports events, and live entertainment shows, sometimes fall behind by a few seconds. This delay occurs because the captioner cannot know what will be said next, so the caption can only appear after the words have been spoken.[6] Automatic computer speech recognition now works well when trained to recognize a single voice, so since 2003 the BBC has done live subtitling by having someone re-speak what is being broadcast.

In some cases the transcript is available beforehand and captions are simply displayed during the program after being edited. For programs that have a mix of pre-prepared and live content, such as news bulletins, a combination of the above techniques is used.

For prerecorded programs, commercials, and home videos, audio is transcribed and captions are prepared, positioned, and timed in advance.

For all types of NTSC programming, captions are "encoded" into Line 21 of the vertical blanking interval – a part of the TV picture that sits just above the visible portion and is usually unseen. For ATSC (digital television) programming, three streams are encoded in the video: two are backward compatible Line 21 captions, and the third is a set of up to 63 additional caption streams encoded in EIA-708 format.[7]
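
As a rough illustration of how Line 21 data is formed, the sketch below shows the odd-parity framing that CEA-608 applies to each 7-bit character code before it is inserted into the vertical blanking interval; a decoder simply strips the parity bit to recover the character. This is a minimal Python sketch for illustration only, not an encoder implementation.

```python
# Minimal sketch of CEA-608 odd-parity framing (illustrative only).
# Each Line 21 byte carries 7 data bits plus an odd-parity bit in bit 7.

def add_odd_parity(code: int) -> int:
    """Set bit 7 so that the total number of 1-bits in the byte is odd."""
    ones = bin(code & 0x7F).count("1")
    return (code & 0x7F) | (0x80 if ones % 2 == 0 else 0x00)

def strip_parity(byte: int) -> int:
    """A caption decoder discards the parity bit to recover the 7-bit code."""
    return byte & 0x7F

# Printable characters are carried as pairs of codes, two bytes per frame.
pair = [add_odd_parity(ord(c)) for c in "HI"]
decoded = "".join(chr(strip_parity(b)) for b in pair)
print([hex(b) for b in pair], decoded)  # ['0xc8', '0x49'] HI
```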

Captioning is transmitted and stored differently in PAL and SECAM countries, where teletext is used rather than Line 21, but the methods of preparation are similar. For home videotapes, a variation of the Line 21 system is used in PAL countries. Teletext captions can't be stored on a standard VHS tape (due to limited bandwidth), although they are available on S-VHS tapes and DVDs.

For older televisions, a set-top box or other decoder is usually required. In the US, since the passage of the Television Decoder Circuitry Act, manufacturers of most television receivers sold have been required to include closed captioning display capability. High-definition TV sets, receivers, and tuner cards are also covered, though the technical specifications are different. (High-definition display screens, as opposed to high-definition TVs, may lack captioning.) Canada has no similar law, but receives the same sets as the US in most cases.

There are three styles of Line 21 closed captioning:[citation needed]

  • Roll-up or scroll-up or scrolling: The words appear from left to right, up to one line at a time; when a line is filled, the whole line scrolls up to make way for a new line, and the line on top is erased. The captions usually appear at the bottom of the screen, but can actually be placed anywhere to avoid covering graphics or action. This method is used for live events, where a sequential word-by-word captioning process is needed.
A still frame showing simulated closed captioning in the pop-on style
  • Pop-on or pop-up or block: A caption appears anywhere on the screen as a whole, followed by another caption or no captions. This method is used for most pre-taped television and film programming. One error seen in some programs that use this style is a blank white space appearing at the beginning of the program. Another is the screen momentarily typing random letters, as if it were in roll-up style, before reverting to normal. Capitalization also varies by caption provider: captions are most often set in all capitals, but some providers use both capital and lower-case letters.
  • Paint-on: The caption, whether a single word or a line, appears on the screen letter by letter from left to right, but ends up as a stationary block like pop-on captions. Rarely used, it is most often seen in the very first caption of a program, when little time is available to read it, or in "overlay" captions added to an existing caption.

A single program may include scroll-up and pop-on captions (e.g., scroll-up for narration and pop-on for song lyrics). A musical note symbol (hash sign in UK, Ireland and Australia) is used to indicate song lyrics or background music. Generally, lyrics are preceded and followed by music notes (or hash signs), while song titles are bracketed like a sound effect. Standards vary from country to country and company to company.

For live programs, some soap operas, and other shows captioned using scroll-up, Line 21 caption text includes the symbols '>>' to indicate a new speaker (the name of the new speaker sometimes appears as well), and '>>>' in news reports to identify a new story. In some cases, '>>' means one person is talking and '>>>' means two or more people are talking. Capitals are frequently used because many older home caption decoder fonts had no descenders for the lowercase letters g, j, p, q, and y, though virtually all modern TVs have caption character sets with descenders. Text can be italicized, among a few other style choices, and captions can be presented in different colors as well. Coloration is rarely used in North America, but can sometimes be seen on music videos on MTV or VH-1 and in a program's captioning production credits. Coloration is used more often in the United Kingdom, Ireland, Australia, and New Zealand for speaker differentiation.

There were many shortcomings in the original Line 21 specification from a typographic standpoint, since, for example, it lacked many of the characters required for captioning in languages other than English. Since that time, the core Line 21 character set has been expanded to include quite a few more characters, handling most requirements for languages common in North and South America such as French, Spanish, and Portuguese, though those extended characters are not required in all decoders and are thus unreliable in everyday use. The problem has been almost eliminated with the EIA-708 standard for digital television, which boasts a far more comprehensive character set.

Captions are often edited to make them easier to read and to reduce the amount of text displayed onscreen. This editing can range from very minor, with only a few occasional unimportant lines missed, to severe, where virtually every line spoken by the actors is condensed. The measure used to guide this editing is words per minute, commonly varying from 180 to 300 depending on the type of program. Offensive words are also captioned, but if the program is censored for TV broadcast, the broadcaster might not have arranged for the captioning to be edited or censored as well. The "TV Guardian", a television set-top box, is available to parents who wish to censor offensive language in programs: the video signal is fed into the box, and if an offensive word is detected in the captioning, the audio signal is bleeped or muted for that period of time.
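
As a simple illustration of the words-per-minute measure described above, the hypothetical helper below estimates how long a caption must stay on screen to be readable at a given rate; the 180-300 wpm range comes from the paragraph, while the function name and default rate are assumptions made for this example.

```python
# Hypothetical helper illustrating the words-per-minute reading-rate measure.
# The 180-300 wpm range is from the text above; the default of 240 wpm and the
# function name are assumptions for this sketch.

def display_seconds(caption: str, words_per_minute: int = 240) -> float:
    """Estimate how long a caption should remain on screen to be readable."""
    words = len(caption.split())
    return round(words / words_per_minute * 60.0, 2)

print(display_seconds("We'll be right back after these messages."))  # 1.75
```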

Caption channels

Telemundo bug touting CC1 and CC3 captions.

The Line 21 data stream can consist of data from several data channels multiplexed together. Field 1 has four data channels: two captions (CC1, CC2) and two text (T1, T2). Field 2 has five additional data channels: two captions (CC3, CC4), two text (T3, T4), and Extended Data Services (XDS). The XDS data structure is defined in CEA-608.

As CC1 and CC2 share bandwidth, if there is a lot of data in CC1, there will be little room for CC2 data. Similarly CC3 and CC4 share the second field of line 21. Since some early caption decoders supported only CC1 and CC2, captions in a second language were often placed in CC2. This led to bandwidth problems, however, and the current FCC recommendation is that bilingual programming should have the second caption language in CC3. Telemundo, for example, provides English subtitles for many of its Spanish programs in CC3.
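
The channel layout described above can also be summarized as data. The short sketch below is purely illustrative: the field/service mapping comes from CEA-608 as quoted above, while the helper function and names are invented for the example.

```python
# Line 21 services by field, as described above (CEA-608); the helper function
# is a hypothetical illustration, not part of any standard API.
LINE21_SERVICES = {
    "field 1": ["CC1", "CC2", "T1", "T2"],
    "field 2": ["CC3", "CC4", "T3", "T4", "XDS"],
}

def field_for(service: str) -> str:
    """Return which field of Line 21 carries a given service."""
    for field, services in LINE21_SERVICES.items():
        if service in services:
            return field
    raise ValueError(f"unknown Line 21 service: {service}")

# CC3 lives in field 2, which is why the FCC recommends it for a second
# caption language: it does not compete with CC1 for field 1 bandwidth.
print(field_for("CC3"))  # field 2
```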

DVDs

NTSC DVDs may carry closed captions in data packets of the MPEG-2 video streams inside the VIDEO_TS folder. When played out of the analog outputs of a set-top DVD player, the caption data is converted to the Line 21 format.[8] It is sent to the TV by the player and can be displayed with a TV's built-in decoder or a set-top decoder as usual. When viewed on a personal computer, caption data can be displayed by software that reads and decodes the caption data packets in the MPEG-2 streams of the DVD-Video disc. Both Windows Media Player and Apple's DVD Player have the ability to read and decode caption data.

In addition to Line 21 closed captions, video DVDs may also carry subtitles as a bitmap overlay, which can be turned on and off via a set-top DVD player or DVD player software, just like captions. This type of captioning is usually carried in a subtitle track labeled either "English for the hearing impaired" or, more recently, "SDH" (Subtitles for the Deaf and Hard of Hearing).[9] Many popular Hollywood DVD-Video releases carry both subtitles and closed captions (see the Stepmom DVD by Columbia Pictures). On some DVDs, the Line 21 captions may contain the same text as the subtitles; on others, only the Line 21 captions include the additional non-speech information needed for deaf and hard-of-hearing viewers. European Region 2 DVDs do not carry Line 21 captions and instead list the subtitle languages available. English is often listed twice: once as a representation of the dialogue alone, and again as a subtitle set that carries additional "sound" information for the deaf and hard-of-hearing audience. (Many deaf/HOH subtitle files on DVDs are reworkings of original teletext subtitle files.)

HD DVD and Blu-ray Disc media cannot carry Line 21 closed captioning due to the design of the High-Definition Multimedia Interface (HDMI) specifications, which were designed to replace older analog and digital standards such as VGA, S-Video, and DVI. Both Blu-ray Disc and HD DVD can use either DVD bitmap subtitles (with extended definition) or "advanced subtitles" to carry SDH-type subtitling, the latter being an XML-based textual format that includes font, styling, and positioning information as well as a Unicode representation of the text. Advanced subtitling can also include additional media accessibility features such as "descriptive audio".

Movies from Universal Studios do not carry closed captioning, although they do include subtitles.

Movies

There are several competing technologies used to provide captioning for movies in theaters. Just as with television captioning, they fall into two broad categories: open and closed. The definition of "closed" captioning in this context is a bit different from television, as it refers to any technology that allows some of the viewers to use captions while others in the same theater at the same time do not see captions.

Open captioning in a theater can be accomplished through burned-in captions, projected bitmaps, or (rarely) a display located above or below the movie screen. Typically, this display is a large LED sign.

Probably the best-known closed captioning option for theaters is the Rear Window Captioning System from the National Center for Accessible Media. Upon entering the theater, viewers requiring captions are given a panel of flat translucent glass or plastic on a gooseneck stalk, which can be mounted in front of the viewer's seat. In the back of the theater is an LED display that shows the captions in mirror image. The panel reflects the captions for the viewer, but is nearly invisible to surrounding patrons. The panel can be positioned so that the viewer watches the movie through the panel and captions appear either on or near the movie image. A company called Cinematic Captioning Systems has a similar reflective system called Bounce Back. A major problem for distributors has been that these systems are each proprietary, and require separate distributions to the theatre to enable them to work. Proprietary systems also incur royalties.

Digital Theater Systems, the company behind the DTS surround sound standard, has a digital captioning device called the DTS-CSS (Cinema Subtitling System). It is a combination of a laser projector, which places the captioning (words, sounds) anywhere on the screen, and a thin playback device with a CD that holds many languages.

Other closed captioning technologies for movies include hand-held displays similar to a PDA (personal digital assistant); eyeglasses fitted with a prism over one lens; and projected bitmap captions. The PDA and eyeglass systems use a wireless transmitter to send the captions to the display device.

Special effort has been made to build accessibility features into digital cinema. Through SMPTE, standards now exist that dictate how open and closed captions, as well as hearing-impaired and visually-impaired narrative audio, can be packaged with content. An effort is also underway in SMPTE (mid-2009) to standardize the communication of closed caption content between the digital cinema server and 3rd party closed caption systems. Thanks to this work, competitive closed caption systems can be produced for digital cinema based on uniform, royalty-free distributions. In addition, innovative 3rd party closed caption systems can emerge that require no modification to standards-compliant digital cinema servers.[10]

Video games

Closed captioning of video games is becoming more common. One of the first video games to feature true closed captioning was Zork Grand Inquisitor in 1997. Many games since then have at least offered subtitles for spoken dialog during cut scenes, and many include significant in-game dialog and sound effects in the captions as well; for example, with subtitles turned on in the Metal Gear Solid series of stealth games, not only are subtitles available during cut scenes, but any dialog spoken during real-time gameplay will be captioned as well, allowing players who can't hear the dialog to know what enemy guards are saying and when the main character has been detected. Also, in the video game Half-Life 2, when closed captions are activated, dialog and nearly all sound effects either made by the player or from other sources (e.g. gunfire, explosions) will be captioned.

Video games do not offer Line 21 captioning decoded and displayed by the television itself, but rather a built-in subtitle display more akin to that of a DVD. The game systems themselves have no role in the captioning either: each game must have its subtitle display programmed individually.
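
The sketch below illustrates the pattern this paragraph describes: because the console does not decode Line 21, a game ships its own caption table and draws the text itself when a sound event fires. All names here are hypothetical and not taken from any particular engine.

```python
# Hypothetical in-game captioning pattern (all names are invented for this sketch).

CAPTIONS = {
    "guard.alert":   "[Guard] He's over there!",
    "sfx.gunfire":   "[gunfire]",
    "sfx.explosion": "[explosion]",
}

def draw_caption(text: str) -> None:
    """Stand-in for the game's own on-screen text renderer."""
    print(text)

def on_sound_played(event_name: str, captions_enabled: bool) -> None:
    """Called whenever a sound event fires; shows a caption if one is defined."""
    if captions_enabled and event_name in CAPTIONS:
        draw_caption(CAPTIONS[event_name])

on_sound_played("sfx.gunfire", captions_enabled=True)  # prints "[gunfire]"
```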

Reid Kimball, a game designer who is hard of hearing, is attempting to educate game developers about closed captioning for games. Reid started the Games[CC] group to close caption games and serve as a research and development team to aid the industry. Kimball designed the Dynamic Closed Captioning system,[citation needed] writes articles, and speaks at developer conferences. Games[CC]'s first closed captioning project called Doom3[CC] was nominated for an award as Best Doom3 Mod of the Year for IGDA's Choice Awards 2006 show.

Online Video Streaming

The Internet video streaming service YouTube offers captioning for videos. The author of a video can upload a SubViewer (*.SUB), SubRip (*.SRT), or *.SBV file.[11] YouTube is currently testing an automatic captions feature, which transcribes the audio so that the author does not have to add a caption file. This feature is only available on certain videos, especially those created by Google and YouTube.[12]
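
For illustration, the snippet below writes a minimal SubRip (.srt) caption file of the kind a video author could upload; the timings and text are invented for the example.

```python
# Write a minimal SubRip (.srt) caption file: a numbered cue, a start --> end
# timecode line, the caption text, and a blank line between cues.
srt_text = """\
1
00:00:01,000 --> 00:00:04,000
Welcome to the program.

2
00:00:04,500 --> 00:00:07,000
[upbeat music]
"""

with open("captions.srt", "w", encoding="utf-8") as f:
    f.write(srt_text)
```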

Flash video also supports captions via the Timed Text DFXP .XML format. The latest Flash authoring software adds free player skins and caption components that enable viewers to turn captions on/off during playback from a webpage. Previous versions of Flash relied on the Captionate 3rd party component and skin to caption Flash video. Custom Flash players designed in Flex can be tailored to support the Timed Text DFXP .XML format, Captionate .XML, or SAMI file (see Hulu captioning).
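
As a sketch of what a Timed Text caption document looks like, the snippet below assembles a minimal TTML file with Python's standard library. The namespace URI shown is the one used by the W3C TTML recommendation; older Flash/DFXP tooling may expect an earlier draft namespace, so treat this as an assumption rather than a definitive Flash-ready file.

```python
# Build a minimal Timed Text (TTML/DFXP-style) caption document.
# Assumption: the TTML namespace below; older DFXP tools may use a draft URI.
import xml.etree.ElementTree as ET

NS = "http://www.w3.org/ns/ttml"
ET.register_namespace("", NS)

tt = ET.Element(f"{{{NS}}}tt")
body = ET.SubElement(tt, f"{{{NS}}}body")
div = ET.SubElement(body, f"{{{NS}}}div")

p = ET.SubElement(div, f"{{{NS}}}p", {"begin": "00:00:01.000", "end": "00:00:04.000"})
p.text = "Welcome to the program."

ET.ElementTree(tt).write("captions.dfxp.xml", xml_declaration=True, encoding="utf-8")
```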

Windows Media Video can support closed captions for both video-on-demand and live streaming scenarios. Typically, Windows Media captions use the SAMI file format, but the video can also carry embedded closed caption data.

QuickTime video supports true 608 caption data via QuickTime's proprietary Closed Caption Track. These captions can be turned on and off and appear in the same style as TV closed captions with all the standard formatting (pop-on, roll-up, paint-on) and can be positioned and split anywhere on the video screen. QuickTime Closed Caption tracks can be viewed in Mac or Windows versions of QuickTime player, iTunes, QuickTime web browser plug-in and iPod Nano, iPod Classic, iPod Touch, and iPhone.

Theatre

Live plays can be open captioned by a captioner who displays lines from the script, including non-speech elements, on a large display screen near the stage.[13]

Telephones

A captioned telephone (also called captioned relay or Cap-Tel) is a telephone that displays real-time captions of the current conversation. The captions are typically displayed on a screen embedded into the telephone base.

Media monitoring services

In the United States especially, most media monitoring services capture and index closed captioning text from news and public affairs programs, allowing them to search the text for client references.

The use of closed captioning for television news monitoring was pioneered in 1993 by Tulsa-based NewsTrak of Oklahoma (later known as Broadcast News of Mid-America, acquired by video news release pioneer Medialink Worldwide Incorporated in 1997). US patent 7,009,657 describes a "method and system for the automatic collection and conditioning of closed caption text originating from multiple geographic locations" as used by news monitoring services.

HDTV interoperability issues

Americas

The US ATSC HDTV system originally specified two different kinds of closed captioning datastream standards—the original (available by Line 21) and another more modern version encoded in MPEG-2, the CEA-708 standard.[7]

The US FCC mandates that broadcasters deliver (and generate, if necessary) both datastream formats.[7] The Canadian CRTC has not mandated that broadcasters transmit both datastream formats, or that they transmit exclusively in one format.

Incompatibility issues with HDTV

Many viewers find that when they switch to an HDTV, they are unable to view closed caption (CC) information, even though the broadcaster is sending it and the TV is able to display it. Originally, CC information was included in the picture ("line 21"), but there is no equivalent capability in the HDTV 720p/1080i interconnects between the display and a "source". A "source", in this case, can be a DVD player or an HD tuner (a cable box is an HD tuner). When CC information is encoded in the MPEG-2 data stream, only the device that decodes the MPEG-2 data (a source) has access to the closed caption information; there is no standard for transmitting the CC information to an HD display separately. Thus, if there is CC information, the source device needs to overlay the CC information on the picture prior to transmitting it to the display over the interconnect. Many source devices do not have the ability to overlay CC information, or controlling the CC overlay is extremely complicated. For example, the Motorola DCT-5xxx and -6xxx cable set-top boxes have the ability to decode CC information located in the MPEG stream and overlay it on the picture, but turning CC on and off requires turning off the unit and going into a special setup menu (it is not on the standard configuration menu and it cannot be controlled using the remote). Historically, DVD players and cable box tuners did not need to do this overlaying; they simply passed this information on to the TV, and they are not mandated to perform this overlaying. Many modern HDTVs can be directly connected to cables, but then they often cannot receive scrambled channels that the user is paying for. Thus, the lack of a standard way of sending CC information between components, along with the lack of a mandate to add this information to the picture, results in CC being unavailable to many hard-of-hearing and deaf users.

Europe

The European teletext systems are the source for closed captioning signals, thus when teletext is embedded into DVB-T or DVB-S the closed captioning signal is included.[citation needed] However, for DVB-T and DVB-S, it is not necessary for a teletext page signal to also be present (ITV1, for example, does not carry analogue teletext signals on Sky Digital, but does carry the embedded version, accessible from the "Services" menu of the receiver, or more recently by turning them off/on from a mini menu accessible from the "help" button).

DTV standard captioning improvements

The CEA-708 specification provides for dramatically improved captioning:

  • An enhanced character set with more accented letters and non-Latin letters, and more special symbols
  • Viewer-adjustable text size (called the "caption volume control" in the specification), allowing individuals to adjust their TVs to display small, normal, or large captions
  • More text and background colors, including both transparent and translucent backgrounds to optionally replace the big black block
  • More text styles, including edged or drop-shadowed text rather than the letters on a solid background
  • More text fonts, including monospaced and proportional spaced, serif and sans-serif, and some playful cursive fonts
  • Higher bandwidth, to allow more data per minute of video
  • More language channels, to allow the encoding of more independent caption streams

As of 2009, however, most closed captioning for DTV environments is done using tools designed for analog captioning (working to the CEA-608 NTSC spec rather than the CEA-708 DTV spec). The captions are then run through transcoders made by companies like EEG Enterprises or Evertz, which convert the analog Line 21 caption format to the digital format. This means that none of the CEA-708 features are used unless they were also contained in CEA-608.

Non-Linear Video Editing Systems and Closed Captioning

In mid-2009, Apple released Final Cut Pro version 7 and began supporting the insertion of closed caption data into SD and HD tape masters via FireWire and compatible video capture cards.[14] Up until this time, it was not possible for video editors to insert caption data, in both CEA-608 and CEA-708 formats, into their tape masters. The typical workflow included first printing the SD or HD video to tape and sending it to a professional closed caption service company that had a stand-alone closed caption hardware encoder.

This new closed captioning workflow known as eCaptioning involves making a proxy video from the non-linear system to import into a third-party non-linear closed captioning software. Once the closed captioning software project is completed, it must export a closed caption file compatible with the non-linear editing system. In the case of Final Cut Pro 7, three different file formats can be accepted: a .SCC file (Scenarist Closed Caption file) for Standard Definition video, a QuickTime 608 Closed Caption track (a special 608 coded track in the .mov file wrapper) for Standard Definition video, and finally a QuickTime 708 Closed Caption track (a special 708 coded track in the .mov file wrapper) for High Definition video output.

Alternately, Matrox video systems devised another mechanism for inserting closed caption data by allowing the video editor to include CEA-608 and CEA-708 in a discrete audio channel on the video editing timeline. This allows real-time preview of the captions while editing and is compatible with Final Cut Pro 6 and 7.[15]

Other non-linear editing systems indirectly support closed captioning only in Standard Definition line-21. Video files on the editing timeline must be composited with a line-21 VBI graphic layer, known in the industry as a "blackmovie", containing the closed caption data.[16] Alternatively, video editors working with DV25 and DV50 FireWire workflows must encode their DV .avi or .mov file with VAUX data that includes CEA-608 closed caption data.

History

Open captioning

Regular open captioned broadcasts began on PBS’s “The French Chef” in 1972.[17] WGBH began open captioning of ZOOM, ABC World News Tonight, and Once upon a Classic shortly thereafter.

Technical development of closed captioning

Closed captioning was first demonstrated at the First National Conference on Television for the Hearing Impaired in Nashville, Tennessee, in 1971.[17] A second demonstration of closed captioning was held at Gallaudet College (now Gallaudet University) on February 15, 1972, where ABC and the National Bureau of Standards demonstrated closed captions embedded within a normal broadcast of Mod Squad.

The closed captioning system was successfully encoded and broadcast in 1973 with the cooperation of PBS station WETA.[17] As a result of these tests, the FCC in 1976 set aside line 21 for the transmission of closed captions. PBS engineers then developed the caption editing consoles that would be used to caption prerecorded programs.

Real-time captioning, a process for captioning live broadcasts, was developed in 1982.[17] In real-time captioning, court reporters trained to type at speeds of over 225 words per minute give viewers instantaneous access to live news, sports and entertainment. As a result, the viewer sees the captions within two to three seconds of the words being spoken.

Full-scale closed captioning

The National Captioning Institute was created in 1979 in order to get the cooperation of the commercial television networks.[3]

Regularly scheduled closed captioning first appeared on American television on March 16, 1980. Sears had developed and sold the Telecaption adapter, a decoding unit that could be connected to a standard television set. The first programs seen with captioning were the ABC Sunday Night Movie, Disney's Wonderful World on NBC, and Masterpiece Theatre on PBS. The captioned Disney feature, shown at 7:00 pm EST, was the film Son of Flubber, while the movie at 9:00 pm EST was Semi-Tough.[18]

Legislative development in the U.S.

On January 23, 1990, the Television Decoder Circuitry Act of 1990 was passed by the US Congress.[17] The Act gave the Federal Communications Commission (FCC) power to enact rules on the implementation of closed captioning, and it required all analog television receivers with screens of 13 inches or greater, whether sold or manufactured, to have the ability to display closed captioning by July 1, 1993.[19]

Also in 1990, The Americans with Disabilities Act (ADA) was passed to ensure equal opportunity for persons with disabilities.[4] The ADA prohibits discrimination against persons with disabilities in public accommodations or commercial facilities. Title III of the ADA requires that public facilities, such as hospitals, bars, shopping centers and museums (but not movie theaters), provide access to verbal information on televisions, films or slide shows.

The Telecommunications Act of 1996 expanded on the Decoder Circuitry Act to place the same requirements on digital television receivers by July 1, 2002.[20] All TV programming distributors in the U.S. must provide closed captioning for Spanish-language video programming by January 1, 2010.[21]

Legislative development in Australia

The government of Australia provided seed funding in 1981 for the establishment of the Australian Caption Centre (ACC) and the purchase of equipment. Captioning by the ACC commenced in 1982 and a further grant from the Australian government enabled the ACC to achieve and maintain financial self-sufficiency. The ACC, now known as Media Access Australia, sold its commercial captioning division to Red Bee Media in December 2005. Red Bee Media continues to provide captioning services to Australia today.[22][23][24]

Logo

The current and most familiar logo for closed captioning consists of two Cs (for "closed captioned") inside a television screen. It was created by Jack Foley while he was a senior graphic designer at WGBH.[citation needed] The other logo, trademarked by the National Captioning Institute, was a speech balloon in the shape of a TV; two versions exist, one with a tail on the left and the other with a tail on the right.[25]

See also

Notes

  1. ^ [1] Ofcom, UK: Television access services
  2. ^ Alex Varley, Chief Executive, Media Access Australia (June 2008). "Submission to DBCDE’s investigation into Access to Electronic Media for the Hearing and Vision Impaired" (PDF). Australia: Media Access Australia. pp. 16. http://www.dbcde.gov.au/__data/assets/pdf_file/0020/84710/Media_Access_Australia_-_Response_to_Media_Access_Review_2008.pdf. Retrieved 2009-01-29. "The use of captions and audio description is not limited to deaf and blind people. Captions can be used in situations of “temporary” deafness, such as watching televisions in public areas where the sound has been turned down (commonplace in America and starting to appear more in Australia)." 
  3. ^ Mayor's Disability Council (May 16, 2008). "Resolution in Support of Board of Supervisors’ Ordinance Requiring Activation of Closed Captioning on Televisions in Public Areas". City and County of San Francisco. http://www.sfgov.org/site/sfmdc_page.asp?id=86619. Retrieved 2009-01-29. "that television receivers located in any part of a facility open to the general public have closed captioning activated at all times when the facility is open and the television receiver is in use." 
  4. ^ Alex Varley, Chief Executive, Media Access Australia (April 18, 2005). "Settlement Agreement Between The United States And Norwegian American Hospital Under The Americans With Disabilities Act". U.S. Department of Justice. http://www.ada.gov/norwegian.htm. Retrieved 2009-01-29. "...will have closed captioning operating in all public areas where there are televisions with closed captioning; televisions in public areas without built-in closed captioning capability will be replaced with televisions that have such capability" 
  5. ^ http://teletext.mb21.co.uk/timeline/early-ceefax-subtitling.shtml
  6. ^ http://www.bbc.co.uk/rd/pubs/whp/whp-pdf-files/WHP065.pdf
  7. ^ a b c [2] - ATSC Closed Captioning FAQ (cached copy)
  8. ^ http://www.dvddemystified.com/dvdfaq.html#3.4
  9. ^ http://www.dvddemystified.com/dvdfaq.html#1.45
  10. ^ http://mkpe.com/publications/d-cinema/misc/enabling_the_disabled.php
  11. ^ http://www.google.com/support/youtube/bin/answer.py?hl=en&answer=100077
  12. ^ http://www.youtube.com/watch?v=kTvHIDKLFqc
  13. ^ Stagetext.org
  14. ^ http://www.apple.com/finalcutstudio/whats-new.html
  15. ^ http://www.cpcweb.com/mxo2/
  16. ^ http://www.cpcweb.com/nle/
  17. ^ a b c d e "A Brief History of Captioned Television". http://www.ncicap.org/caphist.asp. 
  18. ^ "Today on TV", Chicago Daily Herald, March 11, 1980, Section 2-5
  19. ^ "Television Decoder Circuitry Act of 1990". http://www.access-board.gov/sec508/guide/1194.24-decoderact.htm. 
  20. ^ "FCC Consumer Facts on Closed Captioning". http://www.fcc.gov/cgb/consumerfacts/closedcaption.html. 
  21. ^ "Part 79 – Closed Captioning of Video Programming". http://www.fcc.gov/cgb/dro/captioning_regs.html. 
  22. ^ Alex Varley, Chief Executive, Media Access Australia (June 2008). "Submission to DBCDE’s investigation into Access to Electronic Media for the Hearing and Vision Impaired" (PDF). Australia: Media Access Australia. pp. 12,18,43. http://www.dbcde.gov.au/__data/assets/pdf_file/0020/84710/Media_Access_Australia_-_Response_to_Media_Access_Review_2008.pdf. Retrieved 2009-02-07. 
  23. ^ "About Media Access Australia". Australia: Media Access Australia. http://www.mediaaccess.org.au/index.php?option=com_content&view=article&id=359&Itemid=100. Retrieved 2009-02-07. 
  24. ^ "About Red Bee Media Australia". Australia: Red Bee Media Australia Pty Limited. http://www.redbeemedia.com.au/aboutus-australia.html. Retrieved 2009-02-07. 
  25. ^ http://www.ncicap.org/ncilogo.asp National Captioning Institute Logos

References

  • Realtime Captioning... The VITAC Way by Amy Bowlen and Kathy DiLorenzo (no ISBN)
  • Closed Captioning: Subtitling, Stenography, and the Digital Convergence of Text with Television by Gregory J. Downey (ISBN 9780801887109)
  • The Closed Captioning Handbook by Gary D. Robson (ISBN 0-240-80561-5)
  • Alternative Realtime Careers: A Guide to Closed Captioning and CART for Court Reporters by Gary D. Robson (ISBN 1-881859-51-7)
  • A New Civil Right: Telecommunications Equality for Deaf and Hard of Hearing Americans by Karen Peltz Strauss (ISBN 9781563682919)
  • Enabling The Disabled by Michael Karagosian (no ISBN)

External links

Closed captioning

From Wikipedia, the free encyclopedia

  (Redirected from Closed-captions)
Jump to: navigation, search
Jack Foley created the "CC in a TV" symbol while senior graphic designer at WGBH.

Closed captioning is a term describing several systems developed to display text on a television or video screen to provide additional or interpretive information to viewers who wish to access it. Closed captions typically display a transcription of the audio portion of a program as it occurs (either verbatim or in edited form), sometimes including non-speech elements.

Contents

Terminology

The term "closed" in closed captioning indicates that not all viewers see the captions—only those who choose to decode or activate them. This distinguishes from "open captions" (sometimes called "burned-in" or "hardcoded" captions), which are visible to all viewers.

Most of the world does not distinguish captions from subtitles. In the United States and Canada, these terms do have different meanings, however: "subtitles" assume the viewer can hear but cannot understand the language or accent, or the speech is not entirely clear, so they only transcribe dialogue and some on-screen text. "Captions" aim to describe to the hearing-impaired all significant audio content—spoken dialogue and non-speech information such as the identity of speakers and, occasionally, their manner of speaking—along with music or sound effects using words or symbols.

The United Kingdom, Ireland, and most other countries do not distinguish between subtitles and closed captions, and use "subtitles" as the general term—the equivalent of "captioning" is usually referred to as "Subtitles for the hard of hearing". Their presence is referenced on screen by notation which says "Subtitles" or previously "Subtitles 888" (the latter is in reference to the conventional teletext channel for captions).

Application

Most commonly, closed captions are used by deaf or hard of hearing individuals to assist comprehension. They can also be used as a tool by those learning to read, learning to speak a non-native language, or in an environment where the audio is difficult to hear or is intentionally muted. Captions can also be used by viewers who simply wish to read a transcript along with the program audio.

In the United States, the National Captioning Institute noted that 'English-as-a-second-language' (ESL) learners were the largest group buying decoders in the late 1980s and early 1990s before built-in decoders became a standard feature of US television sets. This suggested that the largest audience of closed captioning was people whose native language was not English. In the United Kingdom, of 7.5 million people using TV subtitles (closed captioning), 6 million have no hearing impairment [1].

Closed captions are also used in public environments, such as bars, and restaurants, where patrons may not be able to hear over the background noise, or where multiple televisions are displaying different programs.[2][3][4]

Some television sets can be set to automatically turn captioning on when the volume is muted.

Television and video

For live programs, spoken words comprising the television program's soundtrack are transcribed by a human operator (a Speech-to-Text Reporter) using stenotype or stenomask type of machines, whose phonetic output is instantly translated into text by a computer and displayed on the screen. This technique was developed in the 1970s as an initiative of the BBC's Ceefax teletext service.[5] In collaboration with the BBC, a university student took on the research project of writing the first phonetics-to-text conversion program for this purpose. Sometimes the captions of live broadcasts, like news bulletins, sports events, live entertainment shows, and other live shows fall behind by a few seconds. This delay is because the machine does not know what the person is going to say next, so after the person on the show says the sentence, the captions appear.[6] Automatic computer speech recognition now works well when trained to recognize a single voice, and so since 2003 the BBC does live subtitling by having someone re-speak what is being broadcast.

In some cases the transcript is available beforehand and captions are simply displayed during the program after being edited. For programs that have a mix of pre-prepared and live content, such as news bulletins, a combination of the above techniques is used.

For prerecorded programs, commercials, and home videos, audio is transcribed and captions are prepared, positioned, and timed in advance.

For all types of NTSC programming, captions are "encoded" into Line 21 of the vertical blanking interval – a part of the TV picture that sits just above the visible portion and is usually unseen. For ATSC (digital television) programming, three streams are encoded in the video: two are backward compatible Line 21 captions, and the third is a set of up to 63 additional caption streams encoded in EIA-708 format.[7]

Captioning is transmitted and stored differently in PAL and SECAM countries, where teletext is used rather than Line 21, but the methods of preparation are similar. For home videotapes, a variation of the Line 21 system is used in PAL countries. Teletext captions can't be stored on a standard VHS tape (due to limited bandwidth), although they are available on S-VHS tapes and DVDs.

For older televisions, a set-top box or other decoder is usually required. In the US, since the passage of the Television Decoder Circuitry Act, manufacturers of most television receivers sold have been required to include closed captioning display capability. High-definition TV sets, receivers, and tuner cards are also covered, though the technical specifications are different. (High-definition display screens, as opposed to high-definition TVs, may lack captioning.) Canada has no similar law, but receives the same sets as the US in most cases.

There are three styles of Line 21 closed captioning:[citation needed]

  • Roll-up or scroll-up or scrolling: The words appear from left to right, up to one line at a time; when a line is filled, the whole line scrolls up to make way for a new line, and the line on top is erased. The captions usually appear at the bottom of the screen, but can actually be placed anywhere to avoid covering graphics or action. This method is used for live events, where a sequential word-by-word captioning process is needed.
A still frame showing simulated closed captioning in the pop-on style
  • Pop-on or pop-up or block: A caption appears anywhere on the screen as a whole, followed by another caption or no captions. This method is used for most pre-taped television and film programming. One error for some programs that use this style is a white space will appear at the beginning of the program. Another is when the screen momentarily will, as if it was the "roll up" style, type random letters on screen, and then revert back to normal. Also, the capitalization varies based on the caption provider. Though most of the time they're all capitalized, some caption providers will have capital and lower case letters.
  • Paint-on: The caption, whether it be a single word or a line, appears on the screen letter-by-letter from left to right, but ends up as a stationary block like pop-on captions. Rarely used, it is most often seen in very first captions when little time is available to read the caption or in "overlay" captions added to an existing caption.

A single program may include scroll-up and pop-on captions (e.g., scroll-up for narration and pop-on for song lyrics). A musical note symbol (hash sign in UK, Ireland and Australia) is used to indicate song lyrics or background music. Generally, lyrics are preceded and followed by music notes (or hash signs), while song titles are bracketed like a sound effect. Standards vary from country to country and company to company.

For live programs, some soap operas, and other shows captioned using scroll-up, Line 21 caption text include the symbols '>>' to indicate a new speaker (the name of the new speaker sometimes appears as well), and '>>>' in news reports to identify a new story. In some cases, '>>' means one person is talking and '>>>' means two or more people are talking. Capitals are frequently used because many older home caption decoder fonts had no descenders for the lowercase letters g, j, p, q, and y, though virtually all modern TVs have caption character sets with descenders. Text can be italicized, among a few other style choices. Captions can be presented in different colors as well. Coloration is rarely used in North America, but can sometimes be seen on music videos on MTV or VH-1, and in the captioning's production credits. More often, coloration is used in the United Kingdom, Ireland, Australia and New Zealand for speaker differentiation.

There were many shortcomings in the original Line 21 specification from a typographic standpoint, since, for example, it lacked many of the characters required for captioning in languages other than English. Since that time, the core Line 21 character set has been expanded to include quite a few more characters, handling most requirements for languages common in North and South America such as French, Spanish, and Portuguese, though those extended characters are not required in all decoders and are thus unreliable in everyday use. The problem has been almost eliminated with the EIA-708 standard for digital television, which boasts a far more comprehensive character set.

Captions are often edited to make them easier to read and to reduce the amount of text displayed onscreen. This editing can be very minor, with only a few occasional unimportant missed lines, to severe, where virtually every line spoken by the actors is condensed. The measure used to guide this editing is words per minute, commonly varying from 180 to 300, depending on the type of program. Offensive words are also captioned, but if the program is censored for TV broadcast, the broadcaster might not have arranged for the captioning to be edited or censored also. The "TV Guardian", a television set top box, is available to parents who wish to censor offensive language of programs–the video signal is fed into the box and if it detects an offensive word in the captioning, the audio signal is bleeped or muted for that period of time.

Caption channels

Telemundo bug touting CC1 and CC3 captions.

The Line 21 data stream can consist of data from several data channelsmultiplexed together. Field 1 has four data channels: two Captions (CC1, CC2)and two Text (T1, T2). Field 2 has five additional data channels: twoCaptions (CC3, CC4), two Text (T3, T4), and Extended Data Services (XDS). XDSdata structure is defined in CEA–608.

As CC1 and CC2 share bandwidth, if there is a lot of data in CC1, there will be little room for CC2 data. Similarly CC3 and CC4 share the second field of line 21. Since some early caption decoders supported only CC1 and CC2, captions in a second language were often placed in CC2. This led to bandwidth problems, however, and the current FCC recommendation is that bilingual programming should have the second caption language in CC3. Telemundo, for example, provides English subtitles for many of its Spanish programs in CC3.

DVDs

NTSC DVDs may carry closed captions in data packets of the MPEG-2 video streams inside of the Video-TS folder. Once played out of the analog outputs of a set top DVD player, the caption data is converted to the Line 21 format.[8] They are sent to the TV by the player and can be displayed with a TV's built-in decoder or a set-top decoder as usual. When viewed on a personal computer caption data can be viewed by software that can read and decode the caption data packets in the MPEG-2 streams of the DVD-Video disc. Both Windows Media Player and Apple's DVD Player have the ability to read and decode caption data.

In addition to Line 21 closed captions, video DVDs may also carry subtitles as a bitmap overlay which can be turned on and off via a set top DVD player or DVD player software, just like captions. This type of captioning is usually carried in a subtitle track labeled either "English for the hearing impaired" or, more recently, "SDH" (Subtitled for the Deaf and Hard of hearing).[9] Many popular Hollywood DVD-Video's can carry both subtitles and closed captions (see Stepmom DVD by Columbia Pictures). On some DVDs, the Line 21 captions may contain the same text as the subtitles; on others, only the Line 21 captions include the additional non-speech information needed for deaf and hard of hearing viewers. European Region 2 DVDs do not carry Line 21 captions, and instead list the subtitle languages available—English is often listed twice, one as the representation of the dialogue alone, and a second subtitle set which carries additional "sound" information for the deaf and hard of hearing audience. (Many deaf/HOH subtitle files on DVDs are reworkings of original teletext subtitle files.)

HD DVD and Blu-ray disc media cannot carry Line 21 closed captioning due to the design of High-Definition Multimedia Interface (HDMI) specifications that were designed to replace older analog and digital standards, such as VGA, S-Video, and DVI. Both Blu-ray disc and HD DVD can use either DVD bitmap subtitles (with extended definition) or 'advanced subtitles' to carry SDH type subtitling, the latter being an XML based textual format which includes font, styling and positioning information as well as a unicode representation of the text. Advanced subtitling can also include additional media accessibility features such as "descriptive audio".

Movies from Universal Studios don't have Closed captioning, albeit they use subtitles.

Movies

There are several competing technologies used to provide captioning for movies in theaters. Just as with television captioning, they fall into two broad categories: open and closed. The definition of "closed" captioning in this context is a bit different from television, as it refers to any technology that allows some of the viewers to use captions while others in the same theater at the same time do not see captions.

Open captioning in a theater can be accomplished through burned-in captions, projected bitmaps, or (rarely) a display located above or below the movie screen. Typically, this display is a large LED sign.

Probably the best-known closed captioning option for theaters is the Rear Window Captioning System from the National Center for Accessible Media. Upon entering the theater, viewers requiring captions are given a panel of flat translucent glass or plastic on a gooseneck stalk, which can be mounted in front of the viewer's seat. In the back of the theater is an LED display that shows the captions in mirror image. The panel reflects the captions for the viewer, but is nearly invisible to surrounding patrons. The panel can be positioned so that the viewer watches the movie through the panel and captions appear either on or near the movie image. A company called Cinematic Captioning Systems has a similar reflective system called Bounce Back. A major problem for distributors has been that these systems are each proprietary, and require separate distributions to the theatre to enable them to work. Proprietary systems also incur royalties.

Digital Theater Systems, the company behind the DTS surround sound standard have a digital captioning device called the DTS-CSS or Cinema Subtitling System. It is a combination of a laser projector which places the captioning (words, sounds) anywhere on the screen and a thin playback device with a CD that holds many languages.

Other closed captioning technologies for movies include hand-held displays similar to a PDA (personal digital assistant); eyeglasses fitted with a prism over one lens; and projected bitmap captions. The PDA and eyeglass systems use a wireless transmitter to send the captions to the display device.

Special effort has been made to build accessibility features into digital cinema. Through SMPTE, standards now exist that dictate how open and closed captions, as well as hearing-impaired and visually-impaired narrative audio, can be packaged with content. An effort is also underway in SMPTE (mid-2009) to standardize the communication of closed caption content between the digital cinema server and 3rd party closed caption systems. Thanks to this work, competitive closed caption systems can be produced for digital cinema based on uniform, royalty-free distributions. In addition, innovative 3rd party closed caption systems can emerge that require no modification to standards-compliant digital cinema servers.[10]

Video games

Closed captioning of video games is becoming more common. One of the first video games to feature true closed captioning was Zork Grand Inquisitor in 1997. Many games since then have at least offered subtitles for spoken dialog during cut scenes, and many include significant in-game dialog and sound effects in the captions as well; for example, with subtitles turned on in the Metal Gear Solid series of stealth games, not only are subtitles available during cut scenes, but any dialog spoken during real-time gameplay will be captioned as well, allowing players who can't hear the dialog to know what enemy guards are saying and when the main character has been detected. Also, in the video game Half-Life 2, when closed captions are activated, dialog and nearly all sound effects either made by the player or from other sources (e.g. gunfire, explosions) will be captioned.

Video games don't offer Line 21 captioning, decoded and displayed by the television itself but rather a built-in subtitle display, more akin to that of a DVD. The game systems themselves have no role in the captioning either: each game must have its subtitle display programmed individually.

Reid Kimball, a game designer who is hard of hearing, is attempting to educate game developers about closed captioning for games. Reid started the Games[CC] group to close caption games and serve as a research and development team to aid the industry. Kimball designed the Dynamic Closed Captioning system,[citation needed] writes articles, and speaks at developer conferences. Games[CC]'s first closed captioning project called Doom3[CC] was nominated for an award as Best Doom3 Mod of the Year for IGDA's Choice Awards 2006 show.

Online Video Streaming

Internet Video Streaming Service, YouTube, offers captioning services in videos. The author of the video can upload a SubViewer (*.SUB), SubRip (*.SRT) or *.SBV file. [11]YouTube is currently testing an Automatic Caption Feature, which will transcribe audio and not require the author to add a captioning file. This feature is only available on certain videos, especially those created by Google and YouTube. [12]

Flash video also supports captions via the Timed Text DFXP .XML format. The latest Flash authoring software adds free player skins and caption components that enable viewers to turn captions on/off during playback from a webpage. Previous versions of Flash relied on the Captionate 3rd party component and skin to caption Flash video. Custom Flash players designed in Flex can be tailored to support the Timed Text DFXP .XML format, Captionate .XML, or SAMI file (see Hulu captioning).

Windows Media Video can support closed captions for both video on demand streaming or live streaming scenarios. Typically Windows Media captions support the SAMI file format but can also carry embedded closed caption data.

QuickTime video supports true 608 caption data via QuickTime's proprietary Closed Caption Track. These captions can be turned on and off and appear in the same style as TV closed captions with all the standard formatting (pop-on, roll-up, paint-on) and can be positioned and split anywhere on the video screen. QuickTime Closed Caption tracks can be viewed in Mac or Windows versions of QuickTime player, iTunes, QuickTime web browser plug-in and iPod Nano, iPod Classic, iPod Touch, and iPhone.

Theatre

Live plays can be open captioned by a captioner who displays lines from the script, including non-speech elements, on a large display screen near the stage.[13]

Telephones

A captioned telephone (also called captioned relay or Cap-Tel) is a telephone that displays real-time captions of the current conversation. The captions are typically displayed on a screen embedded into the telephone base.

Media monitoring services

In the United States especially, most media monitoring services capture and index closed captioning text from news and public affairs programs, allowing them to search the text for client references.
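
As a simple illustration of the idea, the sketch below (in Python, with invented program names and caption text) indexes captured caption lines by program and timestamp and then searches them for a client's name; real monitoring services work at far larger scale, but the principle is the same.

    # Hypothetical sketch of caption-based media monitoring: captured caption
    # lines are indexed by program and timestamp, then searched for client names.
    # The program names and caption text below are invented for illustration.

    captions = [
        ("Evening News", "18:02:14", "ACME CORP ANNOUNCED RECORD EARNINGS TODAY."),
        ("Evening News", "18:07:41", "IN SPORTS, THE HOME TEAM WON AGAIN."),
        ("Morning Show", "07:15:03", "A SPOKESPERSON FOR ACME CORP DECLINED TO COMMENT."),
    ]

    def find_mentions(index, client):
        """Return (program, timestamp, line) tuples whose text mentions the client."""
        needle = client.upper()
        return [(prog, ts, line) for prog, ts, line in index if needle in line.upper()]

    for program, timestamp, line in find_mentions(captions, "Acme Corp"):
        print(program, timestamp, line)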

The use of closed captioning for television news monitoring was pioneered in 1993 by Tulsa-based NewsTrak of Oklahoma (later known as Broadcast News of Mid-America, acquired by video news release pioneer Medialink Worldwide Incorporated in 1997). US patent 7,009,657 describes a "method and system for the automatic collection and conditioning of closed caption text originating from multiple geographic locations" as used by news monitoring services.

HDTV interoperability issues

Americas

The US ATSC HDTV system originally specified two different kinds of closed captioning datastream standards: the original analog-compatible format (CEA-608, carried on Line 21) and a more modern version embedded in the MPEG-2 stream, the CEA-708 standard.[7]

The US FCC mandates that broadcasters deliver (and generate, if necessary) both datastream formats.[7] The Canadian CRTC has not mandated that broadcasters deliver both datastream formats or broadcast exclusively in one format.

Incompatibility issues with HDTV

Many viewers find that when they switch to an HDTV they are unable to view closed caption (CC) information, even though the broadcaster is sending it and the TV is able to display it. Originally, CC information was included in the picture ("line 21"), but there is no equivalent capability in the HDTV 720p/1080i interconnects between the display and a "source". A "source", in this case, can be a DVD player or an HD tuner (a cable box is an HD tuner). When CC information is encoded in the MPEG-2 data stream, only the device that decodes the MPEG-2 data (the source) has access to the closed caption information; there is no standard for transmitting the CC information to an HD display separately. Thus, if there is CC information, the source device needs to overlay the CC information on the picture before transmitting it to the display over the interconnect. Many source devices do not have the ability to overlay CC information, or controlling the CC overlay can be extremely complicated. For example, the Motorola DCT-5xxx and -6xxx cable set-top boxes can decode CC information located in the MPEG stream and overlay it on the picture, but turning CC on and off requires turning off the unit and going into a special setup menu (it is not on the standard configuration menu and it cannot be controlled using the remote). Historically, DVD players and cable-box tuners did not need to do this overlaying; they simply passed the information on to the TV, and they are not mandated to perform it. Many modern HDTVs can be directly connected to cables, but then they often cannot receive scrambled channels that the user is paying for. Thus, the lack of a standard way of sending CC information between components, along with the lack of a mandate to add this information to the picture, results in CC being unavailable to many hard-of-hearing and deaf users (see "HDMI not allowing Closed Captioning?").

Europe

In Europe, teletext systems are the source of closed captioning signals, so when teletext is embedded into DVB-T or DVB-S broadcasts the closed captioning signal is included.[citation needed] However, for DVB-T and DVB-S, it is not necessary for a teletext page signal to also be present (ITV1, for example, does not carry analogue teletext signals on Sky Digital, but does carry the embedded version, accessible from the "Services" menu of the receiver, or more recently by turning them off and on from a mini menu accessible from the "help" button).

DTV standard captioning improvements

The CEA-708 specification provides for dramatically improved captioning:

  • An enhanced character set with more accented letters and non-Latin letters, and more special symbols
  • Viewer-adjustable text size (called the "caption volume control" in the specification), allowing individuals to adjust their TVs to display small, normal, or large captions
  • More text and background colors, including both transparent and translucent backgrounds to optionally replace the solid black block
  • More text styles, including edged or drop-shadowed text rather than letters on a solid background
  • More text fonts, including monospaced and proportionally spaced, serif and sans-serif, and some playful cursive fonts
  • Higher bandwidth, to allow more data per minute of video
  • More language channels, to allow the encoding of more independent caption streams

As of 2009, however, most closed captioning for DTV environments is done using tools designed for analog captioning (working to the CEA-608 NTSC spec rather than the CEA-708 DTV spec). The captions are then run through transcoders made by companies like EEG Enterprises or Evertz, which convert the analog Line 21 caption format to the digital format. This means that none of the CEA-708 features are used unless they were also contained in CEA-608.

Non-linear video editing systems and closed captioning

In mid-2009, Apple released Final Cut Pro version 7, which added support for inserting closed caption data into SD and HD tape masters via FireWire and compatible video capture cards.[14] Until that time, it was not possible for video editors to insert caption data in both CEA-608 and CEA-708 formats into their tape masters. The typical workflow was to first print the SD or HD video to tape and send it to a professional closed caption service company that had a stand-alone closed caption hardware encoder.

This newer closed captioning workflow, known as eCaptioning, involves making a proxy video from the non-linear system and importing it into third-party non-linear closed captioning software. Once the closed captioning project is complete, the software exports a closed caption file compatible with the non-linear editing system. In the case of Final Cut Pro 7, three different file formats are accepted: an .SCC file (Scenarist Closed Caption file) for standard-definition video, a QuickTime 608 closed caption track (a specially coded 608 track in the .mov file wrapper) for standard-definition video, and a QuickTime 708 closed caption track (a specially coded 708 track in the .mov file wrapper) for high-definition video output.
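
To give a sense of what a Scenarist Closed Caption (.SCC) file contains, the hedged sketch below writes a tiny, hypothetical example in Python: an .SCC file begins with a "Scenarist_SCC V1.0" header and then pairs SMPTE timecodes with space-separated hexadecimal CEA-608 words, each byte carrying odd parity in its high bit. The caption text and the choice of control codes are illustrative only.

    # Hedged sketch: write a tiny Scenarist Closed Caption (.SCC) style file.
    # An .SCC file starts with a "Scenarist_SCC V1.0" header, then lines of
    # SMPTE timecode followed by hex CEA-608 words. Each 608 byte carries
    # odd parity in its most significant bit, applied here by add_parity().
    # The caption text and control codes below are illustrative only.

    def add_parity(byte):
        """Set the high bit so the 7-bit CEA-608 byte has odd parity."""
        ones = bin(byte & 0x7F).count("1")
        return (byte & 0x7F) | (0x80 if ones % 2 == 0 else 0x00)

    def text_to_words(text):
        """Pack caption text into two-byte hex words with odd parity."""
        data = text.encode("ascii")
        if len(data) % 2:
            data += b"\x00"  # pad to an even number of bytes
        return ["%02x%02x" % (add_parity(data[i]), add_parity(data[i + 1]))
                for i in range(0, len(data), 2)]

    with open("example.scc", "w") as f:
        f.write("Scenarist_SCC V1.0\n\n")
        # 9420 = resume caption loading, 942f = display the loaded caption
        words = ["9420"] + text_to_words("HELLO.") + ["942f"]
        f.write("00:00:01:00\t" + " ".join(words) + "\n")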

Alternatively, Matrox video systems devised another mechanism for inserting closed caption data by allowing the video editor to include CEA-608 and CEA-708 data in a discrete audio channel on the video editing timeline. This allows real-time preview of the captions while editing and is compatible with Final Cut Pro 6 and 7.[15]

Other non-linear editing systems indirectly support closed captioning only in standard-definition Line 21. Video files on the editing timeline must be composited with a Line 21 VBI graphic layer, known in the industry as a "blackmovie", that carries the closed caption data.[16] Alternatively, video editors working with DV25 and DV50 FireWire workflows must encode their DV .avi or .mov file with VAUX data, which includes CEA-608 closed caption data.

History

Open captioning

Regular open captioned broadcasts began on PBS’s “The French Chef” in 1972.[17] WGBH began open captioning of ZOOM, ABC World News Tonight, and Once upon a Classic shortly thereafter.

Technical development of closed captioning

Closed captioning was first demonstrated at the First National Conference on Television for the Hearing Impaired in Nashville, Tennessee, in 1971.[17] A second demonstration of closed captioning was held at Gallaudet College (now Gallaudet University) on February 15, 1972, where ABC and the National Bureau of Standards demonstrated closed captions embedded within a normal broadcast of Mod Squad.

The closed captioning system was successfully encoded and broadcast in 1973 with the cooperation of PBS station WETA.[17] As a result of these tests, the FCC in 1976 set aside line 21 for the transmission of closed captions. PBS engineers then developed the caption editing consoles that would be used to caption prerecorded programs.

Real-time captioning, a process for captioning live broadcasts, was developed in 1982.[17] In real-time captioning, court reporters trained to type at speeds of over 225 words per minute give viewers instantaneous access to live news, sports and entertainment. As a result, the viewer sees the captions within two to three seconds of the words being spoken.

Full-scale closed captioning

The National Captioning Institute was created in 1979 in order to get the cooperation of the commercial television networks.[3]

Regularly scheduled closed captioning first appeared on American television on March 16, 1980. Sears had developed and sold the Telecaption adapter, a decoding unit that could be connected to a standard television set. The first programs seen with captioning were the ABC Sunday Night Movie, Disney's Wonderful World on NBC, and Masterpiece Theatre on PBS. The captioned Disney feature, showing at 7:00 pm EST, was the film Son of Flubber, while the movie at 9:00 pm EST was Semi-Tough.[18]

Legislative development in the U.S.

On January 23, 1990, the Television Decoder Circuitry Act of 1990 was passed by the US Congress.[17] This Act gave the Federal Communications Commission (FCC) power to enact rules on the implementation of closed captioning. It required all analog television receivers with screens of 13 inches or larger, whether sold or manufactured, to have the ability to display closed captioning by July 1, 1993.[19]

Also in 1990, The Americans with Disabilities Act (ADA) was passed to ensure equal opportunity for persons with disabilities.[4] The ADA prohibits discrimination against persons with disabilities in public accommodations or commercial facilities. Title III of the ADA requires that public facilities, such as hospitals, bars, shopping centers and museums (but not movie theaters), provide access to verbal information on televisions, films or slide shows.

The Telecommunications Act of 1996 expanded on the Decoder Circuitry Act to place the same requirements on digital television receivers by July 1, 2002.[20] All TV programming distributors in the U.S. must provide closed captions for Spanish-language video programming by January 1, 2010.[21]

Legislative development in Australia

The government of Australia provided seed funding in 1981 for the establishment of the Australian Caption Centre (ACC) and the purchase of equipment. Captioning by the ACC commenced in 1982 and a further grant from the Australian government enabled the ACC to achieve and maintain financial self-sufficiency. The ACC, now known as Media Access Australia, sold its commercial captioning division to Red Bee Media in December 2005. Red Bee Media continues to provide captioning services to Australia today.[22][23][24]

Logo

The current and most familiar logo for closed captioning consists of two Cs (for "closed captioned") inside a television screen. It was created by Jack Foley while he was a senior graphic designer at WGBH.[citation needed] The other logo, trademarked by the National Captioning Institute, is a speech balloon in the shape of a TV; two versions exist, one with a tail on the left and the other with a tail on the right.[25]

See also

Notes

  1. ^ [1] Ofcom, UK: Television access services
  2. ^ Alex Varley, Chief Executive, Media Access Australia (June 2008). "Submission to DBCDE’s investigation into Access to Electronic Media for the Hearing and Vision Impaired" (PDF). Australia: Media Access Australia. pp. 16. http://www.dbcde.gov.au/__data/assets/pdf_file/0020/84710/Media_Access_Australia_-_Response_to_Media_Access_Review_2008.pdf. Retrieved 2009-01-29. "The use of captions and audio description is not limited to deaf and blind people. Captions can be used in situations of “temporary” deafness, such as watching televisions in public areas where the sound has been turned down (commonplace in America and starting to appear more in Australia)." 
  3. ^ Mayor's Disability Council (May 16, 2008). "Resolution in Support of Board of Supervisors’ Ordinance Requiring Activation of Closed Captioning on Televisions in Public Areas". City and County of San Francisco. http://www.sfgov.org/site/sfmdc_page.asp?id=86619. Retrieved 2009-01-29. "that television receivers located in any part of a facility open to the general public have closed captioning activated at all times when the facility is open and the television receiver is in use." 
  4. ^ "Settlement Agreement Between The United States And Norwegian American Hospital Under The Americans With Disabilities Act" (April 18, 2005). U.S. Department of Justice. http://www.ada.gov/norwegian.htm. Retrieved 2009-01-29. "...will have closed captioning operating in all public areas where there are televisions with closed captioning; televisions in public areas without built-in closed captioning capability will be replaced with televisions that have such capability" 
  5. ^ http://teletext.mb21.co.uk/timeline/early-ceefax-subtitling.shtml
  6. ^ http://www.bbc.co.uk/rd/pubs/whp/whp-pdf-files/WHP065.pdf
  7. ^ a b c [2] - ATSC Closed Captioning FAQ (cached copy)
  8. ^ http://www.dvddemystified.com/dvdfaq.html#3.4
  9. ^ http://www.dvddemystified.com/dvdfaq.html#1.45
  10. ^ http://mkpe.com/publications/d-cinema/misc/enabling_the_disabled.php
  11. ^ http://www.google.com/support/youtube/bin/answer.py?hl=en&answer=100077
  12. ^ http://www.youtube.com/watch?v=kTvHIDKLFqc
  13. ^ Stagetext.org
  14. ^ http://www.apple.com/finalcutstudio/whats-new.html
  15. ^ http://www.cpcweb.com/mxo2/
  16. ^ http://www.cpcweb.com/nle/
  17. ^ a b c d e "A Brief History of Captioned Television". http://www.ncicap.org/caphist.asp. 
  18. ^ "Today on TV", Chicago Daily Herald, March 11, 1980, Section 2-5
  19. ^ "Television Decoder Circuitry Act of 1990". http://www.access-board.gov/sec508/guide/1194.24-decoderact.htm. 
  20. ^ "FCC Consumer Facts on Closed Captioning". http://www.fcc.gov/cgb/consumerfacts/closedcaption.html. 
  21. ^ "Part 79 – Closed Captioning of Video Programming". http://www.fcc.gov/cgb/dro/captioning_regs.html. 
  22. ^ Alex Varley, Chief Executive, Media Access Australia (June 2008). "Submission to DBCDE’s investigation into Access to Electronic Media for the Hearing and Vision Impaired" (PDF). Australia: Media Access Australia. pp. 12,18,43. http://www.dbcde.gov.au/__data/assets/pdf_file/0020/84710/Media_Access_Australia_-_Response_to_Media_Access_Review_2008.pdf. Retrieved 2009-02-07. 
  23. ^ "About Media Access Australia". Australia: Media Access Australia. http://www.mediaaccess.org.au/index.php?option=com_content&view=article&id=359&Itemid=100. Retrieved 2009-02-07. 
  24. ^ "About Red Bee Media Australia". Australia: Red Bee Media Australia Pty Limited. http://www.redbeemedia.com.au/aboutus-australia.html. Retrieved 2009-02-07. 
  25. ^ http://www.ncicap.org/ncilogo.asp National Captioning Institute Logos

References

  • Realtime Captioning... The VITAC Way by Amy Bowlen and Kathy DiLorenzo (no ISBN)
  • Closed Captioning: Subtitling, Stenography, and the Digital Convergence of Text with Television by Gregory J. Downey (ISBN 9780801887109)
  • The Closed Captioning Handbook by Gary D. Robson (ISBN 0-240-80561-5)
  • Alternative Realtime Careers: A Guide to Closed Captioning and CART for Court Reporters by Gary D. Robson (ISBN 1-881859-51-7)
  • A New Civil Right: Telecommunications Equality for Deaf and Hard of Hearing Americans by Karen Peltz Strauss (ISBN 9781563682919)
  • Enabling The Disabled by Michael Karagosian (no ISBN)

External links
