Videoconferencing is the conduct of a videoconference (also known as a video conference or videoteleconference) by a set of telecommunication technologies which allow two or more locations to communicate by simultaneous two-way video and audio transmissions. It has also been called 'visual collaboration' and is a type of groupware.
Videoconferencing differs from videophone calls in that it's designed to serve a conference or multiple locations rather than individuals. It is an intermediate form of videotelephony, first deployed commercially in the United States by AT&T during the early 1970s as part of their development of Picturephone technology.
With the introduction of relatively low cost, high capacity broadband telecommunication services in the late 1990s, coupled with powerful computing processors and video compression techniques, videoconferencing usage has made significant inroads in business, education, medicine and media. Like all long distance communications technologies (such as phone and Internet), by reducing the need to travel to bring people together the technology also contributes to reductions in carbon emissions, thereby helping to reduce global warming.
Videoconferencing uses audio and video telecommunications to bring people at different sites together. This can be as simple as a conversation between people in private offices (point-to-point) or involve several (multipoint) sites in large rooms at multiple locations. Besides the audio and visual transmission of meeting activities, allied videoconferencing technologies can be used to share documents and display information on whiteboards.
Simple analog videophone communication could be established as early as the invention of the television. Such an antecedent usually consisted of two closed-circuit television systems connected via coax cable or radio. An example of that was the German Reich Postzentralamt (post office) video telephone network serving Berlin and several German cities via coaxial cables between 1936 and 1940.
During the first manned space flights, NASA used two radio-frequency (UHF or VHF) video links, one in each direction. TV channels routinely use this type of videotelephony when reporting from distant locations. The news media were to become regular users of mobile links to satellites using specially equipped trucks, and much later via special satellite videophones in a briefcase.
This technique was very expensive, though, and could not be used for applications such as telemedicine, distance education, and business meetings. Attempts at using normal telephony networks to transmit slow-scan video, such as the first systems developed by AT&T, first researched in the 1950s, failed mostly due to poor picture quality and the lack of efficient video compression techniques. The greater 1 MHz bandwidth and 6 Mbit/s bit rate of the Picturephone in the 1970s also did not achieve commercial success, mostly because of its high cost, but also because of a lack of network effect: with only a few hundred Picturephones in the world, users had extremely few contacts they could actually call, and interoperability with other videophone systems did not exist.
It was only in the 1980s that digital telephony transmission networks became possible, such as with ISDN networks, assuring a minimum bit rate (usually 128 kilobits/s) for compressed video and audio transmission. During this time, there was also research into other forms of digital video and audio communication. Many of these technologies, such as the Media space, are not as widely used today as videoconferencing but were still an important area of research. The first dedicated systems started to appear in the market as ISDN networks were expanding throughout the world. One of the first commercial videoconferencing systems sold to companies came from PictureTel Corp., which had an Initial Public Offering in November, 1984.
In 1984, Concept Communication in the United States replaced the then-100-pound, US$100,000 computers necessary for teleconferencing with a $12,000 circuit board that doubled the video frame rate from 15 to 30 frames per second and reduced the equipment in size to a card that fit into standard personal computers. The company's founder, William J. Tobin, also secured a patent for a codec for full-motion videoconferencing, first demonstrated at AT&T Bell Labs in 1986.
Videoconferencing systems throughout the 1990s rapidly evolved from very expensive proprietary equipment, software and network requirements to a standards based technology that is readily available to the general public at a reasonable cost.
Finally, in the 1990s, IP (Internet Protocol)-based videoconferencing became possible, and more efficient video compression technologies were developed, permitting desktop, or personal computer (PC)-based videoconferencing. In 1992 CU-SeeMe was developed at Cornell by Tim Dorcey et al. In 1995 the first public videoconference between North America and Africa took place, linking a technofair in San Francisco with a techno-rave and cyberdeli in Cape Town. At the opening ceremony of the 1998 Winter Olympics in Nagano, Japan, Seiji Ozawa conducted the Ode to Joy from Beethoven's Ninth Symphony simultaneously across five continents in near-real time.
While videoconferencing technology was initially used primarily within internal corporate communication networks, one of the first community service usages of the technology started in 1992 through a unique partnership with PictureTel and IBM Corporations which at the time were promoting a jointly developed desktop based videoconferencing product known as the PCS/1. Over the next 15 years, Project DIANE (Diversified Information and Assistance Network) grew to utilize a variety of videoconferencing platforms to create a multistate cooperative public service and distance education network consisting of several hundred schools, neighborhood centers, libraries, science museums, zoos and parks, public assistance centers, and other community oriented organizations.
In the 2000s, videotelephony was popularized via free Internet services such as Skype and iChat, web plugins and on-line telecommunication programs which promoted low cost, albeit low-quality, videoconferencing to virtually every location with an Internet connection.
In May 2005, the first high definition video conferencing systems, produced by LifeSize Communications, were displayed at the Interop trade show in Las Vegas, Nevada, able to provide 30 frames per second at a 1280 by 720 display resolution. Polycom introduced its first high definition video conferencing system to the market in 2006. High definition resolution has since become a standard feature, with most major suppliers in the videoconferencing market offering it.
Recent technological developments by Librestream have extended the capabilities of video conferencing systems beyond the boardroom for use with hand-held mobile devices that combine video, audio and on-screen drawing capabilities broadcasting in real time over secure networks, independent of location. Mobile collaboration systems allow multiple people in previously unreachable locations, such as workers on an off-shore oil rig, to view and discuss issues with colleagues thousands of miles away. Traditional video conferencing system manufacturers have begun providing mobile applications as well, such as AVer Information's VCLink app, which allows for live and still image streaming.
The core technology used in a videoconferencing system is digital compression of audio and video streams in real time. The hardware or software that performs compression is called a codec (coder/decoder). Compression rates of up to 1:500 can be achieved. The resulting digital stream of 1s and 0s is subdivided into labeled packets, which are then transmitted through a digital network of some kind (usually ISDN or IP). The use of audio modems in the transmission line allows for the use of POTS, the Plain Old Telephone System, in some low-speed applications, such as videotelephony, because they convert the digital pulses to/from analog waves in the audio spectrum range.
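The need for such aggressive compression is easy to see from the raw numbers. A quick back-of-the-envelope calculation (the resolution, color depth and frame rate below are illustrative assumptions, not figures from any particular codec):

```python
# Raw bit rate of an uncompressed video stream (illustrative figures).
width, height = 1280, 720      # pixels
bits_per_pixel = 24            # 8 bits each for R, G, B
fps = 30                       # frames per second

raw_bps = width * height * bits_per_pixel * fps
print(f"raw: {raw_bps / 1e6:.1f} Mbit/s")

# At the 1:500 upper-bound compression ratio mentioned above, the
# stream fits comfortably within a consumer broadband connection.
compressed_bps = raw_bps / 500
print(f"compressed: {compressed_bps / 1e6:.2f} Mbit/s")
```

Even at this modest resolution, the uncompressed stream runs to hundreds of megabits per second, far beyond the 128 kbit/s ISDN channels or early broadband links the technology had to work over.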
The other components required for a videoconferencing system include video input (a video camera or webcam), video output (a computer monitor, television or projector), audio input (microphones or another audio source), audio output (usually loudspeakers associated with the display device, or a telephone), a data transfer medium (analog or digital telephone network, LAN or Internet), and a computer that ties the other components together, performs the compression and decompression, and initiates and maintains the data linkage via the network.
There are basically two kinds of videoconferencing systems: dedicated systems, which package all required components into a single piece of equipment, and desktop systems, which add hardware or software to a normal personal computer to turn it into a videoconferencing device.
The components within a conferencing system can be divided into several layers: User Interface, Conference Control, Control (Signaling) Plane and Media Plane.
Videoconferencing user interfaces can be either graphical or voice-responsive. Graphical interfaces are typically encountered on a computer or television, while voice-responsive interfaces are common on the phone, where the user is prompted to select among choices by speaking or pressing a number. User interfaces for conferencing serve a number of purposes, including scheduling, setup, and making the call. Through the user interface, the administrator is able to control the other three layers of the system.
Conference Control performs resource allocation, management and routing. This layer along with the User Interface creates meetings (scheduled or unscheduled) or adds and removes participants from a conference.
Control (Signaling) Plane contains the stacks that signal different endpoints to create a call and/or a conference. Signals can be, but aren’t limited to, H.323 and Session Initiation Protocol (SIP) Protocols. These signals control incoming and outgoing connections as well as session parameters.
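For a concrete flavor of signaling-plane traffic, a SIP session begins with an INVITE request. A minimal sketch of assembling one (the addresses, tag and branch value are made-up illustrations, and a fully RFC 3261-compliant message needs more, such as a Contact header and an SDP body):

```python
def build_invite(caller: str, callee: str, call_id: str) -> str:
    """Assemble a minimal, illustrative SIP INVITE request."""
    lines = [
        f"INVITE sip:{callee} SIP/2.0",
        "Via: SIP/2.0/UDP host.example.com;branch=z9hG4bK776asdhds",
        "Max-Forwards: 70",
        f"From: <sip:{caller}>;tag=1928301774",
        f"To: <sip:{callee}>",
        f"Call-ID: {call_id}",
        "CSeq: 1 INVITE",
        "Content-Length: 0",
    ]
    # SIP, like HTTP, separates header lines with CRLF and ends the
    # header section with a blank line.
    return "\r\n".join(lines) + "\r\n\r\n"

msg = build_invite("alice@example.com", "bob@example.org", "a84b4c76e66710")
print(msg.splitlines()[0])  # INVITE sip:bob@example.org SIP/2.0
```

The callee's endpoint answers with provisional (180 Ringing) and final (200 OK) responses over the same plane before any media flows.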
The Media Plane controls the audio and video mixing and streaming. This layer manages the Real-time Transport Protocol (RTP), the User Datagram Protocol (UDP) and the Real-time Transport Control Protocol (RTCP). RTP over UDP normally carries information such as the payload type (which identifies the codec in use), the frame rate, the video size and many others. RTCP, on the other hand, acts as a quality-control protocol for detecting errors during streaming.
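The payload-type field mentioned above lives in the second byte of the fixed 12-byte RTP header defined in RFC 3550. A minimal sketch of extracting it, along with the other fixed fields, from a raw packet:

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the fixed 12-byte RTP header (RFC 3550) from a raw packet."""
    if len(packet) < 12:
        raise ValueError("packet shorter than RTP fixed header")
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,            # always 2 for RTP
        "marker": (b1 >> 7) & 0x1,
        "payload_type": b1 & 0x7F,     # identifies the codec in use
        "sequence": seq,               # for loss detection and reordering
        "timestamp": timestamp,        # media sampling instant
        "ssrc": ssrc,                  # identifies the stream's source
    }

# A synthetic header for payload type 96 (a dynamic type, often H.264).
pkt = struct.pack("!BBHII", 0x80, 96, 1234, 567890, 0xDEADBEEF)
print(parse_rtp_header(pkt)["payload_type"])  # 96
```

The receiver uses the sequence numbers and timestamps to detect loss and reorder packets, and reports the observed quality back to the sender via RTCP.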
Simultaneous videoconferencing among three or more remote points is possible by means of a Multipoint Control Unit (MCU). This is a bridge that interconnects calls from several sources (in a similar way to the audio conference call). All parties call the MCU unit, or the MCU unit can also call the parties which are going to participate, in sequence. There are MCU bridges for IP and ISDN-based videoconferencing. There are MCUs which are pure software, and others which are a combination of hardware and software. An MCU is characterised according to the number of simultaneous calls it can handle, its ability to conduct transposing of data rates and protocols, and features such as Continuous Presence, in which multiple parties can be seen on-screen at once. MCUs can be stand-alone hardware devices, or they can be embedded into dedicated videoconferencing units.
The MCU consists of two logical components:
The MC controls the conference while it is active on the signaling plane, which is where the system manages conference creation, endpoint signaling and in-conference controls. This component negotiates parameters with every endpoint in the network and controls conferencing resources. While the MC controls resources and signaling negotiations, the MP operates on the media plane and receives media from each endpoint. The MP generates output streams from each endpoint and redirects the information to the other endpoints in the conference.
Some systems are capable of multipoint conferencing with no MCU, stand-alone, embedded or otherwise. These use a standards-based H.323 technique known as "decentralized multipoint", where each station in a multipoint call exchanges video and audio directly with the other stations with no central "manager" or other bottleneck. The advantages of this technique are that the video and audio will generally be of higher quality because they don't have to be relayed through a central point. Also, users can make ad-hoc multipoint calls without any concern for the availability or control of an MCU. This added convenience and quality comes at the expense of some increased network bandwidth, because every station must transmit to every other station directly.
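The bandwidth trade-off can be made concrete: behind an MCU each station uploads a single stream, while in decentralized multipoint each station uploads one copy per peer. A back-of-the-envelope comparison (the per-stream bit rate is an assumed figure):

```python
def uplink_streams(n_stations: int, decentralized: bool) -> int:
    """Outgoing streams each station must transmit.

    With a central MCU every station sends one stream to the bridge;
    in decentralized ("full mesh") multipoint it sends a copy of its
    stream directly to each of the other n - 1 stations.
    """
    return (n_stations - 1) if decentralized else 1

STREAM_MBPS = 1.5  # assumed per-stream bit rate

for n in (3, 5, 10):
    mcu = uplink_streams(n, decentralized=False) * STREAM_MBPS
    mesh = uplink_streams(n, decentralized=True) * STREAM_MBPS
    print(f"{n} stations: MCU uplink {mcu} Mbit/s, mesh uplink {mesh} Mbit/s")
```

The mesh uplink cost grows linearly with the number of participants, which is why decentralized multipoint suits small ad-hoc calls better than large conferences.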
Videoconferencing systems use two common operating modes: Voice-Activated Switch (VAS) and Continuous Presence.
In VAS mode, the MCU switches which endpoint is seen by the other endpoints based on voice level. If there are four people in a conference, the only one shown is the site that is currently talking: the location with the loudest voice is seen by the other participants.
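The core of the selection logic can be sketched as follows (the endpoint names and levels are invented; real implementations also add hysteresis and hold timers so the picture does not flicker between speakers):

```python
def active_speaker(audio_levels: dict) -> str:
    """Voice-Activated Switching: pick the endpoint with the loudest audio.

    audio_levels maps endpoint name -> measured level (e.g. RMS amplitude).
    The selected endpoint's video is what the MCU forwards to everyone else.
    """
    return max(audio_levels, key=audio_levels.get)

levels = {"Berlin": 0.12, "Tokyo": 0.57, "Boston": 0.31, "Lagos": 0.08}
print(active_speaker(levels))  # Tokyo
```

In practice the MCU re-evaluates this selection continuously, only switching once a new speaker has been loudest for some minimum duration.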
Continuous Presence mode displays multiple participants at the same time. The MP in this mode takes the streams from the different endpoints and puts them all together into a single video image. In this mode, the MCU normally sends the same type of images to all participants. Typically these types of images are called “layouts” and can vary depending on the number of participants in a conference.
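Choosing a layout typically reduces to fitting N participant tiles into a grid. A simple sketch of one common approach, picking the smallest near-square grid (an illustration, not any particular vendor's algorithm):

```python
import math

def grid_layout(n_participants: int) -> tuple:
    """Smallest (cols, rows) grid that fits one tile per participant."""
    cols = math.ceil(math.sqrt(n_participants))
    rows = math.ceil(n_participants / cols)
    return cols, rows

for n in (1, 2, 4, 5, 9, 10):
    print(n, grid_layout(n))
# 1 -> (1, 1), 4 -> (2, 2), 5 -> (3, 2), 10 -> (4, 3)
```

The MP then scales each incoming stream down to its tile size and composites the tiles into the single outgoing image.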
A fundamental feature of professional videoconferencing systems is Acoustic Echo Cancellation (AEC). Echo occurs when sound from a system's loudspeaker output re-enters the same system's audio input and is transmitted back, delayed, to the site that originally produced it. AEC is an algorithm that detects when sounds or utterances from the audio output of the videoconferencing codec re-enter the audio input of the same system after some time delay. If unchecked, this can lead to problems such as participants hearing their own speech played back to them after a delay, and regenerative feedback (howling).
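The detection step can be illustrated by cross-correlating the microphone signal against the loudspeaker signal to estimate the echo delay (a toy sketch in pure Python; production AEC uses adaptive filters such as NLMS rather than brute-force correlation):

```python
def find_echo_delay(speaker: list, mic: list, max_lag: int) -> int:
    """Return the lag (in samples) at which the mic signal best
    correlates with the speaker signal, i.e. the estimated echo delay."""
    best_lag, best_corr = 0, float("-inf")
    for lag in range(max_lag + 1):
        # Correlate the speaker signal with the mic signal shifted by `lag`.
        corr = sum(s * m for s, m in zip(speaker, mic[lag:]))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

# Simulate: the mic picks up the speaker signal, attenuated to 40%
# and delayed by 7 samples (room reflection plus buffering).
speaker = [0, 1, 0, -1, 2, 0, 1, -2, 1, 0, 0, 0]
mic = [0.0] * 7 + [0.4 * s for s in speaker]
print(find_echo_delay(speaker, mic, max_lag=10))  # 7
```

Once the delay and attenuation are estimated, the canceller subtracts the predicted echo from the microphone signal before it is encoded and transmitted.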
Computer security experts have shown that poorly configured or inadequately supervised videoconferencing systems can permit easy 'virtual' entry by computer hackers and criminals into company premises and corporate boardrooms, via those organizations' own videoconferencing systems.
Some observers argue that three outstanding issues have prevented videoconferencing from becoming a standard form of communication, despite the ubiquity of videoconferencing-capable systems: lack of eye contact, appearance consciousness, and signal latency.
The issue of eye-contact may be solved with advancing technology, and presumably the issue of appearance consciousness will fade as people become accustomed to videoconferencing.
The International Telecommunication Union (ITU), formerly the Consultative Committee on International Telegraphy and Telephony (CCITT), has three umbrellas of standards for videoconferencing: H.320 for the public switched telephone network and ISDN, H.323 for packet-switched (IP) networks, and H.324 for low-bit-rate transmission over POTS.
The Unified Communications Interoperability Forum (UCIF), a non-profit alliance between communications vendors, launched in May 2010. The organization's vision is to maximize the interoperability of UC based on existing standards. Founding members of UCIF include HP, Microsoft, Polycom, Logitech/LifeSize Communications and Juniper Networks.
High speed Internet connectivity has become more widely available at a reasonable cost and the cost of video capture and display technology has decreased. Consequently, personal videoconferencing systems based on a webcam, personal computer system, software compression and broadband Internet connectivity have become affordable to the general public. Also, the hardware used for this technology has continued to improve in quality, and prices have dropped dramatically. The availability of freeware (often as part of chat programs) has made software based videoconferencing accessible to many.
For over a century, futurists have envisioned a future in which telephone conversations would take place as actual face-to-face encounters, with video as well as audio. Sometimes it is simply not possible or practical to hold face-to-face meetings with two or more people. Sometimes a telephone conversation or conference call is adequate; other times, an e-mail exchange is. However, videoconferencing adds another possible alternative, and can be considered when:
Deaf, hard-of-hearing and mute individuals have a particular interest in the development of affordable high-quality videoconferencing as a means of communicating with each other in sign language. Unlike Video Relay Service, which is intended to support communication between a caller using sign language and another party using spoken language, videoconferencing can be used directly between two deaf signers.
Mass adoption and use of videoconferencing is still relatively low, with the following often claimed as causes:
These are some of the reasons many systems are often used for internal corporate use only, as they are less likely to result in lost sales. One alternative to companies lacking dedicated facilities is the rental of videoconferencing-equipped meeting rooms in cities around the world. Clients can book rooms and turn up for the meeting, with all technical aspects being prearranged and support being readily available if needed.
In the United States, videoconferencing has allowed testimony to be used for individuals who are unable or prefer not to attend physical legal settings, or who would be subjected to severe psychological stress in doing so. However, the use of testimony by foreign or otherwise unavailable witnesses via video transmission is controversial, as it may violate the Confrontation Clause of the Sixth Amendment of the U.S. Constitution.
In Hall County, Georgia, videoconferencing systems are used for initial court appearances. The systems link jails with court rooms, reducing the expenses and security risks of transporting prisoners to the courtroom.
The U.S. Social Security Administration (SSA), which oversees the world's largest administrative judicial system under its Office of Disability Adjudication and Review (ODAR), has made extensive use of videoconferencing to conduct hearings at remote locations. In Fiscal Year (FY) 2009, the SSA conducted 86,320 videoconferenced hearings, a 55% increase over FY 2008. In August 2010, the SSA opened its fifth and largest videoconferencing-only National Hearing Center (NHC), in St. Louis, Missouri, continuing its effort to use video hearings to clear its substantial hearing backlog. Since 2007, the SSA has also established NHCs in Albuquerque, New Mexico; Baltimore, Maryland; Falls Church, Virginia; and Chicago, Illinois.
Videoconferencing provides students with the opportunity to learn by participating in two-way communication forums. Furthermore, teachers and lecturers worldwide can be brought to remote or otherwise isolated educational facilities. Students from diverse communities and backgrounds can come together to learn about one another, although language barriers will continue to persist. Such students are able to explore, communicate, analyze and share information and ideas with one another. Through videoconferencing students can visit other parts of the world to speak with their peers, and visit museums and educational facilities. Such virtual field trips can provide enriched learning opportunities to students, especially those in geographically isolated locations, and to the economically disadvantaged. Small schools can use these technologies to pool resources and provide courses, such as in foreign languages, which could not otherwise be offered.
A few examples of benefits that videoconferencing can provide in campus environments include:
Videoconferencing is a highly useful technology for real-time telemedicine and telenursing applications, such as diagnosis, consulting and the transmission of medical images. With videoconferencing, patients may contact nurses and physicians in emergency or routine situations, and physicians and other paramedical professionals can discuss cases across large distances. Rural areas can use this technology for diagnostic purposes, thus saving lives and making more efficient use of health care money. For example, a rural medical center in Ohio, United States, used videoconferencing to successfully cut the number of transfers of sick infants to a hospital 70 miles (110 km) away. This had previously cost nearly $10,000 per transfer.
Special peripherals such as microscopes fitted with digital cameras, videoendoscopes, medical ultrasound imaging devices, otoscopes, etc., can be used in conjunction with videoconferencing equipment to transmit data about a patient. Recent developments in mobile collaboration on hand-held mobile devices have also extended video-conferencing capabilities to locations previously unreachable, such as a remote community, long-term care facility, or a patient's home.
Videoconferencing can enable individuals in distant locations to participate in meetings on short notice, with time and money savings. Technology such as VoIP can be used in conjunction with desktop videoconferencing to enable low-cost face-to-face business meetings without leaving the desk, especially for businesses with widespread offices. The technology is also used for telecommuting, in which employees work from home. One research report based on a sampling of 1,800 corporate employees showed that, as of June 2010, 54% of the respondents with access to video conferencing used it “all of the time” or “frequently”.
Videoconferencing is also currently being introduced on online networking websites, in order to help businesses form profitable relationships quickly and efficiently without leaving their place of work. This has been leveraged by banks to connect busy banking professionals with customers in various locations using video banking technology.
Videoconferencing on hand-held mobile devices (mobile collaboration technology) is being used in industries such as manufacturing, energy, healthcare, insurance, government and public safety. Live, visual interaction removes traditional restrictions of distance and time, often in locations previously unreachable, such as a manufacturing plant floor a continent away.
Although videoconferencing has frequently proven its value, research has shown that some non-managerial employees prefer not to use it due to several factors, including anxiety. Some such anxieties can be avoided if managers use the technology as part of the normal course of business.
Researchers also find that attendees of business and medical videoconferences must work harder to interpret information delivered during a conference than they would if they attended face-to-face. They recommend that those coordinating videoconferences make adjustments to their conferencing procedures and equipment.
The concept of press videoconferencing was developed in October 2007 by the PanAfrican Press Association (APPA), a Paris, France-based non-governmental organization, to allow African journalists to participate in international press conferences on developmental and good-governance issues.
Press videoconferencing permits international press conferences via videoconferencing over the Internet. Journalists can participate in an international press conference from any location, without leaving their offices or countries. They need only be seated at a computer connected to the Internet in order to put their questions to the speaker.
In 2004, the International Monetary Fund introduced the Online Media Briefing Center, a password-protected site available only to professional journalists. The site enables the IMF to present press briefings globally and facilitates direct questions to briefers from the press. The site has been copied by other international organizations since its inception. More than 4,000 journalists worldwide are currently registered with the IMF.
One of the first demonstrations of the ability of telecommunications to help sign language users communicate with each other occurred when AT&T's videophone (trademarked as the "Picturephone") was introduced to the public at the 1964 New York World's Fair: two deaf users were able to communicate freely with each other between the fair and another city. Various universities and other organizations, including British Telecom's Martlesham facility, have also conducted extensive research on signing via videotelephony. The use of sign language via videotelephony was hampered for many years due to the difficulty of its use over slow analogue copper phone lines, coupled with the high cost of better quality ISDN (data) phone lines. Those factors largely disappeared with the introduction of more efficient video codecs and the advent of lower-cost high-speed ISDN data and IP (Internet) services in the 1990s.
Significant improvements in video call quality of service for the deaf occurred in the United States in 2003 when Sorenson Media Inc. (formerly Sorenson Vision Inc.), a video compression software coding company, developed its VP-100 model stand-alone videophone specifically for the deaf community. It was designed to output its video to the user's television in order to lower the cost of acquisition, and to offer remote control and a powerful video compression codec for unequaled video quality and ease of use with video relay services. Favourable reviews quickly led to its popular usage at educational facilities for the deaf, and from there to the greater deaf community.
Coupled with similar high-quality videophones introduced by other electronics manufacturers, the availability of high speed Internet, and sponsored video relay services authorized by the U.S. Federal Communications Commission in 2002, VRS services for the deaf underwent rapid growth in that country.
Using such video equipment in the present day, the deaf, hard-of-hearing and speech-impaired can communicate between themselves and with hearing individuals using sign language. The United States and several other countries compensate companies to provide "Video Relay Services" (VRS). Telecommunication equipment can be used to talk to others via a sign language interpreter, who uses a conventional telephone at the same time to communicate with the deaf person's party. Video equipment is also used to do on-site sign language translation via Video Remote Interpreting (VRI). The relative low cost and widespread availability of 3G mobile phone technology with video calling capabilities have given deaf and speech-impaired users a greater ability to communicate with the same ease as others. Some wireless operators have even started free sign language gateways.
Sign language interpretation services via VRS or VRI are useful in the present day where one of the parties is deaf, hard-of-hearing or speech-impaired (mute). In such cases the interpretation flow is normally within the same principal language, such as French Sign Language (LSF) to spoken French, Spanish Sign Language (LSE) to spoken Spanish, British Sign Language (BSL) to spoken English, and American Sign Language (ASL) also to spoken English (since BSL and ASL are completely distinct from each other), and so on.
Multilingual sign language interpreters, who can also translate across principal languages (such as to and from SSL, to and from spoken English), are also available, albeit less frequently. Such activities involve considerable effort on the part of the translator, since sign languages are distinct natural languages with their own construction, semantics and syntax, different from the aural version of the same principal language.
With video interpreting, sign language interpreters work remotely with live video and audio feeds, so that the interpreter can see the deaf or mute party, and converse with the hearing party, and vice versa. Much like telephone interpreting, video interpreting can be used for situations in which no on-site interpreters are available. However, video interpreting cannot be used for situations in which all parties are speaking via telephone alone. VRS and VRI interpretation requires all parties to have the necessary equipment. Some advanced equipment enables interpreters to control the video camera remotely, in order to zoom in and out or to point the camera toward the party that is signing.
Videophone calls (also: videocalls or video chat) differ from videoconferencing in that they are expected to serve individuals, not groups. However, that distinction has become increasingly blurred with technology improvements such as increased bandwidth and sophisticated software clients that allow for multiple parties on a call. In general everyday usage, the term videoconferencing is now frequently used instead of videocall for point-to-point calls between two units. Both videophone calls and videoconferencing are also now commonly referred to as a video link.
Webcams are popular, relatively low cost devices which can provide live video and audio streams via personal computers, and can be used with many software clients for both video calls and videoconferencing.
A videoconference system is generally higher in cost than a videophone and provides greater capabilities. A videoconference (also known as a videoteleconference) allows two or more locations to communicate via live, simultaneous two-way video and audio transmissions. This is often accomplished by the use of a multipoint control unit (a centralized distribution and call management system) or by a similar non-centralized multipoint capability embedded in each videoconferencing unit. Again, technology improvements have circumvented traditional definitions by allowing multiple-party videoconferencing via web-based applications.
A telepresence system is a high-end videoconferencing system and service usually employed by enterprise-level corporate offices. Telepresence conference rooms use state-of-the art room designs, video cameras, displays, sound-systems and processors, coupled with high-to-very-high capacity bandwidth transmissions.
Typical uses of the various technologies described above include videocalling or videoconferencing on a one-to-one, one-to-many or many-to-many basis for personal, business, educational, deaf Video Relay Service and tele-medical, diagnostic and rehabilitative use or services. New services utilizing videocalling and videoconferencing, such as teachers and psychologists conducting online sessions, personal videocalls to inmates incarcerated in penitentiaries, and videoconferencing to resolve airline engineering issues at maintenance facilities, are being created or evolving on an on-going basis.