Internet Protocol Suite

From Wikipedia, the free encyclopedia

The Internet protocol suite is the conceptual model and set of communications protocols used on the Internet and similar computer networks. It is commonly known as TCP/IP because the foundational protocols in the suite are the Transmission Control Protocol (TCP) and the Internet Protocol (IP). It is occasionally known as the Department of Defense (DoD) model, because the development of the networking method was funded by the United States Department of Defense through DARPA.

The Internet protocol suite provides end-to-end data communication specifying how data should be packetized, addressed, transmitted, routed, and received. This functionality is organized into four abstraction layers which classify all related protocols according to the scope of networking involved. From lowest to highest, the layers are the link layer, containing communication methods for data that remains within a single network segment (link); the internet layer, providing internetworking between independent networks; the transport layer handling host-to-host communication; and the application layer, which provides process-to-process data exchange for applications.

Technical standards specifying the Internet protocol suite and many of its constituent protocols are maintained by the Internet Engineering Task Force (IETF). The Internet protocol suite predates the OSI model, a more comprehensive reference framework for general networking systems.

Contents
1 History
1.1 Early research
1.2 Specification
1.3 Adoption
2 Key architectural principles
3 Abstraction layers
3.1 Link layer
3.2 Internet layer
3.3 Transport layer
3.4 Application layer
4 Layer names and number of layers in the literature
5 Comparison of TCP/IP and OSI layering
6 Implementations

History


Early research

Diagram of the first internetworked connection

A Stanford Research Institute Packet Radio Van, used for the first three-way internetworked transmission.

The Internet protocol suite resulted from research and development conducted by the Defense Advanced Research Projects Agency (DARPA) in the late 1960s. After initiating the pioneering ARPANET in 1969, DARPA started work on a number of other data transmission technologies. In 1972, Robert E. Kahn joined the DARPA Information Processing Technology Office, where he worked on both satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across both. In the spring of 1973, Vinton Cerf, the developer of the existing ARPANET Network Control Program (NCP) protocol, joined Kahn to work on open-architecture interconnection models with the goal of designing the next protocol generation for the ARPANET.

By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation, in which the differences between local network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability, as in the ARPANET, this function was delegated to the hosts. Cerf credits Hubert Zimmermann and Louis Pouzin, designer of the CYCLADES network, with important influences on this design. The protocol was implemented as the Transmission Control Program, first published in 1974.

Initially, the TCP managed both datagram transmissions and routing, but as the protocol grew, other researchers recommended a division of functionality into protocol layers. Advocates included Jonathan Postel of the University of Southern California’s Information Sciences Institute, who edited the Request for Comments (RFC) series, the technical and strategic documents that have both documented and catalyzed Internet development. Postel stated, “We are screwing up in our design of Internet protocols by violating the principle of layering.” Encapsulation of different mechanisms was intended to create an environment where the upper layers could access only what was needed from the lower layers. A monolithic design would be inflexible and lead to scalability issues. The Transmission Control Program was split into two distinct protocols, the Transmission Control Protocol and the Internet Protocol.

The design of the network included the recognition that it should provide only the functions of efficiently transmitting and routing traffic between end nodes and that all other intelligence should be located at the edge of the network, in the end nodes. This design is known as the end-to-end principle. Using this design, it became possible to connect almost any network to the ARPANET, irrespective of the local characteristics, thereby solving Kahn’s initial internetworking problem. One popular expression is that TCP/IP, the eventual product of Cerf and Kahn’s work, can run over “two tin cans and a string.” Years later, as a joke, the IP over Avian Carriers formal protocol specification was created and successfully tested.

A computer called a router is provided with an interface to each network. It forwards network packets back and forth between them. Originally a router was called a gateway, but the term was changed to avoid confusion with other types of gateways.

Specification

From 1973 to 1974, Cerf’s networking research group at Stanford worked out details of the idea, resulting in the first TCP specification. A significant technical influence was the early networking work at Xerox PARC, which produced the PARC Universal Packet protocol suite around the same time.

DARPA then contracted with BBN Technologies, Stanford University, and University College London to develop operational versions of the protocol on different hardware platforms. Four versions were developed: TCP v1, TCP v2, TCP v3 and IP v3, and TCP/IP v4. The last protocol is still in use today.

In 1975, a two-network TCP/IP communications test was performed between Stanford and University College London (UCL). In November 1977, a three-network TCP/IP test was conducted between sites in the US, the UK, and Norway. Several other TCP/IP prototypes were developed at multiple research centers between 1978 and 1983. The migration of the ARPANET to TCP/IP was officially completed on flag day, January 1, 1983, when the new protocols were permanently activated.

Adoption

In March 1982, the US Department of Defense declared TCP/IP as the standard for all military computer networking. In 1985, the Internet Advisory Board (later renamed the Internet Architecture Board) held a three-day workshop on TCP/IP for the computer industry, attended by 250 vendor representatives, promoting the protocol and leading to its increasing commercial use.

In 1985, the first Interop conference focused on network interoperability through broader adoption of TCP/IP. The conference was founded by Dan Lynch, an early Internet activist. From the beginning, large corporations, such as IBM and DEC, attended the meeting. Interoperability conferences have been held every year since then. Every year from 1985 through 1993, the number of attendees tripled.

IBM, AT&T and DEC were the first major corporations to adopt TCP/IP, despite having competing internal protocols (SNA, XNS, DECnet). In IBM, from 1984, Barry Appelman’s group did TCP/IP development. (Appelman later moved to AOL to be the head of all its development efforts.) They navigated the corporate politics to get a stream of TCP/IP products for various IBM systems, including MVS, VM, and OS/2. At the same time, several smaller companies, such as FTP Software and the Wollongong Group, began offering TCP/IP stacks for DOS and MS Windows. The first VM/CMS TCP/IP stack came from the University of Wisconsin.

Some of these TCP/IP stacks were written single-handedly by a few programmers. Jay Elinsky and Oleg Vishnepolsky of IBM Research wrote TCP/IP stacks for VM/CMS and OS/2, respectively. In 1984 Donald Gillies at MIT wrote ntcp, a multi-connection TCP that ran atop the IP/PacketDriver layer maintained by John Romkey at MIT in 1983–84. Romkey leveraged this TCP in 1986 when FTP Software was founded. Phil Karn created KA9Q TCP (a multi-connection TCP for ham radio applications) starting in 1985.

The spread of TCP/IP was fueled further in June 1989, when AT&T agreed to place the TCP/IP code developed for UNIX into the public domain. Various vendors, including IBM, included this code in their own TCP/IP stacks. Many companies sold TCP/IP stacks for Windows until Microsoft released a native TCP/IP stack in Windows 95. This event was a little late in the evolution of the Internet, but it cemented TCP/IP’s dominance over other protocols, which began to lose ground. These protocols included IBM Systems Network Architecture (SNA), Digital Equipment Corporation’s DECnet, Open Systems Interconnection (OSI), and Xerox Network Systems (XNS).

Key architectural principles


An early architectural document, RFC 1122, emphasizes architectural principles over layering.

The end-to-end principle has evolved over time. Its original expression put the maintenance of state and overall intelligence at the edges, and assumed the Internet that connected the edges retained no state and concentrated on speed and simplicity. Real-world needs for firewalls, network address translators, web content caches and the like have forced changes in this principle.

The robustness principle states: “In general, an implementation must be conservative in its sending behavior, and liberal in its receiving behavior. That is, it must be careful to send well-formed datagrams, but must accept any datagram that it can interpret (e.g., not object to technical errors where the meaning is still clear).” “The second part of the principle is almost as important: software on other hosts may contain deficiencies that make it unwise to exploit legal but obscure protocol features.” Postel famously summarized the principle as, “Be conservative in what you do, be liberal in what you accept from others”—a saying that came to be known as “Postel’s Law.”
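
As a loose illustration in code (a hypothetical “Name: value” header format, not taken from any RFC), a sender can emit a single canonical form while a receiver tolerates any input whose meaning is still clear:

    def send_header(name: str, value: str) -> str:
        """Be conservative in what you send: one canonical form only."""
        return f"{name.strip()}: {value.strip()}\r\n"

    def parse_header(line: str) -> tuple[str, str]:
        """Be liberal in what you accept: tolerate stray whitespace and a
        bare LF line ending, as long as the meaning is still clear."""
        name, sep, value = line.rstrip("\r\n").partition(":")
        if not sep or not name.strip():
            raise ValueError(f"cannot interpret header line: {line!r}")
        return name.strip().lower(), value.strip()

    print(send_header("Content-Length", "42"))      # 'Content-Length: 42\r\n'
    print(parse_header("content-length :  42\n"))   # ('content-length', '42')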

Abstraction layers


Two Internet hosts connected via two routers and the corresponding layers used at each hop. The application on each host executes read and write operations as if the processes were directly connected to each other by some kind of data pipe. Every other detail of the communication is hidden from each process. The underlying mechanisms that transmit data between the host computers are located in the lower protocol layers.

Encapsulation of application data descending through the layers described in RFC 1122

Encapsulation is used to provide abstraction of protocols and services. Encapsulation is usually aligned with the division of the protocol suite into layers of general functionality. In general, an application (the highest level of the model) uses a set of protocols to send its data down the layers, being further encapsulated at each level.

The layers of the protocol suite near the top are logically closer to the user application, while those near the bottom are logically closer to the physical transmission of the data. Viewing layers as providing or consuming a service is a method of abstraction to isolate upper layer protocols from the details of transmitting bits over, for example, Ethernet and collision detection, while the lower layers avoid having to know the details of each and every application and its protocol.
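
The following sketch illustrates this descent through the layers for a UDP datagram, using Python. The header layouts follow RFC 768 (UDP) and RFC 791 (IPv4), but the ports and addresses are arbitrary examples and both checksums are left at zero for brevity (a zero UDP checksum is permitted over IPv4; a real stack would compute the IPv4 header checksum):

    import struct

    payload = b"hello, world"                 # application-layer data

    # Transport layer: an 8-byte UDP header (RFC 768): source port,
    # destination port, length, checksum.
    udp_segment = struct.pack("!HHHH", 54321, 53, 8 + len(payload), 0) + payload

    # Internet layer: a 20-byte IPv4 header (RFC 791), protocol 17 = UDP.
    ip_packet = struct.pack(
        "!BBHHHBBH4s4s",
        (4 << 4) | 5, 0, 20 + len(udp_segment),   # version/IHL, DSCP/ECN, total length
        0, 0,                                     # identification, flags/fragment offset
        64, 17, 0,                                # TTL, protocol (UDP), header checksum
        bytes([192, 0, 2, 1]),                    # source address (documentation range)
        bytes([192, 0, 2, 2]),                    # destination address
    ) + udp_segment

    # The link layer would now wrap ip_packet in a frame (e.g. an Ethernet
    # header and trailing frame check sequence) before transmission.
    print(len(ip_packet), ip_packet.hex())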

Even when the layers are examined, the assorted architectural documents (there is no single architectural model such as ISO 7498, the Open Systems Interconnection model) have fewer and less rigidly defined layers than the OSI model, and thus provide an easier fit for real-world protocols. One frequently referenced document, RFC 1958, does not contain a stack of layers; it only refers to the existence of the internetworking layer and generally to upper layers. The lack of emphasis on layering is a major difference between the IETF and OSI approaches. RFC 1958 was intended as a 1996 snapshot of the architecture: “The Internet and its architecture have grown in evolutionary fashion from modest beginnings, rather than from a Grand Plan. While this process of evolution is one of the main reasons for the technology’s success, it nevertheless seems useful to record a snapshot of the current principles of the Internet architecture.”

RFC 1122, entitled Host Requirements, is structured in paragraphs referring to layers, but the document refers to many other architectural principles not emphasizing layering. It loosely defines a four-layer model, with the layers having names, not numbers, as follows:

  • The application layer is the scope within which applications create user data and communicate this data to other applications on another or the same host. The applications, or processes, make use of the services provided by the underlying, lower layers, especially the Transport Layer which provides reliable or unreliable pipes to other processes. The communications partners are characterized by the application architecture, such as the client-server model and peer-to-peer networking. This is the layer in which all higher level protocols, such as SMTP, FTP, SSH, HTTP, operate. Processes are addressed via ports which essentially represent services.
  • The transport layer performs host-to-host communications on either the same or different hosts and on either the local network or remote networks separated by routers. It provides a channel for the communication needs of applications. UDP is the basic transport layer protocol, providing an unreliable datagram service. The Transmission Control Protocol provides flow-control, connection establishment, and reliable transmission of data.
  • The internet layer has the task of exchanging datagrams across network boundaries. It provides a uniform networking interface that hides the actual topology (layout) of the underlying network connections. It is therefore also referred to as the layer that establishes internetworking, indeed, it defines and establishes the Internet. This layer defines the addressing and routing structures used for the TCP/IP protocol suite. The primary protocol in this scope is the Internet Protocol, which defines IP addresses. Its function in routing is to transport datagrams to the next IP router that has the connectivity to a network closer to the final data destination.
  • The link layer defines the networking methods within the scope of the local network link on which hosts communicate without intervening routers. This layer includes the protocols used to describe the local network topology and the interfaces needed to effect transmission of Internet layer datagrams to next-neighbor hosts.

The Internet protocol suite and the layered protocol stack design were in use before the OSI model was established. Since then, the TCP/IP model has been compared with the OSI model in books and classrooms, which often results in confusion because the two models use different assumptions and goals, including the relative importance of strict layering.

This abstraction also allows upper layers to provide services that the lower layers do not provide. While the original OSI model was extended to include connectionless services (OSIRM CL), IP is not designed to be reliable and is a best effort delivery protocol. This means that all transport layer implementations must choose whether or how to provide reliability. UDP provides data integrity via a checksum but does not guarantee delivery; TCP provides both data integrity and delivery guarantee by retransmitting until the receiver acknowledges the reception of the packet.

This model lacks the formalism of the OSI model and associated documents, but the IETF does not use a formal model and does not consider this a limitation, as illustrated in the comment by David D. Clark, “We reject: kings, presidents and voting. We believe in: rough consensus and running code.” Criticisms of this model, which have been made with respect to the OSI model, often do not consider ISO’s later extensions to that model.

For multi-access links with their own addressing systems (e.g., Ethernet), an address mapping protocol is needed. Such protocols can be considered to be below IP but above the existing link system. While the IETF does not use the terminology, this is a subnetwork dependent convergence facility according to an extension to the OSI model, the internal organization of the network layer (IONL).

ICMP and IGMP operate on top of IP but do not transport data like UDP or TCP. Again, this functionality exists as layer management extensions to the OSI model, in its Management Framework (OSIRM MF).

The SSL/TLS library operates above the transport layer (uses TCP) but below application protocols. Again, there was no intention, on the part of the designers of these protocols, to comply with OSI architecture.

The link is treated as a black box. The IETF explicitly does not intend to discuss transmission systems, which is a less academic but practical alternative to the OSI model.

The following is a description of each layer in the TCP/IP networking model starting from the lowest level.

Link layer

The link layer has the networking scope of the local network connection to which a host is attached. This regime is called the link in TCP/IP literature. It is the lowest component layer of the Internet protocols, as TCP/IP is designed to be hardware independent. As a result, TCP/IP may be implemented on top of virtually any hardware networking technology.

The link layer is used to move packets between the Internet layer interfaces of two different hosts on the same link. The processes of transmitting and receiving packets on a given link can be controlled in the software device driver for the network card, as well as in firmware or by specialized chipsets. These perform data link functions such as adding a packet header to prepare it for transmission, then actually transmit the frame over a physical medium. The TCP/IP model includes specifications for translating the network addressing methods used in the Internet Protocol to link layer addresses, such as Media Access Control (MAC) addresses. All other aspects below that level, however, are implicitly assumed to exist in the link layer, but are not explicitly defined.

This is also the layer where packets may be selected to be sent over a virtual private network or other networking tunnel. In this scenario, the link layer data may be considered application data which traverses another instantiation of the IP stack for transmission or reception over another IP connection. Such a connection, or virtual link, may be established with a transport protocol or even an application scope protocol that serves as a tunnel in the link layer of the protocol stack. Thus, the TCP/IP model does not dictate a strict hierarchical encapsulation sequence.

The TCP/IP model’s link layer corresponds to the Open Systems Interconnection (OSI) model physical and data link layers, layers one and two of the OSI model.

Internet layer

The internet layer has the responsibility of sending packets across potentially multiple networks. Internetworking requires sending data from the source network to the destination network. This process is called routing.

The Internet Protocol performs two basic functions:

  • Host addressing and identification: This is accomplished with a hierarchical IP addressing system.
  • Packet routing: This is the basic task of sending packets of data (datagrams) from source to destination by forwarding them to the next network router closer to the final destination; a minimal forwarding-table lookup is sketched below.
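
The forwarding decision amounts to a longest-prefix match against a routing table. A minimal sketch in Python, using a hypothetical three-entry table:

    import ipaddress

    routes = [
        (ipaddress.ip_network("0.0.0.0/0"), "192.0.2.1"),        # default route
        (ipaddress.ip_network("198.51.100.0/24"), "192.0.2.2"),
        (ipaddress.ip_network("198.51.100.128/25"), "192.0.2.3"),
    ]

    def next_hop(destination: str) -> str:
        addr = ipaddress.ip_address(destination)
        matches = [(net, hop) for net, hop in routes if addr in net]
        return max(matches, key=lambda m: m[0].prefixlen)[1]  # most specific wins

    print(next_hop("198.51.100.200"))   # 192.0.2.3 (the /25 beats the /24)
    print(next_hop("203.0.113.9"))      # 192.0.2.1 (only the default matches)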

The internet layer is not only agnostic of data structures at the transport layer, but it also does not distinguish between operation of the various transport layer protocols. IP carries data for a variety of different upper layer protocols. These protocols are each identified by a unique protocol number: for example, Internet Control Message Protocol (ICMP) and Internet Group Management Protocol (IGMP) are protocols 1 and 2, respectively.

Some of the protocols carried by IP, such as ICMP which is used to transmit diagnostic information, and IGMP which is used to manage IP Multicast data, are layered on top of IP but perform internetworking functions. This illustrates the differences in the architecture of the TCP/IP stack of the Internet and the OSI model. The TCP/IP model’s internet layer corresponds to layer three of the Open Systems Interconnection (OSI) model, where it is referred to as the network layer.

The internet layer provides an unreliable datagram transmission facility between hosts located on potentially different IP networks by forwarding the transport layer datagrams to an appropriate next-hop router for further relaying to its destination. With this functionality, the internet layer makes possible internetworking, the interworking of different IP networks, and it essentially establishes the Internet. The Internet Protocol is the principal component of the internet layer, and it defines two addressing systems to identify network hosts and to locate them on the network. The original address system of the ARPANET and its successor, the Internet, is Internet Protocol version 4 (IPv4). It uses a 32-bit IP address and is therefore capable of identifying approximately four billion hosts. This limitation was eliminated in 1998 by the standardization of Internet Protocol version 6 (IPv6) which uses 128-bit addresses. IPv6 production implementations emerged in approximately 2006.
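
The address-space sizes follow directly from the address widths, as a quick check with Python’s ipaddress module shows (the addresses below are from the reserved documentation ranges):

    import ipaddress

    print(2 ** 32)     # 4294967296 IPv4 addresses (about 4.3 billion)
    print(2 ** 128)    # about 3.4e38 IPv6 addresses

    # Python's ipaddress module handles both versions uniformly:
    print(ipaddress.ip_address("192.0.2.1").version)            # 4
    print(ipaddress.ip_address("2001:db8::1").version)          # 6
    print(ipaddress.ip_network("2001:db8::/32").num_addresses)  # 2 ** 96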

Transport layer

The transport layer establishes basic data channels that applications use for task-specific data exchange. The layer establishes process-to-process connectivity, meaning it provides end-to-end services that are independent of the structure of user data and the logistics of exchanging information for any particular specific purpose. Its responsibility includes end-to-end message transfer independent of the underlying network, along with error control, segmentation, flow control, congestion control, and application addressing (port numbers). End-to-end message transmission or connecting applications at the transport layer can be categorized as either connection-oriented, implemented in TCP, or connectionless, implemented in UDP.

For the purpose of providing process-specific transmission channels for applications, the layer establishes the concept of the port. This is a numbered logical construct allocated specifically for each of the communication channels an application needs. For many types of services, these port numbers have been standardized so that client computers may address specific services of a server computer without the involvement of service announcements or directory services.
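
A minimal sketch of port-based addressing in Python, with two UDP sockets on the loopback interface inside a single process (the payload and the choice of UDP are arbitrary for the example):

    import socket

    # The (host, port) pair, not the host alone, identifies each endpoint.
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
    addr = server.getsockname()            # ('127.0.0.1', <assigned port>)

    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.sendto(b"ping", addr)           # datagram addressed to (host, port)

    data, peer = server.recvfrom(1024)
    print(data, "from", peer)              # b'ping' from ('127.0.0.1', <ephemeral port>)

    client.close()
    server.close()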

Because IP provides only a best effort delivery, some transport layer protocols offer reliability. However, IP can run over a reliable data link protocol such as the High-Level Data Link Control (HDLC).

For example, TCP is a connection-oriented protocol that addresses numerous reliability issues in providing a reliable byte stream (a toy retransmission sketch follows this list):

• data arrives in-order
• data has minimal error (i.e., correctness)
• duplicate data is discarded
• lost or discarded packets are resent
• includes traffic congestion control
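
To make the retransmission and duplicate-discard ideas concrete, here is a toy stop-and-wait scheme over a simulated lossy channel. It is only a sketch of the general technique, not of TCP itself; TCP uses sliding windows, adaptive retransmission timers, and congestion control:

    import random

    random.seed(1)  # fixed seed so the example is reproducible

    def lossy_deliver(packet, inbox):
        if random.random() < 0.3:      # 30% chance the packet is lost
            return
        inbox.append(packet)

    def transfer(messages):
        received, inbox, acks = [], [], []
        expected = 0                   # next sequence number the receiver wants
        for seq, msg in enumerate(messages):
            while seq not in acks:     # until acknowledged: (re)transmit
                lossy_deliver((seq, msg), inbox)
                for pkt_seq, pkt_msg in inbox:          # receiver side
                    if pkt_seq == expected:             # in-order: accept it
                        received.append(pkt_msg)
                        expected += 1
                    # duplicate data is discarded, but every arrival is
                    # (re-)acknowledged; the ACK itself may also be lost
                    lossy_deliver(pkt_seq, acks)
                inbox.clear()
        return received

    print(transfer(["a", "b", "c"]))   # ['a', 'b', 'c'] despite the losses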

The newer Stream Control Transmission Protocol (SCTP) is also a reliable, connection-oriented transport mechanism. It is message-stream-oriented—not byte-stream-oriented like TCP—and provides multiple streams multiplexed over a single connection. It also provides multi-homing support, in which a connection end can be represented by multiple IP addresses (representing multiple physical interfaces), such that if one fails, the connection is not interrupted. It was developed initially for telephony applications (to transport SS7 over IP), but can also be used for other applications.

The User Datagram Protocol is a connectionless datagram protocol. Like IP, it is a best effort, “unreliable” protocol. Reliability is addressed through error detection using a weak checksum algorithm. UDP is typically used for applications such as streaming media (audio, video, Voice over IP etc.) where on-time arrival is more important than reliability, or for simple query/response applications like DNS lookups, where the overhead of setting up a reliable connection is disproportionately large. Real-time Transport Protocol (RTP) is a datagram protocol that is designed for real-time data such as streaming audio and video.
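
The checksum in question is the 16-bit ones’-complement sum defined in RFC 1071, which UDP shares with TCP and the IPv4 header. A reference-style sketch in Python (for UDP and TCP the sum also covers a pseudo-header of addressing fields, omitted here):

    def internet_checksum(data: bytes) -> int:
        if len(data) % 2:
            data += b"\x00"                            # pad odd length with zero
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]      # sum 16-bit words
        while total >> 16:
            total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
        return ~total & 0xFFFF                         # ones' complement

    # An IPv4 header with its checksum field zeroed; the function returns the
    # value that would be inserted into that field.
    header = bytes.fromhex("4500001c0000000040110000c0000201c0000202")
    print(hex(internet_checksum(header)))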

The applications at any given network address are distinguished by their TCP or UDP port. By convention, certain well known ports are associated with specific applications.

The TCP/IP model’s transport or host-to-host layer corresponds to the fourth layer in the Open Systems Interconnection (OSI) model, also called the transport layer.

Application layer

The application layer includes the protocols used by most applications for providing user services or exchanging application data over the network connections established by the lower level protocols. This may include some basic network support services such as protocols for routing and host configuration. Examples of application layer protocols include the Hypertext Transfer Protocol (HTTP), the File Transfer Protocol (FTP), the Simple Mail Transfer Protocol (SMTP), and the Dynamic Host Configuration Protocol (DHCP). Data coded according to application layer protocols are encapsulated into transport layer protocol units (such as TCP or UDP messages), which in turn use lower layer protocols to effect actual data transfer.

The TCP/IP model does not consider the specifics of formatting and presenting data, and does not define additional layers between the application and transport layers as in the OSI model (presentation and session layers). Such functions are the realm of libraries and application programming interfaces.

Application layer protocols generally treat the transport layer (and lower) protocols as black boxes which provide a stable network connection across which to communicate, although the applications are usually aware of key qualities of the transport layer connection such as the end point IP addresses and port numbers. Application layer protocols are often associated with particular client-server applications, and common services have well-known port numbers reserved by the Internet Assigned Numbers Authority (IANA). For example, the HyperText Transfer Protocol uses server port 80 and Telnet uses server port 23. Clients connecting to a service usually use ephemeral ports, i.e., port numbers assigned only for the duration of the transaction at random or from a specific range configured in the application.
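
Most socket APIs expose the well-known service registry directly. In Python, for example (the lookups read the local services database, so they assume standard entries are present):

    import socket

    # Well-known service names map to reserved ports and back:
    print(socket.getservbyname("http", "tcp"))     # 80
    print(socket.getservbyname("telnet", "tcp"))   # 23
    print(socket.getservbyport(80, "tcp"))         # 'http'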

The transport layer and lower-level layers are unconcerned with the specifics of application layer protocols. Routers and switches do not typically examine the encapsulated traffic, rather they just provide a conduit for it. However, some firewall and bandwidth throttling applications must interpret application data. An example is the Resource Reservation Protocol (RSVP). It is also sometimes necessary for network address translator (NAT) traversal to consider the application payload.

The application layer in the TCP/IP model is often compared as equivalent to a combination of the fifth (Session), sixth (Presentation), and the seventh (Application) layers of the Open Systems Interconnection (OSI) model.

Furthermore, the TCP/IP reference model distinguishes between user protocols and support protocols. Support protocols provide services to a system. User protocols are used for actual user applications. For example, FTP is a user protocol and DNS is a support protocol.

Layer names and number of layers in the literature


Various networking models have been described in the literature, with the number of layers varying between three and seven.

Some of the networking models are from textbooks, which are secondary sources that may conflict with the intent of RFC 1122 and other IETF primary sources.

Comparison of TCP/IP and OSI layering


The three top layers in the OSI model, i.e. the application layer, the presentation layer and the session layer, are not distinguished separately in the TCP/IP model which only has an application layer above the transport layer. While some pure OSI protocol applications, such as X.400, also combined them, there is no requirement that a TCP/IP protocol stack must impose monolithic architecture above the transport layer. For example, the NFS application protocol runs over the eXternal Data Representation (XDR) presentation protocol, which, in turn, runs over a protocol called Remote Procedure Call (RPC). RPC provides reliable record transmission, so it can safely use the best-effort UDP transport.

Different authors have interpreted the TCP/IP model differently, and disagree whether the link layer, or the entire TCP/IP model, covers OSI layer 1 (physical layer) issues, or whether a hardware layer is assumed below the link layer.

Several authors have attempted to incorporate the OSI model’s layers 1 and 2 into the TCP/IP model, since these are commonly referred to in modern standards (for example, by IEEE and ITU). This often results in a model with five layers, where the link layer or network access layer is split into the OSI model’s layers 1 and 2.

The IETF protocol development effort is not concerned with strict layering. Some of its protocols may not fit cleanly into the OSI model, although RFCs sometimes refer to it and often use the old OSI layer numbers. The IETF has repeatedly stated that Internet protocol and architecture development is not intended to be OSI-compliant. RFC 3439, addressing Internet architecture, contains a section entitled: “Layering Considered Harmful”.

For example, the session and presentation layers of the OSI suite are considered to be included in the application layer of the TCP/IP suite. The functionality of the session layer can be found in protocols like HTTP and SMTP and is more evident in protocols like Telnet and the Session Initiation Protocol (SIP). Session layer functionality is also realized with the port numbering of the TCP and UDP protocols, which cover the transport layer in the TCP/IP suite. Functions of the presentation layer are realized in the TCP/IP applications with the MIME standard in data exchange.

Conflicts are apparent also in the original OSI model, ISO 7498, when not considering the annexes to this model, e.g., the ISO 7498/4 Management Framework, or the ISO 8648 Internal Organization of the Network layer (IONL). When the IONL and Management Framework documents are considered, the ICMP and IGMP are defined as layer management protocols for the network layer. In like manner, the IONL provides a structure for “subnetwork dependent convergence facilities” such as ARP and RARP.

IETF protocols can be encapsulated recursively, as demonstrated by tunneling protocols such as Generic Routing Encapsulation (GRE). GRE uses the same mechanism that OSI uses for tunneling at the network layer.

Implementations


The Internet protocol suite does not presume any specific hardware or software environment. It only requires that hardware and a software layer exist that are capable of sending and receiving packets on a computer network. As a result, the suite has been implemented on essentially every computing platform. A minimal implementation of TCP/IP includes the following: Internet Protocol (IP), Address Resolution Protocol (ARP), Internet Control Message Protocol (ICMP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and Internet Group Management Protocol (IGMP). In addition to IP, ICMP, TCP, and UDP, Internet Protocol version 6 requires Neighbor Discovery Protocol (NDP), ICMPv6, and Multicast Listener Discovery (MLD), and is often accompanied by an integrated IPsec security layer.

Application programmers are typically concerned only with interfaces in the application layer and often also in the transport layer, while the layers below are services provided by the TCP/IP stack in the operating system. Most IP implementations are accessible to programmers through sockets and APIs.

Unique implementations include Lightweight TCP/IP, an open source stack designed for embedded systems, and KA9Q NOS, a stack and associated protocols for amateur packet radio systems and personal computers connected via serial lines.

Microcontroller firmware in the network adapter typically handles link issues, supported by driver software in the operating system. Non-programmable analog and digital electronics are normally in charge of the physical components below the link layer, typically using an application-specific integrated circuit (ASIC) chipset for each network interface or other physical standard. High-performance routers are to a large extent based on fast non-programmable digital electronics, carrying out link level switching.

OSI Model

From Wikipedia, the free encyclopedia

The Open Systems Interconnection model (OSI model) is a conceptual model that characterizes and standardizes the communication functions of a telecommunication or computing system without regard to their underlying internal structure and technology. Its goal is the interoperability of diverse communication systems with standard protocols. The model partitions a communication system into abstraction layers. The original version of the model defined seven layers.

A layer serves the layer above it and is served by the layer below it. For example, a layer that provides error-free communications across a network provides the path needed by applications above it, while it calls the next lower layer to send and receive packets that comprise the contents of that path. Two instances at the same layer are visualized as connected by a horizontal connection in that layer.

Communication in the OSI model (example with layers 3 to 5)

The model is a product of the Open Systems Interconnection project at the International Organization for Standardization (ISO), maintained under the identification ISO/IEC 7498-1.

Contents
1 History
2 Description of OSI layers
2.1 Layer 1: Physical Layer
2.2 Layer 2: Data Link Layer
2.3 Layer 3: Network Layer
2.4 Layer 4: Transport Layer
2.5 Layer 5: Session Layer
2.6 Layer 6: Presentation Layer
2.7 Layer 7: Application Layer
3 Cross-layer functions
4 Interfaces
5 Examples
6 Comparison with TCP/IP model

History


In the late 1970s, one project was administered by the International Organization for Standardization (ISO), while another was undertaken by the International Telegraph and Telephone Consultative Committee (CCITT, from French: Comité Consultatif International Téléphonique et Télégraphique). These two international standards bodies each developed a document that defined similar networking models.

In 1983, these two documents were merged to form a standard called The Basic Reference Model for Open Systems Interconnection. The standard is usually referred to as the Open Systems Interconnection Reference Model, the OSI Reference Model, or simply the OSI model. It was published in 1984 by both the ISO, as standard ISO 7498, and the renamed CCITT (now called the Telecommunications Standardization Sector of the International Telecommunication Union or ITU-T) as standard X.200.

OSI had two major components, an abstract model of networking, called the Basic Reference Model or seven-layer model, and a set of specific protocols.

The concept of a seven-layer model was provided by the work of Charles Bachman at Honeywell Information Services. Various aspects of OSI design evolved from experiences with the ARPANET, NPLNET, EIN, CYCLADES network and the work in IFIP WG6.1. The new design was documented in ISO 7498 and its various addenda. In this model, a networking system was divided into layers. Within each layer, one or more entities implement its functionality. Each entity interacted directly only with the layer immediately beneath it, and provided facilities for use by the layer above it.

Protocols enable an entity in one host to interact with a corresponding entity at the same layer in another host. Service definitions abstractly described the functionality provided to an (N)-layer by an (N-1) layer, where N was one of the seven layers of protocols operating in the local host.

The OSI standards documents are available from the ITU-T as the X.200-series of recommendations. Some of the protocol specifications were also available as part of the ITU-T X series. The equivalent ISO and ISO/IEC standards for the OSI model were available from ISO. Not all are free of charge.

Description of OSI layers


The recommendation X.200 describes seven layers, labeled 1 to 7. Layer 1 is the lowest layer in this model.

At each level N, two entities at the communicating devices (layer N peers) exchange protocol data units (PDUs) by means of a layer N protocol. Each PDU contains a payload, called the service data unit (SDU), along with protocol-related headers or footers. Data processing by two communicating OSI-compatible devices proceeds as follows (a generic sketch of the procedure in code follows the list):

  1. The data to be transmitted is composed at the topmost layer of the transmitting device (layer N) into a protocol data unit (PDU).
  2. The PDU is passed to layer N-1, where it is known as the service data unit (SDU).
  3. At layer N-1 the SDU is concatenated with a header, a footer, or both, producing a layer N-1 PDU. It is then passed to layer N-2.
  4. The process continues until reaching the lowermost level, from which the data is transmitted to the receiving device.
  5. At the receiving device the data is passed from the lowest to the highest layer as a series of SDUs while being successively stripped from each layer’s header or footer, until reaching the topmost layer, where the last of the data is consumed.
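
A generic sketch of this wrap-and-strip procedure in Python; the layer names and the header and footer contents are purely illustrative:

    LAYERS = ["transport", "network", "data-link"]    # topmost to lowermost

    def transmit(data: bytes) -> bytes:
        for layer in LAYERS:                          # descend the stack
            data = f"[{layer}-hdr]".encode() + data + f"[{layer}-ftr]".encode()
        return data                                   # handed to the medium

    def receive(frame: bytes) -> bytes:
        for layer in reversed(LAYERS):                # ascend the stack
            hdr, ftr = f"[{layer}-hdr]".encode(), f"[{layer}-ftr]".encode()
            assert frame.startswith(hdr) and frame.endswith(ftr)
            frame = frame[len(hdr):len(frame) - len(ftr)]    # recover the SDU
        return frame

    wire = transmit(b"user data")
    print(wire)              # nested headers/footers; innermost is the payload
    print(receive(wire))     # b'user data'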

Some orthogonal aspects, such as management and security, involve all of the layers (See ITU-T X.800 Recommendation). These services are aimed at improving the CIA triad – confidentiality, integrity, and availability – of the transmitted data. In practice, the availability of a communication service is determined by the interaction between network design and network management protocols. Appropriate choices for both of these are needed to protect against denial of service.

Layer 1: Physical Layer

The physical layer defines the electrical and physical specifications of the data connection. It defines the relationship between a device and a physical transmission medium (for example, an electrical cable, an optical fiber cable, or a radio frequency link). This includes the layout of pins, voltages, line impedance, cable specifications, signal timing and similar characteristics for connected devices, and frequency (5 GHz, 2.4 GHz, etc.) for wireless devices. It is responsible for the transmission and reception of unstructured raw data over a physical medium. Bit rate control is done at the physical layer. It may define transmission mode as simplex, half duplex, or full duplex. It defines the network topology, with bus, mesh, and ring being some of the most common.
The physical layer is the layer of low-level networking equipment, such as some hubs, cabling, and repeaters. The physical layer is never concerned with protocols or other such higher-layer items. Examples of hardware in this layer are network adapters, repeaters, network hubs, modems, and fiber media converters.

Layer 2: Data Link Layer

The data link layer provides node-to-node data transfer—a link between two directly connected nodes. It detects and possibly corrects errors that may occur in the physical layer. It defines the protocol to establish and terminate a connection between two physically connected devices. It also defines the protocol for flow control between them.

IEEE 802 divides the data link layer into two sublayers:

  • Medium access control (MAC) layer – responsible for controlling how devices in a network gain access to a medium and permission to transmit data.
  • Logical link control (LLC) layer – responsible for identifying and encapsulating network layer protocols, and controls error checking and frame synchronization.

The MAC and LLC layers of IEEE 802 networks such as 802.3 Ethernet, 802.11 Wi-Fi, and 802.15.4 ZigBee operate at the data link layer.

The Point-to-Point Protocol (PPP) is a data link layer protocol that can operate over several different physical layers, such as synchronous and asynchronous serial lines.

The ITU-T G.hn standard, which provides high-speed local area networking over existing wires (power lines, phone lines and coaxial cables), includes a complete data link layer that provides both error correction and flow control by means of a selective-repeat sliding-window protocol.

Layer 3: Network Layer

The network layer provides the functional and procedural means of transferring variable length data sequences (called datagrams) from one node to another connected in “different networks”. A network is a medium to which many nodes can be connected, on which every node has an address and which permits nodes connected to it to transfer messages to other nodes connected to it by merely providing the content of a message and the address of the destination node and letting the network find the way to deliver the message to the destination node, possibly routing it through intermediate nodes. If the message is too large to be transmitted from one node to another on the data link layer between those nodes, the network may implement message delivery by splitting the message into several fragments at one node, sending the fragments independently, and reassembling the fragments at another node. It may, but does not need to, report delivery errors.
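
A minimal sketch of the fragmentation idea in Python, with a deliberately tiny maximum fragment size and plain offsets standing in for the real protocol fields:

    MTU = 8   # deliberately tiny maximum payload per fragment

    def fragment(message: bytes):
        return [(offset, message[offset:offset + MTU])
                for offset in range(0, len(message), MTU)]

    def reassemble(fragments) -> bytes:
        # fragments may arrive in any order; offsets restore the sequence
        return b"".join(data for _, data in sorted(fragments))

    frags = fragment(b"a datagram larger than one link frame")
    frags.reverse()              # simulate out-of-order arrival
    print(reassemble(frags))     # the original message, restored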

Message delivery at the network layer is not necessarily guaranteed to be reliable; a network layer protocol may provide reliable message delivery, but it need not do so.

A number of layer-management protocols, a function defined in the management annex, ISO 7498/4, belong to the network layer. These include routing protocols, multicast group management, network-layer information and error, and network-layer address assignment. It is the function of the payload that makes these belong to the network layer, not the protocol that carries them.

Layer 4: Transport Layer

The transport layer provides the functional and procedural means of transferring variable-length data sequences from a source to a destination host via one or more networks, while maintaining the quality of service functions.

An example of a transport-layer protocol in the standard Internet stack is Transmission Control Protocol (TCP), usually built on top of the Internet Protocol (IP).

The transport layer controls the reliability of a given link through flow control, segmentation/desegmentation, and error control. Some protocols are state- and connection-oriented. This means that the transport layer can keep track of the segments and re-transmit those that fail. The transport layer also provides the acknowledgement of the successful data transmission and sends the next data if no errors occurred. The transport layer creates packets out of the message received from the application layer. Packetizing is the process of dividing a long message into smaller messages.

OSI defines five classes of connection-mode transport protocols ranging from class 0 (which is also known as TP0 and provides the fewest features) to class 4 (TP4, designed for less reliable networks, similar to the Internet). Class 0 contains no error recovery, and was designed for use on network layers that provide error-free connections. Class 4 is closest to TCP, although TCP contains functions, such as the graceful close, which OSI assigns to the session layer. Also, all OSI TP connection-mode protocol classes provide expedited data and preservation of record boundaries.

An easy way to visualize the transport layer is to compare it with a post office, which deals with the dispatch and classification of mail and parcels sent. Do remember, however, that a post office manages the outer envelope of mail. Higher layers may have the equivalent of double envelopes, such as cryptographic presentation services that can be read by the addressee only. Roughly speaking, tunneling protocols operate at the transport layer, such as carrying non-IP protocols such as IBM’s SNA or Novell’s IPX over an IP network, or end-to-end encryption with IPsec. While Generic Routing Encapsulation (GRE) might seem to be a network-layer protocol, if the encapsulation of the payload takes place only at the endpoints, GRE becomes closer to a transport protocol that uses IP headers but contains complete frames or packets to deliver to an endpoint. L2TP carries PPP frames inside transport packets.

Although not developed under the OSI Reference Model and not strictly conforming to the OSI definition of the transport layer, the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) of the Internet Protocol Suite are commonly categorized as layer-4 protocols within OSI.

Layer 5: Session Layer

The session layer controls the dialogues (connections) between computers. It establishes, manages and terminates the connections between the local and remote application. It provides for full-duplex, half-duplex, or simplex operation, and establishes checkpointing, adjournment, termination, and restart procedures. The OSI model made this layer responsible for graceful close of sessions, which is a property of the Transmission Control Protocol, and also for session checkpointing and recovery, which is not usually used in the Internet Protocol Suite. The session layer is commonly implemented explicitly in application environments that use remote procedure calls.

Layer 6: Presentation Layer

The presentation layer establishes context between application-layer entities, in which the application-layer entities may use different syntax and semantics if the presentation service provides a mapping between them. If a mapping is available, presentation service data units are encapsulated into session protocol data units and passed down the protocol stack.

This layer provides independence from data representation by translating between application and network formats. The presentation layer transforms data into the form that the application accepts. This layer formats data to be sent across a network. It is sometimes called the syntax layer. The presentation layer can include compression functions. The presentation layer negotiates the transfer syntax.

The original presentation structure used the Basic Encoding Rules of Abstract Syntax Notation One (ASN.1), with capabilities such as converting an EBCDIC-coded text file to an ASCII-coded file, or serialization of objects and other data structures from and to XML. ASN.1 effectively makes an application protocol invariant with respect to syntax.

Layer 7: Application Layer

The application layer is the OSI layer closest to the end user, which means both the OSI application layer and the user interact directly with the software application. This layer interacts with software applications that implement a communicating component. Such application programs fall outside the scope of the OSI model. Application-layer functions typically include identifying communication partners, determining resource availability, and synchronizing communication. When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit. The most important distinction in the application layer is the distinction between the application-entity and the application. For example, a reservation website might have two application-entities: one using HTTP to communicate with its users, and one for a remote database protocol to record reservations. Neither of these protocols has anything to do with reservations. That logic is in the application itself. The application layer per se has no means to determine the availability of resources in the network.

Cross-layer functions


Cross-layer functions are services that are not tied to a given layer, but may affect more than one layer. Examples include the following:

  • Security service (telecommunication) as defined by ITU-T X.800 recommendation.
  • Management functions, i.e., functions that permit the configuration, instantiation, monitoring, and termination of the communications of two or more entities: there is a specific application-layer protocol, the common management information protocol (CMIP), and its corresponding service, the common management information service (CMIS); these need to interact with every layer in order to deal with their instances.
  • Multiprotocol Label Switching (MPLS): MPLS, ATM, and X.25 are 3a protocols. OSI divides the network layer into three roles: 3a) subnetwork access, 3b) subnetwork-dependent convergence, and 3c) subnetwork-independent convergence. MPLS was designed to provide a unified data-carrying service for both circuit-based clients and packet-switching clients which provide a datagram-based service model. It can be used to carry many different kinds of traffic, including IP packets, as well as native ATM, SONET, and Ethernet frames. Sometimes one sees reference to a Layer 2.5; this is a fiction created by those who are unfamiliar with the OSI model, and with ISO 8648, the Internal Organization of the Network Layer, in particular.
  • ARP determines the mapping of an IPv4 address to the underlying MAC address. This is not a translation function; if it were, IPv4 and the MAC address would be at the same layer. The implementation of the MAC protocol decodes the MAC PDU and delivers the user data to the IP layer. Because Ethernet is a multi-access medium, a device sending a PDU on an Ethernet segment needs to know what IP address maps to what MAC address.
  • DHCP: DHCP assigns IPv4 addresses to new systems joining a network. There is no means to derive or obtain an IPv4 address from an Ethernet address.
  • The Domain Name System (DNS) is an application-layer service which is used to look up the IP address of a given domain name. Once a reply is received from the DNS server, it is then possible to form a Layer 4 connection or flow to the desired host. There are no connections at Layer 3.
  • Cross MAC and PHY Scheduling is essential in wireless networks because of the time varying nature of wireless channels. By scheduling packet transmission only in favorable channel conditions, which requires the MAC layer to obtain channel state information from the PHY layer, network throughput can be significantly improved and energy waste can be avoided.

Interfaces


Neither the OSI Reference Model nor OSI protocols specify any programming interfaces, other than deliberately abstract service specifications. Protocol specifications precisely define the interfaces between different computers, but the software interfaces inside computers, known as network sockets, are implementation-specific.

For example, Microsoft Windows’ Winsock, and Unix’s Berkeley sockets and System V Transport Layer Interface, are interfaces between applications (layer 5 and above) and the transport (layer 4). NDIS and ODI are interfaces between the media (layer 2) and the network protocol (layer 3).

Interface standards, except for the physical layer to media, are approximate implementations of OSI service specifications.

Examples


Comparison with TCP/IP model


The design of protocols in the TCP/IP model of the Internet does not concern itself with strict hierarchical encapsulation and layering. RFC 3439 contains a section entitled “Layering considered harmful”. TCP/IP does recognize four broad layers of functionality which are derived from the operating scope of their contained protocols: the scope of the software application; the end-to-end transport connection; the internetworking range; and the scope of the direct links to other nodes on the local network.

Despite using a different concept for layering than the OSI model, these layers are often compared with the OSI layering scheme in the following way:

  • The Internet application layer includes the OSI application layer, presentation layer, and most of the session layer.
  • Its end-to-end transport layer includes the graceful close function of the OSI session layer as well as the OSI transport layer.
  • The internetworking layer (Internet layer) is a subset of the OSI network layer.
  • The link layer includes the OSI data link layer and sometimes the physical layers, as well as some protocols of the OSI’s network layer.

These comparisons are based on the original seven-layer protocol model as defined in ISO 7498, rather than refinements in such things as the internal organization of the network layer document.

The presumably strict layering of the OSI model as it is usually described does not present contradictions in TCP/IP, as it is permissible that protocol usage does not follow the hierarchy implied in a layered model. Such examples exist in some routing protocols (for example OSPF), or in the description of tunneling protocols, which provide a link layer for an application, although the tunnel host protocol might well be a transport or even an application-layer protocol in its own right.

Videotelephony

From Wikipedia, the free encyclopedia (Redirected from Videoconferencing)

An upscale Teliris VirtuaLive telepresence system in use (2007).

A 1969 AT&T Picturephone, the result of decades-long R&D at a cost of over US$500M

Videotelephony comprises the technologies for the reception and transmission of audio-video signals by users at different locations, for communication between people in real-time. A videophone is a telephone with a video display, capable of simultaneous video and audio for communication between people in real-time. Videoconferencing implies the use of this technology for a group or organizational meeting rather than for individuals. Telepresence may refer either to a high-quality videotelephony system (where the goal is to create the illusion that remote participants are in the same room) or to meetup technology which goes beyond video into robotics (such as moving around the room or physically manipulating objects). Videoconferencing has also been called “visual collaboration” and is a type of groupware.

At the dawn of its commercial deployment from the 1950s through the 1990s, videotelephony also included “image phones” which would exchange still images between units every few seconds over conventional POTS-type telephone lines, essentially the same as slow scan TV systems. The development of advanced video codecs, more powerful CPUs, and high-bandwidth Internet telecommunication services in the late 1990s allowed videophones to provide high-quality, low-cost colour service between users almost anywhere in the world that the Internet is available.

Although not as widely used in everyday communications as audio-only and text communication, useful applications include sign language transmission for deaf and speech-impaired people, distance education, telemedicine, and overcoming mobility issues. It is also used in commercial and corporate settings to facilitate meetings and conferences, typically between parties that already have established relationships. News media organizations have begun to use desktop technologies like Skype to provide higher-quality audio than the phone network, and video links at much lower cost than sending professional equipment or using a professional studio. More popular videotelephony technologies use the Internet rather than the traditional landline phone network, even accounting for modern digital packetized phone network protocols, and even though videotelephony software commonly runs on smartphones.

By reducing the need to travel to bring people together, this technology also contributes to reductions in carbon emissions, thereby helping to reduce global warming.

Contents
1 History
2 Major categories
2.1 Categories by cost and quality of service
3 Security concerns
4 Adoption
5 Technology
5.1 Components and types
5.2 Videoconferencing modes
5.3 Echo cancellation
5.4 Bandwidth requirements
5.5 Standards
5.6 Call setup
5.7 Conferencing layers
5.8 Multipoint videoconferencing
5.9 Cloud-based video conferencing
6 Impact
6.1 Impact on government and law
6.2 Impact on education
6.3 Impact on medicine and health
6.4 Impact on business
6.5 Impact on media relations
6.6 Use in sign language communications
6.6.1 21st century improvements
6.6.2 Present day usage
7 Descriptive names and terminology
8 Popular culture
9 See also
10 Notes
11 Bibliography
12 Further reading
12.1 Unsorted
12.2 General
12.3 Environmental benefits
12.4 Historical and technical

History


Artist’s conception: 21st century videotelephony imagined in the early 20th century (1910).

Video telephone booth, 1930

The concept of videotelephony was first popularized in the late 1870s in both the United States and Europe, although the basic sciences to permit its very earliest trials would take nearly a half century to be discovered. This was first embodied in the device which came to be known as the video telephone, or videophone, and it evolved from intensive research and experimentation in several telecommunication fields, notably electrical telegraphy, telephony, radio, and television.

Simple analog videophone communication could be established as early as the invention of the television. Such an antecedent usually consisted of two closed-circuit television systems connected via coax cable or radio. An example of that was the German Reich Postzentralamt (post office) video telephone network serving Berlin and several German cities via coaxial cables between 1936 and 1940.

The development of the crucial video technology first started in the latter half of the 1920s in the United Kingdom and the United States, spurred notably by John Logie Baird and AT&T’s Bell Labs. This occurred in part, at least with AT&T, to serve as an adjunct supplementing the use of the telephone. A number of organizations believed that videotelephony would be superior to plain voice communications. However, video technology was to be deployed in analog television broadcasting long before it could become practical—or popular—for videophones.

Multiple user videoconferencing first being demonstrated with Stanford Research Institute’s NLS computer technology (1968).

During the first manned space flights, NASA used two radio-frequency (UHF or VHF) video links, one in each direction. TV channels routinely use this type of videotelephony when reporting from distant locations. The news media were to become regular users of mobile links to satellites using specially equipped trucks, and much later via special satellite videophones in a briefcase.

This technique was very expensive, though, and could not be used for applications such as telemedicine, distance education, and business meetings. Attempts at using normal telephony networks to transmit slow-scan video, such as the first systems developed by AT&T Corporation, first researched in the 1950s, failed mostly due to the poor picture quality and the lack of efficient video compression techniques. The greater 1 MHz bandwidth and 6 Mbit/s bit rate of the AT&T Picturephone in the 1970s also did not achieve commercial success, mostly due to its high cost, but also due to a lack of network effect: with only a few hundred Picturephones in the world, users had extremely few contacts they could actually call, and interoperability with other videophone systems would not exist for decades.

Videotelephony developed in parallel with conventional voice telephone systems from the mid-to-late 20th century. Very expensive videoconferencing systems rapidly evolved throughout the 1980s and 1990s from proprietary equipment, software and network requirements to standards-based technologies that were available for anyone to purchase at a reasonable cost.

Only in the late 20th century with the advent of powerful video codecs combined with high-speed Internet broadband and ISDN service did videotelephony become a practical technology for regular use.

In the 1980s, digital telephony transmission networks became possible, such as ISDN networks, assuring a minimum bit rate (usually 128 kilobits/s) for compressed video and audio transmission. During this time, there was also research into other forms of digital video and audio communication. Many of these technologies, such as the media space, are not as widely used today as videoconferencing but were still an important area of research. The first dedicated systems started to appear as ISDN networks were expanding throughout the world. One of the first commercial videoconferencing systems sold to companies came from PictureTel Corp., which had an initial public offering in November 1984.

In 1984, Concept Communication in the United States replaced the 100-pound, US$100,000 computers then necessary for teleconferencing with a US$12,000 circuit board that fitted into standard personal computers and doubled the video frame rate from 15 to 30 frames per second. The company also secured a patent for a codec for full-motion videoconferencing, first demonstrated at AT&T Bell Labs in 1986.

Global Schoolhouse students communicating via CU-SeeMe, with a video framerate between 3–9 frames per second (1993).

Finally, in the 1990s, Internet Protocol-based videoconferencing became possible, and more efficient video compression technologies were developed, permitting desktop, or personal computer (PC)-based videoconferencing. In 1992, CU-SeeMe was developed at Cornell by Tim Dorcey et al. In 1995, the first public videoconference between North America and Africa took place, linking a technofair in San Francisco with a techno-rave and cyberdeli in Cape Town. At the opening ceremony of the 1998 Winter Olympics in Nagano, Japan, Seiji Ozawa conducted the Ode to Joy from Beethoven’s Ninth Symphony simultaneously across five continents in near-real time.

While videoconferencing technology was initially used primarily within internal corporate communication networks, one of the first community service usages of the technology started in 1992 through a partnership between PictureTel and IBM, which at the time were promoting a jointly developed desktop-based videoconferencing product known as the PCS/1. Over the next 15 years, Project DIANE (Diversified Information and Assistance Network) grew to utilize a variety of videoconferencing platforms to create a multi-state cooperative public service and distance education network consisting of several hundred schools, libraries, science museums, zoos and parks, and many other community-oriented organizations.

In the 2000s, videotelephony was popularized via free Internet services such as Skype and iChat, web plugins and on-line telecommunication programs that promoted low cost, albeit lower-quality, videoconferencing to virtually every location with an Internet connection.

Russian President Dmitry Medvedev attending the Singapore APEC summit, holding a videoconference with Rashid Nurgaliyev via a Tactical MXP, after an arms depot explosion in Russia (2009).

With the rapid improvements and popularity of the Internet, videotelephony has become widespread through the deployment of video-enabled mobile phones, plus videoconferencing and computer webcams which utilize Internet telephony. In the upper echelons of government, business and commerce, telepresence technology, an advanced form of videoconferencing, has helped reduce the need to travel.

In May 2005, the first high definition video conferencing systems, produced by LifeSize Communications, were displayed at the Interop trade show in Las Vegas, Nevada, able to provide video at 30 frames per second with a 1280 by 720 display resolution. Polycom introduced its first high definition video conferencing system to the market in 2006. As of the 2010s, high definition resolution for videoconferencing became a popular feature, with most major suppliers in the videoconferencing market offering it.

Technological developments in the 2010s extended the capabilities of videoconferencing systems beyond the boardroom, to hand-held mobile devices that combine video, audio and on-screen drawing, broadcasting in real time over secure networks, independent of location. Mobile collaboration systems now allow people in previously unreachable locations, such as workers on an off-shore oil rig, to view and discuss issues with colleagues thousands of miles away. Traditional videoconferencing system manufacturers have begun providing mobile applications as well, such as those that allow for live and still image streaming.

The highest-ever video call (other than those from aircraft and spacecraft) took place on May 19, 2013, when British adventurer Daniel Hughes used a smartphone with a BGAN satellite modem to make a videocall to the BBC from the summit of Mount Everest, at 8,848 m above sea level.

Major Categories


A modern Avaya Nortel 1535 IP model broadband videophone (2008), using VoIP.

An older dual-display Polycom videoconferencing system (2008).

A typical low-cost webcam for use with personal computers (2006).

Videotelephony can be categorized by its functionality, that is, its intended purpose, and by its method of transmission.

Videophones were the earliest form of videotelephony, dating back to initial tests in 1927 by AT&T. During the late 1930s, the post offices of several European governments established public videophone services for person-to-person communications utilizing dual-cable circuit telephone transmission technology. In the present day, standalone videophones and UMTS video-enabled mobile phones are usually used on a person-to-person basis.

Videoconferencing saw its earliest use with AT&T’s Picturephone service in the early 1970s. Transmissions were analog over short distances, but converted to digital forms for longer calls, again using telephone transmission technology. Popular corporate videoconferencing systems in the present day have migrated almost exclusively to digital ISDN and IP transmission modes, due to the need to convey the very large amounts of data generated by their cameras and microphones. These systems are often intended for use in conference mode, that is, by many people in several different locations, all of whom can be viewed by every participant at each location.

Telepresence systems are a newer, more advanced subset of videoconferencing systems, meant to allow higher degrees of video and audio fidelity. Such high-end systems are typically deployed in corporate settings.

Mobile collaboration systems are another recent development, combining the use of video, audio, and on-screen drawing capabilities using newest generation hand-held electronic devices broadcasting over secure networks, enabling multi-party conferencing in real-time, independent of location.

A more recent technology encompassing these functions is the TV cam. TV cams enable people to make video “phone” calls using video calling services such as Skype on their TV, without using a PC connection. TV cams are specially designed video cameras that feed images in real time to another TV camera or to other compatible computing devices such as smartphones, tablets and computers.

Webcams are popular, relatively low cost devices which can provide live video and audio streams via personal computers, and can be used with many software clients for both video calls and videoconferencing.

Each of the systems has its own advantages and disadvantages, including video quality, capital cost, degrees of sophistication, transmission capacity requirements, and cost of use.

Categories by cost and quality of service

Web cameras in laptop computers can allow two people to have a “face-to-face” conversation despite being separated by a vast distance. In this example, a coach helps a student prepare for a college interview.

From the least to the most expensive systems:

  • Web camera videophone and videoconferencing systems that serve as complements to personal computers, connected to other participants by computer and VoIP networks – lowest direct cost assuming the users already possess computers at their respective locations. Quality of service can range from low to very high, including high definition video available on the latest model webcams. A related and similar device is a TV camera which is usually small, sits on top of a TV and can connect to it via its HDMI port, similar to how a webcam attaches to a computer via a USB port.
  • Videophones – low to midrange cost. The earliest standalone models operated over either plain POTS telephone lines on the PSTN telephone networks or more expensive ISDN lines, while newer models have largely migrated to Internet protocol line service for higher image resolutions and sound quality. Quality of service for standalone videophones can vary from low to high;
  • Videoconferencing systems – midrange cost, usually utilizing multipoint control units or other bridging services to allow multiple parties on a videoconference call. Quality of service can vary from moderate to high.
  • Telepresence systems – highest capabilities and highest cost. Full high-end systems can involve specially built teleconference rooms to allow expansive views with very high levels of audio and video fidelity, to permit an ‘immersive’ videoconference. When the proper type and capacity transmission lines are provided between facilities, the quality of service reaches state-of-the-art levels.

Security Concerns


Computer security experts have shown that poorly configured or inadequately supervised videoconferencing systems can permit easy “virtual” entry by computer hackers and criminals into company premises and corporate boardrooms.

Adoption


For over a century, futurists have envisioned a future where telephone conversations would take place as actual face-to-face encounters, with video as well as audio. Sometimes it is simply not possible or practical to have face-to-face meetings with two or more people. Sometimes a telephone conversation or conference call is adequate. Other times, e-mail exchanges are adequate. However, videoconferencing adds another possible alternative, and can be considered when:

  • A live conversation is needed
  • Non-verbal (visual) information is an important component of the conversation
  • The parties of the conversation can’t physically come to the same location
  • The expense or time of travel is a consideration

Some observers argue that three outstanding issues have prevented videoconferencing from becoming a widely adopted form of communication, despite the ubiquity of videoconferencing-capable systems.

  • Eye contact: Eye contact plays a large role in conversational turn-taking, perceived attention and intent, and other aspects of group communication. While traditional telephone conversations give no eye contact cues, many videoconferencing systems are arguably worse in that they provide an incorrect impression that the remote interlocutor is avoiding eye contact. Some telepresence systems have cameras located in the screens that reduce the amount of parallax observed by the users. This issue is also being addressed through research that generates a synthetic image with eye contact using stereo reconstruction.
  • Telcordia Technologies, formerly Bell Communications Research, owns a patent for eye-to-eye videoconferencing using rear projection screens with the video camera behind it, evolved from a 1960s U.S. military system that provided videoconferencing services between the White House and various other government and military facilities. This technique eliminates the need for special cameras or image processing.
  • Appearance consciousness: A second psychological problem with videoconferencing is being on camera, with the video stream possibly even being recorded. The burden of presenting an acceptable on-screen appearance is not present in audio-only communication. Early studies by Alphonse Chapanis found that the addition of video actually impaired communication, possibly because of the consciousness of being on camera.
  • Signal latency: Transporting digital signals through many processing and network steps takes time. In a telecommunicated conversation, latency (time lag) greater than about 150–300 ms becomes noticeable and is soon perceived as unnatural and distracting. Therefore, besides a stable, large bandwidth, a small total round-trip time is another major technical requirement for the communication channel in interactive videoconferencing.
  • Bandwidth and quality of service: In some countries it is difficult or expensive to get a high-quality connection that is fast enough for good-quality video conferencing. Technologies such as ADSL have limited upload speeds and cannot upload and download simultaneously at full speed. As Internet speeds increase, higher-quality and high-definition video conferencing will become more readily available.
  • Complexity of systems: Most users are not technical and want a simple interface. In hardware systems, an unplugged cord or a flat battery in a remote control is seen as failure, contributing to perceived unreliability which drives users back to traditional meetings. Successful systems are backed by support teams who can pro-actively support and provide fast assistance when required.
  • Perceived lack of interoperability: not all systems can readily interconnect, for example ISDN and IP systems require a gateway. Popular software solutions cannot easily connect to hardware systems. Some systems use different standards, features and qualities which can require additional configuration when connecting to dissimilar systems. Free software systems circumvent this limitation by making it relatively easy for a single user to communicate over multiple incompatible platforms.
  • Expense of commercial systems: well-designed telepresence systems require specially designed rooms, which can cost hundreds of thousands of dollars to fit out with codecs, integration equipment (such as multipoint control units), high-fidelity sound systems and furniture. Monthly charges may also be required for bridging services and high-capacity broadband service.

These are some of the reasons many systems are often used for internal corporate use only, as they are less likely to result in lost sales. One alternative for companies lacking dedicated facilities is the rental of videoconferencing-equipped meeting rooms in cities around the world. Clients can book rooms and turn up for the meeting, with all technical aspects prearranged and support readily available if needed. The issue of eye contact may be solved with advancing technology, including smartphones which have the screen and camera in essentially the same place. The ubiquity of smartphones, tablet computers, and computers with built-in audio and webcams in developed countries obviates the need to buy expensive hardware.

Technology


Components and Types


Dual display: An older Polycom VSX 7000 system and camera used for videoconferencing, with two displays for simultaneous broadcast from separate locations (2008).

Various components and the camera of a LifeSize Communications Room 220 high definition multipoint system (2010).

A video conference meeting facilitated by Google Hangouts.

The core technology used in a videotelephony system is digital compression of audio and video streams in real time. The hardware or software that performs compression is called a codec (coder/decoder). Compression rates of up to 1:500 can be achieved. The resulting digital stream of 1s and 0s is subdivided into labeled packets, which are then transmitted through a digital network of some kind (usually ISDN or IP).
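
To make these figures concrete, the sketch below (TypeScript, with purely illustrative numbers) works out what a 1:500 compression ratio means for a raw 1080p stream, and shows the kind of sequence-numbered packet labelling described above. The packet size and field names are hypothetical, not those of any particular wire protocol.

    // Illustrative arithmetic: raw vs. compressed bitrate for a 1080p30 stream.
    const width = 1920, height = 1080, bitsPerPixel = 24, fps = 30;
    const rawBps = width * height * bitsPerPixel * fps; // ~1.49 Gbit/s uncompressed
    const compressedBps = rawBps / 500;                 // "up to 1:500" -> ~3 Mbit/s
    console.log(`compressed stream: ${(compressedBps / 1e6).toFixed(1)} Mbit/s`);

    // Subdivide one encoded frame into labeled packets of at most `mtu` bytes
    // (hypothetical packet format, for illustration only).
    interface LabeledPacket { seq: number; timestamp: number; payload: Uint8Array; }

    function packetize(frame: Uint8Array, timestamp: number, mtu = 1200): LabeledPacket[] {
      const packets: LabeledPacket[] = [];
      for (let offset = 0, seq = 0; offset < frame.length; offset += mtu, seq++) {
        packets.push({ seq, timestamp, payload: frame.subarray(offset, offset + mtu) });
      }
      return packets;
    }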

The other components required for a videoconferencing system include:

  • Video input: (PTZ / 360° / Fisheye) video camera or webcam
  • Video output: computer monitor, television or projector
  • Audio input: microphones, CD/DVD player, cassette player, or any other source of preamplified audio
  • Audio output: usually loudspeakers associated with the display device or telephone
  • Data transfer: analog or digital telephone network, LAN or Internet
  • Computer: a data processing unit that ties together the other components, does the compressing and decompressing, and initiates and maintains the data linkage via the network.

There are basically three kinds of videoconferencing and videophone systems:

  1. Dedicated systems have all required components packaged into a single piece of equipment, usually a console with a high quality remote controlled video camera. These cameras can be controlled at a distance to pan left and right, tilt up and down, and zoom. They became known as PTZ cameras. The console contains all electrical interfaces, the control computer, and the software or hardware-based codec. Omnidirectional microphones are connected to the console, as well as a TV monitor with loudspeakers and/or a video projector. There are several types of dedicated videoconferencing devices:
    1. Large group videoconferencing systems are non-portable, large, more expensive devices used for large rooms and auditoriums.
    2. Small group videoconferencing systems are non-portable or portable, smaller, less expensive devices used for small meeting rooms.
    3. Individual videoconferencing systems are usually portable devices, meant for single users, with fixed cameras, microphones and loudspeakers integrated into the console.
  2. Desktop systems are add-ons (hardware boards or software codecs) to normal PCs and laptops, transforming them into videoconferencing devices. A range of different cameras and microphones can be used with the add-on, which contains the necessary codec and transmission interfaces. Most desktop systems work with the H.323 standard. Videoconferences carried out via dispersed PCs are also known as e-meetings. These can be nonstandard (Microsoft Lync, Skype for Business, Google Hangouts, or Yahoo Messenger) or standards-based (Cisco Jabber).
  3. WebRTC platforms are videoconferencing solutions that are not installed as a software application but are available through a standard web browser. Solutions such as Adobe Connect and Cisco WebEx can be accessed by going to a URL sent by the meeting organizer, and various degrees of security can be attached to the virtual “room”. Often the user will be required to download a piece of software, called an “add-in”, to enable the browser to access the local camera and microphone and establish a connection to the meeting. WebRTC technology does not require any software or add-in installation; instead, a WebRTC-compliant web browser itself acts as a client to facilitate one-to-one and one-to-many videoconferencing calls. Several enhanced flavours of WebRTC technology are provided by third-party vendors; a minimal sketch of the underlying browser API follows this list.
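
As a rough sketch of the browser-as-client model described in item 3 (not any particular vendor’s implementation), the following TypeScript uses the standard WebRTC APIs, getUserMedia and RTCPeerConnection, to capture local media and produce an SDP offer. WebRTC deliberately leaves signaling to the application, so sendToPeer below is a placeholder assumption for whatever channel (for example, a WebSocket) carries messages to the remote party.

    // Minimal one-to-one WebRTC call setup (browser environment assumed).
    // sendToPeer() is a hypothetical signaling channel, e.g. a WebSocket wrapper.
    declare function sendToPeer(msg: object): void;

    async function startCall(): Promise<RTCPeerConnection> {
      const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.example.org" }] });

      // Capture camera and microphone, and attach the tracks to the connection.
      const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
      stream.getTracks().forEach(track => pc.addTrack(track, stream));

      // Trickle ICE candidates to the remote side as they are discovered.
      pc.onicecandidate = e => { if (e.candidate) sendToPeer({ candidate: e.candidate }); };

      // Create and send the SDP offer; the remote answer is applied on arrival
      // via pc.setRemoteDescription() (not shown).
      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);
      sendToPeer({ sdp: pc.localDescription });
      return pc;
    }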

Videoconferencing Modes

Videoconferencing systems use two methods to determine which video feed or feeds to display.

Continuous Presence simply displays all participants at the same time, usually with the exception that the viewer either does not see their own feed, or sees their own feed in miniature.

Voice-Activated Switch selectively chooses a feed to display at each endpoint, with the goal of showing the person who is currently speaking. This is done by choosing the feed (other than the viewer’s own) which has the loudest audio input, perhaps with some filtering to avoid switching on very short-lived volume spikes. Often, if no remote parties are currently speaking, the feed with the last speaker remains on the screen.
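
A minimal sketch of that selection logic, assuming a per-feed audio level measurement is already available; the speech threshold and hold time are illustrative values, not taken from any standard.

    // Voice-activated switch: show the loudest remote feed, with a hold time so
    // short-lived volume spikes do not cause rapid switching.
    interface Feed { id: string; audioLevel: number; } // measured level in [0, 1]

    const SPEECH_THRESHOLD = 0.1; // illustrative
    const HOLD_MS = 1500;         // illustrative minimum time between switches
    let current: string | null = null;
    let lastSwitchMs = 0;

    function selectFeed(remoteFeeds: Feed[], nowMs: number): string | null {
      if (remoteFeeds.length === 0) return current;
      const loudest = remoteFeeds.reduce((a, b) => (b.audioLevel > a.audioLevel ? b : a));
      const speaking = loudest.audioLevel > SPEECH_THRESHOLD;
      if (speaking && loudest.id !== current && nowMs - lastSwitchMs > HOLD_MS) {
        current = loudest.id; // switch to the new loudest speaker
        lastSwitchMs = nowMs;
      }
      return current; // if nobody is speaking, the last speaker stays on screen
    }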

Echo Cancellation

Acoustic echo cancellation (AEC) is a processing algorithm that uses knowledge of the audio output to monitor the audio input and filter out sounds that echo back after some time delay. If unattended, these echoes can be re-amplified several times, leading to problems including:

  1. The remote party hearing their own voice coming back at them (usually significantly delayed)
  2. Strong reverberation, which makes the voice channel useless
  3. Howling created by feedback

Echo cancellation is a processor-intensive task that usually works over a narrow range of sound delays.
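
In outline, an echo canceller keeps a short history of what was sent to the loudspeaker, adaptively estimates how much of it re-enters the microphone, and subtracts that estimate from the input. The sketch below uses the classic normalized least-mean-squares (NLMS) adaptation, a common textbook approach rather than any particular product’s algorithm; the tap count and step size are illustrative.

    // Sketch of acoustic echo cancellation with an NLMS adaptive filter.
    class EchoCanceller {
      private w: Float64Array; // adaptive estimate of the echo path (filter taps)
      private x: Float64Array; // delay line of recent far-end samples, newest first
      constructor(private taps = 256, private mu = 0.5) {
        this.w = new Float64Array(taps);
        this.x = new Float64Array(taps);
      }
      // farEnd: sample just played to the loudspeaker; mic: sample captured by
      // the microphone. Returns the mic sample with the estimated echo removed.
      process(farEnd: number, mic: number): number {
        this.x.copyWithin(1, 0); // shift the delay line by one sample
        this.x[0] = farEnd;
        let echoEstimate = 0, energy = 1e-6;
        for (let i = 0; i < this.taps; i++) {
          echoEstimate += this.w[i] * this.x[i];
          energy += this.x[i] * this.x[i];
        }
        const err = mic - echoEstimate;        // residual after cancellation
        const step = (this.mu * err) / energy; // normalized adaptation step
        for (let i = 0; i < this.taps; i++) this.w[i] += step * this.x[i];
        return err;
      }
    }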

Bandwidth Requirements

Deutsche Telekom T-View 100 ISDN type videophone meant for home offices and small businesses with a lens cover which can be rotated upward to assure privacy when needed (2007).

Videophones have historically employed a variety of transmission and reception bandwidths, which can be understood as data transmission speeds. The lower the transmission/reception bandwidth, the lower the data transfer rate, resulting in a more limited and poorer image quality. Data transfer rates and live video image quality are related, but are also subject to other factors such as data compression techniques. Some early videophones employed very low data transmission rates with a resulting sketchy video quality.

Broadband bandwidth is often called “high-speed”, because it usually has a high rate of data transmission. In general, any connection of 256 kbit/s (0.256 Mbit/s) or greater is considered broadband Internet. The International Telecommunication Union Standardization Sector (ITU-T) recommendation I.113 has defined broadband as a transmission capacity of 1.5 to 2 Mbit/s. The United States Federal Communications Commission definition of broadband is 25 Mbit/s.

Currently, adequate video for some purposes becomes possible at data rates lower than the ITU-T broadband definition, with rates of 768 kbit/s and 384 kbit/s used for some video conferencing applications, and rates as low as 100 kbit/s used for videophones using H.264/MPEG-4 AVC compression protocols. The newer MPEG-4 video and audio compression format can deliver high-quality video at 2 Mbit/s, which is at the low end of cable modem and ADSL broadband performance.
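
As a small worked illustration, the sketch below simply encodes the thresholds quoted in this section and classifies the videotelephony rates mentioned above against them; the constants restate the text rather than adding any new definition.

    // Classify a connection speed against the broadband definitions quoted above.
    function classify(kbps: number): string {
      if (kbps >= 25_000) return "broadband (FCC definition, 25 Mbit/s)";
      if (kbps >= 1_500) return "broadband (ITU-T I.113, 1.5-2 Mbit/s)";
      if (kbps >= 256) return "broadband (common 256 kbit/s usage)";
      return "narrowband";
    }

    // Rates cited in this section for videophones and videoconferencing:
    for (const rate of [100, 384, 768, 2_000]) {
      console.log(`${rate} kbit/s -> ${classify(rate)}`);
    }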

Standards

The Tandberg E20 is an example of a SIP-only device. Such devices need to route calls through a Video Communication Server to be able to reach H.323 systems, a process known as “interworking” (2009).

The International Telecommunication Union (ITU) has three umbrellas of standards for videoconferencing:

  • ITU H.320 is known as the standard for public switched telephone networks (PSTN) or videoconferencing over integrated services digital networks. While still prevalent in Europe, ISDN was never widely adopted in the United States and Canada.
  • ITU H.264 Scalable Video Coding (SVC) is a compression standard that enables videoconferencing systems to achieve highly error-resilient Internet Protocol (IP) video transmissions over the public Internet without quality-of-service enhanced lines. This standard has enabled wide-scale deployment of high-definition desktop videoconferencing and made possible new architectures which reduce latency between the transmitting sources and receivers, resulting in more fluid communication without pauses. In addition, an attractive factor for IP videoconferencing is that it is easier to set up for use along with web conferencing and data collaboration. These combined technologies enable users to have a richer multimedia environment for live meetings, collaboration and presentations.
  • ITU V.80: videoconferencing made generally interoperable with the H.324 standard for point-to-point videotelephony over regular plain old telephone service (POTS) phone lines.

The Unified Communications Interoperability Forum (UCIF), a non-profit alliance between communications vendors, launched in May 2010. The organization’s vision is to maximize the interoperability of UC based on existing standards. Founding members of UCIF include HP, Microsoft, Polycom, Logitech/LifeSize Communications and Juniper Networks.

Call Setup

Videoconferencing in the late 20th century was limited to the H.323 protocol (Cisco’s SCCP implementation was a notable exception), but newer videophones often use SIP, which is often easier to set up in home networking environments. SIP is a text-based protocol, incorporating many elements of the Hypertext Transfer Protocol (HTTP) and the Simple Mail Transfer Protocol (SMTP). H.323 is still used, but more commonly for business videoconferencing, while SIP is more commonly used in personal consumer videophones. A number of call-setup methods based on instant messaging protocols, such as Skype, also now provide video.
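
SIP’s text-based, HTTP-like character is easiest to see in the shape of its call-setup request. The snippet below assembles a skeletal INVITE along the lines of the examples in RFC 3261; the addresses, tags and branch value are placeholders, and a real request would carry an SDP body (with a matching Content-Length) describing the offered media.

    // Skeleton of a SIP INVITE request (all identifiers are placeholders).
    const invite = [
      "INVITE sip:bob@example.com SIP/2.0",
      "Via: SIP/2.0/UDP alice-pc.example.com;branch=z9hG4bK776asdhds",
      "Max-Forwards: 70",
      "From: Alice <sip:alice@example.com>;tag=1928301774",
      "To: Bob <sip:bob@example.com>",
      "Call-ID: a84b4c76e66710@alice-pc.example.com",
      "CSeq: 314159 INVITE",
      "Contact: <sip:alice@alice-pc.example.com>",
      "Content-Length: 0", // a real INVITE would carry an SDP media description
      "", // blank line ends the header section
    ].join("\r\n");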

Another protocol used by videophones is H.324, which mixes call setup and video compression. Videophones that work on regular phone lines typically use H.324, but the bandwidth is limited by the modem to around 33 kbit/s, limiting the video quality and frame rate. A slightly modified version of H.324 called 3G-324M defined by 3GPP is also used by some cellphones that allow video calls, typically for use only in UMTS networks.

There is also the H.320 standard, which specified technical requirements for narrow-band visual telephone systems and terminal equipment, typically for videoconferencing and videophone services. It applied mostly to dedicated circuit-switched network (point-to-point) connections of moderate or high bandwidth, such as the medium-bandwidth ISDN digital phone protocol or fractional high-bandwidth T1 lines. Modern products based on the H.320 standard usually also support H.323.

The IAX2 protocol also supports videophone calls natively, using the protocol’s own capabilities to transport alternate media streams. A few hobbyists obtained the Nortel 1535 Color SIP Videophone cheaply in 2010 as surplus after Nortel’s bankruptcy and deployed the sets on the Asterisk (PBX) platform. While additional software is required to patch together multiple video feeds for conference calls or convert between dissimilar video standards, SIP calls between two identical handsets within the same PBX were relatively straightforward.

Conferencing Layers

The components within a videoconferencing system can be divided up into several different layers: User Interface, Conference Control, Control or Signaling Plane, and Media Plane.

Videoconferencing user interfaces (VUI) can be either graphical or voice-responsive; graphical interfaces are normally encountered on computers. User interfaces for conferencing have a number of different uses: they can be used for scheduling, setup, and making a videocall. Through the user interface, the administrator is able to control the other three layers of the system.

Conference Control performs resource allocation, management and routing. This layer along with the User Interface creates meetings (scheduled or unscheduled) or adds and removes participants from a conference.

Control (Signaling) Plane contains the stacks that signal different endpoints to create a call and/or a conference. Signals can be, but aren’t limited to, H.323 and Session Initiation Protocol (SIP) Protocols. These signals control incoming and outgoing connections as well as session parameters.

The Media Plane controls the audio and video mixing and streaming. This layer manages the Real-time Transport Protocol (RTP), User Datagram Protocol (UDP) and Real-time Transport Control Protocol (RTCP). RTP over UDP normally carries information such as the payload type (which identifies the codec), the frame rate, the video size and many others. RTCP, on the other hand, acts as a quality-control protocol for detecting errors during streaming.
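
To make the media-plane description concrete, the sketch below reads the fixed header fields that RTP prepends to every media packet, following the standard 12-byte layout defined in RFC 3550; it is a parsing sketch only, not a full protocol stack.

    // Parse the 12-byte fixed RTP header (RFC 3550), network byte order.
    function parseRtpHeader(buf: Uint8Array) {
      const view = new DataView(buf.buffer, buf.byteOffset, buf.byteLength);
      return {
        version: buf[0] >> 6,              // always 2 for current RTP
        padding: (buf[0] >> 5) & 1,
        marker: buf[1] >> 7,               // e.g. marks the last packet of a frame
        payloadType: buf[1] & 0x7f,        // identifies the codec in use
        sequenceNumber: view.getUint16(2), // for loss detection and reordering
        timestamp: view.getUint32(4),      // media clock (e.g. 90 kHz for video)
        ssrc: view.getUint32(8),           // identifies the media source
      };
    }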

Multipoint Videoconferencing

Simultaneous videoconferencing among three or more remote points is possible in a hardware-based system by means of a Multipoint Control Unit (MCU). This is a bridge that interconnects calls from several sources (in a similar way to the audio conference call). All parties call the MCU, or the MCU can also call the parties which are going to participate, in sequence. There are MCU bridges for IP and ISDN-based videoconferencing. There are MCUs which are pure software, and others which are a combination of hardware and software. An MCU is characterised according to the number of simultaneous calls it can handle, its ability to conduct transposing of data rates and protocols, and features such as Continuous Presence, in which multiple parties can be seen on-screen at once. MCUs can be stand-alone hardware devices, or they can be embedded into dedicated videoconferencing units.

The MCU consists of two logical components:

  1. A single multipoint controller (MC), and
  2. Multipoint Processors (MP), sometimes referred to as the mixer.

The MC controls the conferencing while it is active on the signaling plane, which is simply where the system manages conferencing creation, endpoint signaling and in-conferencing controls. This component negotiates parameters with every endpoint in the network and controls conferencing resources. While the MC controls resources and signaling negotiations, the MP operates on the media plane and receives media from each endpoint. The MP generates output streams from each endpoint and redirects the information to other endpoints in the conference.

Some systems are capable of multipoint conferencing with no MCU, stand-alone, embedded or otherwise. These use a standards-based H.323 technique known as “decentralized multipoint”, where each station in a multipoint call exchanges video and audio directly with the other stations with no central “manager” or other bottleneck. The advantages of this technique are that the video and audio will generally be of higher quality because they don’t have to be relayed through a central point. Also, users can make ad-hoc multipoint calls without any concern for the availability or control of an MCU. This added convenience and quality comes at the expense of some increased network bandwidth, because every station must transmit to every other station directly.
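
The bandwidth trade-off can be made concrete by counting streams: with an MCU, each station uploads a single stream to the bridge, while in a decentralized (full-mesh) call each station must transmit to every other station directly. A minimal sketch, with the per-stream rate as an arbitrary illustrative figure:

    // Compare per-station upload load for MCU-based vs. decentralized multipoint.
    function uploadStreams(participants: number, decentralized: boolean): number {
      // MCU: one stream up to the bridge; full mesh: one stream to each peer.
      return decentralized ? participants - 1 : 1;
    }

    const n = 6, streamKbps = 768; // illustrative call size and per-stream rate
    console.log(`MCU upload:  ${uploadStreams(n, false) * streamKbps} kbit/s`);
    console.log(`Mesh upload: ${uploadStreams(n, true) * streamKbps} kbit/s`);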

Cloud-based Video Conferencing

Cloud-based video conferencing can be used without the hardware generally required by other video conferencing systems, and can be designed for use by SMEs or larger international companies like Facebook. Cloud-based systems can handle either 2D or 3D video broadcasting. They can also handle mobile calls, VoIP, and other forms of video calling, and can come with a video recording function to archive past meetings.

Impact


A mobile video call between Sweden and Singapore made on a Sony-Ericsson K800 (2007)

High speed Internet connectivity has become more widely available at a reasonable cost and the cost of video capture and display technology has decreased. Consequently, personal videoconferencing systems based on a webcam, personal computer system, software compression and broadband Internet connectivity have become affordable to the general public. Also, the hardware used for this technology has continued to improve in quality, and prices have dropped dramatically. The availability of freeware (often as part of chat programs) has made software based videoconferencing accessible to many.

The widest deployment of video telephony now occurs in mobile phones. Nearly all mobile phones supporting UMTS networks can work as videophones using their internal cameras, and are able to make video calls wirelessly to other UMTS users in the same country or internationally. As of the second quarter of 2007, there were over 131 million UMTS users (and hence potential videophone users) on 134 networks in 59 countries. Mobile phones can also use broadband wireless Internet, whether through the cell phone network or over a local Wi-Fi connection, along with software-based videophone apps to make calls to any video-capable Internet user, whether mobile or fixed.

Deaf, hard-of-hearing and mute individuals have a particular interest in the development of affordable high-quality videotelephony as a means of communicating with each other in sign language. Unlike Video Relay Service, which is intended to support communication between a caller using sign language and another party using spoken language, videoconferencing can be used directly between two deaf signers.

Videophones are increasingly used in the provision of telemedicine to the elderly and to those in remote locations, where the ease and convenience of quickly obtaining diagnostic and consultative medical services are readily apparent. In one instance quoted in 2006: “A nurse-led clinic at Letham has received positive feedback on a trial of a video-link which allowed 60 pensioners to be assessed by medics without travelling to a doctor’s office or medical clinic.” A further improvement in telemedical services has been the development of new technology incorporated into special videophones to permit remote diagnostic services, such as blood sugar level, blood pressure and vital signs monitoring. Such units are capable of relaying both regular audio-video plus medical data over either standard (POTS) telephone or newer broadband lines.

A Tandberg T3 high resolution telepresence room in use (2008).

Videotelephony has also been deployed in corporate teleconferencing, also available through the use of public access videoconferencing rooms. A higher level of videoconferencing that employs advanced telecommunication technologies and high-resolution displays is called telepresence.

Today the principles, if not the precise mechanisms, of a videophone are employed by many users worldwide in the form of webcam videocalls using personal computers, with inexpensive webcams, microphones and free videocalling Web client programs. Thus an activity that was disappointing as a separate service has found a niche as a minor feature in software products intended for other purposes.

According to Juniper Research, the number of smartphone videophone users was projected to reach 29 million globally by 2015.

A study conducted by Pew Research in 2010 revealed that 7% of Americans had made a mobile video call.

Impact on Government and Law

In the United States, videoconferencing has allowed testimony to be used for an individual who is unable to attend, prefers not to attend, or would be subjected to severe psychological stress by attending the physical legal setting. However, there is controversy over the use of testimony by foreign or unavailable witnesses via video transmission, regarding possible violation of the Confrontation Clause of the Sixth Amendment of the U.S. Constitution.

In a military investigation in the state of North Carolina, Afghan witnesses have testified via videoconferencing.

In Hall County, Georgia, videoconferencing systems are used for initial court appearances. The systems link jails with court rooms, reducing the expenses and security risks of transporting prisoners to the courtroom.

The U.S. Social Security Administration (SSA), which oversees the world’s largest administrative judicial system under its Office of Disability Adjudication and Review (ODAR), has made extensive use of videoconferencing to conduct hearings at remote locations. In fiscal year (FY) 2009, the SSA conducted 86,320 videoconferenced hearings, a 55% increase over FY 2008. In August 2010, the SSA opened its fifth and largest videoconferencing-only National Hearing Center (NHC), in St. Louis, Missouri. This continues the SSA’s effort to use video hearings as a means to clear its substantial hearing backlog. Since 2007, the SSA has also established NHCs in Albuquerque, New Mexico; Baltimore, Maryland; Falls Church, Virginia; and Chicago, Illinois.

Impact on Education

Indonesian and U.S. students participating in an educational videoconference (2010).
See also: Distance education

Videoconferencing provides students with the opportunity to learn by participating in two-way communication forums. Furthermore, teachers and lecturers worldwide can be brought to remote or otherwise isolated educational facilities. Students from diverse communities and backgrounds can come together to learn about one another, although language barriers will continue to persist. Such students are able to explore, communicate, analyze and share information and ideas with one another. Through videoconferencing, students can visit other parts of the world to speak with their peers, and visit museums and educational facilities. Such virtual field trips can provide enriched learning opportunities to students, especially those in geographically isolated locations, and to the economically disadvantaged. Small schools can use these technologies to pool resources and provide courses, such as in foreign languages, which could not otherwise be offered.

A few examples of benefits that videoconferencing can provide in campus environments include:

  • faculty members keeping in touch with classes while attending conferences;
  • guest lecturers brought into classes from other institutions;
  • researchers collaborating with colleagues at other institutions on a regular basis without loss of time due to travel;
  • schools with multiple campuses collaborating and sharing professors;
  • schools from two separate nations engaging in cross-cultural exchanges;
  • faculty members participating in thesis defenses at other institutions;
  • administrators on tight schedules collaborating on budget preparation from different parts of campus;
  • faculty committees auditioning scholarship candidates;
  • researchers answering questions about grant proposals from agencies or review committees;
  • student interviews with employers in other cities, and
  • teleseminars.

Impact on Medicine and Health

Videoconferencing is a highly useful technology for real-time telemedicine and telenursing applications, such as diagnosis, consulting, transmission of medical images, etc. With videoconferencing, patients may contact nurses and physicians in emergency or routine situations; physicians and other paramedical professionals can discuss cases across large distances. Rural areas can use this technology for diagnostic purposes, thus saving lives and making more efficient use of health care money. For example, a rural medical center in Ohio, United States, used videoconferencing to successfully cut the number of transfers of sick infants to a hospital 70 miles (110 km) away. This had previously cost nearly $10,000 per transfer.

Special peripherals such as microscopes fitted with digital cameras, videoendoscopes, medical ultrasound imaging devices, otoscopes, etc., can be used in conjunction with videoconferencing equipment to transmit data about a patient. Recent developments in mobile collaboration on hand-held mobile devices have also extended video-conferencing capabilities to locations previously unreachable, such as a remote community, long-term care facility, or a patient’s home.

Impact on Business

Videoconferencing can enable individuals in distant locations to participate in meetings on short notice, with time and money savings. Technology such as VoIP can be used in conjunction with desktop videoconferencing to enable low-cost face-to-face business meetings without leaving the desk, especially for businesses with widespread offices. The technology is also used for telecommuting, in which employees work from home. One research report based on a sampling of 1,800 corporate employees showed that, as of June 2010, 54% of the respondents with access to video conferencing used it “all of the time” or “frequently”.

Intel Corporation has used videoconferencing to reduce both the costs and the environmental impacts of its business operations.

Videoconferencing is also currently being introduced on online networking websites, in order to help businesses form profitable relationships quickly and efficiently without leaving their place of work. This has been leveraged by banks to connect busy banking professionals with customers in various locations using video banking technology.

Videoconferencing on hand-held mobile devices (mobile collaboration technology) is being used in industries such as manufacturing, energy, healthcare, insurance, government and public safety. Live, visual interaction removes traditional restrictions of distance and time, often in locations previously unreachable, such as a manufacturing plant floor a continent away.

In the increasingly globalized film industry, videoconferencing has become useful as a method by which creative talent in many different locations can collaborate closely on the complex details of film production. For example, for the 2013 award-winning animated film Frozen, Burbank-based Walt Disney Animation Studios hired the New York City-based husband-and-wife songwriting team of Robert Lopez and Kristen Anderson-Lopez to write the songs, which required two-hour-long transcontinental videoconferences nearly every weekday for about 14 months.

With the development of lower-cost endpoints, cloud-based infrastructure and technology trends such as WebRTC, videoconferencing is moving from a purely business-to-business offering to one that serves both businesses and consumers.

Although videoconferencing has frequently proven its value, research has shown that some non-managerial employees prefer not to use it due to several factors, including anxiety. Some such anxieties can be avoided if managers use the technology as part of the normal course of business. Remote workers can also adopt certain behaviors and best practices to stay connected with their co-workers and company.

Researchers also find that attendees of business and medical videoconferences must work harder to interpret information delivered during a conference than they would if they attended face-to-face. They recommend that those coordinating videoconferences make adjustments to their conferencing procedures and equipment.

Impact on Media Relations

The concept of press videoconferencing was developed in October 2007 by the PanAfrican Press Association (APPA), a Paris-based non-governmental organization, to allow African journalists to participate in international press conferences on developmental and good governance issues.

Press videoconferencing permits international press conferences via videoconferencing over the Internet. Journalists can participate in an international press conference from any location, without leaving their offices or countries. They need only be seated at a computer connected to the Internet in order to ask their questions to the speaker.

In 2004, the International Monetary Fund introduced the Online Media Briefing Center, a password-protected site available only to professional journalists. The site enables the IMF to present press briefings globally and facilitates direct questions to briefers from the press. The site has been copied by other international organizations since its inception. More than 4,000 journalists worldwide are currently registered with the IMF.

Use in sign language communications

Video Interpreter sign used at VRS/VRI service locations

One of the first demonstrations of the ability for telecommunications to help sign language users communicate with each other occurred when AT&T’s videophone (trademarked as the “Picturephone”) was introduced to the public at the 1964 New York World’s Fair: two deaf users were able to communicate freely with each other between the fair and another city. Various universities and other organizations, including British Telecom’s Martlesham facility, have also conducted extensive research on signing via videotelephony.

The use of sign language via videotelephony was hampered for many years due to the difficulty of its use over slow analogue copper phone lines, coupled with the high cost of better quality ISDN (data) phone lines. Those factors largely disappeared with the introduction of more efficient and powerful video codecs and the advent of lower cost high-speed ISDN data and IP (Internet) services in the 1990s.

21st century improvements

Significant improvements in video call quality of service for the deaf occurred in the United States in 2003 when Sorenson Media Inc. (formerly Sorenson Vision Inc.), a video compression software coding company, developed its VP-100 model stand-alone videophone specifically for the deaf community. It was designed to output its video to the user’s television in order to lower the cost of acquisition, and to offer remote control and a powerful video compression codec for unequaled video quality and ease of use with video relay services. Favourable reviews quickly led to its popular usage at educational facilities for the deaf, and from there to the greater deaf community.

Coupled with similar high-quality videophones introduced by other electronics manufacturers, the availability of high speed Internet, and sponsored video relay services authorized by the U.S. Federal Communications Commission in 2002, VRS services for the deaf underwent rapid growth in that country.

Present day usage

A deaf or hard-of-hearing person using a Video Relay Service at his workplace to communicate with a hearing person in London (2007).

Using such video equipment in the present day, the deaf, hard-of-hearing and speech-impaired can communicate between themselves and with hearing individuals using sign language. The United States and several other countries compensate companies to provide “Video Relay Services” (VRS). Telecommunication equipment can be used to talk to others via a sign language interpreter, who uses a conventional telephone at the same time to communicate with the deaf person’s party. Video equipment is also used to do on-site sign language translation via Video Remote Interpreting (VRI). The relative low cost and widespread availability of 3G mobile phone technology with video calling capabilities have given deaf and speech-impaired users a greater ability to communicate with the same ease as others. Some wireless operators have even started free sign language gateways.

Sign language interpretation services via VRS or VRI are useful in the present day where one of the parties is deaf, hard-of-hearing or speech-impaired (mute). In such cases the interpretation flow is normally within the same principal language, such as French Sign Language (LSF) to spoken French, Spanish Sign Language (LSE) to spoken Spanish, British Sign Language (BSL) to spoken English, and American Sign Language (ASL) also to spoken English (since BSL and ASL are completely distinct from each other), and so on.

Multilingual sign language interpreters, who can also translate across principal languages (such as to and from SSL, to and from spoken English), are available as well, albeit less frequently. Such activities involve considerable effort on the part of the translator, since sign languages are distinct natural languages with their own construction, semantics and syntax, different from the aural version of the same principal language.

With video interpreting, sign language interpreters work remotely with live video and audio feeds, so that the interpreter can see the deaf or mute party, and converse with the hearing party, and vice versa. Much like telephone interpreting, video interpreting can be used for situations in which no on-site interpreters are available. However, video interpreting cannot be used for situations in which all parties are speaking via telephone alone. VRS and VRI interpretation requires all parties to have the necessary equipment. Some advanced equipment enables interpreters to control the video camera remotely, in order to zoom in and out or to point the camera toward the party that is signing.

Descriptive Names and Terminology


The name “videophone” never became as standardized as its earlier counterpart “telephone”, resulting in a variety of names and terms being used worldwide, and even within the same region or country. Videophones are also known as “video phones”, “videotelephones” (or “video telephones”) and often by an early trademarked name Picturephone, which was the world’s first commercial videophone produced in volume. The compound name “videophone” slowly entered into general use after 1950, although “video telephone” likely entered the lexicon earlier after video was coined in 1935.

Videophone calls (also: videocalls and video chat, as well as Skype and Skyping in verb form) differ from videoconferencing in that they expect to serve individuals, not groups. However, that distinction has become increasingly blurred with technology improvements such as increased bandwidth and sophisticated software clients that can allow for multiple parties on a call. In general everyday usage, the term videoconferencing is now frequently used instead of videocall for point-to-point calls between two units. Both videophone calls and videoconferencing are also now commonly referred to as a video link.

A videoconference system is generally higher cost than a videophone and deploys greater capabilities. A videoconference (also known as a videoteleconference) allows two or more locations to communicate via live, simultaneous two-way video and audio transmissions. This is often accomplished by the use of a multipoint control unit (a centralized distribution and call management system) or by a similar non-centralized multipoint capability embedded in each videoconferencing unit. Again, technology improvements have circumvented traditional definitions by allowing multiple party videoconferencing via web-based applications.

A telepresence system is a high-end videoconferencing system and service usually employed by enterprise-level corporate offices. Telepresence conference rooms use state-of-the-art room designs, video cameras, displays, sound systems and processors, coupled with high-to-very-high capacity bandwidth transmissions.

Typical use of the various technologies described above include calling or conferencing on a one-on-one, one-to-many or many-to-many basis for personal, business, educational, deaf Video Relay Service and tele-medical, diagnostic and rehabilitative use or services. New services utilizing videocalling and videoconferencing, such as teachers and psychologists conducting online sessions, personal videocalls to inmates incarcerated in penitentiaries, and videoconferencing to resolve airline engineering issues at maintenance facilities, are being created or evolving on an ongoing basis.

Other names for videophone that have been used in English are: Viewphone (the British Telecom equivalent to AT&T’s Picturephone), and visiophone, a common French translation that has also crept into limited English usage, as well as over twenty less common names and expressions. Latin-based translations of videophone in other languages include vidéophone (French), Bildtelefon (German), videotelefono (Italian), both videófono and videoteléfono (Spanish), both beeldtelefoon and videofoon (Dutch), and videofonía (Catalan).

A telepresence robot (also telerobotics) is a robotically controlled and motorized video conferencing display to help give a better sense of remote physical presence for communication and collaboration in an office, home, school, etc. when one cannot be there in person. The robotic avatar device can move about and look around at the command of the remote person it represents.

Popular Culture


The scene of Dr. Heywood Floyd in 2001: A Space Odyssey placing a videocall to his daughter on Earth helped popularize the videophone just as AT&T began to commercialize its Picturephone (1968).

In science fiction literature, names commonly associated with videophones include viewphone, vidphone, vidfone, and visiphone. In many science fiction movies and TV programs that are set in the future, videophones were used as a primary method of communication. One of the first movies where a videophone was used was Fritz Lang’s Metropolis (1927).

Other notable examples of videophones in popular culture include an iconic scene from 2001: A Space Odyssey set on Space Station V. The movie was released shortly before AT&T began its efforts to commercialize its Picturephone Mod II service in several cities and depicts a videocall to Earth using an advanced AT&T videophone—which it predicts will cost $1.70 for a two-minute call in 2001 (a fraction of the company’s real rates on Earth in 1968). Film director Stanley Kubrick strove for scientific accuracy, relying on interviews with scientists and engineers at Bell Labs in the United States. Dr. Larry Rabiner of Bell Labs, discussing videophone research in the documentary 2001: The Making of a Myth, stated that in the mid-to-late-1960s videophones “…captured the imagination of the public and… of Mr. Kubrick and the people who reported to him”. In one 2001 movie scene a central character, Dr. Heywood Floyd, calls home to contact his family, a social feature noted in the Making of a Myth. Floyd talks with and views his daughter from a space station in orbit above the Earth, discussing what type of present he should bring home for her.

A portable videophone is also featured prominently in the 2009 science fiction movie Moon, where the story’s protagonist, Sam Bell, calls home to communicate with loved ones. Bell, the lone occupant of a mining station on the far side of the Earth’s moon, finally succeeds in making his videocall after an extended work period, but becomes traumatized when viewing his daughter.

Other popular science fiction stories with videophones include Space: 1999, Star Trek, Total Recall, Blade Runner, and Firefly. The videophone was a staple, everyday technology in the futuristic 1960s Hanna Barbera cartoon The Jetsons.

Moderntimestelecon

Modern Times: An early depiction of videotelephony as shown in the Charlie Chaplin movie, with an employee receiving instructions from a factory executive (1936).

Other earlier examples of videophones in popular culture included a videophone featured in the Warner Bros. cartoon Plane Daffy, in which the female spy Hatta Mari used a videophone to communicate with Adolf Hitler (1944), as well as a device with the same functionality used by the comic strip character Dick Tracy. Called the ‘2-Way Wrist TV’, it was often used by the fictional detective to communicate with police headquarters (1964–1977).

By the early 2010s videotelephony and videophones had become commonplace and unremarkable in various forms of media, in part due to their real and ubiquitous presence in common electronic devices and laptop computers. Additionally, TV programming increasingly utilized videophones to interview subjects of interest and to present live coverage by news correspondents, via the Internet or by satellite links. In the mass market media, the popular U.S. TV talk show hostess Oprah Winfrey incorporated videotelephony into her TV program on a regular basis from May 21, 2009, with an initial episode called Where the Skype Are You?, as part of a marketing agreement with the Internet telecommunication company Skype.

Additionally, videophones have been featured in:

  • E.M. Forster’s 1909 short story The Machine Stops, set in a dystopian future in which, for the most part, human interaction has been reduced to communication via a kind of videoconferencing device called the “speaking apparatus”;
  • the 1935 British sci-fi film The Tunnel, in which a videophone device (termed a “televisor”) is in common use in the mid-20th century;
  • several episodes of Thunderbirds (1965–66), whose videophones also had an audio-only setting, indicated by the words SOUND ONLY SELECTED displayed on the screen;
  • the British cartoon DangerMouse, where the title character regularly communicated with headquarters via videophones from both his home and his car (1981–1992);
  • the movie Gremlins 2: The New Batch, where AT&T’s VideoPhone 2500 prototypes are visible (1990);
  • “Lisa’s Wedding”, an episode of The Simpsons which depicted a Picturephone (1995);
  • the television series Pee-wee’s Playhouse, in which Pee-wee Herman often made and received calls on a videophone-like ‘magic screen’ (1986–1990);
  • the movie Back to the Future Part II, where the future Marty McFly is contacted by Needles, his coworker, and then by his boss Mr. Fujitsu, via videophone (1989);
  • the movie Demolition Man, where the action was referred to as “fiberop”;
  • the movie Spaceballs, which used the videophone’s potential intrusiveness to humorous effect;
  • the movie Aliens, in the early scenes on the space station;
  • the 1964 movie Seven Days in May, in which the president of the United States uses a videophone in the near future (early 1970s);
  • the novel Infinite Jest, in which the videophone (specifically the fall of the videophone) is discussed extensively (1996);
  • the animated television program Futurama, where the videophone is often used within the delivery service spaceship (1999–2013);
  • Teenage Mutant Ninja Turtles, where the Turtle Comm cellphone devices include videophones often used by the four mutant turtles to contact April O’Neil, and also used by Shredder, Krang, Rocksteady and Bebop;
  • the ReBoot TV series, in which videophones are often used by Bob, Dot and Enzo, and even by Megabyte, to contact each other in the city of Mainframe;
  • the Pokémon anime series, where videophones were occasionally used (2006–2011);
  • a Beyoncé Knowles pop single and music video called “Video Phone”, from her album I am… Sasha Fierce (2008).

SCADA

From Wikipedia, the free encyclopedia

Supervisory control and data acquisition (SCADA) is a control system architecture that uses computers, networked data communications and graphical user interfaces for high-level process supervisory management, but uses other peripheral devices such as programmable logic controllers and discrete PID controllers to interface to the process plant or machinery. The operator interfaces which enable monitoring and the issuing of process commands, such as controller set point changes, are handled through the SCADA supervisory computer system. However, the real-time control logic or controller calculations are performed by networked modules which connect to the field sensors and actuators.

The SCADA concept was developed as a universal means of remote access to a variety of local control modules, which could be from different manufacturers, allowing access through standard automation protocols. In practice, large SCADA systems have grown to become very similar to distributed control systems in function, while using multiple means of interfacing with the plant. They can control large-scale processes that can include multiple sites, and work over large distances. It is one of the most commonly used types of industrial control systems; however, there are concerns about SCADA systems being vulnerable to cyberwarfare/cyberterrorism attacks.

Contents
1 The SCADA concept in control operations
2 Examples of use
3 SCADA system components
3.1 Supervisory computers
3.2 Remote terminal units
3.3 Programmable logic controllers
3.4 Communication infrastructure
3.5 Human-machine interface
4 Alarm handling
5 PLC/RTU programming
6 PLC commercial integration
7 Communication infrastructure and methods
8 SCADA architecture development
8.1 First generation: “monolithic”
8.2 Second generation: “distributed”
8.3 Third generation: “networked”
8.4 Fourth generation: “Internet of things”
9 Security issues

The SCADA Concept in Control Operations


Functional_levels_of_a_Distributed_Control_System.svg

Functional levels of a manufacturing control operation

The key attribute of a SCADA system is its ability to perform a supervisory operation over a variety of other proprietary devices.

The accompanying diagram is a general model which shows functional manufacturing levels using computerised control.

Referring to the diagram,

  • Level 0 contains the field devices such as flow and temperature sensors, and final control elements, such as control valves.
  • Level 1 contains the industrialised input/output (I/O) modules, and their associated distributed electronic processors.
  • Level 2 contains the supervisory computers, which collate information from processor nodes on the system, and provide the operator control screens.
  • Level 3 is the production control level, which does not directly control the process, but is concerned with monitoring production and targets.
  • Level 4 is the production scheduling level.

Level 1 contains the programmable logic controllers (PLCs) or remote terminal units (RTUs).

Level 2 contains the SCADA software and computing platform. The SCADA software exists only at this supervisory level, as control actions are performed automatically by RTUs or PLCs. SCADA control functions are usually restricted to basic overriding or supervisory level intervention. For example, a PLC may control the flow of cooling water through part of an industrial process to a set point level, but the SCADA system software will allow operators to change the set points for the flow. The SCADA software also enables alarm conditions, such as loss of flow or high temperature, to be displayed and recorded. A feedback control loop is directly controlled by the RTU or PLC, but the SCADA software monitors the overall performance of the loop.
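
This division of responsibility can be sketched in code. The following Python fragment is a minimal illustration only, with all names hypothetical: the PLC-like object owns the feedback loop and keeps running regardless of the supervisory level, while the SCADA-like object merely reads values and changes the set point.

    class CoolingWaterLoop:
        """Stands in for a PLC running a proportional flow-control loop."""
        def __init__(self, set_point, gain=0.5):
            self.set_point = set_point   # engineering units, e.g. L/min
            self.gain = gain
            self.flow = 0.0              # measured process value
            self.valve = 0.0             # control output, 0..100 %

        def scan(self, measured_flow):
            """One control scan; runs autonomously, with or without SCADA."""
            self.flow = measured_flow
            error = self.set_point - self.flow
            self.valve = max(0.0, min(100.0, self.valve + self.gain * error))

    class ScadaSupervisor:
        """Supervisory level: reads values and overrides set points only."""
        def __init__(self, loop):
            self.loop = loop

        def change_set_point(self, new_sp):
            self.loop.set_point = new_sp   # the only 'control' SCADA performs

        def read_loop(self):
            return {"sp": self.loop.set_point, "pv": self.loop.flow,
                    "out": self.loop.valve}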

Levels 3 and 4 are not strictly process control in the traditional sense, but are where production control and scheduling takes place.

Data acquisition begins at the RTU or PLC level and includes instrumentation readings and equipment status reports that are communicated to level 2 SCADA as required. Data is then compiled and formatted in such a way that a control room operator using the HMI (Human Machine Interface) can make supervisory decisions to adjust or override normal RTU (PLC) controls. Data may also be fed to a historian, often built on a commodity database management system, to allow trending and other analytical auditing.

SCADA systems typically use a tag database, which contains data elements called tags or points that relate to specific instrumentation or actuators within the process system, such as those defined in the piping and instrumentation diagram. Data is accumulated against these unique process control equipment tag references.
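
A tag database can be pictured as a mapping from P&ID-style tag references to current values and metadata. The following is a simplified Python sketch with invented tags, not any particular product’s schema:

    # Hypothetical tags: each maps a P&ID reference to a current value
    # plus metadata used by the HMI, alarm handler, and historian.
    tag_db = {
        "FT-101": {"description": "Cooling water flow", "units": "L/min",
                   "value": 42.0, "source": "PLC-3"},
        "TT-205": {"description": "Reactor temperature", "units": "degC",
                   "value": 187.5, "source": "RTU-7"},
    }

    def update_tag(name, value):
        """Data acquired from an RTU/PLC is accumulated against its tag."""
        tag_db[name]["value"] = value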

Examples of Use


1280px-Freer_Water_Control_and_Improvement_District_-_Diana_Adame

Example of SCADA used in office environment to remotely monitor a process

Both large and small systems can be built using the SCADA concept. These systems can range from just tens to thousands of control loops, depending on the application. Example processes include industrial, infrastructure, and facility-based processes, as described below:

Industrial processes include manufacturing, process control, power generation, fabrication, and refining, and may run in continuous, batch, repetitive, or discrete modes.

Infrastructure processes may be public or private, and include water treatment and distribution, wastewater collection and treatment, oil and gas pipelines, electric power transmission and distribution, and wind farms.

Facility processes, including those of buildings, airports, ships, and space stations, monitor and control heating, ventilation, and air conditioning (HVAC) systems, access, and energy consumption.

However, SCADA systems may have security vulnerabilities, so the systems should be evaluated to identify risks, and solutions implemented to mitigate those risks.

SCADA System Components


Scada_std_anim_no_lang

Typical SCADA mimic shown as an animation. For process plant, these are based upon the piping and instrumentation diagram.

791px-SCADA_schematic_overview-s.svg

A SCADA system usually consists of the following main elements:

Supervisory Computers

This is the core of the SCADA system, gathering data on the process and sending control commands to the field connected devices. It refers to the computer and software responsible for communicating with the field connection controllers, which are RTUs and PLCs, and includes the HMI software running on operator workstations. In smaller SCADA systems, the supervisory computer may be composed of a single PC, in which case the HMI is a part of this computer. In larger SCADA systems, the master station may include several HMIs hosted on client computers, multiple servers for data acquisition, distributed software applications, and disaster recovery sites. To increase the integrity of the system the multiple servers will often be configured in a dual-redundant or hot-standby formation providing continuous control and monitoring in the event of a server malfunction or breakdown.
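
One common way to realize a hot-standby pair is a heartbeat with a takeover timeout. The Python sketch below is a deliberately simplified illustration of the idea, with the timeout value and class names invented for this example; real products add state replication and arbitration.

    import time

    HEARTBEAT_TIMEOUT = 5.0   # seconds; an assumed value for illustration

    class StandbyServer:
        """Promotes itself if the primary's heartbeat goes stale."""
        def __init__(self):
            self.last_heartbeat = time.monotonic()
            self.active = False

        def on_heartbeat(self):
            """Called whenever the primary server checks in."""
            self.last_heartbeat = time.monotonic()

        def check(self):
            if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
                self.active = True   # take over acquisition and HMI serving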

Remote Terminal Units

Remote terminal units (RTUs) connect to sensors and actuators in the process, and are networked to the supervisory computer system. RTUs are “intelligent I/O” and often have embedded control capabilities such as ladder logic in order to accomplish boolean logic operations.

Programmable Logic Controllers

Also known as PLCs, these are connected to sensors and actuators in the process, and are networked to the supervisory system in the same way as RTUs. PLCs have more sophisticated embedded control capabilities than RTUs, and are programmed in one or more IEC 61131-3 programming languages. PLCs are often used in place of RTUs as field devices because they are more economical, versatile, flexible and configurable.

Communication Infrastructure

This connects the supervisory computer system to the remote terminal units (RTUs) and PLCs, and may use industry standard or manufacturer proprietary protocols. Both RTUs and PLCs operate autonomously on the near-real time control of the process, using the last command given from the supervisory system. Failure of the communications network does not necessarily stop the plant process controls, and on resumption of communications, the operator can continue with monitoring and control. Some critical systems will have dual redundant data highways, often cabled via diverse routes.

Human-machine Interface

Scada_Animation.ogv

More complex SCADA animation showing control of four batch cookers

The human-machine interface (HMI) is the operator window of the supervisory system. It presents plant information to the operating personnel graphically in the form of mimic diagrams, which are a schematic representation of the plant being controlled, and alarm and event logging pages. The HMI is linked to the SCADA supervisory computer to provide live data to drive the mimic diagrams, alarm displays and trending graphs. In many installations the HMI is the graphical user interface for the operator, collects all data from external devices, creates reports, performs alarming, sends notifications, etc.

Mimic diagrams consist of line graphics and schematic symbols to represent process elements, or may consist of digital photographs of the process equipment overlain with animated symbols.

Supervisory operation of the plant is by means of the HMI, with operators issuing commands using mouse pointers, keyboards and touch screens. For example, a symbol of a pump can show the operator that the pump is running, and a flow meter symbol can show how much fluid it is pumping through the pipe. The operator can switch the pump off from the mimic by a mouse click or screen touch. The HMI will show the flow rate of the fluid in the pipe decrease in real time.

The HMI package for a SCADA system typically includes a drawing program that the operators or system maintenance personnel use to change the way these points are represented in the interface. These representations can be as simple as an on-screen traffic light, which represents the state of an actual traffic light in the field, or as complex as a multi-projector display representing the position of all of the elevators in a skyscraper or all of the trains on a railway.

A “historian” is a software service within the HMI which accumulates time-stamped data, events, and alarms in a database which can be queried or used to populate graphic trends in the HMI. The historian is a client that requests data from a data acquisition server.
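
In outline, a historian accumulates time-stamped samples in an ordinary database and serves range queries for trend displays. A minimal Python sketch, using SQLite purely for brevity (the table layout and file name are illustrative assumptions):

    import sqlite3, time

    conn = sqlite3.connect("historian.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS samples
                    (ts REAL, tag TEXT, value REAL)""")

    def record(tag, value):
        """Accumulate one time-stamped sample against a tag."""
        conn.execute("INSERT INTO samples VALUES (?, ?, ?)",
                     (time.time(), tag, value))
        conn.commit()

    def trend(tag, since):
        """Return (timestamp, value) pairs for a trend graph."""
        return conn.execute("SELECT ts, value FROM samples "
                            "WHERE tag = ? AND ts >= ? ORDER BY ts",
                            (tag, since)).fetchall()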

Alarm Handling


An important part of most SCADA implementations is alarm handling. The system monitors whether certain alarm conditions are satisfied, to determine when an alarm event has occurred. Once an alarm event has been detected, one or more actions are taken (such as the activation of one or more alarm indicators, and perhaps the generation of email or text messages so that management or remote SCADA operators are informed). In many cases, a SCADA operator may have to acknowledge the alarm event; this may deactivate some alarm indicators, whereas other indicators remain active until the alarm conditions are cleared.

Alarm conditions can be explicit—for example, an alarm point is a digital status point that has either the value NORMAL or ALARM, calculated by a formula based on the values in other analogue and digital points—or implicit: the SCADA system might automatically monitor whether the value in an analogue point lies outside high- and low-limit values associated with that point.

Examples of alarm indicators include a siren, a pop-up box on a screen, or a coloured or flashing area on a screen (that might act in a similar way to the “fuel tank empty” light in a car); in each case, the role of the alarm indicator is to draw the operator’s attention to the part of the system ‘in alarm’ so that appropriate action can be taken.
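
The implicit, limit-based case described above can be sketched as follows. The limits, notification action, and acknowledgement behaviour here are illustrative assumptions rather than a standard:

    class AlarmPoint:
        """Analogue point that alarms when its value leaves a band."""
        def __init__(self, low, high):
            self.low, self.high = low, high
            self.state = "NORMAL"
            self.acknowledged = True

        def update(self, value):
            in_alarm = not (self.low <= value <= self.high)
            if in_alarm and self.state == "NORMAL":
                self.state = "ALARM"
                self.acknowledged = False
                notify_operators(self)    # e.g. siren, pop-up, email
            elif not in_alarm:
                self.state = "NORMAL"

        def acknowledge(self):
            self.acknowledged = True      # silences some indicators; others
                                          # clear only once state is NORMAL

    def notify_operators(point):
        print(f"ALARM: value outside [{point.low}, {point.high}]")

    point = AlarmPoint(low=10.0, high=80.0)
    point.update(95.0)    # enters ALARM, operators notified
    point.acknowledge()
    point.update(50.0)    # condition cleared, returns to NORMAL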

PLC/RTU Programming


“Smart” RTUs, or standard PLCs, are capable of autonomously executing simple logic processes without involving the supervisory computer. They employ standardized control programming languages such as IEC 61131-3, a suite of five programming languages (function block, ladder, structured text, sequential function charts and instruction list) frequently used to create programs which run on these RTUs and PLCs. Unlike a procedural language such as the C programming language or FORTRAN, IEC 61131-3 has minimal training requirements by virtue of resembling historic physical control arrays. This allows SCADA system engineers to perform both the design and implementation of a program to be executed on an RTU or PLC.
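
Since the IEC 61131-3 languages are graphical or vendor-hosted, the flavour of a single ladder rung is approximated here in Python: a start/stop pump latch with a trip interlock, evaluated once per scan. This is an analogy only, not IEC 61131-3 syntax, and the signal names are invented.

    def scan_rung(start_pb, stop_pb, trip, pump_running):
        # Ladder equivalent: (start OR pump_running) AND NOT stop AND NOT trip
        return (start_pb or pump_running) and not stop_pb and not trip

    # A PLC repeats this scan cycle continuously; three scans shown:
    pump = False
    for start, stop, trip in [(True, False, False),    # start pressed
                              (False, False, False),   # latch holds
                              (False, True, False)]:   # stop pressed
        pump = scan_rung(start, stop, trip, pump)
        print("pump =", pump)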

A programmable automation controller (PAC) is a compact controller that combines the features and capabilities of a PC-based control system with that of a typical PLC. PACs are deployed in SCADA systems to provide RTU and PLC functions. In many electrical substation SCADA applications, “distributed RTUs” use information processors or station computers to communicate with digital protective relays, PACs, and other devices for I/O, and communicate with the SCADA master in lieu of a traditional RTU.

PLC Commercial Integration


Since about 1998, virtually all major PLC manufacturers have offered integrated HMI/SCADA systems, many of them using open and non-proprietary communications protocols. Numerous specialized third-party HMI/SCADA packages, offering built-in compatibility with most major PLCs, have also entered the market, allowing mechanical engineers, electrical engineers and technicians to configure HMIs themselves, without the need for a custom-made program written by a software programmer. The remote terminal unit (RTU) connects to physical equipment. Typically, an RTU converts the electrical signals from the equipment to digital values, such as the open/closed status from a switch or a valve, or measurements such as pressure, flow, voltage or current. By converting and sending these electrical signals out to the equipment, the RTU can also control it, for example opening or closing a switch or a valve, or setting the speed of a pump.
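
The analogue conversion an RTU performs is typically a linear scaling of a 4–20 mA loop current into engineering units. A small Python sketch, with an assumed 0–10 bar pressure range:

    def ma_to_engineering(ma, eng_lo=0.0, eng_hi=10.0):
        """Scale a 4-20 mA loop current to engineering units (assumed range)."""
        if not 4.0 <= ma <= 20.0:
            raise ValueError("signal out of range; possible broken wire")
        return eng_lo + (ma - 4.0) * (eng_hi - eng_lo) / 16.0

    print(ma_to_engineering(12.0))   # 12 mA -> 5.0 bar (mid-scale)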

Communication Infrastructure and Methods


SCADA systems have traditionally used combinations of radio and direct wired connections, although SONET/SDH is also frequently used for large systems such as railways and power stations. The remote management or monitoring function of a SCADA system is often referred to as telemetry. Some users want SCADA data to travel over their pre-established corporate networks or to share the network with other applications. The legacy of the early low-bandwidth protocols remains, though.

SCADA protocols are designed to be very compact. Many are designed to send information only when the master station polls the RTU. Typical legacy SCADA protocols include Modbus RTU, RP-570, Profibus and Conitel. These communication protocols, with the exception of Modbus (Modbus has been made open by Schneider Electric), are all SCADA-vendor specific but are widely adopted and used. Standard protocols are IEC 60870-5-101 or 104, IEC 61850 and DNP3. These communication protocols are standardized and recognized by all major SCADA vendors. Many of these protocols now contain extensions to operate over TCP/IP. Although the use of conventional networking specifications, such as TCP/IP, blurs the line between traditional and industrial networking, they each fulfill fundamentally differing requirements. Network simulation can be used in conjunction with SCADA simulators to perform various ‘what-if’ analyses.
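
The compactness of these polled protocols is easy to see in Modbus TCP, where a complete request to read ten holding registers fits in twelve bytes. The sketch below builds such a frame in Python; the transaction, unit, and address values are illustrative:

    import struct

    def modbus_read_holding(transaction_id, unit_id, start_addr, count):
        """Build a Modbus TCP 'read holding registers' request frame."""
        pdu = struct.pack(">BHH", 0x03, start_addr, count)    # function code 3
        mbap = struct.pack(">HHHB", transaction_id, 0x0000,   # protocol id 0
                           len(pdu) + 1, unit_id)             # length incl. unit
        return mbap + pdu

    frame = modbus_read_holding(1, 17, 0x006B, 10)
    print(len(frame), frame.hex())    # 12 bytes on the wire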

With increasing security demands (such as North American Electric Reliability Corporation (NERC) and critical infrastructure protection (CIP) in the US), there is increasing use of satellite-based communication. This has the key advantages that the infrastructure can be self-contained (not using circuits from the public telephone system), can have built-in encryption, and can be engineered to the availability and reliability required by the SCADA system operator. Earlier experiences using consumer-grade VSAT were poor. Modern carrier-class systems provide the quality of service required for SCADA.

RTUs and other automatic controller devices were developed before the advent of industry-wide standards for interoperability. The result is that developers and their management created a multitude of control protocols. Among the larger vendors, there was also the incentive to create their own protocol to “lock in” their customer base. The many automation protocols that resulted have since been catalogued in published lists.

OLE for process control (OPC) can connect different hardware and software, allowing communication even between devices originally not intended to be part of an industrial network.
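
OPC’s role can be approximated in ordinary code as an adapter layer: heterogeneous drivers registered under one uniform item namespace. The sketch below is a conceptual Python illustration with invented names, not the OPC or OPC UA API itself:

    class ModbusDriver:
        """Stand-in for one protocol-specific driver."""
        def read(self, address):
            return 42.0              # placeholder for a real register read

    class OpcStyleServer:
        """Uniform item namespace over heterogeneous device drivers."""
        def __init__(self):
            self.items = {}          # "Plant.Pump1.Flow" -> (driver, address)

        def register(self, item, driver, address):
            self.items[item] = (driver, address)

        def read(self, item):
            driver, address = self.items[item]
            return driver.read(address)

    server = OpcStyleServer()
    server.register("Plant.Pump1.Flow", ModbusDriver(), 0x006B)
    print(server.read("Plant.Pump1.Flow"))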

SCADA Architecture Development


page1-464px-SCADA_C4ISR_Facilities.pdf

The United States Army’s Training Manual 5-601 covers “SCADA Systems for C4ISR Facilities”

SCADA systems have evolved through four generations as follows:

First generation: “monolithic”

Early SCADA system computing was done by large minicomputers. Common network services did not exist at the time SCADA was developed. Thus SCADA systems were independent systems with no connectivity to other systems. The communication protocols used were strictly proprietary at that time. The first-generation SCADA system redundancy was achieved using a back-up mainframe system connected to all the Remote Terminal Unit sites and was used in the event of failure of the primary mainframe system. Some first generation SCADA systems were developed as “turn key” operations that ran on minicomputers such as the PDP-11 series made by the Digital Equipment Corporation.

Second generation: “distributed”

SCADA information and command processing was distributed across multiple stations which were connected through a LAN. Information was shared in near real time. Each station was responsible for a particular task, which reduced the cost as compared to First Generation SCADA. The network protocols used were still not standardized. Since these protocols were proprietary, very few people beyond the developers knew enough to determine how secure a SCADA installation was. Security of the SCADA installation was usually overlooked.

Third generation: “networked”

Similar to a distributed architecture, any complex SCADA can be reduced to its simplest components and connected through communication protocols. In the case of a networked design, the system may be spread across more than one LAN, called a process control network (PCN), and separated geographically. Several distributed architecture SCADAs running in parallel, with a single supervisor and historian, could be considered a network architecture. This allows for a more cost-effective solution in very large scale systems.

Fourth generation: “Internet of things”

With the commercial availability of cloud computing, SCADA systems have increasingly adopted Internet of things technology to significantly improve interoperability, reduce infrastructure costs and increase ease of maintenance and integration. As a result, SCADA systems can now report state in near real-time and use the horizontal scale available in cloud environments to implement more complex control algorithms than are practically feasible to implement on traditional programmable logic controllers. Further, the use of open network protocols such as TLS, inherent in Internet of things technology, provides a more readily comprehensible and manageable security boundary than the heterogeneous mix of proprietary network protocols typical of many decentralized SCADA implementations.
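
In this style of deployment, device state is serialized and pushed over a TLS channel rather than a proprietary serial protocol. A minimal Python sketch using only the standard library; the endpoint host, port, and message format are hypothetical:

    import json, socket, ssl, time

    context = ssl.create_default_context()   # verifies the server certificate

    def report_state(host, port, tag, value):
        """Push one time-stamped reading over an encrypted TLS connection."""
        payload = json.dumps({"tag": tag, "value": value, "ts": time.time()})
        with socket.create_connection((host, port)) as raw:
            with context.wrap_socket(raw, server_hostname=host) as tls:
                tls.sendall(payload.encode() + b"\n")

    # report_state("scada-ingest.example.net", 8883, "FT-101", 42.0)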

This decentralization of data also requires a different approach to SCADA than traditional PLC-based programs. When a SCADA system is used locally, the preferred methodology involves binding the graphics on the user interface to the data stored in specific PLC memory addresses. However, when the data comes from a disparate mix of sensors, controllers and databases (which may be local or at varied connected locations), the typical one-to-one mapping becomes problematic. A solution to this is data modeling, a concept derived from object-oriented programming.

In a data model, a virtual representation of each device is constructed in the SCADA software. These virtual representations (“models”) can contain not just the address mapping of the device represented, but also any other pertinent information (web based info, database entries, media files, etc.) that may be used by other facets of the SCADA/IoT implementation. As the increased complexity of the Internet of things renders traditional SCADA increasingly “house-bound,” and as communication protocols evolve to favor platform-independent, service-oriented architecture (such as OPC UA), it is likely that more SCADA software developers will implement some form of data modeling.
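
A data model of this kind can be sketched as a class that carries the device’s address map alongside its other resources, so the HMI binds to the model rather than to raw memory addresses. All fields and names below are hypothetical:

    class FakeDriver:
        """Stand-in for a real protocol driver."""
        def read(self, address):
            return 0.0               # placeholder value

    class PumpModel:
        """Virtual representation of one field device."""
        def __init__(self, name, driver):
            self.name = name
            self.driver = driver
            # address mapping plus any other pertinent resources:
            self.address_map = {"running": "%QX0.0", "flow": "%IW2"}
            self.manual_url = "https://example.net/docs/pump"   # hypothetical
            self.faceplate = "pump_faceplate.svg"

        def read(self, field):
            return self.driver.read(self.address_map[field])

    pump = PumpModel("P-101", FakeDriver())
    print(pump.read("flow"))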

Security Issues


SCADA systems that tie together decentralized facilities such as power, oil, gas pipelines, water distribution and wastewater collection systems were designed to be open, robust, and easily operated and repaired, but not necessarily secure. The move from proprietary technologies to more standardized and open solutions, together with the increased number of connections between SCADA systems, office networks and the Internet, has made them more vulnerable to types of network attacks that are relatively common in computer security. For example, the United States Computer Emergency Readiness Team (US-CERT) released a vulnerability advisory warning that unauthenticated users could download sensitive configuration information, including password hashes, from an Inductive Automation Ignition system utilizing a standard attack type leveraging access to the Tomcat embedded web server. Security researcher Jerry Brown submitted a similar advisory regarding a buffer overflow vulnerability in a Wonderware InBatchClient ActiveX control. Both vendors made updates available prior to public vulnerability release. Mitigation recommendations were standard patching practices and requiring VPN access for secure connectivity. Consequently, the security of some SCADA-based systems has come into question, as they are seen as potentially vulnerable to cyber attacks.

In particular, security researchers are concerned about:

  • the lack of concern about security and authentication in the design, deployment and operation of some existing SCADA networks
  • the belief that SCADA systems have the benefit of security through obscurity through the use of specialized protocols and proprietary interfaces
  • the belief that SCADA networks are secure because they are physically secured
  • the belief that SCADA networks are secure because they are disconnected from the Internet

SCADA systems are used to control and monitor physical processes, examples of which are transmission of electricity, transportation of gas and oil in pipelines, water distribution, traffic lights, and other systems used as the basis of modern society. The security of these SCADA systems is important because compromise or destruction of these systems would impact multiple areas of society far removed from the original compromise. For example, a blackout caused by a compromised electrical SCADA system would cause financial losses to all the customers that received electricity from that source. How security will affect legacy SCADA and new deployments remains to be seen.

There are many threat vectors to a modern SCADA system. One is the threat of unauthorized access to the control software, whether it is human access or changes induced intentionally or accidentally by virus infections and other software threats residing on the control host machine. Another is the threat of packet access to the network segments hosting SCADA devices. In many cases, the control protocol lacks any form of cryptographic security, allowing an attacker to control a SCADA device by sending commands over a network. In many cases SCADA users have assumed that having a VPN offered sufficient protection, unaware that security can be trivially bypassed with physical access to SCADA-related network jacks and switches. Industrial control vendors suggest approaching SCADA security like Information Security with a defense in depth strategy that leverages common IT practices.

The reliable function of SCADA systems in our modern infrastructure may be crucial to public health and safety. As such, attacks on these systems may directly or indirectly threaten public health and safety. Such an attack has already occurred, carried out on Maroochy Shire Council’s sewage control system in Queensland, Australia. Shortly after a contractor installed a SCADA system in January 2000, system components began to function erratically. Pumps did not run when needed and alarms were not reported. More critically, sewage flooded a nearby park, contaminated an open surface-water drainage ditch, and flowed 500 meters to a tidal canal. The SCADA system was directing sewage valves to open when the design protocol should have kept them closed. Initially this was believed to be a system bug. Monitoring of the system logs revealed the malfunctions were the result of cyber attacks. Investigators reported 46 separate instances of malicious outside interference before the culprit was identified. The attacks were made by a disgruntled ex-employee of the company that had installed the SCADA system. The ex-employee was hoping to be hired by the utility full-time to maintain the system.

In April 2008, the Commission to Assess the Threat to the United States from Electromagnetic Pulse (EMP) Attack issued a Critical Infrastructures Report which discussed the extreme vulnerability of SCADA systems to an electromagnetic pulse (EMP) event. After testing and analysis, the Commission concluded: “SCADA systems are vulnerable to an EMP event. The large numbers and widespread reliance on such systems by all of the Nation’s critical infrastructures represent a systemic threat to their continued operation following an EMP event. Additionally, the necessity to reboot, repair, or replace large numbers of geographically widely dispersed systems will considerably impede the Nation’s recovery from such an assault.”

Many vendors of SCADA and control products have begun to address the risks posed by unauthorized access by developing lines of specialized industrial firewall and VPN solutions for TCP/IP-based SCADA networks as well as external SCADA monitoring and recording equipment. The International Society of Automation (ISA) started formalizing SCADA security requirements in 2007 with a working group, WG4. WG4 “deals specifically with unique technical requirements, measurements, and other features required to evaluate and assure security resilience and performance of industrial automation and control systems devices”.

The increased interest in SCADA vulnerabilities has resulted in vulnerability researchers discovering vulnerabilities in commercial SCADA software and more general offensive SCADA techniques presented to the general security community. In electric and gas utility SCADA systems, the vulnerability of the large installed base of wired and wireless serial communications links is addressed in some cases by applying bump-in-the-wire devices that employ authentication and Advanced Encryption Standard encryption rather than replacing all existing nodes.
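
A bump-in-the-wire device essentially wraps each legacy frame in an authenticated-encryption envelope at one end and unwraps it at the other. A Python sketch of the idea, assuming the third-party cryptography package and an out-of-band shared key; the frame bytes and associated data are illustrative:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)   # shared by both wire devices
    aead = AESGCM(key)

    def protect(frame: bytes) -> bytes:
        """Encrypt and authenticate one legacy serial frame."""
        nonce = os.urandom(12)
        return nonce + aead.encrypt(nonce, frame, b"scada-link")

    def unprotect(blob: bytes) -> bytes:
        """Verify and decrypt; raises an exception if tampered with."""
        nonce, ct = blob[:12], blob[12:]
        return aead.decrypt(nonce, ct, b"scada-link")

    legacy_frame = bytes.fromhex("110300 6b 0002".replace(" ", ""))
    assert unprotect(protect(legacy_frame)) == legacy_frame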

In June 2010, anti-virus security company VirusBlokAda reported the first detection of malware that attacks SCADA systems (Siemens’ WinCC/PCS 7 systems) running on Windows operating systems. The malware is called Stuxnet and uses four zero-day attacks to install a rootkit which in turn logs into the SCADA’s database and steals design and control files. The malware is also capable of changing the control system and hiding those changes. The malware was found on 14 systems, the majority of which were located in Iran.

In October 2013 National Geographic released a docudrama titled American Blackout which dealt with an imagined large-scale cyber attack on SCADA and the United States’ electrical grid.