Industrial Control System

From Wikipedia, the free encyclopedia

Industrial control system (ICS) is a general term that encompasses several types of control systems and associated instrumentation used for industrial process control.

Such systems can range from a few modular panel-mounted controllers to large interconnected and interactive distributed control systems with many thousands of field connections. All systems receive data from remote sensors measuring process variables (PVs), compare these with desired set points (SPs) and derive command functions which are used to control a process through the final control elements (FCEs), such as control valves.
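
As an illustrative sketch (not drawn from any particular system), the comparison of a process variable with a set point and the derivation of a command for the final control element can be expressed as a simple proportional feedback loop in Python; the tank-level process, gain and bias values below are assumed for demonstration only.

    # Minimal sketch of a single feedback control loop (proportional-only),
    # assuming an illustrative tank-level process. All values are made up.

    def proportional_controller(pv, sp, gain=2.0, bias=50.0):
        """Derive a command (0-100 %) for the final control element from PV and SP."""
        error = sp - pv                      # deviation from the desired set point
        output = bias + gain * error         # proportional action around a bias
        return max(0.0, min(100.0, output))  # clamp to the valve's 0-100 % range

    # Simulate a few control cycles of a hypothetical tank-level loop.
    level = 40.0          # process variable (PV), e.g. % of tank level
    set_point = 60.0      # desired set point (SP)
    for step in range(5):
        valve = proportional_controller(level, set_point)
        # crude process response: inflow rises with valve opening, constant outflow
        level += 0.05 * valve - 2.5
        print(f"step {step}: PV={level:.1f}%  valve={valve:.1f}%")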

The larger systems are usually implemented by Supervisory Control and Data Acquisition (SCADA) systems or distributed control systems (DCS), and programmable logic controllers (PLCs), though SCADA and PLC systems are scalable down to small systems with few control loops. Such systems are extensively used in industries such as chemical processing, pulp and paper manufacture, power generation, oil and gas processing, and telecommunications.

Contents
1 Discrete controllers
2 Distributed control systems
2.1 DCS structure
3 SCADA systems
4 Programmable logic controllers
5 History

Discrete Controllers


[Image: Industrial PID controllers, front display]

Panel-mounted controllers with integral displays. The process value (PV) and set value (SV), or setpoint, are on the same scale for easy comparison. The controller output (CO) is shown as MV (manipulated variable) with a range of 0-100%.

[Image: Smart current-loop positioner]

A control loop using a discrete controller. Field signals are the process variable (PV) from the sensor and the control output to the valve (the final control element, FCE). A valve positioner ensures correct valve operation.

The simplest control systems are based around small discrete controllers with a single control loop each. These are usually panel mounted, which allows direct viewing of the front panel and provides a means of manual intervention by the operator, either to manually control the process or to change control setpoints. Originally these would have been pneumatic controllers, a few of which are still in use, but nearly all are now electronic.

Quite complex systems can be created with networks of these controllers communicating using industry-standard protocols, which allow the use of local or remote SCADA operator interfaces and enable the cascading and interlocking of controllers. However, as the number of control loops increases for a system design, there is a point at which the use of a PLC or DCS system becomes more cost-effective.

Distributed Control Systems


A Distributed Control System (DCS) is a digital processor control system for a process or plant, wherein controller functions and field connection modules are distributed throughout the system. DCSs are used when the number of control loops makes them more cost-effective than discrete controllers, and they enable a supervisory view over large industrial processes. In a DCS, a hierarchy of controllers is connected by communication networks, allowing centralised control rooms as well as local on-plant monitoring and control.

The introduction of DCSs enabled easy configuration of plant controls such as cascaded loops and interlocks, and easy interfacing with other computer systems such as production control and scheduling. It also enabled more sophisticated alarm handling, introduced automatic event logging, removed the need for physical records such as chart recorders, and allowed the control racks to be networked and thereby located locally to the plant to reduce cabling.

DCS Structure

[Image: Functional levels of a distributed control system]

Functional manufacturing control levels; DCS and SCADA operate on levels 1 and 2.

A DCS typically uses custom-designed processors as controllers, and uses either proprietary interconnections or standard protocols for communication. Input and output modules form the peripheral components of the system.

The processors receive information from input modules, process the information and decide control actions to be performed by the output modules. The input modules receive information from sensing instruments in the process (or field) and the output modules transmit instructions to the final control elements, such as control valves.

The field inputs and outputs can be either continuously varying analog signals, e.g. a 4-20 mA DC current loop, or two-state signals that switch either “on” or “off”, such as relay contacts or a semiconductor switch.
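
As a worked example of the analog case, a 4-20 mA current-loop signal is conventionally scaled linearly onto an engineering range; in the Python sketch below the 0-150 °C span and the out-of-range fault band are assumed values for illustration only.

    # Sketch: linear scaling of a 4-20 mA field signal to engineering units,
    # assuming an illustrative 0-150 degC measurement span.

    def ma_to_engineering(current_ma, low=0.0, high=150.0):
        """Map 4 mA -> low and 20 mA -> high; values far outside 4-20 mA indicate a fault."""
        if not 3.8 <= current_ma <= 20.5:          # fault band outside the live zero (assumed limits)
            raise ValueError(f"signal fault: {current_ma} mA outside the live-zero range")
        return low + (current_ma - 4.0) * (high - low) / 16.0

    print(ma_to_engineering(4.0))    # 0.0   (bottom of range)
    print(ma_to_engineering(12.0))   # 75.0  (mid-range)
    print(ma_to_engineering(20.0))   # 150.0 (top of range)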

DCS systems can normally also support digital field buses such as Foundation Fieldbus, Profibus, HART, Modbus, PC Link and others, which carry not only input and output signals but also advanced messages such as error diagnostics and status signals.

SCADA Systems


Supervisory control and data acquisition (SCADA) is a control system architecture that uses computers, networked data communications and graphical user interfaces for high-level process supervisory management, but uses other peripheral devices such as programmable logic controllers and discrete PID controllers to interface to the process plant or machinery. The operator interfaces which enable monitoring and the issuing of process commands, such as controller set point changes, are handled through the SCADA supervisory computer system. However, the real-time control logic or controller calculations are performed by networked modules which connect to the field sensors and actuators.

The SCADA concept was developed as a universal means of remote access to a variety of local control modules, which could be from different manufacturers, allowing access through standard automation protocols. In practice, large SCADA systems have grown to become very similar to distributed control systems in function, but using multiple means of interfacing with the plant. They can control large-scale processes that can include multiple sites, and work over large distances. It is one of the most commonly used types of industrial control systems; however, there are concerns about SCADA systems being vulnerable to cyberwarfare/cyberterrorism attacks.

Referring to the functional hierarchy diagram in this article:

Level 1 contains the PLCs or RTUs

Level 2 contains the SCADA software and computing platform.

The SCADA software exists only at this supervisory level as control actions are performed automatically by RTUs or PLCs. SCADA control functions are usually restricted to basic overriding or supervisory level intervention. For example, a PLC may control the flow of cooling water through part of an industrial process to a set point level, but the SCADA system software will allow operators to change the set points for the flow. The SCADA also enables alarm conditions, such as loss of flow or high temperature, to be displayed and recorded. A feedback control loop is directly controlled by the RTU or PLC, but the SCADA software monitors the overall performance of the loop.
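
The division of labour described above can be sketched as follows: the PLC (or RTU) runs the fast feedback loop against a local register table, while the SCADA layer only reads values, writes new set points and records alarms. The register names and the shared in-memory table below are illustrative stand-ins for a real fieldbus or telemetry protocol.

    # Sketch of the SCADA/PLC split: the PLC owns the control loop, SCADA only
    # supervises. The register table stands in for a real fieldbus/protocol.

    registers = {"flow_pv": 0.0, "flow_sp": 50.0, "valve_out": 0.0}

    def plc_scan():
        """One PLC scan: local closed-loop control of cooling-water flow."""
        error = registers["flow_sp"] - registers["flow_pv"]
        registers["valve_out"] = max(0.0, min(100.0, registers["valve_out"] + 0.5 * error))
        # crude process response so the simulated flow follows the valve
        registers["flow_pv"] += 0.3 * registers["valve_out"] - 0.1 * registers["flow_pv"]

    def scada_operator_change_setpoint(new_sp):
        """Supervisory action: the operator changes the set point, not the valve."""
        registers["flow_sp"] = new_sp

    def scada_check_alarms():
        """Supervisory monitoring: display/record alarm conditions such as loss of flow."""
        if registers["flow_pv"] < 5.0:
            print("ALARM: loss of cooling-water flow")

    scada_operator_change_setpoint(65.0)   # operator raises the flow set point
    for _ in range(3):
        scada_check_alarms()               # SCADA observes and records
        plc_scan()                         # real-time control stays in the PLC
    print(registers)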

Programmable Logic Controllers


[Image: Siemens Simatic S7-416-3]

Siemens Simatic S7-400 system in a rack, left to right: power supply unit (PSU), CPU, interface module (IM) and communication processor (CP).

PLCs can range from small “building brick” devices with tens of I/O in a housing integral with the processor, to large rack-mounted modular devices with a count of thousands of I/O, and which are often networked to other PLC and SCADA systems.

They can be designed for multiple arrangements of digital and analog inputs and outputs (I/O), extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed-up or non-volatile memory.
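
The classic PLC execution model is a repeated scan: read all inputs, evaluate the stored program, then write all outputs. The Python sketch below mimics one such scan with an assumed start/stop/overload motor interlock; real PLCs are normally programmed in IEC 61131-3 languages such as ladder logic, and the tag names here are illustrative.

    # Sketch of a PLC scan cycle (read inputs -> evaluate logic -> write outputs),
    # using an assumed start/stop/overload motor interlock as the stored program.

    inputs = {"start_pb": False, "stop_pb": False, "overload": False}
    outputs = {"motor_run": False}

    def scan_cycle(image_in, image_out):
        """One scan: evaluate the interlock logic against the input image."""
        run = image_in["start_pb"] or image_out["motor_run"]          # seal-in / latch
        run = run and not image_in["stop_pb"] and not image_in["overload"]
        image_out["motor_run"] = run

    inputs["start_pb"] = True     # operator presses START
    scan_cycle(inputs, outputs)
    inputs["start_pb"] = False    # button released; the latch holds the motor on
    scan_cycle(inputs, outputs)
    print(outputs)                # {'motor_run': True}
    inputs["stop_pb"] = True      # STOP drops the output on the next scan
    scan_cycle(inputs, outputs)
    print(outputs)                # {'motor_run': False}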

It was in the automotive industry in the USA that the PLC was created. Before the PLC, the control, sequencing, and safety interlock logic for manufacturing automobiles was mainly composed of relays, cam timers, drum sequencers, and dedicated closed-loop controllers. Since these could number in the hundreds or even thousands, the process for updating such facilities for the yearly model change-over was very time consuming and expensive, as electricians needed to individually rewire the relays to change their operational characteristics.

When digital computers became available, being general-purpose programmable devices, they were soon applied to control sequential and combinatorial logic in industrial processes. However, these early computers required specialist programmers and stringent operating environmental control for temperature, cleanliness, and power quality. To meet these challenges, the PLC was developed with several key attributes: it would tolerate the shop-floor environment, it would support discrete input and output, and it would be easily maintained and programmed. Another option is the use of several small embedded controllers attached to an industrial computer via a network; examples are the Lantronix Xport and Digi/ME.

History


[Image: Pre-DCS central control room, Tyssedal]

A pre-DCS era central control room. Whilst the controls are centralised in one place, they are still discrete and not integrated into one system.

[Image: DCS control room]

A DCS control room where plant information and controls are displayed on computer graphics screens. The operators are seated, as they can view and control any part of the process from their screens, whilst retaining a plant overview.

Process control of large industrial plants has evolved through many stages. Initially, control would be from panels local to the process plant. However this required a large manpower resource to attend to these dispersed panels, and there was no overall view of the process. The next logical development was the transmission of all plant measurements to a permanently-manned central control room. Effectively this was the centralisation of all the localised panels, with the advantages of lower manning levels and easier overview of the process. Often the controllers were behind the control room panels, and all automatic and manual control outputs were individually transmitted back to plant in the form of pneumatic or electrical signals.

However, whilst providing a central control focus, this arrangement was inflexible, as each control loop had its own controller hardware, so system changes required reconfiguration of signals by re-piping or re-wiring. It also required continual operator movement within a large control room in order to monitor the whole process. With the coming of electronic processors, high-speed electronic signalling networks and electronic graphic displays, it became possible to replace these discrete controllers with computer-based algorithms, hosted on a network of input/output racks with their own control processors. These could be distributed around the plant and would communicate with the graphic displays in the control room. The concept of “distributed control” was realised.

The introduction of distributed control allowed flexible interconnection and re-configuration of plant controls such as cascaded loops and interlocks, and easy interfacing with other production computer systems. It enabled sophisticated alarm handling, introduced automatic event logging, removed the need for physical records such as chart recorders, allowed the control racks to be networked and thereby located locally to plant to reduce cabling runs, and provided high level overviews of plant status and production levels. For large control systems, the general commercial name “Distributed Control System” (DCS) was coined to refer to proprietary modular systems from many manufacturers which had high speed networking and a full suite of displays and control racks which all seamlessly integrated.

Whilst the DCS was tailored to meet the needs of large industrial continuous processes, in industries where combinatorial and sequential logic was the primary requirement, the PLC (programmable logic controller) evolved out of a need to replace racks of relays and timers used for event-driven control. The old controls were difficult to re-configure and fault-find, and PLC control enabled networking of signals to a central control area with electronic displays. PLCs were first developed for the automotive industry on vehicle production lines, where sequential logic was becoming very complex. They were soon adopted in a large number of other event-driven applications as varied as printing presses and water treatment plants.

SCADA’s history is rooted in distribution applications, such as power, natural gas, and water pipelines, where there is a need to gather remote data through potentially unreliable or intermittent low-bandwidth and high-latency links. SCADA systems use open-loop control with sites that are widely separated geographically. A SCADA system uses RTUs (remote terminal units, also referred to as remote telemetry units) to send supervisory data back to a control center. Most RTU systems have always had some limited capacity to handle local controls while the master station is unavailable. However, over the years RTU systems have grown more and more capable of handling local controls.

The boundaries between DCS and SCADA/PLC systems are blurring as time goes on. The technical limits that drove the designs of these various systems are no longer as much of an issue. Many PLC platforms can now perform quite well as a small DCS, using remote I/O and are sufficiently reliable that some SCADA systems actually manage closed loop control over long distances. With the increasing speed of today’s processors, many DCS products have a full line of PLC-like subsystems that weren’t offered when they were initially developed.

This led to the concept and realisation of the programmable automation controller (PAC), an amalgamation of these three concepts, which is programmed in a modern programming language such as C or C++.

Personal Area Network

From Wikipedia, the free encyclopedia

A personal area network (PAN) is a computer network used for data transmission amongst devices such as computers, telephones, tablets and personal digital assistants. PANs can be used for communication amongst the personal devices themselves (interpersonal communication), or for connecting to a higher-level network and the Internet (an uplink), where one “master” device takes on the role of internet router.

A wireless personal area network (WPAN) is a low-powered PAN carried over a short-distance wireless network technology such as:

  • INSTEON
  • IrDA
  • Wireless USB
  • Bluetooth
  • Z-Wave
  • ZigBee
  • Body Area Network

The reach of a WPAN varies from a few centimeters to a few meters. A PAN may also be carried over wired computer buses such as USB and FireWire.

Although a (secured) Wi-Fi tethering connection may be used by only a single user, it is not considered to be a PAN.

Contents
1 Wired PAN connection
2 Wireless personal area network
2.1 Bluetooth
2.2 Infrared Data Association

Wired PAN Connection


A data cable such as USB or FireWire is an example of a wired PAN connection. It is still a personal area network because the connection exists only for the user’s own devices and personal use.

Wireless Personal Area Network


A wireless personal area network (WPAN) is a personal area network—a network for interconnecting devices centered on an individual person’s workspace—in which the connections are wireless. Wireless PAN is based on the standard IEEE 802.15. The two kinds of wireless technologies used for WPAN are Bluetooth and Infrared Data Association.

A WPAN could serve to interconnect all the ordinary computing and communicating devices that many people have on their desk or carry with them today; or it could serve a more specialized purpose such as allowing the surgeon and other team members to communicate during an operation.

A key concept in WPAN technology is known as “plugging in”. In the ideal scenario, when any two WPAN-equipped devices come into close proximity (within several meters of each other) or within a few kilometers of a central server, they can communicate as if connected by a cable. Another important feature is the ability of each device to lock out other devices selectively, preventing needless interference or unauthorized access to information.

The technology for WPANs is in its infancy and is undergoing rapid development. Proposed operating frequencies are around 2.4 GHz in digital modes. The objective is to facilitate seamless operation among home or business devices and systems. Every device in a WPAN will be able to plug into any other device in the same WPAN, provided they are within physical range of one another. In addition, WPANs worldwide will be interconnected. Thus, for example, an archeologist on site in Greece might use a PDA to directly access databases at the University of Minnesota in Minneapolis, and to transmit findings to that database.

Bluetooth

Bluetooth uses short-range radio waves. While Bluetooth historically covered the shorter distances associated with a PAN, the Bluetooth 5 standard and Bluetooth Mesh have extended that range considerably, and long-range Bluetooth routers with augmented antenna arrays can connect Bluetooth devices at distances of up to 1,000 feet. PAN uses remain; for example, Bluetooth devices such as keyboards, pointing devices, audio headsets and printers may connect wirelessly to personal digital assistants (PDAs), cell phones, or computers.

A Bluetooth PAN is also called a piconet (combination of the prefix “pico,” meaning very small or one trillionth, and network), and is composed of up to 8 active devices in a master-slave relationship (a very large number of devices can be connected in “parked” mode). The first Bluetooth device in the piconet is the master, and all other devices are slaves that communicate with the master. A piconet typically has a range of 10 metres (33 ft), although ranges of up to 100 metres (330 ft) can be reached under ideal circumstances. With Bluetooth mesh networking the range and number of devices is extended by relaying information from one to another. Such a network doesn’t have a master device and may or may not be treated as a PAN.
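
A minimal data-structure sketch of the master/slave relationship described above, enforcing the limit of seven active slaves per piconet; the class and method names are illustrative and not part of any real Bluetooth stack.

    # Sketch of a Bluetooth piconet as a data structure: one master, up to seven
    # active slaves, further devices held in "parked" mode. Names are illustrative.

    class Piconet:
        MAX_ACTIVE_SLAVES = 7   # 8 active devices in total, including the master

        def __init__(self, master):
            self.master = master
            self.active_slaves = []
            self.parked = []

        def join(self, device):
            """Admit a device as an active slave if a slot is free, otherwise park it."""
            if len(self.active_slaves) < self.MAX_ACTIVE_SLAVES:
                self.active_slaves.append(device)
            else:
                self.parked.append(device)

    net = Piconet(master="phone")
    for name in ["headset", "keyboard", "mouse", "watch", "printer", "pedal", "tag", "lamp"]:
        net.join(name)
    print(len(net.active_slaves), "active,", len(net.parked), "parked")  # 7 active, 1 parked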

Infrared Data Association

Infrared Data Association (IrDA) uses infrared light, which has a frequency below the human eye’s sensitivity. Infrared in general is used, for instance, in TV remotes. Typical WPAN devices that use IrDA include printers, keyboards, and other serial data interfaces.

Wireless LAN

From Wikipedia, the free encyclopedia

A wireless local area network (WLAN) is a wireless computer network that links two or more devices using wireless communication within a limited area such as a home, school, computer laboratory, or office building. This gives users the ability to move around within a local coverage area and yet still be connected to the network. Through a gateway, a WLAN can also provide a connection to the wider Internet.

Most modern WLANs are based on IEEE 802.11 standards and are marketed under the Wi-Fi brand name.

Wireless LANs have become popular for use in the home, due to their ease of installation and use. They are also popular in commercial properties that offer wireless access to their customers.

Contents
1 History
2 Architecture
2.1 Stations
2.2 Basic service set
2.3 Extended service set
2.4 Distribution system
3 Types of wireless LANs
3.1 Infrastructure
3.2 Peer-to-peer
3.3 Bridge
3.4 Wireless distribution system
4 Roaming
5 Applications
6 Performance and throughput

History


[Image: Wireless network]

This notebook computer is connected to a wireless access point using a PC card wireless card.

[Image: Wi-Fi range diagram]

An example of a Wi-Fi network

Norman Abramson, a professor at the University of Hawaii, developed the world’s first wireless computer communication network, ALOHAnet. The system became operational in 1971 and included seven computers deployed over four islands to communicate with the central computer on the island of Oahu without using phone lines.

[Image: RouterBoard 112 with U.FL-RSMA pigtail and R52 mini PCI Wi-Fi card]

An embedded RouterBoard 112 with U.FL-RSMA pigtail and R52 mini PCI Wi-Fi card, widely used by wireless Internet service providers

[Image: WLAN PCI card]

54 Mbit/s WLAN PCI Card (802.11g)

Wireless LAN hardware initially cost so much that it was only used as an alternative to cabled LAN in places where cabling was difficult or impossible. Early development included industry-specific solutions and proprietary protocols, but at the end of the 1990s these were replaced by standards, primarily the various versions of IEEE 802.11 (in products using the Wi-Fi brand name). Beginning in 1991, a European alternative known as HiperLAN/1 was pursued by the European Telecommunications Standards Institute (ETSI), with a first version approved in 1996. This was followed by a HiperLAN/2 functional specification with ATM influences, completed in February 2000. Neither European standard achieved the commercial success of 802.11, although much of the work on HiperLAN/2 has survived in the physical specification (PHY) for IEEE 802.11a, which is nearly identical to the PHY of HiperLAN/2.

In 2009 802.11n was added to 802.11. It operates in both the 2.4 GHz and 5 GHz bands at a maximum data transfer rate of 600 Mbit/s. Most newer routers are able to utilise both wireless bands, known as dualband. This allows data communications to avoid the crowded 2.4 GHz band, which is also shared with Bluetooth devices and microwave ovens. The 5 GHz band is also wider than the 2.4 GHz band, with more channels, which permits a greater number of devices to share the space. Not all channels are available in all regions.

A HomeRF group formed in 1997 to promote a technology aimed for residential use, but it disbanded at the end of 2002.

Architecture


Stations

All components that can connect into a wireless medium in a network are referred to as stations (STA). All stations are equipped with wireless network interface controllers (WNICs). Wireless stations fall into two categories: wireless access points, and clients. Access points (APs), normally wireless routers, are base stations for the wireless network. They transmit and receive radio frequencies for wireless enabled devices to communicate with. Wireless clients can be mobile devices such as laptops, personal digital assistants, IP phones and other smartphones, or non-portable devices such as desktop computers and workstations that are equipped with a wireless network interface.

Basic Service Set

The basic service set (BSS) is the set of all stations that can communicate with each other at the PHY layer. Every BSS has an identification (ID) called the BSSID, which is the MAC address of the access point servicing the BSS.

There are two types of BSS: the independent BSS (also referred to as IBSS) and the infrastructure BSS. An independent BSS (IBSS) is an ad hoc network that contains no access points, which means it cannot connect to any other basic service set.

Extended Service Set

An extended service set (ESS) is a set of connected BSSs. Access points in an ESS are connected by a distribution system. Each ESS has an ID called the SSID which is a 32-byte (maximum) character string.
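
Because the limit is 32 bytes rather than 32 visible characters, a quick validity check has to count the encoded length; the small Python helper below is an illustrative sketch.

    # Sketch: validate an SSID against the 32-byte maximum. Multi-byte UTF-8
    # characters can reach the limit with fewer than 32 visible characters.

    def is_valid_ssid(ssid: str) -> bool:
        return 0 < len(ssid.encode("utf-8")) <= 32

    print(is_valid_ssid("CampusNet-Building7"))   # True
    print(is_valid_ssid("x" * 33))                # False: 33 bytes
    print(is_valid_ssid("café-" + "x" * 27))      # False: é encodes to 2 bytes -> 33 bytes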

Distribution System

A distribution system (DS) connects access points in an extended service set. The concept of a DS can be used to increase network coverage through roaming between cells.

A DS can be wired or wireless. Current wireless distribution systems are mostly based on WDS or mesh protocols, though other systems are in use.

Types of Wireless LANs


IEEE 802.11 has two basic modes of operation: infrastructure mode and ad hoc mode. In ad hoc mode, mobile units transmit directly peer-to-peer. In infrastructure mode, mobile units communicate through an access point that serves as a bridge to other networks (such as the Internet or a LAN).

Since wireless communication uses a more open medium for communication in comparison to wired LANs, the 802.11 designers also included encryption mechanisms, Wired Equivalent Privacy (WEP, now insecure) and Wi-Fi Protected Access (WPA, WPA2), to secure wireless computer networks. Many access points will also offer Wi-Fi Protected Setup, a quick (but now insecure) method of joining a new device to an encrypted network.

Infrastructure

Most Wi-Fi networks are deployed in infrastructure mode.

In infrastructure mode, a base station acts as a wireless access point hub, and nodes communicate through the hub. The hub usually, but not always, has a wired or fiber network connection, and may have permanent wireless connections to other nodes.

Wireless access points are usually fixed, and provide service to their client nodes within range.

Wireless clients, such as laptops, smartphones etc. connect to the access point to join the network.

Sometimes a network will have multiple access points with the same SSID and security arrangement. In that case, connecting to any access point on that network joins the client to the network, and the client software will try to choose the access point that gives the best service, such as the access point with the strongest signal.
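
The client-side choice can be sketched as picking, among access points that advertise the same SSID, the one with the strongest received signal; the scan results and RSSI figures below are made-up examples.

    # Sketch: choosing among several access points that share the same SSID by
    # picking the strongest signal (RSSI in dBm; values closer to 0 are stronger).

    scan_results = [
        {"ssid": "OfficeNet", "bssid": "aa:bb:cc:00:00:01", "rssi": -71},
        {"ssid": "OfficeNet", "bssid": "aa:bb:cc:00:00:02", "rssi": -58},
        {"ssid": "GuestNet",  "bssid": "aa:bb:cc:00:00:03", "rssi": -40},
    ]

    def best_access_point(results, ssid):
        candidates = [ap for ap in results if ap["ssid"] == ssid]
        return max(candidates, key=lambda ap: ap["rssi"]) if candidates else None

    print(best_access_point(scan_results, "OfficeNet"))   # the -58 dBm access point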

Peer-to-peer

[Image: Ad hoc WLAN]

Peer-to-peer or ad hoc wireless LAN

An ad hoc network (not the same as a WiFi Direct network) is a network where stations communicate only peer to peer (P2P). There is no base and no one gives permission to talk. This is accomplished using the Independent Basic Service Set (IBSS).

A WiFi Direct network is another type of network where stations communicate peer to peer.

In a Wi-Fi P2P group, the group owner operates as an access point and all other devices are clients. There are two main methods to establish a group owner in the Wi-Fi Direct group. In one approach, the user sets up a P2P group owner manually. This method is also known as Autonomous Group Owner (autonomous GO). In the second method, also called negotiation-based group creation, two devices compete based on the group owner intent value. The device with higher intent value becomes a group owner and the second device becomes a client. Group owner intent value can depend on whether the wireless device performs a cross-connection between an infrastructure WLAN service and a P2P group, remaining power in the wireless device, whether the wireless device is already a group owner in another group and/or a received signal strength of the first wireless device.
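
The negotiation-based method can be sketched as a comparison of intent values; the factors scored below mirror those listed above, but the weighting (and the omission of the protocol's tie-breaker bit) is an assumption for illustration only.

    # Sketch of negotiation-based Wi-Fi Direct group formation: the device with
    # the higher group-owner intent value becomes the group owner (GO).
    # The weighting of the factors below is assumed for illustration only.

    def go_intent(battery_pct, already_go_elsewhere, can_cross_connect):
        """Derive a 0-15 intent value from device capabilities (illustrative weights)."""
        intent = battery_pct // 10                  # more remaining power -> higher intent
        intent += 3 if can_cross_connect else 0     # can bridge to an infrastructure WLAN
        intent -= 2 if already_go_elsewhere else 0  # already burdened as a GO elsewhere
        return max(0, min(15, intent))

    phone  = go_intent(battery_pct=80, already_go_elsewhere=False, can_cross_connect=True)
    camera = go_intent(battery_pct=35, already_go_elsewhere=False, can_cross_connect=False)
    owner = "phone" if phone > camera else "camera"
    print(f"phone intent={phone}, camera intent={camera} -> group owner: {owner}")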

A peer-to-peer network allows wireless devices to communicate directly with each other. Wireless devices within range of each other can discover and communicate directly without involving central access points. This method is typically used by two computers so that they can connect to each other to form a network, and it generally occurs between devices within close range of each other.

If a signal strength meter is used in this situation, it may not read the strength accurately and can be misleading, because it registers the strength of the strongest signal, which may be the closest computer.

[Image: Hidden station problem]

Hidden node problem: devices A and C are both communicating with B, but are unaware of each other

IEEE 802.11 defines the physical layer (PHY) and MAC (Media Access Control) layers based on CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance). The 802.11 specification includes provisions designed to minimize collisions, because two mobile units may both be in range of a common access point, but out of range of each other.
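
The collision-avoidance idea amounts to listen-before-talk with a random, exponentially growing backoff; the channel model and contention-window values in the Python sketch below are simplified assumptions rather than the full 802.11 distributed coordination function.

    # Simplified sketch of CSMA/CA (listen before talk with random backoff).
    # The channel model and timing constants are illustrative, not 802.11-accurate.

    import random

    def channel_busy():
        """Stand-in for carrier sensing; pretend the medium is busy 30% of the time."""
        return random.random() < 0.3

    def send_frame(max_attempts=5):
        contention_window = 15                      # initial contention window (slots), assumed
        for attempt in range(max_attempts):
            if not channel_busy():                  # medium idle: transmit
                return f"sent on attempt {attempt + 1}"
            backoff = random.randint(0, contention_window)   # slots to wait before retrying
            contention_window = min(2 * contention_window + 1, 1023)  # exponential backoff
            # a real radio would now defer for 'backoff' slot times before sensing again
        return "gave up after repeated busy channel"

    random.seed(1)
    print(send_frame())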

Bridge

A bridge can be used to connect networks, typically of different types. A wireless Ethernet bridge allows the connection of devices on a wired Ethernet network to a wireless network. The bridge acts as the connection point to the Wireless LAN.

Wireless Distribution System

A Wireless Distribution System enables the wireless interconnection of access points in an IEEE 802.11 network. It allows a wireless network to be expanded using multiple access points without the need for a wired backbone to link them, as is traditionally required. The notable advantage of WDS over other solutions is that it preserves the MAC addresses of client packets across links between access points.

An access point can be either a main, relay or remote base station. A main base station is typically connected to the wired Ethernet. A relay base station relays data between remote base stations, wireless clients or other relay stations to either a main or another relay base station. A remote base station accepts connections from wireless clients and passes them to relay or main stations. Connections between “clients” are made using MAC addresses rather than by specifying IP assignments.

All base stations in a Wireless Distribution System must be configured to use the same radio channel, and share WEP keys or WPA keys if they are used. They can be configured to different service set identifiers. WDS also requires that every base station be configured to forward to others in the system as mentioned above.
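
These configuration constraints (a common radio channel and shared keys, with SSIDs free to differ) lend themselves to a simple consistency check; the dictionary layout below is an assumed representation, not a real vendor configuration format.

    # Sketch: checking that WDS base stations share a radio channel and key,
    # while their SSIDs are allowed to differ. The config layout is illustrative.

    stations = [
        {"name": "main",   "channel": 6,  "wpa_key": "k3y", "ssid": "Plant-A"},
        {"name": "relay1", "channel": 6,  "wpa_key": "k3y", "ssid": "Plant-B"},
        {"name": "remote", "channel": 11, "wpa_key": "k3y", "ssid": "Plant-C"},
    ]

    def wds_config_errors(stations):
        errors = []
        if len({s["channel"] for s in stations}) > 1:
            errors.append("all WDS base stations must use the same radio channel")
        if len({s["wpa_key"] for s in stations}) > 1:
            errors.append("all WDS base stations must share the same WPA/WEP key")
        return errors   # differing SSIDs are permitted, so they are not checked

    print(wds_config_errors(stations))   # flags the channel mismatch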

WDS may also be referred to as repeater mode because it appears to bridge and accept wireless clients at the same time (unlike traditional bridging). It should be noted, however, that throughput in this method is halved for all clients connected wirelessly.

When it is difficult to connect all of the access points in a network by wires, it is also possible to put up access points as repeaters.

Roaming


Roaming among Wireless Local Area Networks

There are two definitions for wireless LAN roaming:

  1. Internal Roaming: The Mobile Station (MS) moves from one access point (AP) to another AP within a home network if the signal strength is too weak. An authentication server (RADIUS) performs the re-authentication of the MS via 802.1x (e.g. with PEAP). Billing and QoS are handled in the home network. A Mobile Station roaming from one access point to another often interrupts the flow of data between the Mobile Station and an application connected to the network. The Mobile Station, for instance, periodically monitors the presence of alternative access points (ones that will provide a better connection). At some point, based on proprietary mechanisms, the Mobile Station decides to re-associate with an access point having a stronger wireless signal. The Mobile Station, however, may lose a connection with an access point before associating with another access point. In order to provide reliable connections with applications, the Mobile Station must generally include software that provides session persistence.
  2. External Roaming: The MS (client) moves into the WLAN of another Wireless Internet Service Provider (WISP) and uses its services (hotspot). The user can use a foreign network independently of their home network, provided that it is open to visitors. Special authentication and billing systems must exist for mobile services in a foreign network.

Applications


Wireless LANs have many applications. Modern implementations of WLANs range from small in-home networks to large, campus-sized ones to completely mobile networks on airplanes and trains.

Users can access the Internet from WLAN hotspots in restaurants, hotels, and now with portable devices that connect to 3G or 4G networks. Oftentimes these types of public access points require no registration or password to join the network. Others can be accessed once registration has occurred and/or a fee is paid.

Existing Wireless LAN infrastructures can also be used to work as indoor positioning systems with no modification to the existing hardware.

Performance and Throughput


WLAN, organised in various layer 2 variants (IEEE 802.11), has different characteristics. Across all flavours of 802.11, maximum achievable throughputs are either given based on measurements under ideal conditions or in the layer 2 data rates. This, however, does not apply to typical deployments in which data are being transferred between two endpoints of which at least one is typically connected to a wired infrastructure and the other endpoint is connected to an infrastructure via a wireless link.

[Image: Throughput envelope, 802.11g]

Graphical representation of Wi-Fi application-specific (UDP) performance envelope, 2.4 GHz band, with 802.11g

[Image: Throughput envelope, 802.11n]

Graphical representation of Wi-Fi application-specific (UDP) performance envelope, 2.4 GHz band, with 802.11n using 40 MHz channels

This means that typically data frames pass an 802.11 (WLAN) medium and are being converted to 802.3 (Ethernet) or vice versa.

Due to the difference in the frame (header) lengths of these two media, the packet size of an application determines the speed of the data transfer. This means that an application which uses small packets (e.g. VoIP) creates a data flow with high overhead traffic (i.e. a low goodput).
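
The effect can be made concrete with a rough goodput calculation; the 100-byte per-packet overhead used below is an assumed round figure standing in for the combined headers and medium-access overhead, not a measured 802.11 value.

    # Rough goodput illustration: the smaller the payload, the larger the share
    # of each transmission spent on headers and medium access. The ~100-byte
    # overhead per packet is an assumed round figure, not a measured 802.11 value.

    def goodput_fraction(payload_bytes, overhead_bytes=100):
        """Fraction of the transmitted bytes that is useful application data."""
        return payload_bytes / (payload_bytes + overhead_bytes)

    for payload in (40, 160, 1460):     # VoIP-sized, small, and near-MTU payloads
        print(f"{payload:5d}-byte payload -> {goodput_fraction(payload):.0%} goodput")
    # Small VoIP payloads spend most of the airtime on overhead;
    # near-MTU payloads approach the nominal link rate.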

Other factors which contribute to the overall application data rate are the speed with which the application transmits the packets (i.e. the data rate) and the energy with which the wireless signal is received.

The latter is determined by distance and by the configured output power of the communicating devices.

The same references apply to the attached throughput graphs, which show measurements of UDP throughput. Each represents an average (UDP) throughput (the error bars are there, but barely visible due to the small variation) of 25 measurements.

Each is with a specific packet size (small or large) and with a specific data rate (10 kbit/s to 100 Mbit/s). Markers for traffic profiles of common applications are included as well. This text and these measurements do not cover packet errors, but information about this can be found at the above references. The table below shows the maximum achievable (application-specific) UDP throughput in the same scenarios (same references again) with various different WLAN (802.11) flavours. The measurement hosts were 25 meters apart from each other; loss is again ignored.

Local Area Network

From Wikipedia, the free encyclopedia

[Image: Ethernet LAN diagram]

A conceptual diagram of a local area network.

A local area network (LAN) is a computer network that interconnects computers within a limited area such as a residence, school, laboratory, university campus or office building. By contrast, a wide area network (WAN) not only covers a larger geographic distance, but also generally involves leased telecommunication circuits.

Ethernet and Wi-Fi are the two most common technologies in use for local area networks. Historical technologies include ARCNET, Token ring, and AppleTalk.

Contents
1 History
2 Cabling
3 Wireless media
4 Technical aspects

History


The increasing demand and use of computers in universities and research labs in the late 1960s generated the need to provide high-speed interconnections between computer systems. A 1970 report from the Lawrence Radiation Laboratory detailing the growth of their “Octopus” network gave a good indication of the situation.

A number of experimental and early commercial LAN technologies were developed in the 1970s. Cambridge Ring was developed at Cambridge University starting in 1974. Ethernet was developed at Xerox PARC in 1973–1975, and filed as U.S. Patent 4,063,220. In 1976, after the system was deployed at PARC, Robert Metcalfe and David Boggs published a seminal paper, “Ethernet: Distributed Packet-Switching for Local Computer Networks”. ARCNET was developed by Datapoint Corporation in 1976 and announced in 1977. It had the first commercial installation in December 1977 at Chase Manhattan Bank in New York.

The development and proliferation of personal computers using the CP/M operating system in the late 1970s, and later DOS-based systems starting in 1981, meant that many sites grew to dozens or even hundreds of computers. The initial driving force for networking was generally to share storage and printers, which were both expensive at the time. There was much enthusiasm for the concept and for several years, from about 1983 onward, computer industry pundits would regularly declare the coming year to be, “The year of the LAN”.

In practice, the concept was marred by proliferation of incompatible physical layer and network protocol implementations, and a plethora of methods of sharing resources. Typically, each vendor would have its own type of network card, cabling, protocol, and network operating system. A solution appeared with the advent of Novell NetWare, which provided even-handed support for dozens of competing card/cable types and a much more sophisticated operating system than most of its competitors. NetWare dominated the personal computer LAN business from early after its introduction in 1983 until the mid-1990s, when Microsoft introduced Windows NT Advanced Server and Windows for Workgroups.

Of the competitors to NetWare, only Banyan Vines had comparable technical strengths, but Banyan never gained a secure base. Microsoft and 3Com worked together to create a simple network operating system which formed the base of 3Com’s 3+Share, Microsoft’s LAN Manager and IBM’s LAN Server – but none of these was particularly successful.

During the same period, Unix workstations were using TCP/IP networking. Although this market segment is now much reduced, the technologies developed in this area continue to be influential on the Internet and in both Linux and Apple Mac OS X networking—and the TCP/IP protocol has replaced IPX, AppleTalk, NBF, and other protocols used by the early PC LANs.

Cabling


Early LAN cabling had generally been based on various grades of coaxial cable. Shielded twisted pair was used in IBM’s Token Ring LAN implementation, but in 1984, StarLAN showed the potential of simple unshielded twisted pair by using Cat3 cable—the same simple cable used for telephone systems. This led to the development of 10BASE-T (and its successors) and structured cabling which is still the basis of most commercial LANs today.

While fiber-optic cabling is common for links between switches, use of fiber to the desktop is rare.

Wireless Media


Many LANs use wireless technologies that are built into smartphones, tablet computers and laptops. In a wireless local area network, users may move unrestricted in the coverage area. Wireless networks have become popular in residences and small businesses because of their ease of installation. Guests are often offered Internet access via a hotspot service.

Technical Aspects


Network topology describes the layout of interconnections between devices and network segments. At the data link layer and physical layer, a wide variety of LAN topologies have been used, including ring, bus, mesh and star. At the higher layers, NetBEUI, IPX/SPX, AppleTalk and others were once common, but the Internet Protocol Suite (TCP/IP) has prevailed as a standard of choice.

Simple LANs generally consist of cabling and one or more switches. A switch can be connected to a router, cable modem, or ADSL modem for Internet access. A LAN can include a wide variety of other network devices such as firewalls, load balancers, and network intrusion detection. Advanced LANs are characterized by their use of redundant links with switches using the spanning tree protocol to prevent loops, their ability to manage differing traffic types via quality of service (QoS), and to segregate traffic with VLANs.

LANs can maintain connections with other LANs via leased lines, leased services, or across the Internet using virtual private network technologies. Depending on how the connections are established and secured, and the distance involved, such linked LANs may also be classified as a metropolitan area network (MAN) or a wide area network (WAN).

Metropolitan Area Network

From Wikipedia, the free encyclopedia

A metropolitan area network (MAN) is a computer network that interconnects users with computer resources in a geographic area or region larger than that covered by even a large local area network (LAN) but smaller than the area covered by a wide area network (WAN). The term is applied to the interconnection of networks in a city into a single larger network (which may then also offer efficient connection to a wide area network). It is also used to mean the interconnection of several local area networks by bridging them with backbone lines. The latter usage is also sometimes referred to as a campus network.

Wide Area Network

From Wikipedia, the free encyclopedia

[Image: LAN and WAN scheme]

A wide area network (WAN) is a telecommunications network or computer network that extends over a large geographical distance. Wide area networks are often established with leased telecommunication circuits.

Business, education and government entities use wide area networks to relay data to staff, students, clients, buyers, and suppliers from various locations across the world. In essence, this mode of telecommunication allows a business to effectively carry out its daily function regardless of location. The Internet may be considered a WAN.

Related terms for other types of networks are personal area networks (PANs), local area networks (LANs), campus area networks (CANs), or metropolitan area networks (MANs) which are usually limited to a room, building, campus or specific metropolitan area respectively.

Contents
1 Design options
2 Connection technology
3 List of WAN types
4 See also
5 References
6 External links

Design Options


The textbook definition of a WAN is a computer network spanning regions, countries, or even the world. However, in terms of the application of computer networking protocols and concepts, it may be best to view WANs as computer networking technologies used to transmit data over long distances, and between different LANs, MANs and other localised computer networking architectures. This distinction stems from the fact that common LAN technologies operating at lower layers of the OSI model (such as the forms of Ethernet or Wi-Fi) are often designed for physically proximal networks, and thus cannot transmit data over tens, hundreds or even thousands of miles or kilometres.

However, WANs are not limited to connecting physically disparate LANs. A CAN, for example, may have a localized backbone of a WAN technology, which connects different LANs within a campus. This could be to facilitate higher-bandwidth applications or provide better functionality for users in the CAN.

WANs are used to connect LANs and other types of networks together so that users and computers in one location can communicate with users and computers in other locations. Many WANs are built for one particular organization and are private. Others, built by Internet service providers, provide connections from an organization’s LAN to the Internet. WANs are often built using leased lines. At each end of the leased line, a router connects the LAN on one side with a second router within the LAN on the other. Leased lines can be very expensive. Instead of using leased lines, WANs can also be built using less costly circuit switching or packet switching methods. Network protocols including TCP/IP deliver transport and addressing functions. Protocols including Packet over SONET/SDH, Multiprotocol Label Switching (MPLS), Asynchronous Transfer Mode (ATM) and Frame Relay are often used by service providers to deliver the links that are used in WANs. X.25 was an important early WAN protocol, and is often considered to be the “grandfather” of Frame Relay as many of the underlying protocols and functions of X.25 are still in use today (with upgrades) by Frame Relay.

Academic research into wide area networks can be broken down into three areas: mathematical models, network emulation and network simulation.

Performance improvements are sometimes delivered via wide area file services or WAN optimization.

Connection Technology


Many technologies are available for wide area network links. Examples include circuit-switched telephone lines, radio wave transmission, and optical fiber. New developments in technologies have successively increased transmission rates. Around 1960, a 110 bit/s (bits per second) line was normal on the edge of the WAN, while core links of 56 kbit/s to 64 kbit/s were considered fast. As of 2014, households are connected to the Internet with dial-up, ADSL, cable, WiMAX, 4G or fiber. The speeds available to users range from 28.8 kbit/s over a dial-up modem and telephone connection to as high as 100 Gbit/s over a 100 Gigabit Ethernet connection.
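
As a rough illustration of what these figures mean in practice, the sketch below (Python, with purely illustrative numbers) estimates how long a 1 GB transfer would take over a 28.8 kbit/s dial-up modem, a 10 Mbit/s cable link and a 100 Gbit/s Ethernet link, ignoring protocol overhead, latency and contention.

  # Rough transfer-time estimate for a 1 GB file at several nominal link rates.
  # Figures are illustrative only; real throughput is reduced by protocol
  # overhead, latency and contention.

  FILE_SIZE_BITS = 1 * 10**9 * 8          # 1 gigabyte expressed in bits

  link_rates_bps = {
      "28.8 kbit/s dial-up": 28_800,
      "10 Mbit/s cable":     10_000_000,
      "100 Gbit/s Ethernet": 100_000_000_000,
  }

  for name, rate in link_rates_bps.items():
      seconds = FILE_SIZE_BITS / rate
      print(f"{name:22s} ~ {seconds:,.1f} s ({seconds / 3600:.4f} h)")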

AT&T plans to begin trials of 400 Gigabit Ethernet for businesses in 2017. Researchers Robert Maher, Alex Alvarado, Domaniç Lavery and Polina Bayvel of University College London were able to increase networking speeds to 1.125 terabits per second. Christos Santis, graduate student Scott Steger, and Amnon Yariv, the Martin and Eileen Summerfield Professor of Applied Physics, developed a new laser that quadruples transfer speeds over fiber optic cabling. If these two technologies were combined, a transfer speed of up to 4.5 terabits per second could potentially be achieved, although it is unlikely that this will be commercially implemented in the near future.

List of WAN types

  • ATM
  • Cable modem
  • Dial-up
  • DSL
  • Frame relay
  • ISDN
  • Leased line
  • SONET
  • X.25
  • SD-WAN

Peer-to-Peer

From Wikipedia, the free encyclopedia

Peer-to-peer (P2P) computing or networking is a distributed application architecture that partitions tasks or workloads between peers. Peers are equally privileged, equipotent participants in the application. They are said to form a peer-to-peer network of nodes.

Peers make a portion of their resources, such as processing power, disk storage or network bandwidth, directly available to other network participants, without the need for central coordination by servers or stable hosts. Peers are both suppliers and consumers of resources, in contrast to the traditional client-server model in which the consumption and supply of resources is divided. Emerging collaborative P2P systems are going beyond the era of peers doing similar things while sharing resources; they are looking for diverse peers that can bring unique resources and capabilities to a virtual community, thereby empowering it to engage in tasks greater than those that can be accomplished by individual peers, yet that are beneficial to all the peers.

While P2P systems had previously been used in many application domains, the architecture was popularized by the file sharing system Napster, originally released in 1999. The concept has inspired new structures and philosophies in many areas of human interaction. In such social contexts, peer-to-peer as a meme refers to the egalitarian social networking that has emerged throughout society, enabled by Internet technologies in general.

Contents
1 Historical development
2 Architecture
2.1 Routing and resource discovery
2.1.1 Unstructured networks
2.1.2 Structured networks
2.1.3 Hybrid models
2.2 Security and trust
2.2.1 Routing attacks
2.2.2 Corrupted data and malware
2.3 Resilient and scalable computer networks
2.4 Distributed storage and search
3 Applications
3.1 Content delivery
3.2 File-sharing networks
3.2.1 Copyright infringements
3.3 Multimedia
3.4 Other P2P applications
4 Social implications
4.1 Incentivizing resource sharing and cooperation
4.1.1 Privacy and anonymity
5 Political implications
5.1 Intellectual property law and illegal sharing
5.2 Network neutrality
6 Current research

Historical Development


SETI@home_Multi-Beam_screensaver

SETI@home was established in 1999

While P2P systems had previously been used in many application domains, the concept was popularized by file sharing systems such as the music-sharing application Napster (originally released in 1999). The peer-to-peer movement allowed millions of Internet users to connect “directly, forming groups and collaborating to become user-created search engines, virtual supercomputers, and filesystems.” The basic concept of peer-to-peer computing was envisioned in earlier software systems and networking discussions, reaching back to principles stated in the first Request for Comments, RFC 1.

Tim Berners-Lee’s vision for the World Wide Web was close to a P2P network in that it assumed each user of the web would be an active editor and contributor, creating and linking content to form an interlinked “web” of links. The early Internet was more open than it is today: two machines connected to the Internet could send packets to each other without firewalls and other security measures in the way. This contrasts with the broadcasting-like structure of the web as it has developed over the years. As a precursor to the Internet, ARPANET was a successful client-server network where “every participating node could request and serve content.” However, ARPANET was not self-organized, and it lacked the ability to “provide any means for context or content-based routing beyond ‘simple’ address-based routing.”

Therefore, a distributed messaging system that is often described as an early peer-to-peer architecture was established: USENET. USENET was developed in 1979 and is a system that enforces a decentralized model of control. The basic model is a client-server model from the user or client perspective that offers a self-organizing approach to newsgroup servers. However, news servers communicate with one another as peers to propagate Usenet news articles over the entire group of network servers. The same consideration applies to SMTP email in the sense that the core email-relaying network of mail transfer agents has a peer-to-peer character, while the periphery of e-mail clients and their direct connections is strictly a client-server relationship.

In May 1999, with millions more people on the Internet, Shawn Fanning introduced the music and file-sharing application called Napster. Napster was the beginning of peer-to-peer networks, as we know them today, where “participating users establish a virtual network, entirely independent from the physical network, without having to obey any administrative authorities or restrictions.”

Architecture


A peer-to-peer network is designed around the notion of equal peer nodes simultaneously functioning as both “clients” and “servers” to the other nodes on the network. This model of network arrangement differs from the client–server model where communication is usually to and from a central server. A typical example of a file transfer that uses the client-server model is the File Transfer Protocol (FTP) service in which the client and server programs are distinct: the clients initiate the transfer, and the servers satisfy these requests.

Routing and Resource Discovery

Peer-to-peer networks generally implement some form of virtual overlay network on top of the physical network topology, where the nodes in the overlay form a subset of the nodes in the physical network. Data is still exchanged directly over the underlying TCP/IP network, but at the application layer peers are able to communicate with each other directly, via the logical overlay links (each of which corresponds to a path through the underlying physical network). Overlays are used for indexing and peer discovery, and make the P2P system independent from the physical network topology. Based on how the nodes are linked to each other within the overlay network, and how resources are indexed and located, we can classify networks as unstructured or structured (or as a hybrid between the two).
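
As a minimal sketch of the overlay idea (with made-up peer names and addresses), the Python snippet below keeps the logical overlay links a peer maintains separate from the physical TCP/IP endpoints those links resolve to; each logical neighbour corresponds to a path through the underlying network.

  # Minimal sketch of an application-layer overlay: logical neighbour links
  # are kept separately from the physical (IP, port) endpoints they map to.
  # Peer names and addresses are invented for illustration.

  physical_endpoints = {          # underlying TCP/IP addresses
      "peer-a": ("203.0.113.10", 7000),
      "peer-b": ("198.51.100.4", 7000),
      "peer-c": ("192.0.2.77", 7000),
  }

  overlay_links = {               # logical overlay topology (who talks to whom)
      "peer-a": ["peer-b"],
      "peer-b": ["peer-a", "peer-c"],
      "peer-c": ["peer-b"],
  }

  def overlay_neighbours(node):
      """Resolve a node's overlay neighbours to physical endpoints."""
      return [(n, physical_endpoints[n]) for n in overlay_links[node]]

  print(overlay_neighbours("peer-b"))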

Unstructured Networks

589px-Unstructured_peer-to-peer_network_diagram

Overlay network diagram for an unstructured P2P network, illustrating the ad hoc nature of the connections between nodes

Unstructured peer-to-peer networks do not impose a particular structure on the overlay network by design, but rather are formed by nodes that randomly form connections to each other. (Gnutella, Gossip, and Kazaa are examples of unstructured P2P protocols).

Because there is no structure globally imposed upon them, unstructured networks are easy to build and allow for localized optimizations to different regions of the overlay. Also, because the role of all peers in the network is the same, unstructured networks are highly robust in the face of high rates of “churn”—that is, when large numbers of peers are frequently joining and leaving the network.

However, the primary limitations of unstructured networks also arise from this lack of structure. In particular, when a peer wants to find a desired piece of data in the network, the search query must be flooded through the network to find as many peers as possible that share the data. Flooding causes a very high amount of signaling traffic in the network, uses more CPU/memory (by requiring every peer to process all search queries), and does not ensure that search queries will always be resolved. Furthermore, since there is no correlation between a peer and the content managed by it, there is no guarantee that flooding will find a peer that has the desired data. Popular content is likely to be available at several peers and any peer searching for it is likely to find the same thing. But if a peer is looking for rare data shared by only a few other peers, then it is highly unlikely that the search will be successful.
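
The flooding behaviour described above can be sketched as follows (Python, with an invented topology and invented file names): a query is forwarded to all neighbours up to a hop limit, so signalling cost grows quickly, and a rare item may still be missed once the limit is reached.

  # Flooding search in an unstructured overlay, limited by a hop counter (TTL).
  # The topology and stored items are invented for illustration.

  neighbours = {
      "A": ["B", "C"],
      "B": ["A", "D"],
      "C": ["A", "D"],
      "D": ["B", "C", "E"],
      "E": ["D"],
  }
  stored_items = {"E": {"rare-file.ogg"}, "B": {"popular-file.mp3"}}

  def flood_search(start, item, ttl):
      """Return the set of peers holding `item`, flooding up to `ttl` hops."""
      frontier, visited, hits = {start}, set(), set()
      while frontier and ttl >= 0:
          visited |= frontier
          hits |= {n for n in frontier if item in stored_items.get(n, set())}
          frontier = {m for n in frontier for m in neighbours[n]} - visited
          ttl -= 1
      return hits

  print(flood_search("A", "rare-file.ogg", ttl=2))   # empty set: not reached
  print(flood_search("A", "rare-file.ogg", ttl=3))   # {'E'}: found at 3 hops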

Structured Networks

640px-Structured_(DHT)_peer-to-peer_network_diagram

Overlay network diagram for a structured P2P network, using a distributed hash table (DHT) to identify and locate nodes/resources

In structured peer-to-peer networks the overlay is organized into a specific topology, and the protocol ensures that any node can efficiently search the network for a file/resource, even if the resource is extremely rare.

The most common type of structured P2P networks implement a distributed hash table (DHT), in which a variant of consistent hashing is used to assign ownership of each file to a particular peer. This enables peers to search for resources on the network using a hash table: that is, (key, value) pairs are stored in the DHT, and any participating node can efficiently retrieve the value associated with a given key.
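
A minimal sketch of this idea, using a toy key space and a handful of invented node names: each key and node identifier is hashed onto a ring, and a key is owned by the first node whose position follows it, which is roughly how consistent hashing spreads (key, value) ownership across peers.

  # Toy distributed hash table: consistent hashing on a small ring.
  # Node names are invented; hashlib supplies the hash function.
  import hashlib
  from bisect import bisect_right

  RING_BITS = 16                      # small ring for readability

  def ring_hash(name):
      digest = hashlib.sha1(name.encode()).digest()
      return int.from_bytes(digest, "big") % (2 ** RING_BITS)

  nodes = ["node-1", "node-2", "node-3", "node-4"]
  ring = sorted((ring_hash(n), n) for n in nodes)
  points = [p for p, _ in ring]

  def owner(key):
      """Return the node responsible for `key`: the first node clockwise."""
      idx = bisect_right(points, ring_hash(key)) % len(ring)
      return ring[idx][1]

  for key in ["alpha.txt", "beta.txt", "gamma.txt"]:
      print(key, "->", owner(key))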

1024px-DHT_en.svg

Distributed hash tables

However, in order to route traffic efficiently through the network, nodes in a structured overlay must maintain lists of neighbors that satisfy specific criteria. This makes them less robust in networks with a high rate of churn (i.e. with large numbers of nodes frequently joining and leaving the network). More recent evaluations of P2P resource discovery solutions under real workloads have pointed out several issues in DHT-based solutions, such as the high cost of advertising and discovering resources and static and dynamic load imbalance.

Notable distributed networks that use DHTs include BitTorrent’s distributed tracker, the Kad network, the Storm botnet, YaCy, and the Coral Content Distribution Network. Some prominent research projects include the Chord project, Kademlia, the PAST storage utility, P-Grid (a self-organized and emerging overlay network), and the CoopNet content distribution system. DHT-based networks have also been widely utilized for accomplishing efficient resource discovery in grid computing systems, as they aid in resource management and scheduling of applications.

Hybrid Models

Hybrid models are a combination of peer-to-peer and client-server models. A common hybrid model is to have a central server that helps peers find each other. Spotify is an example of a hybrid model. There are a variety of hybrid models, all of which make trade-offs between the centralized functionality provided by a structured server/client network and the node equality afforded by the pure peer-to-peer unstructured networks. Currently, hybrid models have better performance than either pure unstructured networks or pure structured networks because certain functions, such as searching, do require a centralized functionality but benefit from the decentralized aggregation of nodes provided by unstructured networks.

Security and Trust

Peer-to-peer systems pose unique challenges from a computer security perspective.

Like any other form of software, P2P applications can contain vulnerabilities. What makes this particularly dangerous for P2P software, however, is that peer-to-peer applications act as servers as well as clients, meaning that they can be more vulnerable to remote exploits.

Routing Attacks

Also, since each node plays a role in routing traffic through the network, malicious users can perform a variety of “routing attacks”, or denial of service attacks. Examples of common routing attacks include “incorrect lookup routing”, whereby malicious nodes deliberately forward requests incorrectly or return false results; “incorrect routing updates”, where malicious nodes corrupt the routing tables of neighboring nodes by sending them false information; and “incorrect routing network partition”, where, when new nodes join, they bootstrap via a malicious node, which places the new node in a partition of the network that is populated by other malicious nodes.

Corrupted Data and Malware

The prevalence of malware varies between different peer-to-peer protocols. Studies analyzing the spread of malware on P2P networks found, for example, that 63% of the answered download requests on the Limewire network contained some form of malware, whereas only 3% of the content on OpenFT contained malware. In both cases, the top three most common types of malware accounted for the large majority of cases (99% in Limewire, and 65% in OpenFT). Another study analyzing traffic on the Kazaa network found that 15% of the 500,000 files sampled were infected by one or more of the 365 different computer viruses that were tested for.

Corrupted data can also be distributed on P2P networks by modifying files that are already being shared on the network. For example, on the FastTrack network, the RIAA managed to introduce faked chunks into downloads and downloaded files (mostly MP3 files). Files infected with the RIAA virus were unusable afterwards and contained malicious code. The RIAA is also known to have uploaded fake music and movies to P2P networks in order to deter illegal file sharing. Consequently, the P2P networks of today have seen an enormous increase of their security and file verification mechanisms. Modern hashing, chunk verification and different encryption methods have made most networks resistant to almost any type of attack, even when major parts of the respective network have been replaced by faked or nonfunctional hosts.
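
The chunk-verification idea mentioned above can be sketched as follows (Python, with invented data): the publisher distributes the expected hash of every chunk, and a downloader discards any chunk whose hash does not match, which is similar in spirit to BitTorrent-style piece hashing.

  # Chunk verification sketch: a downloader checks each received chunk against
  # a published hash and rejects anything that does not match.
  # The payload and the tampered chunk are invented for illustration.
  import hashlib

  def chunk_hashes(data, chunk_size):
      chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
      return chunks, [hashlib.sha256(c).hexdigest() for c in chunks]

  original = b"example payload shared on a peer-to-peer network " * 10
  chunks, expected = chunk_hashes(original, chunk_size=64)

  received = list(chunks)
  received[1] = b"X" * len(received[1])        # simulate a corrupted chunk

  for i, chunk in enumerate(received):
      ok = hashlib.sha256(chunk).hexdigest() == expected[i]
      print(f"chunk {i}: {'ok' if ok else 'rejected (hash mismatch)'}")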

Resilient and Scalable Computer Networks

The decentralized nature of P2P networks increases robustness because it removes the single point of failure that can be inherent in a client-server based system. As nodes arrive and demand on the system increases, the total capacity of the system also increases, and the likelihood of failure decreases. If one peer on the network fails to function properly, the whole network is not compromised or damaged. In contrast, in a typical client–server architecture, clients share only their demands with the system, but not their resources. In this case, as more clients join the system, fewer resources are available to serve each client, and if the central server fails, the entire network is taken down.

Distributed Storage and Search

Yacy-resultados

Search results for the query “software libre”, using YaCy, a free distributed search engine that runs on a peer-to-peer network instead of making requests to centralized index servers (like Google, Yahoo, and other corporate search engines)

There are both advantages and disadvantages in P2P networks related to the topic of data backup, recovery, and availability. In a centralized network, the system administrators are the only forces controlling the availability of files being shared. If the administrators decide to no longer distribute a file, they simply have to remove it from their servers, and it will no longer be available to users. Along with leaving the users powerless in deciding what is distributed throughout the community, this makes the entire system vulnerable to threats and requests from the government and other large forces. For example, YouTube has been pressured by the RIAA, MPAA, and entertainment industry to filter out copyrighted content. On the other hand, because server-client networks can monitor and manage content availability, they tend to have more stability in the availability of the content they choose to host. A client should not have trouble accessing obscure content that is being shared on a stable centralized network. P2P networks, however, are more unreliable in sharing unpopular files because sharing files in a P2P network requires that at least one node in the network has the requested data, and that node must be able to connect to the node requesting the data. This requirement is occasionally hard to meet because users may delete or stop sharing data at any point.

In this sense, the community of users in a P2P network is completely responsible for deciding what content is available. Unpopular files will eventually disappear and become unavailable as more people stop sharing them. Popular files, however, will be highly and easily distributed. Popular files on a P2P network actually have more stability and availability than files on central networks. In a centralized network a simple loss of connection between the server and clients is enough to cause a failure, but in P2P networks the connections between every node must be lost in order to cause a data sharing failure. In a centralized system, the administrators are responsible for all data recovery and backups, while in P2P systems, each node requires its own backup system. Because of the lack of central authority in P2P networks, forces such as the recording industry, RIAA, MPAA, and the government are unable to delete or stop the sharing of content on P2P systems.

Applications


Content Delivery

In P2P networks, clients both provide and use resources. This means that unlike client-server systems, the content-serving capacity of peer-to-peer networks can actually increase as more users begin to access the content (especially with protocols such as BitTorrent that require users to share, as performance measurement studies have shown). This property is one of the major advantages of using P2P networks because it makes the setup and running costs very small for the original content distributor.
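
As a back-of-the-envelope illustration of this scaling property (all figures invented), the sketch below compares the per-client share of a single origin server of fixed upload capacity with a swarm in which every downloader also contributes some upload bandwidth.

  # Illustrative capacity comparison (figures are invented): a lone server has
  # fixed upload capacity, while a P2P swarm gains capacity as peers join.

  SERVER_UPLOAD_MBPS = 1_000        # capacity of a single origin server
  PEER_UPLOAD_MBPS = 5              # upload each participating peer contributes

  for peers in (10, 100, 1_000, 10_000):
      client_server = SERVER_UPLOAD_MBPS / peers                     # per client
      p2p_swarm = (SERVER_UPLOAD_MBPS + peers * PEER_UPLOAD_MBPS) / peers
      print(f"{peers:6d} peers: client-server {client_server:8.2f} Mbit/s each, "
            f"P2P swarm {p2p_swarm:8.2f} Mbit/s each")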

File-sharing Networks

Many peer-to-peer file sharing networks, such as Gnutella, G2, and the eDonkey network, popularized peer-to-peer technologies.

  • Peer-to-peer content delivery networks.
  • Peer-to-peer content services, e.g. caches for improved performance such as Correli Caches.
  • Software publication and distribution (Linux distributions, several games) via file sharing networks.

Copyright Infringements

Peer-to-peer networking involves data transfer from one user to another without using an intermediate server. Companies developing P2P applications have been involved in numerous legal cases, primarily in the United States, over conflicts with copyright law. Two major cases are Grokster vs RIAA and MGM Studios, Inc. v. Grokster, Ltd. In the latter case, the Court unanimously held that the defendant peer-to-peer file sharing companies Grokster and StreamCast could be sued for inducing copyright infringement.

Multimedia

  • The P2PTV and PDTP protocols.
  • Some proprietary multimedia applications, such as Spotify, use a peer-to-peer network along with streaming servers to stream audio and video to their clients.
  • Peercasting for multicasting streams.
  • Pennsylvania State University, MIT and Simon Fraser University are carrying out a project called LionShare, designed to facilitate file sharing among educational institutions globally.
  • Osiris is a program that allows its users to create anonymous and autonomous web portals distributed via a P2P network.

Other P2P Applications

Torrent_peers

Peers connected while sharing a torrent file

  • Tradepal and M-commerce applications that power real-time marketplaces.
  • Bitcoin and alternatives such as Ethereum, Nxt and Peercoin are peer-to-peer-based digital cryptocurrencies.
  • I2P, an overlay network used to browse the Internet anonymously.
  • Infinit is an unlimited and encrypted peer-to-peer file sharing application for digital artists written in C++.
  • Netsukuku, a Wireless community network designed to be independent from the Internet.
  • Dalesa, a peer-to-peer web cache for LANs (based on IP multicasting).
  • Open Garden, connection sharing application that shares Internet access with other devices using Wi-Fi or Bluetooth.
  • Peerspace is a peer-to-peer marketplace for booking space for events, meetings and productions.
  • Research projects such as the Chord project, the PAST storage utility, P-Grid, and the CoopNet content distribution system.
  • JXTA, a peer-to-peer protocol designed for the Java platform.
  • The U.S. Department of Defense is conducting research on P2P networks as part of its modern network warfare strategy. In May 2003, Anthony Tether, then director of DARPA, testified that the United States military uses P2P networks.

Social Implications


Incentivizing Resource Sharing and Cooperation

Torrentcomp_small (1)

The BitTorrent protocol: In this animation, the colored bars beneath the 7 clients in the upper region represent the file being shared, with each color representing an individual piece of the file. After the initial pieces transfer from the seed (the large system at the bottom), the pieces are individually transferred from client to client. The original seeder only needs to send out one copy of the file for all the clients to receive a copy.

Cooperation among a community of participants is key to the continued success of P2P systems aimed at casual human users; these reach their full potential only when large numbers of nodes contribute resources. But in current practice, P2P networks often contain large numbers of users who utilize resources shared by other nodes, but who do not share anything themselves (often referred to as the “freeloader problem”). Freeloading can have a profound impact on the network and in some cases can cause the community to collapse. In these types of networks “users have natural disincentives to cooperate because cooperation consumes their own resources and may degrade their own performance.” Studying the social attributes of P2P networks is challenging due to high rates of turnover, asymmetry of interest and zero-cost identity. A variety of incentive mechanisms have been implemented to encourage or even force nodes to contribute resources.

Some researchers have explored the benefits of enabling virtual communities to self-organize and introduce incentives for resource sharing and cooperation, arguing that the social aspect missing from today’s P2P systems should be seen both as a goal and a means for self-organized virtual communities to be built and fostered. Ongoing research efforts for designing effective incentive mechanisms in P2P systems, based on principles from game theory, are beginning to take on a more psychological and information-processing direction.

Privacy and Anonymity

Some peer-to-peer networks (e.g. Freenet) place a heavy emphasis on privacy and anonymity—that is, ensuring that the contents of communications are hidden from eavesdroppers, and that the identities/locations of the participants are concealed. Public key cryptography can be used to provide encryption, data validation, authorization, and authentication for data/messages. Onion routing and other mix network protocols (e.g. Tarzan) can be used to provide anonymity.

Political Implications


Intellectual Property Law and Illegal Sharing

Although peer-to-peer networks can be used for legitimate purposes, rights holders have targeted peer-to-peer networks over their involvement in sharing copyrighted material. Peer-to-peer networking involves data transfer from one user to another without using an intermediate server. Companies developing P2P applications have been involved in numerous legal cases, primarily in the United States, over issues surrounding copyright law. Two major cases are Grokster vs RIAA and MGM Studios, Inc. v. Grokster, Ltd. In both of these cases the file sharing technology was ruled to be legal as long as the developers had no ability to prevent the sharing of the copyrighted material. To establish criminal liability for copyright infringement on peer-to-peer systems, the government must prove that the defendant infringed a copyright willingly for the purpose of personal financial gain or commercial advantage. Fair use exceptions allow limited use of copyrighted material to be downloaded without acquiring permission from the rights holders; these uses are usually news reporting, research, or scholarly work. Controversies have developed over concerns about the illegitimate use of peer-to-peer networks regarding public safety and national security. When a file is downloaded through a peer-to-peer network, it is impossible to know who created the file or what users are connected to the network at a given time. The trustworthiness of sources is a potential security threat that can be seen with peer-to-peer systems.

A study ordered by the European Union found that illegal downloading may lead to an increase in overall video game sales because newer games charge for extra features or levels. The paper concluded that piracy had a negative financial impact on movies, music, and literature. The study relied on self-reported data about game purchases and use of illegal download sites, and care was taken to remove the effects of false and misremembered responses.

Network Neutrality

Peer-to-peer applications present one of the core issues in the network neutrality controversy. Internet service providers (ISPs) have been known to throttle P2P file-sharing traffic due to its high-bandwidth usage. Compared to Web browsing, e-mail or many other uses of the internet, where data is only transferred in short intervals and relatively small quantities, P2P file-sharing often consists of relatively heavy bandwidth usage due to ongoing file transfers and swarm/network coordination packets. In October 2007, Comcast, one of the largest broadband Internet providers in the United States, started blocking P2P applications such as BitTorrent. Their rationale was that P2P is mostly used to share illegal content, and their infrastructure is not designed for continuous, high-bandwidth traffic. Critics point out that P2P networking has legitimate legal uses, and that this is another way that large providers are trying to control use and content on the Internet, and direct people towards a client-server-based application architecture. The client-server model provides financial barriers to entry for small publishers and individuals, and can be less efficient for sharing large files. As a reaction to this bandwidth throttling, several P2P applications started implementing protocol obfuscation, such as the BitTorrent protocol encryption. Techniques for achieving “protocol obfuscation” involve removing otherwise easily identifiable properties of protocols, such as deterministic byte sequences and packet sizes, by making the data look as if it were random. The ISPs’ solution to the high bandwidth is P2P caching, where an ISP stores the parts of files most accessed by P2P clients in order to save access to the Internet.

Current Research


Researchers have used computer simulations to aid in understanding and evaluating the complex behaviors of individuals within the network. “Networking research often relies on simulation in order to test and evaluate new ideas. An important requirement of this process is that results must be reproducible so that other researchers can replicate, validate, and extend existing work.” If the research cannot be reproduced, then the opportunity for further research is hindered. “Even though new simulators continue to be released, the research community tends towards only a handful of open-source simulators. The demand for features in simulators, as shown by our criteria and survey, is high. Therefore, the community should work together to get these features in open-source software. This would reduce the need for custom simulators, and hence increase repeatability and reputability of experiments.”

In addition to the above, work has been done with the ns-2 open-source network simulator. For example, one research issue related to free-rider detection and punishment has been explored using the ns-2 simulator.

Client–Server Model

From Wikipedia, the free encyclopedia

Client-server-model.svg

A computer network diagram of clients communicating with a server via the Internet.

The client–server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients. Often clients and servers communicate over a computer network on separate hardware, but both client and server may reside in the same system. A server host runs one or more server programs which share their resources with clients. A client does not share any of its resources, but requests a server’s content or service function. Clients therefore initiate communication sessions with servers, which await incoming requests. Examples of computer applications that use the client–server model are email, network printing, and the World Wide Web.

Contents
1 Client and server role
2 Client and server communication
3 Example
4 History of Server Model
4.1 Client-host and server-host
5 Centralized computing
6 Comparison with peer-to-peer architecture

Client and Server Role


The client-server characteristic describes the relationship of cooperating programs in an application. The server component provides a function or service to one or many clients, which initiate requests for such services.

Servers are classified by the services they provide. For example, a web server serves web pages and a file server serves computer files. A shared resource may be any of the server computer’s software and electronic components, from programs and data to processors and storage devices. The sharing of resources of a server constitutes a service.

Whether a computer is a client, a server, or both, is determined by the nature of the application that requires the service functions. For example, a single computer can run web server and file server software at the same time to serve different data to clients making different kinds of requests. Client software can also communicate with server software within the same computer. Communication between servers, such as to synchronize data, is sometimes called inter-server or server-to-server communication.

Client and Server Communication


In general, a service is an abstraction of computer resources and a client does not have to be concerned with how the server performs while fulfilling the request and delivering the response. The client only has to understand the response based on the well-known application protocol, i.e. the content and the formatting of the data for the requested service.

Clients and servers exchange messages in a request–response messaging pattern. The client sends a request, and the server returns a response. This exchange of messages is an example of inter-process communication. To communicate, the computers must have a common language, and they must follow rules so that both the client and the server know what to expect. The language and rules of communication are defined in a communications protocol. All client-server protocols operate in the application layer. The application layer protocol defines the basic patterns of the dialogue. To formalize the data exchange even further, the server may implement an application programming interface (API). The API is an abstraction layer for accessing a service. By restricting communication to a specific content format, it facilitates parsing. By abstracting access, it facilitates cross-platform data exchange.
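
A minimal sketch of this request–response pattern, using Python’s standard socket library and an invented one-line protocol (the client sends the command TIME terminated by a newline, and the server replies with a single line):

  # Minimal request-response exchange over TCP using the standard library.
  # The one-line "protocol" (TIME -> current time) is invented purely to
  # illustrate the pattern of client request and server response.
  import socket
  import threading
  import time

  def serve_once(sock):
      conn, _ = sock.accept()
      with conn:
          request = conn.recv(1024).decode().strip()   # read the request
          reply = time.ctime() if request == "TIME" else "ERROR unknown command"
          conn.sendall((reply + "\n").encode())        # return the response

  server = socket.socket()
  server.bind(("127.0.0.1", 0))                        # any free local port
  server.listen(1)
  threading.Thread(target=serve_once, args=(server,), daemon=True).start()

  client = socket.create_connection(server.getsockname())
  client.sendall(b"TIME\n")                            # client initiates
  print(client.recv(1024).decode().strip())            # server responds
  client.close()
  server.close()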

A server may receive requests from many distinct clients in a short period of time. A computer can only perform a limited number of tasks at any moment, and relies on a scheduling system to prioritize incoming requests from clients to accommodate them. To prevent abuse and maximize availability, server software may limit the availability to clients. Denial of service attacks are designed to exploit a server’s obligation to process requests by overloading it with excessive request rates.

Example


When a bank customer accesses online banking services with a web browser (the client), the client initiates a request to the bank’s web server. The customer’s login credentials may be stored in a database, and the web server accesses the database server as a client. An application server interprets the returned data by applying the bank’s business logic, and provides the output to the web server. Finally, the web server returns the result to the client web browser for display.

In each step of this sequence of client–server message exchanges, a computer processes a request and returns data. This is the request-response messaging pattern. When all the requests are met, the sequence is complete and the web browser presents the data to the customer.

This example illustrates a design pattern applicable to the client–server model: separation of concerns.

History of Server Model


An early form of client–server architecture is remote job entry, dating at least to OS/360 (announced 1964), where the request was to run a job, and the response was the output.

While formulating the client–server model in the 1960s and 1970s, computer scientists building ARPANET (at the Stanford Research Institute) used the terms server-host (or serving host) and user-host (or using-host), and these appear in the early documents RFC 5 and RFC 4. This usage was continued at Xerox PARC in the mid-1970s.

One context in which researchers used these terms was in the design of a computer network programming language called Decode-Encode Language (DEL). The purpose of this language was to accept commands from one computer (the user-host), which would return status reports to the user as it encoded the commands in network packets. Another DEL-capable computer, the server-host, received the packets, decoded them, and returned formatted data to the user-host. A DEL program on the user-host received the results to present to the user. This is a client–server transaction. Development of DEL was just beginning in 1969, the year that the United States Department of Defense established ARPANET (predecessor of Internet).

Client-host and Server-host

Client-host and server-host have subtly different meanings than client and server. A host is any computer connected to a network. Whereas the words server and client may refer either to a computer or to a computer program, server-host and user-host always refer to computers. The host is a versatile, multifunction computer; clients and servers are just programs that run on a host. In the client–server model, a server is more likely to be devoted to the task of serving.

An early use of the word client occurs in “Separating Data from Function in a Distributed File System”, a 1978 paper by Xerox PARC computer scientists Howard Sturgis, James Mitchell, and Jay Israel. The authors are careful to define the term for readers, and explain that they use it to distinguish between the user and the user’s network node (the client). (By 1992, the word server had entered into general parlance.)

Centralized Computing


The client–server model does not dictate that server-hosts must have more resources than client-hosts. Rather, it enables any general-purpose computer to extend its capabilities by using the shared resources of other hosts. Centralized computing, however, specifically allocates a large amount of resources to a small number of computers. The more computation is offloaded from client-hosts to the central computers, the simpler the client-hosts can be. It relies heavily on network resources (servers and infrastructure) for computation and storage. A diskless node loads even its operating system from the network, and a computer terminal has no operating system at all; it is only an input/output interface to the server. In contrast, a fat client, such as a personal computer, has many resources, and does not rely on a server for essential functions.

As microcomputers decreased in price and increased in power from the 1980s to the late 1990s, many organizations transitioned computation from centralized servers, such as mainframes and minicomputers, to fat clients. This afforded greater, more individualized dominion over computer resources, but complicated information technology management. During the 2000s, web applications matured enough to rival application software developed for a specific microarchitecture. This maturation, more affordable mass storage, and the advent of service-oriented architecture were among the factors that gave rise to the cloud computing trend of the 2010s.

Comparison with Peer-to-peer Architecture


In addition to the client–server model, distributed computing applications often use the peer-to-peer (P2P) application architecture.

In the client–server model, the server is often designed to operate as a centralized system that serves many clients. The computing power, memory and storage requirements of a server must be scaled appropriately to the expected work-load (i.e., the number of clients connecting simultaneously). Load-balancing and failover systems are often employed to scale the server implementation.

In a peer-to-peer network, two or more computers (peers) pool their resources and communicate in a decentralized system. Peers are coequal, or equipotent nodes in a non-hierarchical network. Unlike clients in a client–server or client–queue–client network, peers communicate with each other directly. In peer-to-peer networking, an algorithm in the peer-to-peer communications protocol balances load, and even peers with modest resources can help to share the load. If a node becomes unavailable, its shared resources remain available as long as other peers offer them. Ideally, a peer does not need to achieve high availability because other, redundant peers make up for any resource downtime; as the availability and load capacity of peers change, the protocol reroutes requests.

Both client-server and master-slave are regarded as sub-categories of distributed peer-to-peer systems.

Network Topology

From Wikipedia, the free encyclopedia

Network topology is the arrangement of the various elements (links, nodes, etc.) of a communication network.

Network topology is the topological structure of a network and may be depicted physically or logically. Physical topology is the placement of the various components of a network, including device location and cable installation, while logical topology illustrates how data flows within a network. Distances between nodes, physical interconnections, transmission rates, or signal types may differ between two networks, yet their topologies may be identical.

An example is a local area network (LAN). Any given node in the LAN has one or more physical links to other devices in the network; graphically mapping these links results in a geometric shape that can be used to describe the physical topology of the network. Conversely, mapping the data flow between the components determines the logical topology of the network.

Contents
1 Topologies
2 Links
2.1 Wired technologies
2.2 Wireless technologies
2.3 Exotic technologies
3 Nodes
3.1 Network interfaces
3.2 Repeaters and hubs
3.3 Bridges
3.4 Switches
3.5 Routers
3.6 Modems
3.7 Firewalls
4 Classification
4.1 Point-to-point
4.2 Bus
4.2.1 Linear bus
4.2.2 Distributed bus
4.3 Star
4.3.1 Extended star
4.3.2 Distributed Star
4.4 Ring
4.5 Mesh
4.5.1 Fully connected network
4.5.2 Partially connected network
4.6 Hybrid
4.7 Daisy chain
5 Centralization
6 Decentralization

Topologies


800px-NetworkTopologies.svg

Diagram of different network topologies.

Two basic categories of network topologies exist, physical topologies and logical topologies.

The cabling layout used to link devices is the physical topology of the network. This refers to the layout of cabling, the locations of nodes, and the links between the nodes and the cabling. The physical topology of a network is determined by the capabilities of the network access devices and media, the level of control or fault tolerance desired, and the cost associated with cabling or telecommunications circuits.

In contrast, logical topology is the way that the signals act on the network media, or the way that the data passes through the network from one device to the next without regard to the physical interconnection of the devices. A network’s logical topology is not necessarily the same as its physical topology. For example, the original twisted pair Ethernet using repeater hubs was a logical bus topology carried on a physical star topology. Token ring is a logical ring topology, but is wired as a physical star from the media access unit. Logical topologies are often closely associated with media access control methods and protocols. Some networks are able to dynamically change their logical topology through configuration changes to their routers and switches.

Links


The transmission media (often referred to in the literature as the physical media) used to link devices to form a computer network include electrical cable (Ethernet, HomePNA, power line communication, G.hn), optical fiber (fiber-optic communication), and radio waves (wireless networking). In the OSI model, these are defined at layers 1 and 2 — the physical layer and the data link layer.

A widely adopted family of transmission media used in local area network (LAN) technology is collectively known as Ethernet. The media and protocol standards that enable communication between networked devices over Ethernet are defined by IEEE 802.3. Ethernet transmits data over both copper and fiber cables. Wireless LAN standards (e.g. those defined by IEEE 802.11) use radio waves, while others use infrared signals as a transmission medium. Power line communication uses a building’s power cabling to transmit data.

Wired Technologies

Fibreoptic (1)

Fiber optic cables are used to transmit light from one computer/network node to another

1280px-World_map_of_submarine_cables

2007 map showing submarine optical fiber telecommunication cables around the world.

The following wired technologies are listed, roughly, from slowest to fastest transmission speed.

  • Coaxial cable is widely used for cable television systems, office buildings, and other work-sites for local area networks. The cables consist of copper or aluminum wire surrounded by an insulating layer (typically a flexible material with a high dielectric constant), which itself is surrounded by a conductive layer. The insulation helps minimize interference and distortion. Transmission speed ranges from 200 million bits per second to more than 500 million bits per second.
  • ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power lines) to create a high-speed (up to 1 Gigabit/s) local area network.
  • Twisted pair wire is the most widely used medium for all telecommunication. Twisted-pair cabling consists of copper wires that are twisted into pairs. Ordinary telephone wires consist of two insulated copper wires twisted into pairs. Computer network cabling (wired Ethernet as defined by IEEE 802.3) consists of 4 pairs of copper cabling that can be utilized for both voice and data transmission. The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. The transmission speed ranges from 2 million bits per second to 10 billion bits per second. Twisted pair cabling comes in two forms: unshielded twisted pair (UTP) and shielded twisted pair (STP). Each form comes in several category ratings, designed for use in various scenarios.
  • An optical fiber is a glass fiber. It carries pulses of light that represent data. Some advantages of optical fibers over metal wires are very low transmission loss and immunity from electrical interference. Optical fibers can simultaneously carry multiple wavelengths of light, which greatly increases the rate that data can be sent, and helps enable data rates of up to trillions of bits per second. Optic fibers can be used for long runs of cable carrying very high data rates, and are used for undersea cables to interconnect continents.

Price is a main factor distinguishing wired- and wireless-technology options in a business. Wireless options command a price premium that can make purchasing wired computers, printers and other devices a financial benefit. Before making the decision to purchase hard-wired technology products, a review of the restrictions and limitations of the selections is necessary. Business and employee needs may override any cost considerations.

Wireless Technologies

Wireless_network

Computers are very often connected to networks using wireless links

  • Terrestrial microwave – Terrestrial microwave communication uses Earth-based transmitters and receivers resembling satellite dishes. Terrestrial microwaves are in the low gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced approximately 48 km (30 mi) apart.
  • Communications satellites – Satellites communicate via microwave radio waves, which are not deflected by the Earth’s atmosphere. The satellites are stationed in space, typically in geosynchronous orbit 35,400 km (22,000 mi) above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.
  • Cellular and PCS systems use several radio communications technologies. The systems divide the region covered into multiple geographic areas. Each area has a low-power transmitter or radio relay antenna device to relay calls from one area to the next area.
  • Radio and spread spectrum technologies – Wireless local area networks use a high-frequency radio technology similar to digital cellular and a low-frequency radio technology. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. IEEE 802.11 defines a common flavor of open-standards wireless radio-wave technology known as Wi-Fi.
  • Free-space optical communication uses visible or invisible light for communications. In most cases, line-of-sight propagation is used, which limits the physical positioning of communicating devices.

Exotic Technologies

There have been various attempts at transporting data over exotic media:

  1. IP over Avian Carriers was a humorous April Fools' Day Request for Comments, issued as RFC 1149. It was implemented in real life in 2001.
  2. Extending the Internet to interplanetary dimensions via radio waves, the Interplanetary Internet.

Both cases have a large round-trip delay time, which gives slow two-way communication, but doesn’t prevent sending large amounts of information.

Nodes


Apart from any physical transmission media there may be, networks comprise additional basic system building blocks, such as network interface controllers (NICs), repeaters, hubs, bridges, switches, routers, modems, and firewalls. Any particular piece of equipment will frequently contain multiple building blocks and perform multiple functions.

Network Interfaces

800px-ForeRunnerLE_25_ATM_Network_Interface_(1)

An ATM network interface in the form of an accessory card. Many network interfaces are built in.

A network interface controller (NIC) is computer hardware that provides a computer with the ability to access the transmission media, and has the ability to process low-level network information. For example, the NIC may have a connector for accepting a cable, or an aerial for wireless transmission and reception, and the associated circuitry.

The NIC responds to traffic addressed to a network address for either the NIC or the computer as a whole.

In Ethernet networks, each network interface controller has a unique Media Access Control (MAC) address—usually stored in the controller’s permanent memory. To avoid address conflicts between network devices, the Institute of Electrical and Electronics Engineers (IEEE) maintains and administers MAC address uniqueness. The size of an Ethernet MAC address is six octets. The three most significant octets are reserved to identify NIC manufacturers. These manufacturers, using only their assigned prefixes, uniquely assign the three least-significant octets of every Ethernet interface they produce.
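
The split between the manufacturer prefix (OUI) and the device-specific part can be shown with a short sketch (Python; the address below is invented and does not refer to any real manufacturer):

  # Split an Ethernet MAC address into the IEEE-assigned manufacturer prefix
  # (OUI, first three octets) and the manufacturer-assigned device part
  # (last three octets). The address itself is an invented example.

  def split_mac(mac):
      octets = mac.split(":")
      if len(octets) != 6 or any(len(o) != 2 for o in octets):
          raise ValueError("expected six colon-separated octets")
      oui = ":".join(octets[:3])          # identifies the NIC manufacturer
      device = ":".join(octets[3:])       # assigned uniquely by that manufacturer
      return oui, device

  print(split_mac("00:1A:2B:3C:4D:5E"))   # -> ('00:1A:2B', '3C:4D:5E')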

Repeaters and Hubs

A repeater is an electronic device that receives a network signal, cleans it of unnecessary noise and regenerates it. The signal is retransmitted at a higher power level, or to the other side of an obstruction, so that the signal can cover longer distances without degradation. In most twisted pair Ethernet configurations, repeaters are required for cable that runs longer than 100 meters. With fiber optics, repeaters can be tens or even hundreds of kilometers apart.

A repeater with multiple ports is known as an Ethernet hub. Repeaters work on the physical layer of the OSI model. Repeaters require a small amount of time to regenerate the signal. This can cause a propagation delay that affects network performance and may affect proper function. As a result, many network architectures limit the number of repeaters that can be used in a row, e.g., the Ethernet 5-4-3 rule.

Hubs and repeaters in LANs have been mostly obsoleted by modern switches.

Bridges

A network bridge connects and filters traffic between two network segments at the data link layer (layer 2) of the OSI model to form a single network. This breaks the network’s collision domain but maintains a unified broadcast domain. Network segmentation breaks down a large, congested network into an aggregation of smaller, more efficient networks.

Bridges come in three basic types:

  • Local bridges: Directly connect LANs
  • Remote bridges: Can be used to create a wide area network (WAN) link between LANs. Remote bridges, where the connecting link is slower than the end networks, largely have been replaced with routers.
  • Wireless bridges: Can be used to join LANs or connect remote devices to LANs.

Switches

A network switch is a device that forwards and filters OSI layer 2 datagrams (frames) between ports based on the destination MAC address in each frame. A switch is distinct from a hub in that it only forwards the frames to the physical ports involved in the communication rather than all ports connected. It can be thought of as a multi-port bridge. It learns to associate physical ports to MAC addresses by examining the source addresses of received frames. If an unknown destination is targeted, the switch broadcasts to all ports but the source. Switches normally have numerous ports, facilitating a star topology for devices, and cascading additional switches.
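
The learning and forwarding behaviour described above can be sketched as follows (Python, with invented MAC addresses and port numbers): the switch records the source address of each frame against its ingress port, forwards frames for known destinations out of a single port, and floods frames for unknown destinations to every other port.

  # Sketch of a learning switch: source addresses are associated with the
  # ingress port; frames for unknown destinations are flooded.
  # MAC addresses and port numbers are invented for illustration.

  class LearningSwitch:
      def __init__(self, ports):
          self.ports = set(ports)
          self.mac_table = {}                       # MAC address -> port

      def handle_frame(self, in_port, src_mac, dst_mac):
          self.mac_table[src_mac] = in_port         # learn the source address
          if dst_mac in self.mac_table:
              return {self.mac_table[dst_mac]}      # forward out a single port
          return self.ports - {in_port}             # flood all other ports

  sw = LearningSwitch(ports=[1, 2, 3, 4])
  print(sw.handle_frame(1, "aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb"))  # flooded
  print(sw.handle_frame(2, "bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa"))  # port 1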

Multi-layer switches are capable of routing based on layer 3 addressing or additional logical levels. The term switch is often used loosely to include devices such as routers and bridges, as well as devices that may distribute traffic based on load or based on application content (e.g., a Web URL identifier).

Routers

Adsl_connections

A typical home or small office router showing the ADSL telephone line and Ethernet network cable connections

A router is an internetworking device that forwards packets between networks by processing the routing information included in the packet or datagram (Internet protocol information from layer 3). The routing information is often processed in conjunction with the routing table (or forwarding table). A router uses its routing table to determine where to forward packets. A destination in a routing table can include a “null” interface, also known as the “black hole” interface, because data can go into it but no further processing is done for that data, i.e. the packets are dropped.
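
A routing-table lookup of the kind described above can be sketched with Python’s standard ipaddress module (the prefixes and next hops are invented; a None next hop stands in for the “null” interface):

  # Longest-prefix-match lookup against a small routing table.
  # Prefixes and next hops are invented; a None next hop plays the role of
  # the "null" (black hole) interface, i.e. the packet is dropped.
  import ipaddress

  routing_table = [
      (ipaddress.ip_network("10.0.0.0/8"), "192.0.2.1"),
      (ipaddress.ip_network("10.1.0.0/16"), "192.0.2.2"),
      (ipaddress.ip_network("203.0.113.0/24"), None),           # black hole
      (ipaddress.ip_network("0.0.0.0/0"), "192.0.2.254"),       # default route
  ]

  def next_hop(destination):
      addr = ipaddress.ip_address(destination)
      matches = [(net, hop) for net, hop in routing_table if addr in net]
      net, hop = max(matches, key=lambda m: m[0].prefixlen)     # longest prefix
      return hop                                                # None => dropped

  print(next_hop("10.1.2.3"))       # 192.0.2.2 (the more specific /16 wins)
  print(next_hop("203.0.113.9"))    # None: black-holed
  print(next_hop("8.8.8.8"))        # 192.0.2.254 via the default route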

Modems

Modems (MOdulator-DEModulator) are used to connect network nodes via wire not originally designed for digital network traffic, or for wireless. To do this, one or more carrier signals are modulated by the digital signal to produce an analog signal that can be tailored to give the required properties for transmission. Modems are commonly used for telephone lines, using Digital Subscriber Line technology.

Firewalls

A firewall is a network device for controlling network security and access rules. Firewalls are typically configured to reject access requests from unrecognized sources while allowing actions from recognized ones. The vital role firewalls play in network security grows in parallel with the constant increase in cyber attacks.

Classification


The study of network topology recognizes eight basic topologies: point-to-point, bus, star, ring or circular, mesh, tree, hybrid, or daisy chain.

Point-to-point

The simplest topology with a dedicated link between two endpoints. Easiest to understand, of the variations of point-to-point topology, is a point-to-point communications channel that appears, to the user, to be permanently associated with the two endpoints. A child’s tin can telephone is one example of a physical dedicated channel.

Using circuit-switching or packet-switching technologies, a point-to-point circuit can be set up dynamically and dropped when no longer needed. Switched point-to-point topologies are the basic model of conventional telephony.

The value of a permanent point-to-point network is unimpeded communications between the two endpoints. The value of an on-demand point-to-point connection is proportional to the number of potential pairs of subscribers and has been expressed as Metcalfe’s Law.
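
The proportionality expressed by Metcalfe’s Law can be made concrete with a short sketch (the endpoint counts are arbitrary): among n endpoints there are n(n − 1)/2 distinct potential point-to-point connections.

  # Number of distinct point-to-point pairs among n endpoints: n*(n-1)/2,
  # the quantity Metcalfe's Law treats as the potential value of the network.
  # The endpoint counts below are arbitrary.

  def potential_pairs(n):
      return n * (n - 1) // 2

  for n in (2, 10, 100, 1_000):
      print(f"{n:5d} endpoints -> {potential_pairs(n):,} potential connections")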

Bus

BusNetwork.svg

Bus network topology

In local area networks where bus topology is used, each node is connected to a single cable with the help of interface connectors. This central cable is the backbone of the network and is known as the bus (thus the name). A signal from the source travels in both directions to all machines connected on the bus cable until it finds the intended recipient. If the machine address does not match the intended address for the data, the machine ignores the data. Alternatively, if the data matches the machine address, the data is accepted. Because the bus topology consists of only one wire, it is rather inexpensive to implement when compared to other topologies. However, the low cost of implementing the technology is offset by the high cost of managing the network. Additionally, because only one cable is utilized, it can be the single point of failure. In this topology, data being transferred may be accessed by any workstation.

Linear Bus

The type of network topology in which all of the nodes of the network are connected to a common transmission medium which has exactly two endpoints (this is the ‘bus’, which is also commonly referred to as the backbone, or trunk) – all data that is transmitted between nodes in the network is transmitted over this common transmission medium and is able to be received by all nodes in the network simultaneously.

Note: When the electrical signal reaches the end of the bus, the signal is reflected back down the line, causing unwanted interference. As a solution, the two endpoints of the bus are normally terminated with a device called a terminator that prevents this reflection.

Distributed Bus

The type of network topology in which all of the nodes of the network are connected to a common transmission medium which has more than two endpoints that are created by adding branches to the main section of the transmission medium – the physical distributed bus topology functions in exactly the same fashion as the physical linear bus topology (i.e., all nodes share a common transmission medium).

Star

StarNetwork.svg

Star network topology

In local area networks with a star topology, each network host is connected to a central hub with a point-to-point connection. So it can be said that every computer is indirectly connected to every other node with the help of the hub. In Star topology, every node (computer workstation or any other peripheral) is connected to a central node called hub, router or switch. The switch is the server and the peripherals are the clients. The network does not necessarily have to resemble a star to be classified as a star network, but all of the nodes on the network must be connected to one central device. All traffic that traverses the network passes through the central hub. The hub acts as a signal repeater. The star topology is considered the easiest topology to design and implement. An advantage of the star topology is the simplicity of adding additional nodes. The primary disadvantage of the star topology is that the hub represents a single point of failure. Since all peripheral communication must flow through the central hub, the aggregate central bandwidth forms a network bottleneck for large clusters.

Extended Star

A type of network topology in which a network that is based upon the physical star topology has one or more repeaters between the central node and the peripheral or ‘spoke’ nodes. The repeaters are used to extend the maximum transmission distance of the point-to-point links between the central node and the peripheral nodes beyond that which is supported by the transmitter power of the central node, or beyond that which is supported by the standard upon which the physical layer of the physical star network is based.

If the repeaters in a network that is based upon the physical extended star topology are replaced with hubs or switches, then a hybrid network topology is created that is referred to as a physical hierarchical star topology, although some texts make no distinction between the two topologies.

Distributed Star

A type of network topology that is composed of individual networks that are based upon the physical star topology connected in a linear fashion – i.e., ‘daisy-chained’ – with no central or top level connection point (e.g., two or more ‘stacked’ hubs, along with their associated star connected nodes or ‘spokes’).

Ring

RingNetwork.svg

Ring network topology

A ring topology is a bus topology in a closed loop. Data travels around the ring in one direction. When one node sends data to another, the data passes through each intermediate node on the ring until it reaches its destination. The intermediate nodes repeat (retransmit) the data to keep the signal strong. Every node is a peer; there is no hierarchical relationship of clients and servers. If one node is unable to retransmit data, it severs communication between the nodes before and after it in the ring.
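The forwarding behaviour, and the effect of a node that cannot retransmit, can be sketched as follows (a purely illustrative Python fragment with invented node names):

    # Illustrative sketch of unidirectional ring forwarding: a frame is
    # repeated node-by-node until it reaches its destination; a failed node
    # severs communication with everything downstream of it.

    ring = ["A", "B", "C", "D", "E"]          # order of nodes around the ring
    failed = {"C"}                            # nodes unable to retransmit

    def send_around_ring(source: str, destination: str) -> bool:
        i = ring.index(source)
        while True:
            i = (i + 1) % len(ring)           # data travels in one direction only
            node = ring[i]
            if node in failed:
                return False                  # the frame cannot be repeated further
            if node == destination:
                return True
            # otherwise this node repeats the frame to the next node

    print(send_around_ring("A", "B"))         # True: reached before the failure
    print(send_around_ring("A", "D"))         # False: blocked by the failed node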

Advantages:

  • When the load on the network increases, its performance is better than that of a bus topology.
  • There is no need for a network server to control the connectivity between workstations.

Disadvantages:

  • Aggregate network bandwidth is bottlenecked by the weakest link between two nodes.

Mesh

The value of a fully meshed network grows exponentially with the number of subscribers, assuming that communicating groups of any size (from any two endpoints up to and including all of the endpoints) can form; this is approximated by Reed’s Law.
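For comparison, Metcalfe’s Law counts the n(n - 1)/2 possible pairs, while Reed’s Law counts the 2^n - n - 1 possible groups of two or more endpoints, which grows exponentially. The following Python sketch is illustrative only and simply prints both counts for a few network sizes.

    # Illustrative comparison of the two scaling laws mentioned above:
    # Metcalfe's Law counts pairs, Reed's Law counts groups of two or more.

    def metcalfe(n: int) -> int:
        return n * (n - 1) // 2          # potential pairs

    def reed(n: int) -> int:
        return 2 ** n - n - 1            # potential subgroups of size >= 2

    for n in (5, 10, 20):
        print(f"n={n}: pairs={metcalfe(n)}, groups={reed(n)}")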

Fully Connected Network

NetworkTopology-FullyConnected

Fully connected mesh topology

In a fully connected network, all nodes are interconnected. (In graph theory this is called a complete graph.) The simplest fully connected network is a two-node network. A fully connected network doesn’t need to use packet switching or broadcasting, and the failure of a single link does not affect the other nodes in the network. However, the number of connections grows quadratically with the number of nodes, c = n(n - 1)/2, which makes this topology impractical for large networks.

Partially Connected Network

NetworkTopology-Mesh.svg

Partially connected mesh topology

In a partially connected network, certain nodes are connected to exactly one other node, while other nodes are connected to two or more nodes with point-to-point links. This makes it possible to obtain some of the redundancy of a physically fully connected mesh topology without the expense and complexity required for a connection between every node in the network.
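The redundancy argument can be made concrete with a small connectivity check. The Python sketch below uses an invented link set for illustration: it removes a link from a partial mesh and verifies whether two endpoints of interest can still reach each other over a redundant path.

    # Illustrative sketch of partial-mesh redundancy: a breadth-first search
    # checks whether two nodes remain connected after a link is removed.

    from collections import deque

    links = {("A", "B"), ("B", "C"), ("C", "D"), ("A", "C")}   # partial mesh

    def connected(src, dst, removed=frozenset()):
        usable = {frozenset(l) for l in links} - {frozenset(l) for l in removed}
        seen, queue = {src}, deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                return True
            for link in usable:
                if node in link:
                    (other,) = link - {node}
                    if other not in seen:
                        seen.add(other)
                        queue.append(other)
        return False

    print(connected("A", "D"))                                    # True
    print(connected("A", "D", removed={("B", "C")}))              # True: path A-C-D survives
    print(connected("A", "D", removed={("A", "B"), ("A", "C")}))  # False: A is cut off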

Hybrid

Hybrid networks combine two or more topologies in such a way that the resulting network does not exhibit one of the standard topologies (e.g., bus, star, ring, etc.). For example, a tree network (or star-bus network) is a hybrid topology in which star networks are interconnected via bus networks. However, a tree network connected to another tree network is still topologically a tree network, not a distinct network type. A hybrid topology is always produced when two different basic network topologies are connected.

A star-ring network consists of two or more ring networks connected using a multistation access unit (MAU) as a centralized hub.

Snowflake topology is a star network of star networks.

Two other hybrid network types are hybrid mesh and hierarchical star.

Daisy Chain

Except for star-based networks, the easiest way to add more computers into a network is by daisy-chaining, or connecting each computer in series to the next. If a message is intended for a computer partway down the line, each system bounces it along in sequence until it reaches the destination. A daisy-chained network can take two basic forms: linear and ring.

  • A linear topology puts a two-way link between one computer and the next. However, this was expensive in the early days of computing, since each computer (except for the ones at each end) required two receivers and two transmitters.
  • By connecting the computers at each end, a ring topology can be formed. An advantage of the ring is that the number of transmitters and receivers can be cut in half, since a message will eventually loop all of the way around. When a node sends a message, the message is processed by each computer in the ring. If the ring breaks at a particular link then the transmission can be sent via the reverse path thereby ensuring that all nodes are always connected in the case of a single failure.

Centralization


The star topology reduces the probability of a network failure by connecting all of the peripheral nodes (computers, etc.) to a central node. When the physical star topology is applied to a logical bus network such as Ethernet, this central node (traditionally a hub) rebroadcasts all transmissions received from any peripheral node to all peripheral nodes on the network, sometimes including the originating node. All peripheral nodes may thus communicate with all others by transmitting to, and receiving from, the central node only. The failure of a transmission line linking any peripheral node to the central node will result in the isolation of that peripheral node from all others, but the remaining peripheral nodes will be unaffected. However, the disadvantage is that the failure of the central node will cause the failure of all of the peripheral nodes.

If the central node is passive, the originating node must be able to tolerate the reception of an echo of its own transmission, delayed by the two-way round trip transmission time (i.e. to and from the central node) plus any delay generated in the central node. An active star network has an active central node that usually has the means to prevent echo-related problems.

A tree topology (a.k.a. hierarchical topology) can be viewed as a collection of star networks arranged in a hierarchy. This tree has individual peripheral nodes (i.e., leaves) which are required to transmit to and receive from one other node only and are not required to act as repeaters or regenerators. Unlike the star network, the functionality of the central node may be distributed.

As in the conventional star network, individual nodes may thus still be isolated from the network by a single-point failure of a transmission path to the node. If a link connecting a leaf fails, that leaf is isolated; if a connection to a non-leaf node fails, an entire section of the network becomes isolated from the rest.

To alleviate the amount of network traffic that comes from broadcasting all signals to all nodes, more advanced central nodes were developed that are able to keep track of the identities of the nodes that are connected to the network. These network switches will “learn” the layout of the network by “listening” on each port during normal data transmission, examining the data packets and recording the address/identifier of each connected node and which port it is connected to in a lookup table held in memory. This lookup table then allows future transmissions to be forwarded to the intended destination only.
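A minimal sketch of this learning behaviour, with made-up addresses and port numbers, might look like the following Python fragment: the source address of each frame is recorded against the port it arrived on, frames to an already-learned address are forwarded out of that port only, and frames to unknown addresses are flooded to all other ports.

    # Simplified sketch of a learning switch: build a lookup table from
    # observed source addresses, then forward to a single port when possible.

    mac_table = {}                            # address -> port learned from traffic

    def handle_frame(src, dst, in_port, ports=(1, 2, 3, 4)):
        mac_table[src] = in_port              # "learn" which port the sender is on
        if dst in mac_table:
            return [mac_table[dst]]           # forward to the known port only
        return [p for p in ports if p != in_port]   # unknown: flood all other ports

    print(handle_frame("aa:aa", "bb:bb", in_port=1))  # flood: [2, 3, 4]
    print(handle_frame("bb:bb", "aa:aa", in_port=2))  # learned: [1]
    print(handle_frame("aa:aa", "bb:bb", in_port=1))  # now forwarded to [2] only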

Decentralization


In a partially connected mesh topology, there are at least two nodes with two or more paths between them, to provide redundant paths in case the link providing one of the paths fails. Decentralization is often used to compensate for the single-point-failure disadvantage that is present when using a single device as a central node (e.g., in star and tree networks). A special kind of mesh, limiting the number of hops between two nodes, is a hypercube. The number of arbitrary forks in mesh networks makes them more difficult to design and implement, but their decentralized nature makes them very useful. In 2012 the Institute of Electrical and Electronics Engineers (IEEE) published the Shortest Path Bridging protocol to ease configuration tasks and allow all paths to be active, which increases bandwidth and redundancy between all devices.

This is similar in some ways to a grid network, where a linear or ring topology is used to connect systems in multiple directions. A multidimensional ring has a toroidal topology, for instance.
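The hop-limiting property of the hypercube mentioned above can be sketched as follows: if each of the 2^d nodes is labelled with a d-bit address, a frame can reach any destination in at most d hops by correcting one differing address bit per hop. The Python fragment below is illustrative only (the routing function is invented).

    # Illustrative sketch of hypercube routing: flip one differing address
    # bit per hop, so the hop count is bounded by the number of dimensions.

    def hypercube_route(src: int, dst: int, dims: int):
        """Return the sequence of node labels visited from src to dst."""
        path, node = [src], src
        for bit in range(dims):
            if (node ^ dst) & (1 << bit):     # this address bit still differs
                node ^= (1 << bit)            # hop across that dimension
                path.append(node)
        return path

    # 3-dimensional hypercube (8 nodes): at most 3 hops between any two nodes.
    print(hypercube_route(0b000, 0b101, dims=3))   # [0, 1, 5]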

A fully connected network, complete topology, or full mesh topology is a network topology in which there is a direct link between all pairs of nodes. In a fully connected network with n nodes, there are n(n-1)/2 direct links. Networks designed with this topology are usually very expensive to set up, but provide a high degree of reliability due to the multiple paths for data that are provided by the large number of redundant links between nodes. This topology is mostly seen in military applications.