Wireless Network

From Wikipedia, the free encyclopedia


A wireless network is a computer network that uses wireless data connections between network nodes.

Wireless networking is a method by which homes, telecommunications networks and business installations avoid the costly process of introducing cables into a building, or as a connection between various equipment locations. Wireless telecommunications networks are generally implemented and administered using radio communication. This implementation takes place at the physical level (layer) of the OSI model network structure.

Examples of wireless networks include cell phone networks, wireless local area networks (WLANs), wireless sensor networks, satellite communication networks, and terrestrial microwave networks.

Contents
1 History
2 Wireless links
3 Types of wireless networks
3.1 Wireless PAN
3.2 Wireless LAN
3.3 Wireless ad hoc network
3.4 Wireless MAN
3.5 Wireless WAN
3.6 Cellular network
3.7 Global area network
3.8 Space network
4 Different uses
5 Properties
5.1 General
5.2 Performance
5.3 Space
5.4 Home
5.5 Wireless Network Elements
5.6 Difficulties
5.6.1 Interferences
5.6.2 Absorption and reflection
5.6.3 Multipath fading
5.6.4 Hidden node problem
5.6.5 Shared resource problem
5.7 Capacity
5.7.1 Channel
5.7.2 Network
6 Security
7 Safety

History


The first professional wireless network was developed under the brand ALOHAnet in 1969 at the University of Hawaii and became operational in June 1971. The first commercial wireless network was the WaveLAN product family, developed by NCR in 1986.

  • 1991 2G cell phone network
  • June 1997 802.11 “WiFi” protocol first release
  • 1999 802.11 VoIP integration

Wireless Links



Computers are very often connected to networks using wireless links, e.g. WLANs

  • Terrestrial microwave – Terrestrial microwave communication uses Earth-based transmitters and receivers resembling satellite dishes. Terrestrial microwaves are in the low gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced approximately 48 km (30 mi) apart.
  • Communications satellites – Satellites communicate via microwave radio waves, which are not deflected by the Earth’s atmosphere. The satellites are stationed in space, typically in geosynchronous orbit 35,400 km (22,000 mi) above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.
  • Cellular and PCS systems use several radio communications technologies. The systems divide the region covered into multiple geographic areas. Each area has a low-power transmitter or radio relay antenna device to relay calls from one area to the next area.
  • Radio and spread spectrum technologies – Wireless local area networks use a high-frequency radio technology similar to digital cellular and a low-frequency radio technology. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. IEEE 802.11 defines a common flavor of open-standards wireless radio-wave technology known as Wi-Fi.
  • Free-space optical communication uses visible or invisible light for communications. In most cases, line-of-sight propagation is used, which limits the physical positioning of communicating devices.

Types of Wireless Networks


Wireless PAN

Wireless personal area networks (WPANs) interconnect devices within a relatively small area, generally within a person’s reach. For example, both Bluetooth radio and invisible infrared light provide a WPAN for interconnecting a headset to a laptop. ZigBee also supports WPAN applications. Wi-Fi PANs are becoming commonplace (as of 2010) as equipment designers start to integrate Wi-Fi into a variety of consumer electronic devices. Intel “My WiFi” and Windows 7 “virtual Wi-Fi” capabilities have made Wi-Fi PANs simpler and easier to set up and configure.

Wireless LAN


Wireless LANs are often used for connecting to local resources and to the Internet

A wireless local area network (WLAN) links two or more devices over a short distance using a wireless distribution method, usually providing a connection through an access point for internet access. The use of spread-spectrum or OFDM technologies may allow users to move around within a local coverage area, and still remain connected to the network.

Products using the IEEE 802.11 WLAN standards are marketed under the Wi-Fi brand name. Fixed wireless technology implements point-to-point links between computers or networks at two distant locations, often using dedicated microwave or modulated laser light beams over line of sight paths. It is often used in cities to connect networks in two or more buildings without installing a wired link.

Wireless Ad Hoc Network

A wireless ad hoc network, also known as a wireless mesh network or mobile ad hoc network (MANET), is a wireless network made up of radio nodes organized in a mesh topology. Each node forwards messages on behalf of the other nodes and each node performs routing. Ad hoc networks can “self-heal”, automatically re-routing around a node that has lost power. Various network layer protocols are needed to realize ad hoc mobile networks, such as Destination-Sequenced Distance Vector routing, Associativity-Based Routing, Ad hoc On-demand Distance Vector routing, and Dynamic Source Routing.

Wireless MAN

Wireless metropolitan area networks are a type of wireless network that connects several wireless LANs.

  • WiMAX is a type of Wireless MAN and is described by the IEEE 802.16 standard.

Wireless WAN

Wireless wide area networks are wireless networks that typically cover large areas, such as between neighbouring towns and cities, or a city and its suburbs. These networks can be used to connect branch offices of a business or as a public Internet access system. The wireless connections between access points are usually point-to-point microwave links using parabolic dishes on the 2.4 GHz band, rather than the omnidirectional antennas used with smaller networks. A typical system contains base station gateways, access points, and wireless bridging relays. Other configurations are mesh systems, where each access point also acts as a relay. When combined with renewable energy systems such as photovoltaic solar panels or wind systems, they can be stand-alone systems.

Cellular Network


Example of frequency reuse factor or pattern 1/4

A cellular network or mobile network is a radio network distributed over land areas called cells, each served by at least one fixed-location transceiver, known as a cell site or base station. In a cellular network, each cell characteristically uses a different set of radio frequencies from its immediate neighbouring cells to avoid interference.

When joined together these cells provide radio coverage over a wide geographic area. This enables a large number of portable transceivers (e.g., mobile phones, pagers, etc.) to communicate with each other and with fixed transceivers and telephones anywhere in the network, via base stations, even if some of the transceivers are moving through more than one cell during transmission.
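
As a rough illustration of how frequency reuse divides an operator’s channels among cells, the sketch below computes the per-cell share for a given reuse pattern; the channel counts and cluster sizes are hypothetical figures, not taken from any cellular standard.

    # Rough sketch of frequency reuse; hypothetical figures only.
    def channels_per_cell(total_channels: int, reuse_factor: int) -> int:
        """With a reuse pattern of N cells per cluster, each cell gets an equal
        share of the operator's channels, and the same channel groups are
        repeated in every cluster."""
        return total_channels // reuse_factor

    if __name__ == "__main__":
        # A 1/4 reuse pattern (as in the figure caption above) gives each cell a
        # quarter of the channels; a 1/7 pattern gives fewer channels per cell
        # but more separation between cells using the same frequencies.
        for n in (4, 7):
            print(f"reuse 1/{n}: {channels_per_cell(280, n)} channels per cell")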

Although originally intended for cell phones, with the development of smartphones, cellular telephone networks routinely carry data in addition to telephone conversations:

  • Global System for Mobile Communications (GSM): The GSM network is divided into three major systems: the switching system, the base station system, and the operation and support system. The cell phone connects to the base station system, which then connects to the operation and support system; it then connects to the switching system, where the call is transferred to its destination. GSM is the most common standard and is used for a majority of cell phones.
  • Personal Communications Service (PCS): PCS is a radio band that can be used by mobile phones in North America and South Asia. Sprint was the first carrier to set up a PCS network.
  • D-AMPS: Digital Advanced Mobile Phone Service, an upgraded version of AMPS, is being phased out due to advancement in technology. The newer GSM networks are replacing the older system.

Global Area Network

A global area network (GAN) is a network used for supporting mobile users across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is handing off user communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless LANs.

Space Network

Space networks are networks used for communication between spacecraft, usually in the vicinity of the Earth. An example of this is NASA’s Space Network.

Different Uses


Some examples of usage include cellular phones, which are part of everyday wireless networks, allowing easy personal communications. Another example is intercontinental network systems, which use radio satellites to communicate across the world. Emergency services such as the police also utilize wireless networks to communicate effectively. Individuals and businesses use wireless networks to send and share data rapidly, whether it be in a small office building or across the world.

Properties


General

In a general sense, wireless networks offer a vast variety of uses by both business and home users.

“Now, the industry accepts a handful of different wireless technologies. Each wireless technology is defined by a standard that describes unique functions at both the Physical and the Data Link layers of the OSI model. These standards differ in their specified signaling methods, geographic ranges, and frequency usages, among other things. Such differences can make certain technologies better suited to home networks and others better suited to network larger organizations.”

Performance

Each standard varies in geographical range, making one standard more suitable than another depending on what one is trying to accomplish with a wireless network. The performance of wireless networks satisfies a variety of applications, such as voice and video. The use of this technology also gives room for expansion, such as from 2G to 3G and, most recently, 4G technology, which stands for the fourth generation of cell phone mobile communications standards. As wireless networking has become commonplace, sophistication has increased through configuration of network hardware and software, and greater capacity to send and receive larger amounts of data, faster, has been achieved.

Space

Space is another characteristic of wireless networking. Wireless networks offer many advantages for difficult-to-wire areas, such as communicating across a street or river, with a warehouse on the other side of the premises, or between buildings that are physically separated but operate as one. Wireless networks also allow users to designate a certain area within which the network can communicate with other devices.

Space is also created in homes as a result of eliminating the clutter of wiring. This technology provides an alternative to installing physical network media such as twisted-pair, coaxial, or fiber-optic cable, which can also be expensive.

Home

For homeowners, wireless technology is an effective option compared to Ethernet for sharing printers, scanners, and high-speed Internet connections. WLANs help save the cost of installing cable media, save time on physical installation, and also create mobility for devices connected to the network. Wireless networks are simple and can require as little as a single wireless access point connected directly to the Internet via a router.

Wireless Network Elements

The telecommunications network at the physical layer also consists of many interconnected wireline network elements (NEs). These NEs can be stand-alone systems or products that are either supplied by a single manufacturer or are assembled by the service provider (user) or system integrator with parts from several different manufacturers.

Wireless NEs are the products and devices used by a wireless carrier to provide support for the backhaul network as well as a mobile switching center (MSC).

Reliable wireless service depends on the network elements at the physical layer to be protected against all operational environments and applications (see GR-3171, Generic Requirements for Network Elements Used in Wireless Networks – Physical Layer Criteria).

Especially important are the NEs located on the cell tower and in the base station (BS) cabinet. The attachment hardware and the positioning of the antenna and associated closures and cables are required to have adequate strength, robustness, corrosion resistance, and resistance against wind, storms, icing, and other weather conditions. Requirements for individual components, such as hardware, cables, connectors, and closures, shall take into consideration the structure to which they are attached.

Difficulties

Interferences

Compared to wired systems, wireless networks are frequently subject to electromagnetic interference. This can be caused by other networks or other types of equipment that generate radio waves within, or close to, the radio bands used for communication. Interference can degrade the signal or cause the system to fail.

Absorption and Reflection

Some materials cause absorption of electromagnetic waves, preventing them from reaching the receiver; in other cases, particularly with metallic or conductive materials, reflection occurs. This can cause dead zones where no reception is available. Aluminium-foil-backed thermal insulation in modern homes can easily reduce indoor mobile signals by 10 dB, frequently leading to complaints about poor reception of long-distance rural cell signals.

Multipath Fading

In multipath fading, two or more different routes taken by the signal, due to reflections, can cause the signal to cancel out at certain locations and to be stronger in other places (upfade).

Hidden Node Problem

The hidden node problem occurs in some types of network when a node is visible from a wireless access point (AP), but not from other nodes communicating with that AP. This leads to difficulties in media access control.

Shared Resource Problem

The wireless spectrum is a limited resource shared by all nodes in the range of its transmitters. Bandwidth allocation becomes complex with multiple participating users. Often users are not aware that advertised numbers (e.g., for IEEE 802.11 equipment or LTE networks) are not their individual capacity but are shared with all other users, so the individual user rate is far lower. With increasing demand, a capacity crunch becomes more and more likely. User-in-the-loop (UIL) may be an alternative to continually upgrading to newer technologies for over-provisioning.

Capacity

Channel


Understanding of SISO, SIMO, MISO and MIMO. Using multiple antennas and transmitting in different frequency channels can reduce fading, and can greatly increase the system capacity.

Shannon’s theorem can describe the maximum data rate of any single wireless link, which relates to the bandwidth in hertz and to the noise on the channel.
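
One common statement of the theorem is the Shannon–Hartley formula C = B·log2(1 + S/N). The sketch below evaluates it; the bandwidth and signal-to-noise figures are illustrative assumptions, not taken from any particular standard.

    import math

    def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
        """Shannon-Hartley limit C = B * log2(1 + S/N) in bits per second."""
        snr_linear = 10 ** (snr_db / 10)
        return bandwidth_hz * math.log2(1 + snr_linear)

    if __name__ == "__main__":
        # Hypothetical figures: a 20 MHz channel with a 25 dB signal-to-noise
        # ratio has a theoretical ceiling of roughly 166 Mbit/s, regardless of
        # the modulation or coding actually used.
        print(f"{shannon_capacity_bps(20e6, 25) / 1e6:.0f} Mbit/s")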

One can greatly increase channel capacity by using MIMO techniques, where multiple aerials or multiple frequencies can exploit multiple paths to the receiver to achieve much higher throughput – by a factor of the product of the frequency and aerial diversity at each end.

Under Linux, the Central Regulatory Domain Agent (CRDA) controls the setting of channels.

Network

The total network bandwidth depends on how dispersive the medium is (a more dispersive medium generally has better total bandwidth because it minimises interference), how many frequencies are available, how noisy those frequencies are, how many aerials are used, whether a directional antenna is in use, whether nodes employ power control, and so on. Two bands are currently in common use for Wi-Fi, 2.4 GHz and 5 GHz; the 5 GHz band generally gives better throughput, though over a shorter range.

Cellular wireless networks generally have good capacity, due to their use of directional aerials and their ability to reuse radio channels in non-adjacent cells. Additionally, cells can be made very small using low-power transmitters; this is used in cities to give network capacity that scales linearly with population density.

Security


In communication networks, standard secrecy methods such as cryptography can be used to protect the transmitted information from being accessed by unauthorized users. Another level of secrecy is achieved when covert communication is established, where the existence of the communication is concealed from the adversary.

Safety


Wireless access points are also often close to humans, but the drop-off in power over distance is fast, following the inverse-square law. The position of the United Kingdom’s Health Protection Agency (HPA) is that “…radio frequency (RF) exposures from WiFi are likely to be lower than those from mobile phones.” It also saw “…no reason why schools and others should not use WiFi equipment.” In October 2007, the HPA launched a new “systematic” study into the effects of WiFi networks on behalf of the UK government, in order to calm fears that had appeared in the media in the period leading up to that time. Dr Michael Clark of the HPA says published research on mobile phones and masts does not add up to an indictment of WiFi.
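
The inverse-square falloff can be made concrete with a short calculation; the transmit power below is a hypothetical figure, and the model assumes an idealised isotropic source, so it only illustrates the scaling with distance.

    import math

    def power_density_w_per_m2(tx_power_w: float, distance_m: float) -> float:
        """Power density of an idealised isotropic source at a given distance.
        Real access points and antennas are not isotropic; this only shows the
        inverse-square scaling with distance."""
        return tx_power_w / (4 * math.pi * distance_m ** 2)

    if __name__ == "__main__":
        # Hypothetical 100 mW transmitter: doubling the distance quarters the density.
        for d in (0.5, 1.0, 2.0, 4.0):
            print(f"{d:4.1f} m: {power_density_w_per_m2(0.1, d):.2e} W/m^2")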

Optical Time-Domain Reflectometer

From Wikipedia, the free encyclopedia


An OTDR


An OTDR in use

An optical time-domain reflectometer (OTDR) is an optoelectronic instrument used to characterize an optical fiber. An OTDR is the optical equivalent of an electronic time domain reflectometer. It injects a series of optical pulses into the fiber under test and extracts, from the same end of the fiber, light that is scattered (Rayleigh backscatter) or reflected back from points along the fiber. The scattered or reflected light that is gathered back is used to characterize the optical fiber. This is equivalent to the way that an electronic time-domain reflectometer measures reflections caused by changes in the impedance of the cable under test. The strength of the return pulses is measured and integrated as a function of time, and plotted as a function of fiber length.

Contents
1 Reliability and quality of OTDR equipment
2 Types of OTDR-like test equipment
3 OTDR Data Format
4 Return Loss

Reliability and Quality of OTDR Equipment


The reliability and quality of an OTDR is based on its accuracy, measurement range, ability to resolve and measure closely spaced events, measurement speed, and ability to perform satisfactorily under various environmental extremes and after various types of physical abuse. The instrument is also judged on the basis of its cost, features provided, size, weight, and ease of use.

Some of the terms often used in specifying the quality of an OTDR are as follows:

Accuracy: Defined as the correctness of the measurement, i.e., the difference between the measured value and the true value of the event being measured.
Measurement range: Defined as the maximum attenuation that can be placed between the instrument and the event being measured, for which the instrument will still be able to measure the event within acceptable accuracy limits.
Instrument resolution: A measure of how close two events can be spaced and still be recognized as two separate events. The duration of the measurement pulse and the data sampling interval create a resolution limitation for OTDRs: the shorter the pulse duration and the shorter the data sampling interval, the better the instrument resolution, but the shorter the measurement range. Resolution is also often limited when powerful reflections return to the OTDR and temporarily overload the detector. When this occurs, some time is required before the instrument can resolve a second fiber event. Some OTDR manufacturers use a “masking” procedure to improve resolution. The procedure shields or “masks” the detector from high-power fiber reflections, preventing detector overload and eliminating the need for detector recovery.
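
The trade-off between pulse duration and resolution can be sketched with the usual two-point-resolution approximation, c·τ/(2n), for a pulse of duration τ in fiber of group index n; the pulse widths and the group index used below are assumed typical values, not figures from GR-196.

    C_VACUUM_M_PER_S = 299_792_458.0  # speed of light in vacuum

    def two_point_resolution_m(pulse_width_s: float, group_index: float = 1.468) -> float:
        """Approximate two-point spatial resolution of an OTDR.
        The pulse travels out and back, so a pulse of duration tau spans about
        c * tau / (2 * n) metres of one-way fiber distance; events closer than
        this cannot be separated. 1.468 is an assumed typical group index for
        standard single-mode fiber."""
        return C_VACUUM_M_PER_S * pulse_width_s / (2 * group_index)

    if __name__ == "__main__":
        # Shorter pulses resolve closer events but carry less energy, which in
        # turn shortens the usable measurement range.
        for tau in (10e-9, 100e-9, 1e-6):
            print(f"{tau * 1e9:6.0f} ns pulse -> about {two_point_resolution_m(tau):7.1f} m")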

Industry requirements for the reliability and quality of OTDRs are specified in GR-196, Generic Requirements for Optical Time Domain Reflectometer (OTDR) Type Equipment.

Types of OTDR-like Test Equipment


The common types of OTDR-like test equipment are:

  • Full-feature OTDR:
    • Full-feature OTDRs are traditional, optical time domain reflectometers. They are feature-rich and usually larger, heavier, and less portable than either the hand-held OTDR or the fiber break locator. Despite being characterized as large, their size and weight are only a fraction of those of early-generation OTDRs. Often a full-feature OTDR has a main frame that can be fitted with multi-function plug-in units to perform many fiber measurement tasks. Larger color displays are common. The full-feature OTDR often has a greater measurement range than the other types of OTDR-like equipment. Often it is used in laboratories and in the field for difficult fiber measurements. Most full-feature OTDRs are powered from AC and/or a battery.
  • Hand-held OTDR and Fiber break locator:
    • Hand-held (formerly mini) OTDRs and fiber break locators are designed to troubleshoot fiber networks in a field environment, often using battery power. The two types of instruments cover the spectrum of approaches to fiber optic plant taken by communication providers. Hand-held, inexpensive OTDRs are intended to be easy-to-use, light-weight, sophisticated OTDRs that collect field data and perform rudimentary data analysis. They may be less feature rich than full-feature OTDRs. Often they can be used in conjunction with PC-based software to perform data collection and sophisticated data analysis. Hand-held OTDRs are commonly used to measure fiber links and locate fiber breaks, points of high loss, high reflectance, end-to-end loss, and Optical Return Loss (ORL).
    • Fiber break locators are intended to be low-cost instruments specifically designed to determine the location of a catastrophic fiber event, e.g., fiber break, point of high reflectance, or high loss. The fiber break locator is an opto-electronic tape measure designed to measure only distance to catastrophic fiber events.
    • In general, hand-held OTDRs and fiber break locators are lighter and smaller, simpler to operate, and more likely to employ battery power than full-feature OTDRs. The intent with hand-held OTDRs and fiber break locators is to be inexpensive enough for field technicians to be equipped with one as part of a standard tool kit.
  • RTU in RFTSs:
    • The RTU is the testing module of the RFTS described in Generic Requirements for Remote Fiber Testing Systems (RFTSs). An RFTS enables fiber to be automatically tested from a central location. A central computer is used to control the operation of OTDR-like test components located at key points in the fiber network. The test components scan the fiber to locate problems. If a problem is found, its location is noted and the appropriate personnel are notified to begin the repair process. The RFTS can also provide direct access to a database that contains historical information of the OTDR fiber traces and any other fiber records for the physical fiber plant.
    • Since OTDRs and OTDR-like equipment have many uses in the communications industry, operating environments vary widely, both indoors and outdoors. Most often, however, these test sets are operated in controlled environments, accessing the fibers at their termination points on fiber distribution frames. Indoor environments include controlled areas such as central offices (COs), equipment huts, or Controlled Environment Vaults (CEVs). Use in outside environments is rarer, but may include use in a manhole, aerial platform, open trench, or splicing vehicle.

OTDR Data Format


In the late 1990s, OTDR industry representatives and the OTDR user community developed a unique data format to store and analyze OTDR fiber data. This data format was based on the specifications in GR-196, Generic Requirements for Optical Time Domain Reflectometer (OTDR) Type Equipment. The goal was for the data format to be truly universal, in that it was intended to be implemented by all OTDR manufacturers. OTDR suppliers developed the software to implement the data format. As they proceeded, they identified inconsistencies in the format, along with areas of misunderstanding among users.

From 1997 to 2000, a group of OTDR supplier software specialists attempted to resolve problems and inconsistencies in what was then called the “Bellcore” OTDR Data Format. This group, called the OTDR Data Format Users Group (ODFUG), made progress. Since then, many OTDR developers continued to work with other developers to solve individual interaction problems and enable cross use between manufacturers.

In 2011, Telcordia decided to compile industry comments on this data format into one document entitled Optical Time Domain Reflectometer (OTDR) Data Format. This Special Report (SR) summarizes the state of the Bellcore OTDR Data Format, renaming it as the Telcordia OTDR Data Format.

The data format is intended for all OTDR-related equipment designed to save trace data and analysis information. Initial implementations require standalone software to be provided by the OTDR supplier to convert existing OTDR trace files to the SR-4731 data format and to convert files from this universal format to a format that is usable by their older OTDRs. This file conversion software can be developed by the hardware supplier, the end user, or a third party. This software also provides backward compatibility of the OTDR data format with existing equipment.

The SR-4731 format describes binary data. While text information is contained in several fields, most numbers are represented as either 16-bit (2-byte) or 32-bit (4-byte) signed or unsigned integers stored as binary images. Byte ordering in this file format is explicitly low-byte-first (little-endian), as is common on Intel® processor-based machines. String fields are terminated with a zero byte “\0”. OTDR waveform data are represented as short, unsigned integer data uniformly spaced in time, in units of decibels (dB) times 1000, referenced to the maximum power level. The maximum power level is set to zero, and all waveform data points are assumed to be zero or negative (the sign bit is implied), so that the minimum power level in this format is -65.535 dB, and the minimum resolution between power level steps is 0.001 dB. In some cases, this will not provide sufficient power range to represent all waveform points. For this reason, the use of a scale factor has been introduced to expand the data point power range.
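
A minimal sketch of how waveform points could be packed and unpacked under the conventions just described (little-endian unsigned 16-bit integers, 0.001 dB units below the maximum power, implied negative sign); the helper names and the handling of the scale factor are illustrative assumptions, not the official SR-4731 reader.

    import struct

    def pack_waveform_points(levels_db, scale_factor=1.0):
        """Pack waveform power levels (dB, zero or negative, relative to the
        maximum) as little-endian unsigned 16-bit integers in units of 0.001 dB.
        The sign is implied: a stored value of 12345 represents -12.345 dB
        (times any scale factor). Hypothetical helper, not from SR-4731 itself."""
        raw = [round(-level * 1000 / scale_factor) for level in levels_db]
        if any(r < 0 or r > 0xFFFF for r in raw):
            raise ValueError("level out of range for 16-bit dB*1000 encoding")
        return struct.pack("<%dH" % len(raw), *raw)

    def unpack_waveform_points(data, scale_factor=1.0):
        """Inverse of pack_waveform_points."""
        raw = struct.unpack("<%dH" % (len(data) // 2), data)
        return [-(r * 0.001 * scale_factor) for r in raw]

    if __name__ == "__main__":
        points = [0.0, -0.254, -3.5, -65.535]      # dB below the maximum power
        blob = pack_waveform_points(points)
        print(unpack_waveform_points(blob))        # recovers the original values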

Return Loss


Return loss is the ratio, expressed in decibels, of the power of the outgoing signal to the power reflected back toward the source by a discontinuity such as a connector, splice, or break; a higher return loss indicates a smaller reflection. An OTDR characterizes the reflectance of individual events along the fiber as well as the overall optical return loss (ORL) of the link.

TIA/EIA-568

From Wikipedia, the free encyclopedia

ANSI/TIA-568 is a set of telecommunications standards from the Telecommunications Industry Association (TIA). The standards address commercial building cabling for telecommunications products and services.

As of 2017, the standard is at revision D, replacing the 2009 revision C, 2001 revision B, the 1995 revision A, and the initial issue of 1991, which are now obsolete.

Perhaps the best known features of ANSI/TIA-568 are the pin/pair assignments for eight-conductor 100-ohm balanced twisted pair cabling. These assignments are named T568A and T568B.

The international standard ISO/IEC 11801 provides similar requirements for network cables.

Contents
1 History
2 Goals
3 Cable categories
4 Structured cable system topologies
5 T568A and T568B termination
5.1 Wiring
5.2 Use for T1 connectivity
5.3 Backward compatibility
5.4 Theory
6 Standards

History


ANSI/TIA-568 was developed through the efforts of more than 60 contributing organizations, including manufacturers, end-users, and consultants. Work on the standard began with the EIA, to define standards for telecommunications cabling systems. EIA agreed to develop a set of standards and formed the TR-42 committee, with nine subcommittees to perform the work. The work continues to be maintained by TR-42 within the TIA; the EIA is no longer in existence, and hence EIA has been removed from the name.

The initial issue of the standard was released in 1991. It was updated to revision A in 1995 and to revision B in 2001. The demands placed upon commercial wiring systems increased dramatically over this period due to the adoption of personal computers and data communication networks and advances in those technologies. The development of high-performance twisted pair cabling and the popularization of fiber optic cables also drove significant change in the standards. These changes were first released in revision C in 2009, which has subsequently been replaced by the D series.

Goals


ANSI/TIA-568 defines structured cabling system standards for commercial buildings, and between buildings in campus environments. The bulk of the standards define cabling types, distances, connectors, cable system architectures, cable termination standards and performance characteristics, cable installation requirements, and methods of testing installed cable. The main standard, ANSI/TIA-568.0-D, defines general requirements, while ANSI/TIA-568-C.2 focuses on components of balanced twisted-pair cable systems. ANSI/TIA-568.3-D addresses components of fiber optic cable systems, and ANSI/TIA-568-C.4 addresses coaxial cabling components.

The intent of these standards is to provide recommended practices for the design and installation of cabling systems that will support a wide variety of existing and future services. Developers hope the standards will provide a lifespan for commercial cabling systems in excess of ten years. This effort has been largely successful, as evidenced by the definition of category 5 cabling in 1991, a cabling standard that (mostly) satisfied cabling requirements for 1000BASE-T, released in 1999. Thus, the standardization process can reasonably be said to have provided at least a nine-year lifespan for premises cabling, and arguably a longer one.

All these documents accompany related standards that define commercial pathways and spaces (TIA-569-C-1, February 2013), residential cabling (ANSI/TIA-570-C, August 2012), administration standards (ANSI/TIA-606-B, June 2012), grounding and bonding (TIA-607-B-2, August 2013), and outside plant cabling (TIA-758-B, April 2012).

Cable Categories


The standard defines categories of unshielded twisted pair cable systems, with different levels of performance in signal bandwidth, insertion loss, and cross-talk. Generally increasing category numbers correspond with a cable system suitable for higher rates of data transmission. Category 3 cable was suitable for telephone circuits and data rates up to 16 million bits per second. Category 5 cable, with more restrictions on attenuation and cross talk, has a bandwidth of 100 MHz. The 1995 edition of the standard defined categories 3, 4, and 5. Categories 1 and 2 were excluded from the standard since these categories were only used for voice circuits, not for data. The current revision includes Category 5e (100 MHz), 6 (250 MHz), 6A (500 MHz) and 8 (2,000 MHz).

Structured Cable System Topologies


ANSI/TIA-568-D defines a hierarchical cable system architecture, in which a main cross-connect (MCC) is connected via a star topology across backbone cabling to intermediate cross-connects (ICC) and horizontal cross-connects (HCC). Telecommunications design traditions utilized a similar topology. Many people refer to cross-connects by their telecommunications names: “distribution frames” (with the various hierarchies called MDFs, IDFs and wiring closets). Backbone cabling is also used to interconnect entrance facilities (such as telco demarcation points) to the main cross-connect. Maximum allowable backbone fibre distances vary between 300 m and 3000 m, depending upon the cable type and use.

Horizontal cross-connects provide a point for the consolidation of all horizontal cabling, which extends in a star topology to individual work areas such as cubicles and offices. Under TIA/EIA-568-B, maximum allowable horizontal cable distance is 90 m of installed cabling, whether fibre or twisted-pair, with 100 m of maximum total length including patch cords. No patch cord should be longer than 5 m. Optional consolidation points are allowable in horizontal cables, often appropriate for open-plan office layouts where consolidation points or media converters may connect cables to several desks or via partitions.
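
A small sketch of the length budget just described; the function simply restates the 90 m / 5 m / 100 m limits above and is not a full implementation of the standard’s channel model.

    def horizontal_channel_ok(installed_m, patch_cords_m):
        """Check a channel against the limits described above: at most 90 m of
        installed cabling, no patch cord longer than 5 m, and at most 100 m in
        total including patch cords. Illustrative only; the standard's channel
        model has further rules."""
        return (installed_m <= 90.0
                and all(cord <= 5.0 for cord in patch_cords_m)
                and installed_m + sum(patch_cords_m) <= 100.0)

    if __name__ == "__main__":
        print(horizontal_channel_ok(90.0, [5.0, 5.0]))   # True: exactly at the limits
        print(horizontal_channel_ok(88.0, [7.0, 3.0]))   # False: one patch cord too long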

At the work area, equipment is connected by patch cords to horizontal cabling terminated at jackpoints.

TIA/EIA-568 also defines characteristics and cabling requirements for entrance facilities, equipment rooms and telecommunications rooms.

T568A and T568B Termination


Perhaps the widest known and most discussed feature of ANSI/TIA-568 is the definition of the pin-to-pair assignments, or pinout, between the pins in a connector (a plug or a socket) and the wires in a cable.

The standard specifies how to connect eight-conductor 100-ohm balanced twisted-pair cabling, such as Category 3 and Category 6 unshielded twisted-pair (UTP), to 8P8C eight-pin modular connectors (often incorrectly called RJ45 connectors).

The standard defines two alternative pinouts: T568A and T568B. The pinout definitions occupy merely 1 of the standard’s 468 pages. Much attention is paid to them because cables do not function if the pinouts at their two ends aren’t correctly matched.

ANSI/TIA-568 recommends the T568A pinout for horizontal cables. This pinout’s advantage is that it is compatible with the 1-pair and 2-pair Universal Service Order Codes (USOC) pinouts. The U.S. Government requires it in federal contracts.

The standard also allows the T568B pinout, as an alternative “if necessary to accommodate certain 8-pin cabling systems”. This pinout matches the older AT&T 258A (Systimax) pinout. In the 1990s, when the original TIA/EIA-568 was published, 258A was the most widely deployed pinout in installed UTP cabling infrastructure. Many organizations still use T568B out of inertia.

The colors of the wire pairs in the cable, in order, are: blue (for pair 1), orange, green, and brown (for pair 4). Each pair consists of one conductor of solid color and a second conductor which is white with a stripe of the other color.

The difference between the two pinouts is that the orange and green wire pairs are exchanged.

Wiring


Note that the only difference between T568A and T568B is that pairs 2 and 3 (orange and green) are swapped. Both configurations wire the pins “straight through”, i.e., pins 1 through 8 on one end are connected to pins 1 through 8 on the other end. The same sets of pins are also paired in both configurations: pins 1 and 2 form a pair, as do 3 and 6, 4 and 5, and 7 and 8. One can use cables wired according to either configuration in the same installation without significant problem. The primary thing to be careful of is not to accidentally wire the two ends of the same cable according to different configurations (unless one intends to create an Ethernet crossover cable) or, worse, to swap two wires from different pairs, creating a split pair. A split pair causes crosstalk, which the twisting of each pair normally prevents. These problems will be most apparent under the more stringent specifications such as Category 6.
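
Written out explicitly, the two pin assignments and their relationship look like the following sketch; the colour assignments are the standard ones described above, while the dictionary layout and the check are only illustrative.

    # Pin-to-wire-colour assignments for the two pinouts described above.
    T568A = {
        1: "white/green",  2: "green",
        3: "white/orange", 4: "blue",
        5: "white/blue",   6: "orange",
        7: "white/brown",  8: "brown",
    }
    T568B = {
        1: "white/orange", 2: "orange",
        3: "white/green",  4: "blue",
        5: "white/blue",   6: "green",
        7: "white/brown",  8: "brown",
    }

    if __name__ == "__main__":
        # Only the orange and green positions differ; the blue and brown pairs
        # (pins 4-5 and 7-8) are identical in both layouts.
        print([pin for pin in range(1, 9) if T568A[pin] != T568B[pin]])  # [1, 2, 3, 6]
        # A straight-through cable uses the same pinout at both ends; terminating
        # one end T568A and the other T568B produces a crossover cable.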

Use for T1 Connectivity

In Digital Signal 1 (T1) service, the pairs 1 and 3 (T568A) are used, and the USOC-8 jack is wired as per spec RJ-48C. The Telco termination jack is often wired to spec RJ-48X, which provides for a Transmit-to-Receive loopback when the plug is withdrawn.

Vendor cables are often wired with tip and ring reversed—i.e. pins 1 and 2 reversed, or pins 4 and 5 reversed. This has no effect on the signal quality of the T1 signal, which is fully differential, and uses the Alternate Mark Inversion (AMI) signaling scheme.

Backward Compatibility

Because pair 1 connects to the center pins (4 and 5) of the 8P8C connector in both T568A and T568B, both standards are compatible with the first line of RJ11, RJ14, RJ25, and RJ61 connectors that all have the first pair in the center pins of these connectors.

If the second line of an RJ14, RJ25 or RJ61 plug is used, it connects to pair 2 (orange/white) of jacks wired to T568A but to pair 3 (green/white) in jacks wired to T568B. This makes T568B potentially confusing in telephone applications.

Because of different pin pairings, the RJ25 and RJ61 plugs cannot pick up lines 3 or 4 from either T568A or T568B without splitting pairs. This would most likely result in unacceptable levels of hum, crosstalk and noise.

Theory

The original idea in wiring modular connectors, as seen in the registered jacks, was that the first pair would go in the center positions, the next pair on the next outermost ones, and so on. Also, signal shielding would be optimized by alternating the “live” and “earthy” pins of each pair. The terminations diverge slightly from this concept because on the 8 position connector, the resulting pinout would separate the outermost pair too far to meet the electrical echo requirements of high-speed LAN protocols.

Standards


  • ANSI/TIA-568.0-D, Generic Telecommunications Cabling for Customer Premises, Ed. D, 09-2015
  • ANSI/TIA-568.1-D, Commercial Building Telecommunications Cabling Standard, Ed. D, 09-2015
  • ANSI/TIA-568-C.2, Balanced Twisted-Pair Telecommunication Cabling and Components Standard, Ed. C, Err. 04-2014
  • ANSI/TIA-568.3-D, Optical Fiber Cabling And Components Standard, Ed. D, 10-2016
  • ANSI/TIA-568-C.4, Broadband Coaxial Cabling and Components Standard, Ed. C, 07-2011

Cable Tester

From Wikipedia, the free encyclopedia



A tester and analyzer for twisted pair and fiber optic cables.


A simple tester for BNC and twisted pair cabling

A cable tester is an electronic device used to verify the electrical connections in a signal cable or other wired assembly. Basic cable testers are continuity testers that verify the existence of a conductive path between ends of the cable, and verify the correct wiring of connectors on the cable. More advanced cable testers can measure the signal transmission properties of the cable such as its resistance, signal attenuation, noise and interference.

Contents
1 Basic tester
2 Signal testers
3 Optical cable testers

Basic Tester


Generally, a basic cable tester is a battery-operated portable instrument with a source of electric current, one or more voltage indicators, and possibly a switching or scanning arrangement to check each of several conductors sequentially. A cable tester may also have a microcontroller and a display to automate the testing process and show the testing results, especially for multiple-conductor cables. A cable tester may be connected to both ends of the cable at once, or the indication and current source portions may be separated to allow injection of a test current at one end of a cable and detection of the results at the distant end. Both portions of such a tester will have connectors compatible with the application, for example, modular connectors for Ethernet local area network cables.

A cable tester is used to verify that all of the intended connections exist and that there are no unintended connections in the cable being tested. When an intended connection is missing it is said to be “open”. When an unintended connection exists it is said to be a “short” (a short circuit). If a connection “goes to the wrong place” it is said to be “miswired” (the connection has two faults: it is open to the correct contact and shorted to an incorrect contact).

Generally, the testing is done in two phases. The first phase, called the “opens test”, makes sure each of the intended connections is good. The second phase, called the “shorts test”, makes sure there are no unintended connections.

There are two common ways to test a connection:

  1. A continuity test. Current is passed down the connection. If there is current the connection is assumed to be good. This type of test can be done with a series combination of a battery (to provide the current) and a light bulb (that lights when there is a current).
  2. A resistance test. A known current is passed down the connection and the voltage that develops is measured. From the voltage and current the resistance of the connection can be calculated and compared to the expected value.
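
As a worked illustration of the resistance test in item 2 above, using hypothetical current, voltage, and tolerance figures:

    def measured_resistance_ohms(test_current_a, measured_voltage_v):
        """Ohm's law: R = V / I for the connection under test."""
        return measured_voltage_v / test_current_a

    def connection_passes(test_current_a, measured_voltage_v, expected_ohms, tolerance_ohms):
        """Pass if the computed resistance is within tolerance of the expected value.
        The thresholds are hypothetical; real testers apply limits from the
        relevant cabling specification."""
        r = measured_resistance_ohms(test_current_a, measured_voltage_v)
        return abs(r - expected_ohms) <= tolerance_ohms

    if __name__ == "__main__":
        # 100 mA forced through the conductor develops 0.95 V, i.e. 9.5 ohms,
        # which passes against an expected 10 ohm loop with a 1 ohm tolerance.
        print(connection_passes(0.1, 0.95, expected_ohms=10.0, tolerance_ohms=1.0))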

There are two common ways to test for a short:

  1. A low voltage test. A low power, low voltage source is connected between two conductors that should not be connected and the amount of current is measured. If there is no current the conductors are assumed to be well isolated.
  2. A high voltage test. Again a voltage source is connected, but this time the voltage is several hundred volts. The increased voltage will make the test more likely to find connections that are nearly shorted, since the higher voltage will cause the insulation of nearly shorted wires to break down.

Signal Testers


More powerful cable testers can measure the properties of the cable relevant to signal transmission. These include the DC resistance of the cable, the loss of signal strength (attenuation) of a signal at one or more frequencies, and a measure of the isolation between multiple pairs of a multi-pair cable or crosstalk. While these instruments are several times the cost and complexity of basic continuity testers, these measurements may be required to certify that a cable installation meets the technical standards required for its use, for example, in local area network cabling.
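
Attenuation, for example, is usually quoted in decibels computed from the input and output signal powers; the power figures in this sketch are purely illustrative.

    import math

    def attenuation_db(power_in_w, power_out_w):
        """Attenuation in decibels: 10 * log10(P_in / P_out); larger means more loss."""
        return 10 * math.log10(power_in_w / power_out_w)

    if __name__ == "__main__":
        # Illustrative figures: 1 mW launched and 0.25 mW received is about 6 dB of loss.
        print(f"{attenuation_db(1e-3, 0.25e-3):.1f} dB")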

Optical Cable Testers


An optical cable tester contains a visible light source and a connector compatible with the optical cable installation. A visible light source is used, so that detection can be done by eye. More advanced optical cable testers can verify the signal loss properties of an optical cable and connectors.