Unit – IV: ETHERNET & SWITCHING TECHNIQUES
Circuit Switching, Packet Switching, Message Switching; Ethernet: Overview of Ethernet, 10Base, 100Base, Fast Ethernet, PoE, FDDI, Token Ring, VLAN and its features, Frame Relay, CSMA/CD, CSMA/CA, Flow Control, Error Control, Congestion Control, Half and Full Duplex Communication.
==========================================================
Circuit Switching and Packet Switching:
Communication via circuit switching implies that there is a dedicated communication path between two stations. That path is a connected sequence of links between network nodes, and on each physical link a logical channel is dedicated to the connection.
Communication via circuit switching involves three phases:
- Circuit establishment: before any signals can be transmitted, an end-to-end (station-to-station) circuit must be established.
- Data transfer: data can now be transmitted from the source through the network to the destination.
- Circuit disconnect: after some period of data transfer, the connection is terminated, usually by the action of one of the two stations.
In circuit switching, network resources (bandwidth) are divided into pieces and the bit delay is constant during a connection. The dedicated path/circuit established between sender and receiver provides a guaranteed data rate, and data can be transmitted without any delay once the circuit is established.
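To make the three phases concrete, here is a minimal Python sketch of a circuit-switched connection lifecycle. The CircuitSwitchedNetwork class and its method names are purely illustrative assumptions, not part of any real API.

# Minimal sketch of the three phases of circuit switching
# (class and method names are hypothetical, for illustration only).

class CircuitSwitchedNetwork:
    def __init__(self):
        self.circuits = {}              # circuit id -> (source, destination)
        self.next_id = 0

    def establish(self, source, destination):
        """Phase 1: circuit establishment (reserve an end-to-end path)."""
        circuit_id = self.next_id
        self.next_id += 1
        self.circuits[circuit_id] = (source, destination)
        return circuit_id

    def transfer(self, circuit_id, data):
        """Phase 2: data transfer over the dedicated circuit."""
        source, destination = self.circuits[circuit_id]
        print(f"{source} -> {destination}: {data}")

    def disconnect(self, circuit_id):
        """Phase 3: circuit disconnect (release the reserved resources)."""
        del self.circuits[circuit_id]

network = CircuitSwitchedNetwork()
cid = network.establish("A", "B")       # circuit establishment
network.transfer(cid, "hello")          # data transfer
network.disconnect(cid)                 # circuit disconnect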
The telephone network is one example of circuit switching. TDM (Time Division Multiplexing) and FDM (Frequency Division Multiplexing) are two methods of multiplexing multiple signals onto a single carrier.
Frequency Division Multiplexing (FDM): divides the carrier into multiple frequency bands. FDM is used when multiple data signals are combined for simultaneous transmission over a shared communication medium. It is a technique by which the total bandwidth is divided into a series of non-overlapping frequency sub-bands, where each sub-band carries a different signal. It is used in practice in the radio spectrum and in optical fiber to share multiple independent signals.
Time Division Multiplexing (TDM): divides the carrier into time frames. TDM is a method of transmitting and receiving independent signals over a common signal path by means of synchronized switches at each end of the transmission line. TDM is used on long-distance communication links that carry heavy data traffic loads from end users.
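As a rough illustration of synchronous TDM, the following sketch interleaves samples from several input signals into fixed time slots of a shared frame and then demultiplexes them again; the three input "signals" are made-up sample sequences.

# Rough sketch of synchronous TDM: each input gets one slot per frame.
# The three input "signals" below are made-up sample sequences.

signals = {
    "A": [1, 2, 3],
    "B": [4, 5, 6],
    "C": [7, 8, 9],
}

# Multiplex: build one frame per sampling instant, one slot per signal.
frames = [
    [samples[i] for samples in signals.values()]
    for i in range(3)
]
print(frames)        # [[1, 4, 7], [2, 5, 8], [3, 6, 9]]

# Demultiplex: the receiver recovers each signal from its fixed slot.
recovered = {
    name: [frame[slot] for frame in frames]
    for slot, name in enumerate(signals)
}
print(recovered)     # {'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9]}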
Packet Switching:
Packet switching is a method of transferring data across a network in the form of packets. In order to transfer a file quickly and efficiently over the network and to minimize transmission latency, the data is broken into small pieces of variable length, called packets. At the destination, all these small parts (packets) belonging to the same file have to be reassembled. A packet consists of a payload and various control information. No pre-setup or reservation of resources is needed.
Packet switching uses the store-and-forward technique while switching the packets: each hop first stores a packet and then forwards it. This technique is very useful because packets may get discarded at any hop for some reason. More than one path is possible between a pair of source and destination. Each packet contains the source and destination addresses, using which it travels through the network independently. In other words, packets belonging to the same file may or may not travel through the same path. If there is congestion on some path, packets are allowed to choose a different path available in the existing network.
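The following is a minimal sketch, not tied to any particular protocol, of how a message might be split into packets that carry source, destination, and sequence-number headers and then be reassembled at the destination even if they arrive out of order.

# Minimal sketch of packetization and reassembly (no real protocol implied).

def packetize(message, src, dst, payload_size):
    """Split a message into packets carrying header fields and a payload."""
    return [
        {"src": src, "dst": dst, "seq": seq, "payload": message[i:i + payload_size]}
        for seq, i in enumerate(range(0, len(message), payload_size))
    ]

def reassemble(packets):
    """Reorder packets by sequence number and concatenate their payloads."""
    return "".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = packetize("hello, packet switching", src="H1", dst="H2", payload_size=5)
packets.reverse()                    # simulate out-of-order arrival
print(reassemble(packets))           # hello, packet switching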
Packet-Switched
networks were designed to overcome the weaknesses of Circuit-Switched networks
since circuit-switched networks were not very effective for small messages.
Advantages of Packet Switching over Circuit Switching:
- More efficient in terms of bandwidth, since there is no concept of reserving a circuit.
- Minimal transmission latency.
- More reliable, as the destination can detect missing packets.
- More fault tolerant, because packets may follow a different path if any link is down, unlike circuit switching.
- Cost effective and comparatively cheaper to implement.
Disadvantages of Packet Switching over Circuit Switching:
- Packet switching does not deliver packets in order, whereas circuit switching provides ordered delivery because all packets follow the same path.
- Since the packets are unordered, each packet needs to carry a sequence number.
- Complexity is higher at each node because of the ability to follow multiple paths.
- Transmission delay is higher because of rerouting.
- Packet switching is beneficial only for small messages; for bursty data (large messages), circuit switching is better.
Message Switching:
Message switching is a technique developed as an alternative to circuit switching, before packet switching was introduced. In message switching, end users communicate by sending and receiving messages that include the entire data to be shared. Messages are the smallest individual unit. Also, the sender and receiver are not directly connected; a number of intermediate nodes transfer the data and ensure that the message reaches its destination. Message-switched data networks are hence called hop-by-hop systems.
Message switching is
advantageous as it enables efficient usage of network resources. Also, because
of the store-and-forward capability of intermediary nodes, traffic can be
efficiently regulated and controlled. Message delivery as one unit, rather than
in pieces, is another benefit.
However, message switching has certain disadvantages as well. Since messages are stored indefinitely at each intermediate node, switches require large storage capacity. Message switching is also quite slow: at each node, the entire message must first be received and stored, and only then forwarded on toward the next node, depending on the availability of that node and the traffic on the link. Hence, message switching cannot be used for real-time or interactive applications such as video conferencing.
The store-and-forward method was implemented in telegraph message switching centres. Today, although many major networks and systems are packet-switched or circuit-switched, their delivery processes can be based on message switching. For example, in most electronic mail systems the delivery process is based on message switching, while the underlying network is in fact either circuit-switched or packet-switched.
Overview of Ethernet:
Ethernet is the
technology that is most commonly used in wired local area networks (LANs). A
LAN is a network of computers and other electronic devices that covers a small
area such as a room, office, or building. It is used in contrast to a wide area
network (WAN), which spans much larger geographical areas. Ethernet is a
network protocol that controls how data is transmitted over a LAN. Technically
it is referred to as the IEEE 802.3 protocol. The protocol has evolved and improved over time and can now transfer data at gigabit-per-second speeds and beyond.
Many people have used
Ethernet technology their whole lives without knowing it. It is most likely
that any wired network in your office, at the bank, and at home is an Ethernet
LAN. Most desktop and laptop computers come with an integrated Ethernet card
inside so they are ready to connect to an Ethernet LAN.
When a machine on the
network wants to send data to another, it senses the carrier, which is the main
wire connecting all the devices. If it is free, meaning no one is sending
anything, it sends the data packet on the network, and all other devices check
the packet to see whether they are the recipient. The recipient consumes the
packet. If there is already a packet on the wire, the device that wants to send holds back for a few thousandths of a second and tries again until it can send.
10Base-T:
One of several
adaptations of the Ethernet (IEEE 802.3) standard for Local Area Networks
(LANs). The 10Base-T standard (also called Twisted Pair Ethernet) uses a
twisted-pair cable with a maximum segment length of 100 meters. The cable is thinner and
more flexible than the coaxial cable used for the 10Base-2 or 10Base-5
standards.
Cables in the 10Base-T
system connect with RJ-45 connectors. A star topology is common with 12 or more
computers connected directly to a hub.
The 10Base-T system
operates at 10 Mbps and uses baseband transmission methods.
100Base-T (IEEE 802.3u) Fast Ethernet:
A networking standard
that supports data transfer rates up to 100 Mbps (100 megabits per second).
100BASE-T is based on the older Ethernet standard. Because it is 10 times
faster than Ethernet, it is often referred to as Fast Ethernet. Officially, the
100BASE-T standard is IEEE 802.3u.
Like Ethernet, 100BASE-T is based on the CSMA/CD (Carrier Sense Multiple Access with Collision Detection) LAN access method. There are several different cabling schemes that can be used with 100BASE-T, including:
- 100BASE-TX: two pairs of high-quality twisted-pair wires
- 100BASE-T4: four pairs of normal-quality twisted-pair wires
- 100BASE-FX: fiber optic cables
Power over Ethernet (PoE):
Power over Ethernet (PoE) is a networking feature that lets a single Cat5e/Cat6 Ethernet cable carry electrical power alongside the existing data connection.
PoE technology relies
on the IEEE 802.3af and 802.3at standards, which are set by the Institute of
Electrical and Electronics Engineers and govern how networking equipment should
operate in order to promote interoperability between devices.
PoE-capable devices can
be power sourcing equipment (PSE), powered devices (PDs), or sometimes both.
The device that transmits power is a PSE, while the device that is powered is a
PD. Most PSEs are either network switches or PoE injectors intended for use
with non-PoE switches.
Common examples of PDs
include VoIP phones, wireless access points, and IP cameras.
Token Ring:
This is a 4-Mbps or 16-Mbps token-passing method, operating in a ring topology. Devices on a Token Ring network get access to the medium through token passing. The token and data pass to each station on the ring. The devices pass the token around the ring until one of the computers that wants to transmit data takes the token and replaces it with a frame. Each device passes the frame to the next device until the frame reaches its destination. As the frame passes the intended recipient, the recipient sets certain bits in the frame to indicate that it received the frame. The original sender of the frame then strips the frame off the ring and issues a new token.
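To illustrate the token-passing idea, here is a toy Python sketch in which stations pass a token around a ring and a station transmits only when it holds the token; the station names and the simplified frame handling are assumptions for the example, not the IEEE 802.5 procedure.

# Toy sketch of token passing on a ring (illustrative only, not IEEE 802.5).

stations = ["A", "B", "C", "D"]          # stations arranged in a ring
pending = {"C": ("A", "hello")}          # C wants to send "hello" to A

token_holder = 0                         # index of the station holding the token
for _ in range(len(stations)):
    name = stations[token_holder]
    if name in pending:
        dst, data = pending.pop(name)
        print(f"{name} seizes the token and sends a frame to {dst}: {data}")
        print(f"{name} strips the frame off the ring and issues a new token")
    else:
        print(f"{name} passes the token on")
    token_holder = (token_holder + 1) % len(stations)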
Fast Ethernet:
This is an extension of the 10-Mbps Ethernet standard and supports speeds up to 100 Mbps. The access method used is CSMA/CD. For physical connections, a star wiring topology is used. Fast Ethernet has become very popular because upgrading a 10-Mbps Ethernet LAN to a Fast Ethernet LAN is quite easy.
FDDI (Fiber Distributed Data Interface):
FDDI provides data speeds of 100 Mbps, which is faster than Token Ring and Ethernet LANs. FDDI comprises two independent, counter-rotating rings: a primary ring and a secondary ring. Data flows in opposite directions on the two rings. The counter-rotating ring architecture prevents data loss in the event of a link failure, a node failure, or the failure of both the primary and secondary links between any two nodes. This technology is usually implemented for a backbone network.
VLANs and Features:
The primary role of VLANs is to enable easier configuration and management of large corporate networks built around many bridges.
A virtual LAN (VLAN) is a software feature used to provide multiple logical networks on a single switch by grouping the terminals connected to switching hubs. It is a LAN whose members are grouped together by logical addresses into a virtual LAN instead of a physical LAN through a switch. A switch can support many virtual LANs that operate with different network addresses or as subnets. Users within a virtual LAN are grouped either by IP address or by port, with each node attached to the switch via a dedicated circuit. Users can also be assigned to more than one virtual LAN.
A VLAN can be defined as a broadcast domain in which a broadcast reaches all stations belonging to that VLAN. Communication within a VLAN can be secured, and communication between two separate VLANs can be controlled.
A router is generally
required to establish communication between VLANs.
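As a rough illustration of port-based VLAN grouping, the sketch below assigns switch ports to VLAN IDs and allows direct forwarding only between ports in the same VLAN; the port numbers and VLAN IDs are invented for the example.

# Rough sketch of port-based VLAN membership on a single switch
# (port numbers and VLAN IDs are invented for illustration).

port_vlan = {1: 10, 2: 10, 3: 20, 4: 20, 5: 10}

def same_vlan(port_a, port_b):
    """Frames are forwarded directly only within the same broadcast domain."""
    return port_vlan[port_a] == port_vlan[port_b]

print(same_vlan(1, 5))   # True  -> switched directly within VLAN 10
print(same_vlan(1, 3))   # False -> needs a router to cross VLAN boundaries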
Features of VLANs:
VLANs provide a number
of features:
- Simplified administration for the network manager: One of the best things about virtualization is that it simplifies management. By logically grouping users into the same virtual networks, you make it easy to set up and control your policies at a group level. When users physically move workstations, you can keep them on the same network with different equipment. Or if someone changes teams but not workstations, they can easily be given access to whatever new VLANs they need.
- Improved security: Using VLANs improves security by reducing both internal and external threats. Internally, separating users improves security and privacy by ensuring that users can only access the networks that apply to their responsibilities. External threats are also minimized. If an outside attacker is able to gain access to one VLAN, they’ll be contained to that network by the boundaries and controls you have in place to segment it from your others.
- Easier fault management: Troubleshooting problems on the network can be simpler and faster when your different user groups are segmented and isolated from one another. If you know that complaints are only coming from a certain subset of users, you’ll be able to quickly narrow down where to look to find the issue.
- Improved quality of service: VLANs manage traffic more efficiently so that your end users experience better performance. You’ll have fewer latency problems on your network and more reliability for critical applications.
Frame Relay:
Frame relay is a
packet-switching telecommunication service designed for cost-efficient data transmission
for intermittent traffic between local area networks (LANs) and between
endpoints in wide area networks (WANs).
Frame relay puts data
in a variable-size unit called a frame and leaves any necessary error
correction (retransmission of data) up to the endpoints, which speeds up
overall data transmission. For most services, the network provides a permanent
virtual circuit (PVC), which means that the customer sees a continuous,
dedicated connection without having to pay for a full-time leased line, while
the service provider figures out the route each frame travels to its
destination and can charge based on usage. Switched virtual circuits (SVC), by
contrast, are temporary connections that are destroyed after a specific data
transfer is completed.
Frame relay supports
multiplexing of traffic from multiple connections over a shared physical link.
It uses hardware components including frame routers, bridges, and switches to
package data into individual frame relay messages. Each connection uses a 10-bit
data link connection identifier (DLCI) for unique channel addressing.
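As a hedged illustration of DLCI addressing, the sketch below extracts the 10-bit DLCI from the standard two-byte Frame Relay address field, in which the six high-order DLCI bits sit in the first byte and the four low-order bits in the second; the other header bits (C/R, FECN, BECN, DE, EA) are ignored for simplicity, and the example values are made up.

# Sketch: extract the 10-bit DLCI from a standard 2-byte Frame Relay
# address field (upper 6 bits in the first byte, lower 4 in the second).
# Other header bits (C/R, FECN, BECN, DE, EA) are ignored here.

def dlci_from_header(byte1: int, byte2: int) -> int:
    upper6 = (byte1 >> 2) & 0x3F     # 6 high-order DLCI bits
    lower4 = (byte2 >> 4) & 0x0F     # 4 low-order DLCI bits
    return (upper6 << 4) | lower4    # 10-bit DLCI, range 0..1023

# Example: DLCI 100 encoded with all flag bits clear and EA bits 0 and 1.
b1 = (100 >> 4) << 2                 # upper 6 bits, C/R=0, EA=0
b2 = ((100 & 0x0F) << 4) | 0x01      # lower 4 bits, FECN=BECN=DE=0, EA=1
print(dlci_from_header(b1, b2))      # 100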
There are two
connection types:
- Permanent virtual circuits (PVC) for persistent connections intended to be maintained for long periods even if no data is actively transferred.
- Switched virtual circuits (SVC) for temporary connections that last only for a single session.
Carrier Sense Multiple Access (CSMA):
This method was developed to decrease the chance of collisions when two or more stations start sending their signals over the shared medium at the data link layer. Carrier sense multiple access requires that each station first check the state of the medium before sending.
Vulnerable time: vulnerable time = propagation time (Tp).
The persistence methods (1-persistent, non-persistent, and p-persistent) can be applied to help the station decide what to do when the channel is busy or idle.
Carrier Sense Multiple Access with Collision Detection (CSMA/CD):
In CSMA/CD, a station monitors the medium after it sends a frame to see whether the transmission was successful. If it was successful, the station is finished; if not, the frame is sent again.
In the classic timing scenario, station A starts sending the first bit of its frame at time t1, and station C, seeing the channel idle at time t2, starts sending its own frame at t2. C detects A's frame at t3 and aborts its transmission; A detects C's frame at t4 and aborts its transmission. The transmission time for C's frame is therefore t3 - t2, and for A's frame it is t4 - t1.
So, the frame transmission time (Tfr) should be at least twice the maximum propagation time (Tp), i.e. Tfr >= 2 x Tp. This worst case occurs when the two stations involved in the collision are the maximum distance apart.
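As a worked example of the Tfr >= 2Tp condition, the short calculation below uses illustrative numbers (a 2500 m network span, a propagation speed of 2 x 10^8 m/s, and a 10 Mbps data rate) to estimate the minimum frame size; the exact figures are assumptions chosen for the example.

# Worked example of Tfr >= 2 * Tp with illustrative numbers
# (2500 m span, 2e8 m/s propagation speed, 10 Mbps data rate).

distance = 2500            # metres (assumed maximum span)
speed = 2e8                # metres per second (assumed propagation speed)
rate = 10e6                # bits per second (10 Mbps)

tp = distance / speed               # one-way propagation time
tfr_min = 2 * tp                    # minimum frame transmission time
min_frame_bits = tfr_min * rate     # minimum frame size in bits

print(f"Tp = {tp * 1e6:.1f} microseconds")                 # 12.5 microseconds
print(f"Tfr >= {tfr_min * 1e6:.1f} microseconds")          # 25.0 microseconds
print(f"Minimum frame size = {min_frame_bits:.0f} bits")   # 250 bits

For comparison, real 10 Mbps Ethernet budgets a larger worst-case round-trip delay and therefore specifies a 512-bit (64-byte) minimum frame.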
The entire process of
collision detection can be explained as follows:
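In place of the flowchart usually shown at this point, the following hedged Python sketch outlines the CSMA/CD sender loop with binary exponential backoff; the channel functions are placeholders rather than a real API, and the attempt limit is an assumption loosely based on Ethernet practice.

# Sketch of the CSMA/CD transmission procedure (placeholder channel API).
import random

MAX_ATTEMPTS = 15   # assumed limit before the frame is abandoned

def csma_cd_send(frame, channel, slot_time):
    attempts = 0
    while attempts <= MAX_ATTEMPTS:
        while channel.busy():                 # 1-persistent carrier sense
            pass
        channel.start_transmission(frame)     # send and keep listening
        if not channel.collision_detected():
            return True                       # transmission successful
        channel.send_jam_signal()             # tell everyone a collision occurred
        attempts += 1
        k = min(attempts, 10)                 # binary exponential backoff
        wait_slots = random.randint(0, 2 ** k - 1)
        channel.wait(wait_slots * slot_time)
    return False                              # give up after too many attempts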
Throughput and efficiency: the throughput of CSMA/CD is much greater than that of pure or slotted ALOHA.
- For the 1-persistent method, the throughput is about 50% when G = 1.
- For the non-persistent method, the throughput can go up to 90%.
Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA):
The basic idea behind collision detection is that a station should be able to receive while transmitting in order to detect a collision caused by other stations. In wired networks, when a collision occurs the energy of the received signal almost doubles, so the station can sense the possibility of a collision. In wireless networks, most of the energy is used for transmission and the energy of the received signal increases by only 5-10% if a collision occurs, so this cannot be used by the station to sense a collision. Therefore, CSMA/CA has been specially designed for wireless networks.
CSMA/CA uses three strategies:
- InterFrame Space (IFS): when a station finds the channel busy, it waits for a period of time called the IFS. The IFS can also be used to define the priority of a station or a frame: the higher the IFS, the lower the priority.
- Contention window: an amount of time divided into slots. A station that is ready to send frames chooses a random number of slots as its wait time.
- Acknowledgements: positive acknowledgements and a time-out timer help guarantee successful transmission of the frame.
The entire process for
collision avoidance can be explained as follows:
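In place of the flowchart normally shown here, the following hedged sketch outlines the CSMA/CA sender procedure with IFS waiting, a contention-window backoff, and an acknowledgement timeout; the channel and timer functions are placeholders rather than a real wireless API, and the window sizes are illustrative.

# Sketch of the CSMA/CA transmission procedure (placeholder channel API).
import random

def csma_ca_send(frame, channel, ifs, slot_time, ack_timeout, max_attempts=7):
    cw = 1                                      # contention window (in slots)
    for attempt in range(max_attempts):
        while channel.busy():                   # carrier sense
            pass
        channel.wait(ifs)                       # wait the interframe space
        backoff = random.randint(0, cw - 1)     # choose random slots to wait
        channel.wait(backoff * slot_time)
        channel.transmit(frame)
        if channel.wait_for_ack(ack_timeout):   # positive acknowledgement
            return True
        cw = min(cw * 2, 1024)                  # no ACK: enlarge the window
    return False                                # give up after max_attempts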
Flow Control and Congestion Control:
Flow control and congestion control are traffic-controlling methods used in different situations. The main difference between them is that flow control regulates the traffic flowing from a sender to a receiver, whereas congestion control regulates the traffic entering the network.
The differences between flow control and congestion control are summarised below:
- Flow control regulates the traffic flowing from a sender to a receiver; congestion control regulates the traffic entering the network.
- Flow control is handled by the data link layer and the transport layer; congestion control is handled by the network layer and the transport layer.
- Flow control prevents the receiver from being overwhelmed with data; congestion control prevents the network from becoming congested.
- In flow control, only the sender is responsible for the traffic; in congestion control, the transport layer is responsible for the traffic.
- In flow control, traffic is limited by having the sender send more slowly; in congestion control, traffic is limited by having the transport layer transmit more slowly.
Error Control in TCP:
The TCP protocol has methods for detecting corrupted segments, missing segments, out-of-order segments and duplicated segments.
Error control in TCP is achieved mainly through the use of three simple techniques:
Checksum: every segment contains a checksum field, which is used to detect a corrupted segment. If a segment is corrupted, it is discarded by the destination TCP and treated as lost.
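For illustration, here is a minimal sketch of the 16-bit one's-complement Internet checksum that TCP uses; this simplified version operates on a raw byte string and omits the pseudo-header and the handling of the in-header checksum field, and the example segment bytes are made up.

# Minimal sketch of the 16-bit one's-complement Internet checksum used by TCP
# (pseudo-header and in-header checksum field handling are omitted here).

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:                                # pad odd lengths with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]        # add the next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)     # fold any carry back in
    return ~total & 0xFFFF                           # one's complement of the sum

segment = b"example TCP segment data"
sent_checksum = internet_checksum(segment)
# The receiver recomputes the checksum over the received bytes and compares:
print(internet_checksum(segment) == sent_checksum)   # True -> segment accepted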
Acknowledgement: TCP uses acknowledgements to confirm that data segments have been delivered. Control segments that contain no data but consume a sequence number are acknowledged as well, but pure ACK segments are not acknowledged.
Retransmission: when a segment is missing, delayed too long to reach the receiver, or found corrupted when the receiver checks it, that segment is retransmitted. Segments are retransmitted in only two cases: when the sender receives three duplicate acknowledgements (ACKs) or when a retransmission timer expires.
Retransmission after RTO: TCP maintains one retransmission time-out (RTO) timer for all sent but not-yet-acknowledged segments. When the timer expires, the earliest outstanding segment is retransmitted; no timer is set for acknowledgements. In TCP, the RTO value is dynamic and is updated using the round-trip time (RTT) of segments. The RTT is the time needed for a segment to reach the receiver and for an acknowledgement to come back to the sender.
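To show how the RTO can be derived from measured RTTs, here is a sketch that follows the widely used smoothed-RTT approach (exponential averaging of the RTT and of its variation, with RTO set to the smoothed RTT plus four times the variation); the constants, the omission of minimum/maximum bounds, and the sample RTT values are illustrative assumptions.

# Sketch of dynamic RTO estimation from measured RTTs, following the common
# smoothed-RTT approach (alpha = 1/8, beta = 1/4, RTO = SRTT + 4 * RTTVAR).

def update_rto(srtt, rttvar, rtt_sample, alpha=0.125, beta=0.25):
    if srtt is None:                         # first measurement
        srtt, rttvar = rtt_sample, rtt_sample / 2
    else:
        rttvar = (1 - beta) * rttvar + beta * abs(srtt - rtt_sample)
        srtt = (1 - alpha) * srtt + alpha * rtt_sample
    rto = srtt + 4 * rttvar
    return srtt, rttvar, rto

srtt, rttvar = None, None
for rtt in [0.100, 0.120, 0.090, 0.300]:     # illustrative RTT samples (seconds)
    srtt, rttvar, rto = update_rto(srtt, rttvar, rtt)
    print(f"RTT sample {rtt:.3f}s -> SRTT {srtt:.3f}s, RTO {rto:.3f}s")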
Retransmission after three duplicate ACK segments: the RTO method works well when the RTO value is small. If it is large, more time is needed to learn whether a segment has been delivered. Sometimes a single segment is lost while the receiver keeps receiving so many out-of-order segments that it cannot buffer them all. To handle this situation, the three-duplicate-acknowledgement method is used: the missing segment is retransmitted immediately instead of waiting for the timer, and without retransmitting segments that were already delivered. This is called fast retransmission because it makes it possible to retransmit lost segments quickly instead of waiting for the timer to expire.
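A hedged sketch of the duplicate-ACK counting behind fast retransmit is shown below; the retransmit function and the ACK numbers are placeholders for whatever the sender actually does and sees.

# Sketch of fast retransmit: resend after three duplicate ACKs
# (retransmit() is a placeholder for the sender's resend action).

def retransmit(seq):
    print(f"fast retransmit of the segment starting at {seq}")

def process_acks(acks):
    last_ack, dup_count = None, 0
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == 3:          # third duplicate ACK triggers resend
                retransmit(ack)
        else:
            last_ack, dup_count = ack, 0

# ACKs stuck at 2000 indicate the segment starting at 2000 was lost.
process_acks([1000, 2000, 2000, 2000, 2000, 5000])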
Transmission Modes:
Transmission mode refers to how data is transferred between two devices; it is also known as the communication mode. Buses and networks are designed to allow communication to occur between individual devices that are interconnected.
There are three types of transmission modes:
- Simplex Mode
- Half-Duplex Mode
- Full-Duplex Mode
Simplex Mode:
In simplex mode, the communication is unidirectional, as on a one-way street. Only one of the two devices on a link can transmit; the other can only receive. The simplex mode can use the entire capacity of the channel to send data in one direction.
Example: keyboard and traditional monitor. The keyboard can only introduce input; the monitor can only give output.
Half-Duplex Mode:
In half-duplex mode, each station can both transmit and receive, but not at the same time. When one device is sending, the other can only receive, and vice versa. The half-duplex mode is used where there is no need for communication in both directions at the same time, and the entire capacity of the channel can be utilized by the direction currently in use.
Example: a walkie-talkie, in which messages are sent one at a time but flow in both directions.
Full-Duplex Mode:
In full-duplex mode, both stations can transmit and receive simultaneously. Signals going in one direction share the capacity of the link with signals going in the other direction. This sharing can occur in two ways:
Either the link must contain two physically separate transmission paths, one for sending and the other for receiving, or the capacity of the link is divided between the signals travelling in the two directions.
Full-duplex mode is used when communication in both directions is required all the time; the capacity of the channel, however, must be divided between the two directions.
Example: the telephone network, in which two people communicate over a telephone line and both can talk and listen at the same time.