Conventional TDM systems usually employ either
Bit-Interleaved or Byte-Interleaved multiplexing schemes.
Bit-Interleaved Multiplexing
In Bit-Interleaved TDM, a single data bit from an I/O port is output to the
aggregate channel. This is followed by a data bit from another I/O port
(channel), and so on, with the process repeating. A "time slice" is reserved on the aggregate channel for each
individual I/O port. Since these "time slices" for each I/O port are
known to both the transmitter and receiver, the only requirement is for the
transmitter and receiver to be in-step; that is to say, being at the right
place (I/O port) at the right time. This is accomplished through the use of a
synchronization channel between the two multiplexers. The synchronization
channel transports a fixed pattern that the receiver uses to acquire synchronization.
Total I/O bandwidth (expressed in Bits Per Second - BPS) cannot exceed that
of the aggregate (minus the bandwidth requirements for the synchronization
channel).
Bit-Interleaved TDM is simple and efficient and requires little or no
buffering of I/O data. A single data bit from each I/O channel is
sampled, then interleaved and output in a high-speed data stream. The main
disadvantage of bit-interleaved transmission is that extracting a single
lower-order channel from the stream requires the entire de-multiplexing
process at the receiver. This is overcome by the byte-interleaved mechanism
explained later.
The T-carrier and E-carrier multiplexers all use Bit-Interleaved
Multiplexing. Shown below is a table containing the T-carrier and E-carrier hierarchies.
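As an illustrative sketch (not part of the original text), round-robin bit interleaving can be written in a few lines of Python, assuming each tributary is available as an equal-length list of bits and ignoring the synchronization channel:

```python
def bit_interleave(channels):
    """Bit-interleaved TDM: take one bit from each I/O channel in turn.

    channels: equal-length lists of bits, one list per tributary.
    Returns the aggregate bit stream (synchronization bits omitted).
    """
    return [bit for time_slice in zip(*channels) for bit in time_slice]

# Two tributaries, interleaved bit by bit:
# channel A: 1 0 1   channel B: 0 1 1  ->  1 0 0 1 1 1
aggregate = bit_interleave([[1, 0, 1], [0, 1, 1]])
```

Note that recovering channel B alone still requires walking the whole aggregate stream, which is exactly the disadvantage described above.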
Byte-Interleaved Multiplexing
In Byte-Interleaved multiplexing, complete words (bytes) from the I/O
channels are placed sequentially, one after another, onto the high speed
aggregate channel. Again, a synchronization channel is used to synchronize the
multiplexers at each end of the communications facility. Here, individual user data can be picked off by intermediate add-drop multiplexers without full demultiplexing, using specific pointers, as in SDH or SONET multiplexing.
Examples of byte-interleaved multiplexing include SDH (Synchronous
Digital Hierarchy) and SONET (Synchronous Optical Network).
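For contrast, a minimal byte-interleaved sketch (again illustrative only, with the synchronization overhead omitted) places a whole byte from each channel in turn:

```python
def byte_interleave(channels):
    """Byte-interleaved TDM: place one whole byte from each channel in turn."""
    aggregate = bytearray()
    for time_slice in zip(*channels):   # one byte from every channel
        aggregate.extend(time_slice)
    return bytes(aggregate)

# bytes from channel A and channel B alternate on the aggregate
frame = byte_interleave([b"ab", b"cd"])   # b"acbd"
```

Because each channel's data stays byte-aligned at a known offset, an add-drop multiplexer can extract one channel without demultiplexing the rest.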
E1 is the European standard and is the framing format that
is widely used in almost all countries outside the USA,
Canada and Japan. The E1
frame is composed of 32 timeslots or channels. Timeslots are also called DS0s.
Each timeslot is 8 bits. Therefore, the E1 frame will be (32 timeslots * 8
bits) = 256 bits. Each timeslot has a data rate of 64,000 bits/second (8000
samples/sec * 8 bits/sample). As each timeslot has to be repeated every 1/8000
sec, or 125 microseconds, the frame period is also 125 microseconds. Therefore
the E1 line rate will be (32 channels * 8 bits/channel) per frame * 8000
frames/second = 2,048,000 bits/second, or 2.048 Mbps.
Timeslot 0 is used for frame synchronization and alarms.
Timeslot 16 is used for signaling, alarms, or data. Timeslot 1 to 15 and 17 to
31 are used for carrying data.
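The E1 arithmetic above can be checked with a few lines (a sketch; the variable names are mine):

```python
TIMESLOTS = 32            # timeslots (DS0s) per E1 frame
BITS_PER_SLOT = 8
FRAMES_PER_SECOND = 8000  # one frame every 125 microseconds

frame_bits = TIMESLOTS * BITS_PER_SLOT          # 256 bits per frame
ds0_rate = BITS_PER_SLOT * FRAMES_PER_SECOND    # 64,000 bit/s per timeslot
e1_rate = frame_bits * FRAMES_PER_SECOND        # 2,048,000 bit/s
```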
An alarm is a
response to an error on the E1 line or framing. Three of the conditions that
cause alarms are loss of frame alignment (LFA), loss of multi-frame alignment
(LFMA), and loss of signal (LOS). The LFA condition, also called an
out-of-frame (OOF) condition, and LFMA condition occur when there are errors in
the incoming framing pattern. The number of bit errors that provokes the
condition depends on the framing format. The LOS condition occurs when no
pulses have been detected on the line for between 100 and 250 bit times. This is
the highest-level alarm state, where nothing at all is detected on the line. LOS may
occur when a cable is not plugged in or the far end equipment, which is the
source of the signal, is out of service. The alarm indication signal (AIS) and
remote alarm indication (RAI) alarms are responses to the LOS, LFA, and LFMA
conditions. The RAI alarm is transmitted on LFA, LFMA, or LOS. RAI will be
transmitted back to the far end that is transmitting frames in error. The AIS
condition is a response to error conditions also. The AIS response is an
unframed all 1's pattern on the line to the remote host. It is used to tell the
far end it is still alive. AIS is the blue alarm and RAI is the yellow alarm. A
red alarm can occur after an LFA has existed for 2.5 seconds; it is cleared
after the LFA has been clear for at least one second.
E1 Double Frame
There are two E1 frame formats, the double frame and the multi-frame. The
synchronization methods are different in the two frame formats. In double frame format, synchronization
can be achieved after the receipt of three E1 frames. The
synchronization information is carried in timeslot 0. This is called the frame
alignment signal (FAS).
The FAS is a pattern "0011011" that specifies the alignment of a
frame. The FAS occupies bits 2 through 8 of timeslot 0 in alternate frames
(frame N). In the other frames (i.e., frame N+1), bit 2 is set to 1. Frame
alignment is reached if there is:
A correct FAS word in frame N.
Bit 2 = 1 in frame N+1.
A correct FAS word in frame N+2.
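The three-frame alignment rule can be sketched as follows (my own helper, assuming timeslot 0 of each frame is available as an 8-bit integer with bit 1 as the most significant bit):

```python
FAS = 0b0011011   # the 7-bit frame alignment signal, bits 2-8 of timeslot 0

def double_frame_aligned(ts0_n, ts0_n1, ts0_n2):
    """Apply the double-frame alignment rule over frames N, N+1, N+2."""
    return ((ts0_n  & 0x7F) == FAS       # correct FAS word in frame N
        and (ts0_n1 & 0x40) == 0x40      # bit 2 = 1 in frame N+1
        and (ts0_n2 & 0x7F) == FAS)      # correct FAS word in frame N+2

ok = double_frame_aligned(0x1B, 0x40, 0x1B)   # True: alignment achieved
```

Bit 1 of timeslot 0 is masked off by the check, since in double frame format it is not part of the FAS.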
What happens if synchronization is not achieved or has been
achieved and then lost? This condition is called loss of frame alignment (LFA). If three consecutive frame alignment
signals are received in error, an LFA is declared. The near end must then indicate to the far end that there is an alignment problem.
This is done with the RAI alarm. The A bit (bit 3) in all N+1 frames is used
for sending the RAI alarm to the far-end equipment.
E1 Multi-frame
In multi-frame format, the synchronization requires 16
consecutive good frames. The multi-frame structure also has two extra features.
It provides channel associated signaling (CAS) and a cyclic redundancy check
(CRC). CAS is sent in timeslot 16 of each frame; this CAS information can
denote the on-hook and off-hook conditions of telephone calls. Figure 1 shows how CAS information is sent.
Figure 1
In frame 1, the information for channels 1 and 16 is sent. In frame 2, the
information for channels 2 and 17 is sent. Only 4 bits are used to denote on-hook
and off-hook conditions. Of the four bits, not all are always used. Refer to Figure 2 for the definitions of the ABCD bits for on hook/off hook conditions.
Notice that timeslot 16 of frame 0 does not send this information.
Figure 2
The second extra feature of the multi-frame is the addition of a CRC. This resides in
timeslot 0 (Figure 3). The Cx bits are for the four-bit CRC which
resides in bit 1 of frames 0, 2, 4, 6, 8, 10, 12, and 14. The E and S bits are
for international use.
Figure 3
The multi-frame alignment pattern is "001011". It is carried in bit 1
of frames 1, 3, 5, 7, 9, and 11; notice that this alignment signal is 1 bit in
each of these frames, read vertically. Multi-frame alignment is declared when
all 16 frames are received correctly. Also note that double frame alignment is
achieved before multi-frame alignment.
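The CRC-4 used here is based on the generator polynomial x^4 + x + 1. A bit-by-bit sketch of the remainder calculation (illustrative only; a real implementation runs over a full submultiframe with the CRC bit positions zeroed out):

```python
def crc4(bits):
    """CRC-4 remainder with generator x^4 + x + 1 (binary 10011)."""
    reg = 0
    for b in list(bits) + [0, 0, 0, 0]:   # append 4 zero bits for the CRC
        reg = (reg << 1) | b
        if reg & 0x10:                    # degree-4 term fell out: reduce
            reg ^= 0x13                   # XOR with the generator 0b10011
    return reg & 0x0F

crc4([1, 0, 0, 1, 1])   # the generator pattern divides evenly -> remainder 0
```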
If synchronization is not achieved, or has been achieved and then lost, an LFMA
condition will be declared. This denotes that the multi-frame alignment signal
was not received correctly in 16 frames. If double frame alignment has been
lost, LFA will also be declared. The LFMA and LFA conditions are handled
differently. When the LFMA condition exists at the near end of a link, the near
end will send an RAI alarm to the far end of the link. The RAI is transmitted
by setting the Y bit to 1 in timeslot 16 (Figure 4). The LFA alarm will be
handled as it is in double frame, by setting the A bit in bit 3 of every N+1 frame.
Figure 4
The AIS is sent as all 1's in the frame. All timeslots will be
filled with 1's. This is sent in double frame and multi-frame when the LFA
occurs. When the LFMA condition occurs, AIS will be sent only in timeslot 16.
Framing is the
process of combining the information (bits or bytes) before transmitting it to
the destination device. The generated collection of bits is called a frame. A
frame can contain information of a single user or of multiple users. It has
clearly defined boundaries to identify the start and end of a frame. In a TDM
based system, a frame is a set of consecutive time slots in which the position
of each bit or a time slot can be identified by reference to a frame-alignment
signal. There are many framing formats (of different speed and size) defined
worldwide and used across the telecommunication networks. Apart from the user
information, a frame may carry other special bits (or bytes) like framing bits
– to identify the start and end of a frame, address bits – to identify the
source or destination devices/users, error correction or detection bits, timing
or synchronization information, and other maintenance information bits. The two
basic TDM-based framing techniques are T1 and E1. The basic unit of the
T-carrier system is the DS0, which has a transmission rate of 64 kbit/s and is
commonly used for one voice circuit.
T1 is the North
American standard and is used in North America and Japan. T1 combines 24 separate
voice channels onto a single link. Each individual voice channel is called a
DS0, the basic unit of any TDM-based system. The T1 data stream is
broken into frames consisting of a single framing bit plus 24 channels of 8-bit
bytes (1 framing bit per frame + 24 channels per frame * 8 bits per channel =
193 bits per frame). The frames must repeat 8,000 times per second in order to
properly recreate the voice signal of individual users (24 such users). Thus,
the required bit rate for T1 is 1.544 Mbps (8,000 frames per second * 193 bits per
frame).
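As with E1, the T1 arithmetic can be checked directly (a sketch; names are mine):

```python
FRAMING_BITS = 1          # one F-bit per frame
CHANNELS = 24
BITS_PER_CHANNEL = 8
FRAMES_PER_SECOND = 8000

frame_bits = FRAMING_BITS + CHANNELS * BITS_PER_CHANNEL   # 193 bits per frame
t1_rate = frame_bits * FRAMES_PER_SECOND                  # 1,544,000 bit/s
```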
The T1
electrical interface consists of two pairs of wires - a transmit data pair and
a receive data pair. Timing information is embedded in the data. T1 utilizes
bipolar electrical pulses. Where most digital signals are either a ONE or a
ZERO (unipolar operation), T1 signals can be one of three states. The ZERO
voltage level is 0 volts, while the ONE voltage level can be either a positive
or a negative voltage.
Encoding Methods
There are a
number of different encoding methods used on T1 lines. Alternate Mark Inversion
(AMI), Bipolar With 8-Bit Substitution (B8ZS), and High Density Bipolar Three
Code (HDB3) will be discussed here.
AMI encoding causes the line to
alternate between positive and negative pulses for successive 1's. The 0's code
is no pulse at all. Thus, a data pattern of 11001011 would cause the following
pattern on an AMI line: +, -, 0, 0, +, 0, -, +. With this encoding technique
there is a problem with long strings of 0's in the user's data which produce no
transitions on the line. The receiving equipment needs to see transitions in
order to maintain synchronization. Because of this problem, DS-1 (the signal
carried via a T-carrier) specifications require that users limit the number of
consecutive 0's in their data stream to less than 15. With this scheme of
encoding there should never be consecutive positive or negative pulses on the
line (i.e., the following pattern should never occur: 0, +,-, +, +,-). If two
successive positive or two successive negative pulses appear on the line, it is
called a Bipolar Violation (BPV). Most T1 systems watch for this event and flag
it as an error when it occurs.
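The AMI rule above can be sketched in a few lines (an illustrative helper; the name is mine), with +1 and -1 standing for the positive and negative pulses:

```python
def ami_encode(bits):
    """Alternate Mark Inversion: 1s alternate +/- pulses, 0s send no pulse."""
    line, last_mark = [], -1
    for b in bits:
        if b:
            last_mark = -last_mark   # alternate the mark polarity
            line.append(last_mark)
        else:
            line.append(0)           # a 0 produces no pulse at all
    return line

ami_encode([1, 1, 0, 0, 1, 0, 1, 1])   # [+1, -1, 0, 0, +1, 0, -1, +1]
```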
B8ZS and HDB3 are both methods
which permit the user to send any pattern of data without affecting the
operation of the T1 line. Both of these encoding schemes make use of BPVs to
indicate that the user’s data contains a long string of 0's. B8ZS looks for a
sequence of eight successive 0's and substitutes a pattern of two successive
BPVs. The receiving station watches for this particular pattern of BPVs and
removes them to recreate the original user data stream.
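A sketch of the B8ZS substitution (my own illustration): each run of eight 0's is replaced by the pattern 000VB0VB, where V is a pulse that violates the alternation rule and B is a normal bipolar pulse, producing the two deliberate BPVs described above:

```python
def b8zs_encode(bits):
    """AMI line coding with B8ZS zero substitution (000VB0VB)."""
    line, last_mark, i = [], -1, 0
    while i < len(bits):
        if bits[i:i + 8] == [0] * 8:
            v = last_mark                     # V repeats the last polarity (BPV)
            b = -last_mark                    # B is a normal alternation
            line += [0, 0, 0, v, b, 0, b, v]  # second V repeats b (second BPV)
            i += 8                            # net line polarity is unchanged
        else:
            if bits[i]:
                last_mark = -last_mark
                line.append(last_mark)
            else:
                line.append(0)
            i += 1
    return line
```

The two violations land at the 4th and 7th positions of the substituted block, which is the signature pattern the receiver removes to restore the original zeros.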
HDB3 is the scheme recommended
by the CCITT. This scheme watches for a string of four successive 0's and
substitutes a pattern containing a single BPV on the line.
T1 Framing Techniques
SuperFrame (also called D4 Framing)
In order to determine where each
channel is located in the stream of data being received, each set of 24
channels is aligned in a frame. The frame is 192 bits long (8 * 24), and is
terminated with a 193rd bit, the framing bit, which is used to find the end of
the frame. In order for the framing bit to be located by receiving equipment, a
pattern is sent on this bit. Equipment will search for a bit which has the
correct pattern, and will align its framing based on that bit. The pattern sent
is 12 bits long, so every group of 12 frames is called a Super Frame. The
pattern used in the 193rd bit is 1000 1101 1100. In order to send supervisory
information over a D4 link "bit robbing" is used. A voice signal is
not significantly affected if the low-order bit in a byte is occasionally
wrong. D4 framing makes use of this characteristic of voice and uses the
least-significant bits in each channel of the 6th (A Bit) and 12th (B Bit)
frames to send signalling information: on-hook, off-hook, dialing, and busy
status.
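As an illustrative sketch (helper names are mine, and the framing bit is taken as the first bit of each 193-bit frame), the bit offsets of the robbed A and B signalling bits within one 2,316-bit D4 superframe can be computed:

```python
D4_FT_PATTERN = "100011011100"   # framing-bit pattern over the 12 frames

def robbed_bit_offsets():
    """Offsets of the robbed LSBs: frame 6 carries A bits, frame 12 B bits."""
    offsets = []
    for frame in (6, 12):
        base = (frame - 1) * 193          # frame start within the superframe
        for ch in range(24):              # skip the F-bit, then take each
            offsets.append(base + 1 + ch * 8 + 7)   # channel's low-order bit
    return offsets

offsets = robbed_bit_offsets()   # 24 A bits + 24 B bits = 48 robbed bits
```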
Extended Superframe (ESF) Framing
The Extended
Superframe Format (ESF) extends the D4 superframe from 12 frames to 24 frames.
ESF also redefines the 193rd bit location in order to add additional
functionality. In ESF the 193rd bit location serves three
different purposes:
Frame synchronization
Error detection
Maintenance communications (Facilities Data Link - FDL)
Within an ESF
superframe, 24 bits are available for these functions. Six are used for
synchronization, six are used for error detection, and twelve are used for
maintenance communications. In D4 framing, 12 bits are used per superframe for
synchronization. In ESF framing, 6 bits are used per superframe for
synchronization. There is no link-level error checking available with D4
framing (except for bipolar violations). ESF framing utilizes a 6-bit Cyclic
Redundancy Check (CRC) sequence to verify that the frame has been received
without any bit errors. As a superframe is transmitted, a 6-bit CRC character
is calculated for the frame. This character is then sent in the six CRC bit
locations of the next superframe. The receiving equipment uses the same
algorithm to calculate the CRC on the received superframe and then compares the
CRC value that it calculated with the CRC received in the next superframe. If
the two compare, then there is a very high probability that there were no bit
errors in transmission. As was stated earlier, 12 bits are used for maintenance
communications. These 12 bits give the maintenance communications channel a
capacity of 4,000 bits per second. This function enables the operators at the
network control center to interrogate the remote equipment for information on
the performance of the link. As with D4 framing, ESF utilizes "robbed
bits" for in-band signalling. ESF uses 4 frames per superframe for
this signalling. The 6th (A bit), 12th (B bit), 18th (C bit), and 24th (D bit)
frames are used for the robbed bits. The function of the robbed bits is the
same as in D4 framing.
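The three-way split of the ESF F-bits can be sketched as follows (frame positions per the standard ESF layout; the helper name is mine):

```python
def esf_fbit_role(frame):
    """Role of the 193rd (F) bit in ESF for frame numbers 1..24."""
    if frame % 4 == 0:
        return "sync"   # framing pattern (001011) in frames 4, 8, ..., 24
    if frame % 2 == 0:
        return "crc"    # CRC-6 bits in frames 2, 6, 10, 14, 18, 22
    return "fdl"        # 4 kbit/s facility data link in the odd frames

roles = [esf_fbit_role(f) for f in range(1, 25)]
# 6 sync bits, 6 CRC bits, and 12 FDL bits per superframe
```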
T1 Alarms
Red alarm indicates the alarming equipment is unable to
recover the framing reliably. Corruption or loss of the signal will
produce “red alarm.” Connectivity has been lost toward the alarming
equipment. There is no knowledge of connectivity toward the far end.
Yellow alarm, also known as remote alarm indication (RAI),
indicates reception of a data or framing pattern that reports the far
end is in “red alarm.” The alarm is carried differently in SF (D4) and
ESF (D5) framing. For SF framed signals, the user bandwidth is
manipulated and "bit two in every DS0 channel shall be a zero."
The resulting loss of payload data while transmitting a yellow alarm is
undesirable, and was resolved in ESF framed signals by using the data
link layer. "A repeating 16-bit pattern consisting of eight 'ones'
followed by eight 'zeros' shall be transmitted continuously on the ESF
data link, but may be interrupted for a period not to exceed 100 ms per
interruption." Both types of alarms are transmitted for the duration of the alarm condition, but for at least one second.
Blue alarm, also known as alarm indication signal (AIS)
indicates a disruption in the communication path between the terminal
equipment and line repeaters or DCS. If no signal is received by the
intermediary equipment, it produces an unframed, all-ones signal. The
receiving equipment displays a “red alarm” and sends the signal for
“yellow alarm” to the far end because it has no framing, but at
intermediary interfaces the equipment will report “AIS” or Alarm Indication Signal. AIS is also called “all ones” because of the data and framing pattern.
Companding algorithms reduce the dynamic
range of an audio signal. In analog systems, this can
increase the signal-to-noise ratio (SNR) achieved during
transmission, and in the digital domain, it can reduce the quantization error
(hence increasing the signal-to-quantization-noise ratio). The µ-law algorithm
(PCMU) is a companding algorithm primarily used in the digital
telecommunication systems of North America and Japan. The A-law algorithm
(PCMA) is the standard companding algorithm used in European digital
communications systems to optimize, i.e., modify, the dynamic range of an
analog signal for digitizing.
PCMU for Digitized Signals
PCMU digital implementation
takes a maximum sine wave amplitude of ±8159 equal to 3.17 dBm in
power representation. The front-end analog circuit and ADCs (Analog-to-Digital
Converters) are calibrated to take a (maximum of) 3.17 dBm sine wave and give ±8159
amplitude in digital number representation. The numbers 8159 and above are
clipped to 8158 in the process of quantization. In the process of compression, the
input range is broken into segments, and each segment will use different
intervals. Most segments contain 16 intervals, and the interval size doubles
from segment to segment as shown in Figure 1. Large signals use a big
quantization step and small signals use a small quantization step. This type of
non-uniform quantization gives a signal-to-noise ratio (SNR) of 38 to 40 dB for
most of the useful signal amplitude range. For the full-scale amplitude of 3.17
dBm, the signal-to-quantization-noise ratio is 39.3 dB.
Figure 1
A PCM 14–bit (±8159 amplitude) input is split into eight amplitude segments represented
with 3 bits. Each segment is quantized uniformly into 16 levels using 4 bits.
The polarity of the input is represented using 1 bit. The sign bit is represented
as "1" for positive numbers and as "0" for negative numbers. (In a two's
complement number format, by contrast, a sign bit of 1 is used for negative
numbers.) After forming the 8 bits, the last 7 bits (all except the sign bit)
are inverted in PCMU. This bit inversion increases the density of 1 bits in
transmission systems, which helps the timing and clock recovery circuits in the
receiver. Idle channel noise makes the bits toggle between 01111111 (0x7F) and
11111111 (0xFF), which allows clock recovery to work well even under idle
channel conditions. Bit inversion is
achieved by simple exclusive OR (XOR) operation of the encoded output with
value 0x7F. This inversion is applied on both positive and negative values of
input. At the decoder, the same operation of 7-bit inversion can retrieve the
original decoded compressed byte.
PCMU has eight segments (both on the positive side and negative side),
numbered from 1 to 8. The intervals are 2, 4, 8, 16, 32, 64, 128, and 256. In PCMU, the maximum interval size is 256, and the corresponding
segment number is 8. However, the encoded output has only 3 bits for representing
the segment number, which means only the numbers 0 to 7 can be used for
segment coding. Therefore, in PCMU, the maximum and minimum segment numbers are
coded as 111 (7) and 000 (0), respectively. Table 1 lists PCMU encoded and
decoded output values for positive numbers, and Table 2 lists the output values for some of the negative numbers.
Table 1
Table 2
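The segment-and-step encoding described above can be sketched in Python (an illustrative implementation patterned on the classic G.711 µ-law encoder; the function name is mine):

```python
BIAS, CLIP = 33, 8158

def pcmu_encode(sample):
    """Encode a 14-bit linear sample (-8159..8159) to an 8-bit PCMU byte."""
    sign = 0x80 if sample >= 0 else 0x00       # sign bit: 1 for positive
    mag = min(abs(sample), CLIP) + BIAS        # clip, then add the bias
    seg = max(mag.bit_length() - 6, 0)         # segment number 0..7
    step = (mag >> (seg + 1)) & 0x0F           # 4-bit step within the segment
    return (sign | (seg << 4) | step) ^ 0x7F   # invert the last 7 bits

pcmu_encode(0)      # 0xFF - the idle pattern mentioned above
pcmu_encode(8158)   # 0x80 - full-scale positive
```

A near-zero input lands in the smallest segment and, after the 7-bit inversion, yields 0xFF or 0x7F depending on sign, which is exactly the idle-channel toggling described above.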
PCMA for Digitized Signals
PCMA takes a maximum input of ±4096, equal to 3.14 dBm
in power representation. The front–end analog circuit and analog–to–digital
converters (ADCs) are calibrated to take 3.14 dBm and to give ±4096 amplitude
in digital number representation. The quantization procedure is the same as
with PCMU, with a few deviations in the quantization steps. The step values
(intervals) are 128, 64, 32, 16, 8, 4, and 2, with the last two segments both
using the interval 2. Of the 16 total segments for positive and negative
numbers, the four segments covering -63 to 63 all use a step size of 2 and
behave as a single linear segment; hence, PCMA is referred to as 13-segment
quantization. PCMA encodes a 13-bit sample to an 8-bit compressed sample.
Unlike PCMU, where the last 7 bits are inverted, the PCMA encoder uses even bit
inversion (EBI), wherein the bits in positions 2, 4, 6, and 8 are inverted. In the A-law tables (Table 3 and Table 4),
bits before inversion and after EBI are also included in separate columns. EBI
is performed in software by an XOR operation of the encoded output with 0x55. EBI
is applied on both positive and negative values of input. At the decoder, the
same operation of even bit inversion can retrieve the original decoded
compressed byte. EBI in PCMA once again helps clock recovery mechanisms during
idle channel or near-zero-amplitude signals. Table 3 shows the PCMA quantization example for positive inputs,
and Table 4 shows the same output for a few segments of negative inputs.
Table 3
Table 4
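A matching sketch for PCMA (again illustrative; the name is mine), including the even bit inversion via XOR with 0x55:

```python
def pcma_encode(sample):
    """Encode a 13-bit linear sample (-4096..4095) to an 8-bit PCMA byte."""
    sign = 0x80 if sample >= 0 else 0x00
    mag = min(abs(sample), 4095)
    if mag < 32:                          # the small central (linear) region
        seg, step = 0, mag >> 1
    else:
        seg = mag.bit_length() - 5        # segment number 1..7
        step = (mag >> seg) & 0x0F
    return (sign | (seg << 4) | step) ^ 0x55   # even bit inversion (EBI)

pcma_encode(0)      # 0xD5 - the A-law idle pattern
pcma_encode(4095)   # 0xAA - full-scale positive
```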
In summary, companding is a process which helps to transmit and reproduce high-fidelity voice information in an effective manner (in terms of bandwidth as well as cost).
The voice frequency (VF) or voice band ranges from approximately 300 Hz to 3.30 kHz. After adding
the guard bands (a guard band is an unused part of the radio spectrum between
radio bands, for the purpose of preventing interference) of 0-300 Hz and 3.3-4 kHz, the net bandwidth
that is required for voice transmission will be 4 kHz. Now we need to convert
this signal into a digitized one. And the concept behind digitizing a sound
signal is Nyquist theorem. Working at Bell Labs, Harry Nyquist discovered that
it was not necessary to capture (or send) the entire analog waveform. Only the samples
of the wave taken at various points need to be captured to recreate the
original one. He also found that in order to reconstruct the original waveform
with enough information as the original one, the sampling rate must be at least
twice the signal bandwidth.
This concept became the foundation
for PCM (Pulse Code Modulation). PCM is the fundamental thing for any digital
transmission or switching technology. As I previously mentioned, the highest
frequency content in the voice signal is 4 kHz. According
to Nyquist criterion, the sampling
rate should be at least twice the highest frequency content of the selected
signal. So here the sampling rate will be 8000 samples per second. After this,
the amplitude of each sample is measured based on a logarithmic scale. This measurement
process is called Quantization.
There are two PCM algorithms defined within CCITT G.711, called "A-Law" and "Mu-Law". Mu-Law PCM is used in
North America and Japan,
and A-Law used in most other countries. In both A-Law and Mu-Law PCM, 8-bit codewords are used to represent each sample's amplitude (the voltage value of the voice sample at the sampling instant). To attain this 8-bit quantization, a logarithmic scale is used. The y-axis of this scale is divided into 16 segments called chords (8 on the positive side and 8 on the negative side). Within each chord are 16 uniform quantization intervals, or steps. The length of the steps depends on the chord number. For example, chord 1 has 16 steps of length Δ each, and chord 2 has 16 steps of length 2Δ each. In general, the step size in the n-th chord is 2^(n-1)Δ (as shown in the figure below).
Now the representation of the 8 bits is as shown below.
The
left-most bit is known as the "sign" bit or "polarity" bit,
and is a 1 for positive values and a 0 for negative values (both PCM types).
This is the Most Significant Bit (MSB) and is transmitted first. The next three
bits indicate the chord value, and the final four bits denote the step
value (as shown in Figure 1). This way of defining an 8-bit pattern for a
sample value is called encoding.
Now why do you want to use a
logarithmic scale for the quantization process? Dividing the amplitude of the
voice signal up into equal positive and negative steps is not an efficient way to encode voice into PCM. This does not take advantage
of a natural property of the human voice: voices create low-amplitude
signals most of the time (people seldom shout on the telephone). That is, most
of the energy in human voice is concentrated in the lower end of voice’s
dynamic range. To create the highest-quality voice reproduction from PCM, the
quantization process must take into account this fact that most voice signals
are typically of lower amplitude. To do this the voice coder adjusts the chords
and steps so that most of them are in the low-amplitude end of the total
encoding range. In this modified scheme, all step sizes are not equal. Step
sizes are smaller for lower-amplitude signals. Quantization levels (chords)
distributed according to a logarithmic function, instead of a linear function,
give finer resolution, or smaller quantization steps, at lower signal amplitudes
(as shown in Figure 3). Therefore, higher-fidelity reproduction of voice is
achieved.
And finally, as each sample is
represented using 8 bits, the total number of bits that need to be transmitted per
second (for the recreation of the original 4 kHz band signal) is 8000
samples/sec * 8 bits/sample = 64,000 bits per second, or 64 kbps.
Therefore, the minimum bandwidth required to convey your voice (in
digital terms) is 64 kbps.
The public switched telephone
network (PSTN) is the aggregate of the world's circuit-switched telephone
networks. You can think of it as a network where each telephone switch
is connected to another switch, either directly or indirectly
(through other switches). The switches are intelligent enough to route
your call to a destination switch near your friend's or relative's house, be
it a national or an international call. Each of these switches uses the
telephone number that you dial to find the switch to which your friend belongs.
Now when you want to talk to someone over a
telephone, you first dial his/her telephone number. Then the other end's
phone starts ringing, and if somebody lifts the phone, you get an answer.
Do you know what happens during that tiny gap between the "end of dialing" and
the "ringing"? This is the duration during which
the switches are trying to establish a physical connection between you and the
person you dialed, through the PSTN network. This physical connection is used to
convey your voice data (speech) to the other person and vice versa. Now what
does this physical connection look like? Is it physical wires that we are
talking about? You might have seen only one or two optical or electrical
cables between the telephone switch boxes, yet they support thousands of
users from your locality. How is this possible? Here comes the technology
called TDM (Time Division Multiplexing).
Multiplexing is the method of combining many
smaller units of signals carried via separate channels onto a bigger,
single channel. In other words, multiplexing helps to carry multiple users'
information simultaneously. The unit can be a time slice or a frequency. When
it is frequency, the method is called FDM (Frequency Division Multiplexing),
and when it is time, it is called TDM. Suppose your link has a carrying
capacity (bandwidth) of 2 Mbps, or 2 megabits per second (i.e., it can carry
2 megabits every second), and you want to carry "n" users' data simultaneously
through this link. How do you achieve this? The TDM multiplexer will do the job
for you. Each user will be allocated a 1/n-second slice of the bandwidth out of
this 2 Mbps. All users will send their data to the multiplexer in parallel.
The multiplexer will buffer this information, arrange it serially to make up
the 2 Mb of information, and send these data chunks every second. At the other
end, a de-multiplexer will separate the information and forward it to the
multiple destinations. Now you can see how a single high-capacity link can be
equipped to carry multiple users' data simultaneously.
The same thing happens in a digital switch. Your telephones
are wired separately to a switch. The switch does the multiplexing for
you and sends your information, as if separate cables were laid for you between
the switches.