
USB – Universal Serial Bus

USB is a polled cable bus with a single-host-scheduled, token-based protocol.

Tiered-star topology: the USB host is the root hub. A hub is the center of a star. A star consists of point-to-point connections between [host and hub/function] or [hub and (another) hub/function].

Allowed topology: maximum of 7 tiers (including the root tier); at most 5 non-root hubs in any path between the host and a device.

Data rates:

High-speed: 480 Mbps

Full-speed: 12 Mbps

Low-speed: 1.5 Mbps

For effective utilization of bandwidth, full-speed and low-speed data can be carried at high speed between the host and a hub (using split transactions), and converted to full or low speed on the hub-to-device segment.

Operation: The clock is transmitted encoded along with the differential data (NRZI encoding with bit stuffing ensures adequate transitions). A SYNC field precedes each packet so the receiver can synchronize its bit-recovery clock. The cable also carries VBUS (nominally +5 V at the source) and GND to deliver power to devices. Cable segments with biased terminations at each end and lengths of up to several metres are allowed.
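The encoding rule above can be sketched in a few lines. This is a toy model, assuming the USB convention that a logic 0 toggles the line level, a logic 1 holds it, and a 0 is stuffed after six consecutive 1s to guarantee a transition; the function name and bit-list representation are illustrative, not a real driver API.

```python
def nrzi_encode(bits, initial_level=1):
    """NRZI with USB-style bit stuffing: a 0 toggles the line level,
    a 1 leaves it unchanged; after six consecutive 1s a 0 is stuffed
    so the receiver always sees a transition and can stay in sync."""
    level = initial_level
    out = []
    ones = 0
    for b in bits:
        ones = ones + 1 if b == 1 else 0
        if b == 0:                 # 0 -> toggle, 1 -> hold
            level ^= 1
        out.append(level)
        if ones == 6:              # stuff a 0 after six consecutive 1s
            level ^= 1
            out.append(level)
            ones = 0
    return out

print(nrzi_encode([0, 0]))     # two zeros -> two transitions
print(len(nrzi_encode([1] * 7)))  # 8: one stuffed bit inserted
```

Note that a long run of 1s produces no transitions at all without stuffing, which is exactly why the stuffed 0 is needed for clock recovery.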

Most bus transactions involve the transmission of up to three packets. Each transaction begins when the Host Controller, on a scheduled basis, sends a USB packet describing the type and direction of transaction, the USB device address, and endpoint number. This packet is referred to as the “token packet.” The USB device that is addressed selects itself by decoding the appropriate address fields. In a given transaction, data is transferred either from the host to a device or from a device to the host. The direction of data transfer is specified in the token packet. The source of the transaction then sends a data packet or indicates it has no data to transfer. The destination, in general, responds with a handshake packet indicating whether the transfer was successful.
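The token/data/handshake sequence can be sketched as a toy model. The `Device` class, dictionary-based "packets", and PID strings are simplifications for illustration, not the actual wire format.

```python
class Device:
    """Toy USB device: selects itself by decoding the token's address
    field, then takes part in the data and handshake phases."""

    def __init__(self, address):
        self.address = address
        self.buffer = []           # endpoint data queue

    def handle(self, token, data=None):
        if token["addr"] != self.address:
            return None                      # not addressed: stay silent
        if token["pid"] == "OUT":            # host -> device transfer
            self.buffer.append(data)
            return "ACK"                     # handshake packet
        if token["pid"] == "IN":             # device -> host transfer
            return self.buffer.pop(0) if self.buffer else "NAK"

dev = Device(address=5)
print(dev.handle({"pid": "OUT", "addr": 5, "ep": 1}, data=b"\x01"))  # ACK
print(dev.handle({"pid": "IN", "addr": 5, "ep": 1}))                 # b'\x01'
```

The direction of the data phase is fixed by the token (OUT: host sends data, device handshakes; IN: device sends data, host handshakes), mirroring the three-packet pattern described above.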

Some bus transactions between host controllers and hubs involve the transmission of four packets. These types of transactions are used to manage the data transfers between the host and full-/low-speed devices. The USB data transfer model between a source or destination on the host and an endpoint on a device is referred to as a pipe. There are two types of pipes: stream and message. Stream data has no USB-defined structure, while message data does. Additionally, pipes have associations of data bandwidth, transfer service type, and endpoint characteristics like directionality and buffer sizes. Most pipes come into existence when a USB device is configured. One message pipe, the Default Control Pipe, always exists once a device is powered, in order to provide access to the device’s configuration, status, and control information.

The transaction schedule allows flow control for some stream pipes. At the hardware level, this prevents buffers from under-run or overrun situations by using a NAK handshake to throttle the data rate. When NAKed, a transaction is retried when bus time is available. The flow control mechanism permits the construction of flexible schedules that accommodate concurrent servicing of a heterogeneous mix of stream pipes. Thus, multiple stream pipes can be serviced at different intervals and with packets of different sizes.
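A minimal sketch of NAK-based throttling follows. The helper names and the list-as-endpoint-buffer model are hypothetical; a real schedule would interleave many pipes rather than retrying one in a loop.

```python
import itertools

def in_transaction(endpoint_buffer):
    """One IN transaction: the endpoint returns queued data, or NAKs
    when its buffer is empty (not ready to send)."""
    return endpoint_buffer.pop(0) if endpoint_buffer else "NAK"

def service_pipe(endpoint_buffer, max_retries=10):
    """Retry a NAKed transaction whenever bus time is available,
    up to a retry budget. Returns (data, attempts)."""
    for attempt in itertools.count(1):
        result = in_transaction(endpoint_buffer)
        if result != "NAK":
            return result, attempt
        if attempt >= max_retries:
            return None, attempt   # give up this service interval

print(service_pipe([b"data"]))   # -> (b'data', 1)
print(service_pipe([]))          # -> (None, 10)
```

The NAK costs a little bus time but no data is lost; the transfer simply completes on a later scheduled slot, which is how the schedule absorbs rate mismatches between pipes.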


Universal Asynchronous Receiver/Transmitter (UART): Used for serial communication over a cable. A UART generates signals with the same timing as the RS-232 signals used by a PC’s COM ports.


[Figure: RS-232 signal levels for logic 0 and logic 1]
Asynchronous communication requires clock recovery: a known transition in the data stream is used to synchronize the transmitter and receiver.

Baud rate of UART: typically an integer multiple or submultiple of 9600 baud (e.g., 1200, 2400, 4800, 9600, 19200, 38400, 115200).

RS-232 frame:

1) Start bit (always logic 0)

2) Data bits (5, 6, 7, or 8 of them)

3) A parity bit (optional, even or odd parity)

4) A stop bit (always logic 1); its duration may be 1, 1.5, or 2 bit times
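The frame format above can be sketched as a bit-level encoder. This is an illustrative helper, assuming LSB-first data transmission (the RS-232 convention) and an idle-high line; `uart_frame` and its defaults are not a standard API.

```python
def uart_frame(byte, data_bits=8, parity="even", stop_bits=1):
    """Build the bit sequence for one UART frame, LSB first.
    The line idles at logic 1; start bit = 0, stop bit(s) = 1."""
    bits = [0]                                          # 1) start bit
    data = [(byte >> i) & 1 for i in range(data_bits)]  # 2) data, LSB first
    bits += data
    if parity is not None:                              # 3) optional parity
        p = sum(data) & 1          # 1 if the data has an odd number of 1s
        bits.append(p if parity == "even" else p ^ 1)
    bits += [1] * stop_bits                             # 4) stop bit(s)
    return bits

print(uart_frame(0x41))  # 'A' -> [0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1]
```

With even parity the parity bit makes the total number of 1s in data+parity even; with odd parity, odd.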

The synchronization point is at the start of the frame (always a 1 to 0 transition).

• The 8 received data values are sampled 1.5BT, 2.5BT, … , 8.5BT after the synchronization point (BT = bit time).

• The stop bit is sampled 9.5BT after the synchronization point (if it is not a logic 1, this is a framing error).
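The sampling rule above can be sketched with an oversampled receiver. This assumes 16x oversampling, 8 data bits, and no parity; the function name and list-of-samples input are illustrative.

```python
def uart_receive(samples, osr=16):
    """Recover one frame from an oversampled line (osr samples per bit
    time). Sync on the 1->0 edge of the start bit, then sample data at
    1.5BT..8.5BT and the stop bit at 9.5BT after that edge."""
    # Synchronization point: first 1 -> 0 transition.
    edge = next(i for i in range(1, len(samples))
                if samples[i - 1] == 1 and samples[i] == 0)
    data = 0
    for n in range(8):
        idx = edge + int((n + 1.5) * osr)    # mid-bit sample points
        data |= samples[idx] << n            # LSB arrives first
    stop_idx = edge + int(9.5 * osr)
    framing_error = samples[stop_idx] != 1   # stop bit must be logic 1
    return data, framing_error
```

Sampling mid-bit (the extra 0.5 BT) gives the maximum margin against clock mismatch between transmitter and receiver, which is why a few percent of baud-rate error is tolerable over a 10-bit frame.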

Transmitted and received data are buffered in the Tx and Rx registers.


JTAG: Joint Test Action Group

Boundary scan technology provides the ability to set and read values on pins without direct physical access. The Boundary Scan Register consists of cells placed between the device’s core logic and its pins; it is transparent during normal operation. In test mode, these cells can be used to set or read pin values.

TCK: Test ClocK synchronizes the internal state machine operations

TMS: Test Mode Select is sampled at the rising edge of TCK to determine the next state.

TDI: Test Data In represents the data shifted into the device’s test or programming logic. It is sampled at the rising edge of TCK when the internal state machine is in the correct state.

TDO: Test Data Out represents the data shifted out of the device’s test or programming logic and is valid on the falling edge of TCK when the internal state machine is in the correct state.

TRST: Test Reset is an optional pin which, when available, can reset the TAP controller’s state machine.

Instruction Register: Selects which of the data registers is connected between TDI and TDO.

Data Registers:

BSR- Boundary Scan Register: The main testing data register used to move data to and from the ‘pins’ on a device.

BYPASS Register: A single-bit register that passes information from TDI to TDO.

IDCODE Register: Contains the ID code and revision number for the device. This information allows the device to be linked to its Boundary Scan Description Language (BSDL) file.
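As an example, the 32-bit IDCODE can be split into its fields (field layout per IEEE 1149.1: bits 31-28 version, 27-12 part number, 11-1 JEDEC manufacturer ID, bit 0 always 1). The sample value is a commonly seen ARM debug-TAP IDCODE, used here only for illustration.

```python
def decode_idcode(idcode):
    """Split a 32-bit IEEE 1149.1 IDCODE into its fields."""
    assert idcode & 1 == 1, "bit 0 of a valid IDCODE is always 1"
    return {
        "version":      (idcode >> 28) & 0xF,     # bits 31-28
        "part_number":  (idcode >> 12) & 0xFFFF,  # bits 27-12
        "manufacturer": (idcode >> 1) & 0x7FF,    # bits 11-1 (JEDEC ID)
    }

print(decode_idcode(0x4BA00477))
# {'version': 4, 'part_number': 47616, 'manufacturer': 571}
```

The fixed 1 in bit 0 lets software distinguish an IDCODE capture from a BYPASS register (which captures a 0), so devices in a chain can be identified automatically.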

The IEEE 1149.1 standard defines a set of instructions that must be available for a device to be considered compliant.

TAP (Test Access Port) controller:
A state machine whose transitions are controlled by the TMS signal. Every state has two exits (for TMS = 0 and TMS = 1). Two main paths in the state machine allow setting or retrieving information from either a data register or the instruction register of the device. The data register operated on (e.g., BSR, IDCODE, BYPASS) depends on the value loaded into the instruction register.
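The controller can be sketched as a transition table keyed by the current state and the TMS value sampled on each rising edge of TCK. The state names and transitions follow the 16-state diagram in IEEE 1149.1; the table-driven structure is just one way to model it.

```python
TAP_NEXT = {
    # state:              (next if TMS=0,   next if TMS=1)
    "Test-Logic-Reset": ("Run-Test/Idle",   "Test-Logic-Reset"),
    "Run-Test/Idle":    ("Run-Test/Idle",   "Select-DR-Scan"),
    "Select-DR-Scan":   ("Capture-DR",      "Select-IR-Scan"),
    "Capture-DR":       ("Shift-DR",        "Exit1-DR"),
    "Shift-DR":         ("Shift-DR",        "Exit1-DR"),
    "Exit1-DR":         ("Pause-DR",        "Update-DR"),
    "Pause-DR":         ("Pause-DR",        "Exit2-DR"),
    "Exit2-DR":         ("Shift-DR",        "Update-DR"),
    "Update-DR":        ("Run-Test/Idle",   "Select-DR-Scan"),
    "Select-IR-Scan":   ("Capture-IR",      "Test-Logic-Reset"),
    "Capture-IR":       ("Shift-IR",        "Exit1-IR"),
    "Shift-IR":         ("Shift-IR",        "Exit1-IR"),
    "Exit1-IR":         ("Pause-IR",        "Update-IR"),
    "Pause-IR":         ("Pause-IR",        "Exit2-IR"),
    "Exit2-IR":         ("Shift-IR",        "Update-IR"),
    "Update-IR":        ("Run-Test/Idle",   "Select-DR-Scan"),
}

def step_tap(state, tms_bits):
    """Advance the TAP controller one state per sampled TMS bit."""
    for tms in tms_bits:
        state = TAP_NEXT[state][tms]
    return state

# Holding TMS=1 for five TCK cycles reaches Test-Logic-Reset from
# any state, which is why TRST is optional.
print(step_tap("Shift-DR", [1, 1, 1, 1, 1]))  # -> Test-Logic-Reset
```

Note the symmetry of the two "main paths": the DR column and the IR column have identical shapes, differing only in which register is captured, shifted, and updated.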

PCI Express

One PCIe lane consists of a differential transmit (Tx) pair and a differential receive (Rx) pair. A PCIe link consists of at least one lane; an xN link denotes N lanes. Supported link widths: x1, x2, x4, x8, x12, x16, x32.

Raw bandwidth: 2.5 Gb/s per lane per direction.
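The raw rate translates into effective throughput once the 8b/10b line-encoding overhead of this generation is removed: only 8 of every 10 transmitted bits are payload. A back-of-the-envelope calculation (the function name is illustrative):

```python
def pcie_gen1_throughput_MBps(lanes):
    """Effective PCIe Gen-1 data rate per direction: 2.5 Gb/s raw per
    lane, of which 8b/10b encoding leaves 80% as payload bits,
    i.e. 2.0 Gb/s = 250 MB/s per lane."""
    raw_gbps = 2.5
    payload_gbps = raw_gbps * 8 / 10          # strip 8b/10b overhead
    return lanes * payload_gbps * 1000 / 8    # Gb/s -> MB/s

print(pcie_gen1_throughput_MBps(1))    # -> 250.0
print(pcie_gen1_throughput_MBps(16))   # -> 4000.0
```

So an x16 link offers about 4 GB/s per direction before packet (TLP/DLLP) framing overhead, which reduces it further.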

During hardware initialization, each PCI Express Link is set up following a negotiation of Lane widths and frequency of operation by the two agents at each end of the Link. No firmware or operating system software is involved.

A PCIe fabric consists of point-to-point links that interconnect a set of components. The root complex at the top of the hierarchy connects the CPU/memory subsystem to the I/O (endpoint devices, switches, PCIe-to-PCI bridges, etc.). Each component is mapped into a single flat address space and can be accessed using PCI-like load/store transaction semantics.

Load-store mechanism in PCI:
From the CPU’s perspective, PCI devices are accessible via a fairly straightforward load-store mechanism. There is a flat, unified chunk of address space dedicated to PCI use, which looks to the CPU much like a chunk of main memory address space; the primary difference is that each range of addresses maps to a PCI device instead of a group of memory cells containing code or data. When a PCI-enabled computer boots up, it must initialize the PCI subsystem by assigning chunks of the PCI address space to the different devices so that they’ll be accessible to the CPU.
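The boot-time assignment can be sketched as carving naturally aligned windows out of one flat region. The device list, window sizes, and base address are hypothetical; real enumeration discovers the required sizes by probing each device's Base Address Registers (BARs) in configuration space.

```python
def assign_bars(devices, base=0xE000_0000):
    """Assign each device a naturally aligned window in the flat PCI
    address space. `devices` is a list of (name, size) pairs where
    size is a power of two, as PCI BAR sizes are."""
    addr = base
    assignments = {}
    for name, size in devices:
        addr = (addr + size - 1) & ~(size - 1)   # align up to `size`
        assignments[name] = (addr, addr + size - 1)
        addr += size
    return assignments

devices = [("nic", 0x1000), ("gpu", 0x100_0000), ("sata", 0x800)]
for name, (lo, hi) in assign_bars(devices).items():
    print(f"{name}: {lo:#010x}-{hi:#010x}")
```

After this step, a CPU store to an address inside a device's window reaches that device instead of DRAM, which is all the load-store model requires.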

PCI Express’s designers have left this load-store-based, flat memory model unchanged, so a legacy application that wants to communicate via PCIe still executes a read from or a write to a specific address. Hence PCIe is backwards compatible with PCI, and operating systems can boot on and use a PCIe-based system without modification.

Note: In a PCI system, devices are connected to the host (root) through a shared parallel bus, with an arbitration scheme that decides who gets access to the bus. In a PCIe system, devices are connected to the root complex by point-to-point serial links.

PCI Express uses packets to communicate information between components.
The capability to route peer-to-peer transactions between hierarchy domains through a Root Complex is optional and implementation dependent. For example, an implementation may incorporate a real or virtual Switch internally within the Root Complex to enable full peer-to-peer support in a software transparent way.