
This thesis studies two communication problems from an information-theoretic perspective: identification via the broadcast channel and zero-error transmission over the Gelfand-Pinsker channel with a feedback link. For each problem the capacity is established; it characterizes the maximum number of messages that can be conveyed. Key to the proofs are
new identification and zero-error codes.
In identification via the single-user channel, the sender conveys an identification message from a finite set over the channel, and for every possible message there is a receiving party focused on that message. Each receiving party observes the channel output and must guess from it whether or not the message it is focused on was sent; it thus faces a binary hypothesis-testing problem. Loosely speaking, an identification scheme is reliable if, for every possible pair of transmitted message and receiving party, the receiving party guesses correctly with high probability. That is, if the transmitted message equals the message that the receiving party is focused on, then the receiving party guesses with high probability that its message was sent, and otherwise it guesses with high probability that its message was not sent. The number of identifiable messages is double-exponential in the number of channel uses, and the identification rate is therefore defined as the iterated logarithm of the number of identification messages, normalized by the number of channel uses. The identification capacity is the supremum of achievable identification rates. It is achievable only if randomization at the encoder is allowed: with deterministic encoders, i.e., deterministic mappings from the set of identification messages to the set of channel inputs, the number of identifiable messages grows only exponentially in the number of channel uses.
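To make the growth rates concrete, the following sketch contrasts the exponential number of transmissible messages with the double-exponential number of identifiable messages, and computes the identification rate as the normalized iterated logarithm. The numbers are purely illustrative (a hypothetical blocklength and capacity, not values from the thesis).

```python
import math

def transmission_messages(k):
    """Ordinary transmission: roughly 2**(nC) messages; here nC = k bits."""
    return 2 ** k

def identification_messages(k):
    """Identification with randomized encoders: roughly 2**(2**(nC)) messages."""
    return 2 ** (2 ** k)

def identification_rate(num_messages, n):
    """Iterated logarithm of the message count, normalized by the blocklength n."""
    return math.log2(math.log2(num_messages)) / n

# Illustrative numbers: blocklength n = 10 with nC = 5 bits of capacity.
n, k = 10, 5
print(transmission_messages(k))                            # 32 messages by transmission
print(identification_messages(k))                          # 2**32 messages by identification
print(identification_rate(identification_messages(k), n))  # rate 0.5 = k / n
```

The last line recovers the rate k / n, matching the transmission rate: the iterated logarithm is exactly what makes the double-exponential count comparable with the exponential one.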
This thesis studies identification via the broadcast channel. Unlike the single-user channel, the broadcast channel has two receiving terminals, and the sender conveys two identification messages, one to each terminal. The channel output at each receiving terminal is observed by different parties, each of which is focused on a different message among all the identification messages intended for that terminal. Each receiving party must guess from its observation whether or not the message it is focused on was sent. The identification capacity region of the broadcast channel is shown to be the set of rate pairs for which, for some distribution on the channel input, each receiver's identification rate does not exceed the mutual information between the channel input and the channel output that it observes. Moreover, the interior of the capacity region is achieved by codes with deterministic encoders. The results are obtained under the average-error criterion, which requires that the message intended for each receiving terminal be identified reliably whenever the message intended for the other terminal is drawn at random. They hold also for channels whose transmission capacity region is to date unknown. Key to the proof is a new code construction for identification via the single-user channel. Extensions to the broadcast channel with one-sided feedback and to the three-receiver broadcast channel are also discussed: inner bounds on their identification capacity regions are obtained, and these bounds are shown to be tight in some cases.
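The region characterization can be illustrated on a hypothetical example (not a channel from the thesis): a broadcast channel whose two outputs pass through independent binary symmetric channels. A rate pair lies in the region exactly when a single input distribution satisfies both mutual-information constraints simultaneously:

```python
import math

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_mutual_information(px1, eps):
    """I(X;Y) when X ~ Bernoulli(px1) is sent over a BSC with crossover eps."""
    py1 = px1 * (1 - eps) + (1 - px1) * eps
    return h2(py1) - h2(eps)

def in_capacity_region(r1, r2, eps1, eps2, grid=1000):
    """Search over input distributions for one under which both receivers'
    identification rates are at most their respective mutual informations."""
    for i in range(grid + 1):
        p = i / grid
        if r1 <= bsc_mutual_information(p, eps1) and r2 <= bsc_mutual_information(p, eps2):
            return True
    return False

# With eps = 0.1, each receiver's mutual information peaks at 1 - h2(0.1), about 0.531.
print(in_capacity_region(0.4, 0.4, 0.1, 0.1))  # True
print(in_capacity_region(0.6, 0.6, 0.1, 0.1))  # False
```

The key structural point the search captures is that one and the same input distribution must serve both receivers, which is what couples the two rate constraints in the region.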
The Gelfand-Pinsker channel is a state-dependent single-user channel whose state is revealed acausally to the sender. On this channel, the thesis considers the problem where the sender observes the channel output via a feedback link and wishes to convey to the receiver, error-free, a message from a finite set. The number of messages that can be conveyed error-free is exponential in the number of channel uses, and the rate of a zero-error code is therefore defined as the logarithm of the number of messages, normalized by the number of channel uses. The zero-error capacity is the supremum of achievable rates. The present thesis fully characterizes the Gelfand-Pinsker channels whose zero-error feedback capacity is positive and establishes a formula for the capacity when it is positive. Together, these results provide a single-letter characterization of the Gelfand-Pinsker channel's zero-error feedback capacity. The capacity is shown to exhibit phenomena that are not present on the state-less channel: it can be positive even if the zero-error capacity is zero in the absence of feedback, and the error-free transmission of a single bit may require more than one channel use, so techniques for the state-less channel cannot be applied naively. These phenomena do not occur when the state is revealed to the transmitter causally, a case that is solved here using Shannon strategies. Key to the proofs are new zero-error coding schemes which, unlike Shannon's sequential coding technique, allow the encoder to take advantage of the acausal state information. Cost constraints on the channel inputs or channel states are also discussed, as is the scenario where the state sequence must be recovered in addition to the message.
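The zero-error rate and the Shannon strategies used in the causal case can be sketched as follows (a minimal illustration; the alphabets and names are assumed for the example, not taken from the thesis). With causal state information, the encoder selects at each channel use a function from the state alphabet to the input alphabet rather than an input symbol directly:

```python
import math
from itertools import product

def zero_error_rate(num_messages, n):
    """Rate of a zero-error code: log of the message count per channel use."""
    return math.log2(num_messages) / n

def shannon_strategies(inputs, states):
    """Enumerate all Shannon strategies: mappings from the state alphabet to
    the input alphabet.  With causal state information, the encoder picks such
    a mapping at each channel use, and the realized state selects the input."""
    return [dict(zip(states, choice)) for choice in product(inputs, repeat=len(states))]

strategies = shannon_strategies(inputs=(0, 1), states=("a", "b"))
print(len(strategies))        # |inputs| ** |states| = 4 strategy functions
print(zero_error_rate(8, 3))  # 8 messages in 3 channel uses -> rate 1.0
```

Coding over the strategy alphabet reduces the causal problem to a state-less one, which is why Shannon strategies suffice there; the acausal case resists this reduction and calls for the new schemes described above.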