Coding and Error Checking of Binary Data
Concepts of Sign-Magnitude, One's Complement, Two's Complement, and Excess-N Notation, along with Parity Check and Hamming Code Error Detection Methods for Binary Data
Coding Methods
There are four common ways to represent binary data in computers: sign-magnitude, one's complement, two's complement, and excess-N notation. These methods are used to represent and process signed numbers in computers, each solving problems encountered when representing signed numbers in binary.

Sign-Magnitude:
Sign-magnitude was the earliest representation method. The highest bit represents the sign, while the remaining bits represent the magnitude. However, sign-magnitude has a drawback: zero has two representations, positive zero and negative zero. Moreover, addition and subtraction require special handling (signs and magnitudes must be compared before the operation is chosen), which adds complexity to arithmetic and comparison.
The main advantage of sign-magnitude is its simplicity, but the double representation of zero makes it inconvenient in practice.

One's Complement:
To address the complexity of sign handling in sign-magnitude representation, one's complement was introduced. In one's complement, a negative number is represented by inverting every bit of the corresponding positive number. As a result, addition and subtraction of two one's complement numbers can be unified as addition, with the sign bit participating in the operation and the sign of the result emerging from the operation itself.
One's complement simplifies the sign handling of sign-magnitude representation. However, when performing addition and subtraction, the issue of positive zero and negative zero still has to be considered.

Two's Complement:
To address the issue of positive and negative zero in one's complement representation, two's complement was introduced. In two's complement, a negative number is obtained by taking its one's complement representation and adding $1$.
Since zero has only one representation in two's complement, the problem of positive and negative zero is avoided during operations.

Excess-N Notation (Bias): Excess-N notation is commonly used to represent the exponent part of floating-point numbers. It was introduced so that floating-point exponents can be compared directly, without additional operations.
Excess-N notation adds a fixed bias so that signed values are encoded as unsigned integers; the resulting codes sort in the same order as the values they represent, which simplifies comparison.
These different representation methods exist to more effectively represent and process signed numbers within a limited number of bits, while keeping operations and comparisons simple and efficient. The choice of representation method depends on specific application requirements and hardware design considerations.
Below, we will introduce their specific representation methods and calculation methods, assuming the binary number is $X_{(2)}$, and the machine's word length is $n$.
Sign-Magnitude Representation
- If $X$ is a pure integer
  - When it is positive, the highest bit is $0$, and the remaining $n-1$ bits are $X_{(2)}$.
  - When it is negative, the highest bit is $1$, and the remaining $n-1$ bits are $|X|_{(2)}$.
- If $X$ is a pure fraction
  - When it is positive, the highest bit is $0$, and the remaining $n-1$ bits are $X_{(2)}$.
  - When it is negative, the highest bit is $1$, and the remaining $n-1$ bits are $|X|_{(2)}$.
- If $X$ is $0$, there are two representations: $+0$ and $-0$.
- Addition rule: first examine the sign bits. If the signs are the same, add the absolute values and keep the sign; if the signs differ, subtract the smaller absolute value from the larger, and the result takes the sign of the number with the larger absolute value.
- Subtraction rule: to subtract two sign-magnitude numbers, first negate the sign of the subtrahend, then perform sign-magnitude addition on the minuend and the negated subtrahend. (Same signs therefore subtract magnitudes; different signs add magnitudes.)
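The encoding rule for integers can be sketched in Python as follows (a minimal sketch assuming an 8-bit word; the helper name is illustrative):

```python
def to_sign_magnitude(x: int, n: int = 8) -> str:
    """Encode integer x as n-bit sign-magnitude: 1 sign bit + (n-1) magnitude bits."""
    assert abs(x) < 2 ** (n - 1), "magnitude must fit in n-1 bits"
    sign = '1' if x < 0 else '0'
    return sign + format(abs(x), f'0{n - 1}b')

print(to_sign_magnitude(5))   # 00000101
print(to_sign_magnitude(-5))  # 10000101
print(to_sign_magnitude(0))   # 00000000 (+0; -0 would be 10000000)
```

Note that `+5` and `-5` differ only in the sign bit, and that the function can only produce `+0`, since Python integers do not distinguish a negative zero.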
One's Complement Representation
- If $X$ is a pure integer
  - When it is positive, the highest bit is $0$, and the remaining $n-1$ bits are $X_{(2)}$.
  - When it is negative, invert every bit (including the sign bit) of the representation of $|X|$.
- If $X$ is a pure fraction
  - When it is positive, the highest bit is $0$, and the remaining $n-1$ bits are $X_{(2)}$.
  - When it is negative, invert every bit (including the sign bit) of the representation of $|X|$.
- If $X$ is $0$, there are two representations: $+0$ and $-0$.
- Addition and subtraction: $[X+Y]_{\text{one's}} = [X]_{\text{one's}} + [Y]_{\text{one's}}$, $[X-Y]_{\text{one's}} = [X]_{\text{one's}} + [-Y]_{\text{one's}}$. The sign bit participates in the operation, and any carry out of the sign bit is added back to the lowest bit (the end-around carry).
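The end-around carry can be sketched as follows (a minimal Python sketch assuming 8-bit words; the helper names are illustrative):

```python
def to_ones_complement(x: int, n: int = 8) -> str:
    """Encode integer x as n-bit one's complement."""
    assert abs(x) < 2 ** (n - 1)
    bits = format(abs(x), f'0{n}b')
    if x < 0:
        bits = ''.join('1' if b == '0' else '0' for b in bits)  # invert every bit
    return bits

def ones_complement_add(a: str, b: str) -> str:
    """Add two n-bit one's complement strings with end-around carry."""
    n = len(a)
    s = int(a, 2) + int(b, 2)
    if s >= 2 ** n:           # carry out of the sign bit:
        s = s % 2 ** n + 1    # add it back to the lowest bit (end-around carry)
    return format(s, f'0{n}b')

a = to_ones_complement(-3)            # 11111100
b = to_ones_complement(5)             # 00000101
print(ones_complement_add(a, b))      # 00000010, i.e. -3 + 5 = 2
```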
Two's Complement Representation
- If $X$ is a pure integer
  - When it is positive, the highest bit is $0$, and the remaining $n-1$ bits are $X_{(2)}$.
  - When it is negative, the highest bit is $1$, and the remaining $n-1$ bits are obtained by inverting each bit of $|X|_{(2)}$ and adding $1$.
- If $X$ is a pure fraction
  - When it is positive, the highest bit is $0$, and the remaining $n-1$ bits are $X_{(2)}$.
  - When it is negative, the highest bit is $1$, and the remaining $n-1$ bits are obtained by inverting each bit of $|X|_{(2)}$ and adding $1$ to the lowest bit.
- If $X$ is $0$, there is only one representation; for example, when $n=8$, it is $00000000$.
- Addition and subtraction: $[X+Y]_{\text{two's}} = [X]_{\text{two's}} + [Y]_{\text{two's}}$, $[X-Y]_{\text{two's}} = [X]_{\text{two's}} + [-Y]_{\text{two's}}$. The sign bit participates in the operation, and any carry out of the sign bit is discarded.
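In contrast to one's complement, the discarded carry amounts to reduction modulo $2^{n}$, which is easy to sketch in Python (assuming 8-bit words; the helper names are illustrative):

```python
def to_twos_complement(x: int, n: int = 8) -> str:
    """Encode x as n-bit two's complement; Python's % maps negatives to 2^n + x."""
    assert -2 ** (n - 1) <= x < 2 ** (n - 1)
    return format(x % 2 ** n, f'0{n}b')

def twos_complement_add(a: str, b: str) -> str:
    """Add two n-bit two's complement strings; the carry out of the sign bit
    is discarded, i.e. the sum is reduced modulo 2^n."""
    n = len(a)
    return format((int(a, 2) + int(b, 2)) % 2 ** n, f'0{n}b')

print(to_twos_complement(-3))                       # 11111101
print(twos_complement_add('11111101', '00000101'))  # 00000010, i.e. -3 + 5 = 2
```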
Excess-N Notation (Bias)
A bias of $(2^{n-1})_{(2)}$ is set.
- If $X$ is a pure integer, its representation is $X_{(2)} + (2^{n-1})_{(2)}$.
- If $X$ is a pure fraction, its representation is $X_{(2)} + 1_{(2)}$ (the bias is $1$).
- If $X$ is $0$, there is only one representation; for example, when $n=8$, it is $10000000$.
- In fact, when the bias is $(2^{n-1})_{(2)}$, the excess-N representation is obtained from the two's complement simply by inverting the highest bit. In other words, the highest bits of the two's complement and excess-N representations are complements of each other, while the remaining bits are identical.
- Addition and subtraction operations are the same as for two's complement.
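The sign-bit relationship between the two encodings can be verified with a short Python sketch (assuming 8-bit words and a bias of $2^{7}$; the helper names are illustrative):

```python
def to_excess(x: int, n: int = 8) -> str:
    """Excess-2^(n-1): add the bias, then encode as an unsigned n-bit number."""
    bias = 2 ** (n - 1)
    assert -bias <= x < bias
    return format(x + bias, f'0{n}b')

def to_twos(x: int, n: int = 8) -> str:
    """n-bit two's complement via reduction mod 2^n."""
    return format(x % 2 ** n, f'0{n}b')

for x in (-3, 0, 5):
    t, e = to_twos(x), to_excess(x)
    # same bit pattern except the highest (sign) bit, which is inverted
    assert t[1:] == e[1:] and t[0] != e[0]
    print(x, t, e)
```

Because the bias shifts every value into the unsigned range, excess codes compare correctly as plain unsigned integers, which is exactly why floating-point exponents use this encoding.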
Fixed-Point Numbers
For fixed-point integers, the decimal point is implicitly after the lowest bit. For fixed-point fractions, if there is a sign bit, the decimal point is after the sign bit; if there is no sign bit, it is before the highest bit.
Among the above representation methods, for fixed-point integers:
- The representation range of sign-magnitude and one's complement is $[-(2^{n-1}-1), 2^{n-1}-1]$.
- The representation range of two's complement and excess-N is $[-2^{n-1}, 2^{n-1}-1]$.
- Sign-magnitude and one's complement can represent $2^{n}-1$ distinct values (since $+0$ and $-0$ coincide in value), while two's complement and excess-N can represent $2^{n}$.
For fixed-point fractions:
- The representation range of sign-magnitude and one's complement is $[-(1-2^{-(n-1)}), 1-2^{-(n-1)}]$.
- The representation range of two's complement and excess-N is $[-1, 1-2^{-(n-1)}]$.
- Sign-magnitude and one's complement can represent $2^{n}-1$ distinct values, while two's complement and excess-N can represent $2^{n}$.
Floating-Point Numbers
A floating-point number is represented by an exponent $E$ and a mantissa $F$, where the exponent uses $R$ bits in excess notation and the mantissa uses $M$ bits in two's complement. The value represented is $F \times 2^{E}$.
The representable range is $[-2^{2^{R-1}}, (1-2^{-(M-1)}) \times 2^{2^{R-1}}]$.
Error Checking Codes
Parity Check
Parity check appends one check bit to the data. With odd parity, the check bit is chosen so that the total number of $1$s (data plus check bit) is odd; with even parity, so that the total is even. A parity check detects any error that flips an odd number of bits, but an error that flips an even number of bits leaves the parity unchanged and goes undetected. This holds for odd and even parity alike.
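The scheme fits in a few lines of Python (a minimal sketch; the helper names are illustrative):

```python
def add_parity(data: str, odd: bool = True) -> str:
    """Append a check bit so the total number of 1s is odd (or even)."""
    check = (data.count('1') + (1 if odd else 0)) % 2
    return data + str(check)

def check_parity(word: str, odd: bool = True) -> bool:
    """True if the parity is consistent. A single flipped bit makes this
    False, but two flipped bits cancel out and go undetected."""
    return word.count('1') % 2 == (1 if odd else 0)

w = add_parity('1011', odd=True)  # '10110': three 1s already, check bit is 0
print(check_parity(w))            # True
flipped = '0' + w[1:]             # a single-bit error is detected
print(check_parity(flipped))      # False
```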
Hamming Code
Hamming codes insert $k$ check bits into $n$ bits of binary data, where $k$ is the smallest integer satisfying $2^{k}-1 \ge n+k$.
Taking the data $1100$ as an example, first determine the positions of the check bits: from the formula above, $k=3$. The check bits occupy positions $2^{i}$ for $i=0,1,2$; these are the subscripts of the Hamming code check bits. Write these subscripts in binary:

$1 = 001,\quad 2 = 010,\quad 4 = 100$

Then replace each $0$ with a $*$ to act as a wildcard:

$001 \to {*}{*}1,\quad 010 \to {*}1{*},\quad 100 \to 1{*}{*}$

Next, write the binary sequence from $1$ to $n+k = 4+3 = 7$, i.e., $001, 010, 011, 100, 101, 110, 111$, and match each position number against the wildcard patterns: pattern ${*}{*}1$ matches positions $1,3,5,7$; pattern ${*}1{*}$ matches $2,3,6,7$; pattern $1{*}{*}$ matches $4,5,6,7$.

Placing the data bits $1100$, from left to right, into the non-check positions $7,6,5,3$ yields the Hamming code to be determined: $H_{7}H_{6}H_{5}H_{4}H_{3}H_{2}H_{1} = 1\,1\,0\,H_{4}\,0\,H_{2}\,H_{1}$.
Therefore, we can determine that
- $H_{1}$ is responsible for the parity check of bits $1,3,5,7$.
- $H_{2}$ is responsible for the parity check of bits $2,3,6,7$.
- $H_{4}$ is responsible for the parity check of bits $4,5,6,7$.
Now, let's determine the values of $H_{1},H_{2},H_{4}$, using even parity:
- $H_{3},H_{5},H_{7}$ contain an odd number of $1$s, so $H_{1}=1$.
- $H_{3},H_{6},H_{7}$ contain an even number of $1$s, so $H_{2}=0$.
- $H_{5},H_{6},H_{7}$ contain an even number of $1$s, so $H_{4}=0$.
If odd parity is used instead, simply invert the check bits above.
Finally, the complete Hamming code is obtained: $H_{7}H_{6}H_{5}H_{4}H_{3}H_{2}H_{1} = 1100001$.
To check for errors, recompute the parity of each group with its check bit included:

$G_{1}=H_{1} \oplus H_{3} \oplus H_{5} \oplus H_{7} \\ G_{2}=H_{2} \oplus H_{3} \oplus H_{6} \oplus H_{7} \\ G_{3}=H_{4} \oplus H_{5} \oplus H_{6} \oplus H_{7}$

If even parity is used, $G_{3}G_{2}G_{1}$ should be $000$; for odd parity, it should be $111$. Any other value indicates an error, and for even parity $G_{3}G_{2}G_{1}$, read as a binary number, gives the position of the erroneous bit, which should then be flipped.
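The whole procedure of the worked example (even parity, data bits at positions $7,6,5,3$) can be sketched in Python; the helper names are illustrative:

```python
def hamming_encode(data: str) -> list:
    """Even-parity Hamming code for 4 data bits over positions 1..7,
    with check bits at positions 1, 2, 4. Data bits fill positions
    7, 6, 5, 3 from the leftmost data bit, as in the worked example."""
    h = [0] * 8                      # index 0 unused; code occupies h[1..7]
    for pos, bit in zip((7, 6, 5, 3), data):
        h[pos] = int(bit)
    for p in (1, 2, 4):              # check bit p covers every position
        h[p] = sum(h[i] for i in range(1, 8) if i & p) % 2  # whose index has bit p set
    return h[1:]                     # [H1, H2, ..., H7]

def hamming_check(code: list) -> int:
    """Return the 1-based position of a single-bit error, or 0 if none."""
    h = [0] + code
    syndrome = 0
    for p in (1, 2, 4):
        if sum(h[i] for i in range(1, 8) if i & p) % 2:
            syndrome += p            # G3 G2 G1 read as a binary number
    return syndrome

code = hamming_encode('1100')
print(''.join(map(str, code)))       # 1000011: H1=1, H2=0, H4=0 as derived above
code[4] ^= 1                         # flip H5 to simulate a transmission error
print(hamming_check(code))           # 5: the syndrome points at the flipped bit
```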