== Binary Representation of Data ==
** Characters are encoded as integers, where each integer corresponds to one "code point" in a character table; for example, code point 65 in ASCII corresponds to the character "A" (the first sketch after this list illustrates the mapping).
** Historically, many different coding schemes have been used; the two most common were the American Standard Code for Information Interchange (ASCII) and the Extended Binary Coded Decimal Interchange Code (EBCDIC), the latter used primarily on IBM midrange and mainframe systems.
** ASCII characters occupy seven bits (code points 0–127) and cover only the characters used in North American English. Because ASCII characters are usually stored in eight-bit bytes, many vendors of ASCII-based systems used the remaining code points 128–255 for special characters such as graphics symbols, line-drawing symbols (horizontal, vertical, connector, and corner segments for drawing tables), and accented characters; these vendor-specific variants are collectively called "extended ASCII".
** Several ISO standards attempt to standardize these "extended ASCII" characters, most notably ISO 8859, which was intended to enable the encoding of European languages by adding currency symbols and accented characters. However, ISO 8859-1 does not include every accented character and was created before the euro sign was standardized, so the standard was published in multiple parts, ranging from ISO 8859-1 through ISO 8859-16 (ISO 8859-15 being the part that added the euro sign). The same byte can therefore decode to different characters depending on the part in use, as the second sketch after this list shows.
** The Unicode and ISO 10646 initiatives were launched to create a single character set encoding all symbols used in human writing, for both modern and historical languages. The two efforts were merged, and the Unicode and ISO 10646 standards now define a common character set with 1,114,112 potential code points (U+0000 through U+10FFFF). Unicode additionally specifies transformation formats (UTF-8, UTF-16, and UTF-32) for data interchange, composition/decomposition (normalization) rules, and rendering and font recommendations; the third sketch after this list encodes one code point in each transformation format.
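A minimal Python sketch of the integer-to-character mapping described above. The specific byte (0xE9) and code pages (IBM's CP437 and ISO 8859-1) are illustrative choices, not part of the ASCII standard itself:

<syntaxhighlight lang="python">
print(ord("A"))        # 65 -- "A" occupies code point 65 in ASCII (and Unicode)
print(chr(65))         # 'A' -- the reverse mapping, integer to character
print(ord("A") < 128)  # True -- plain ASCII fits in seven bits (0-127)

# Bytes 128-255 have no meaning in seven-bit ASCII; what they display as
# depends entirely on which "extended ASCII" code page is applied.
b = bytes([0xE9])
print(b.decode("cp437"))    # 'Θ' -- Greek theta on the original IBM PC code page
print(b.decode("latin-1"))  # 'é' -- e-acute under ISO 8859-1
</syntaxhighlight>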
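The same ambiguity exists between parts of ISO 8859. A short sketch, assuming Python's built-in codecs for the two parts; byte 0xA4 is one of the handful of positions where ISO 8859-15 differs from ISO 8859-1:

<syntaxhighlight lang="python">
b = bytes([0xA4])
print(b.decode("iso8859-1"))   # '¤' -- generic currency sign in ISO 8859-1
print(b.decode("iso8859-15"))  # '€' -- ISO 8859-15 reassigned 0xA4 to the euro sign
</syntaxhighlight>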
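Finally, a sketch of how one abstract Unicode code point is carried by the different transformation formats. The euro sign (U+20AC) is chosen only because it needs more than one byte:

<syntaxhighlight lang="python">
s = "€"                       # U+20AC EURO SIGN -- one code point
print(hex(ord(s)))            # 0x20ac -- the code point itself, independent of encoding
print(s.encode("utf-8"))      # b'\xe2\x82\xac' -- three bytes in UTF-8
print(s.encode("utf-16-be"))  # b' \xac' -- two bytes in UTF-16 (0x20 0xAC)
print(s.encode("utf-32-be"))  # b'\x00\x00 \xac' -- four bytes in UTF-32
</syntaxhighlight>

Each format represents the identical code point; they differ only in how the integer is serialized into bytes for storage and interchange.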