In computing, Unicode is the international standard whose goal is to provide a means of encoding the text of every document people want to store in computers. This includes all scripts in active use today, many scripts known only to scholars, and symbols that do not strictly belong to any script, such as those of mathematics, linguistics and APL.
The creation of Unicode was an ambitious project to replace existing character sets, many of which are limited in size and problematic in multilingual environments. Despite technical problems, limitations and criticism of its process, Unicode is today considered the most complete character set and one of the largest, and it has become the dominant encoding scheme for the internationalization of software and for multilingual environments. Many recent standards, such as XML, and much system software, such as operating systems, have adopted Unicode as the underlying scheme for representing text. Still, Unicode is not used to write documents as widely as anticipated; many documents stored on computers, for instance, are still represented in other character sets.
To address these shortcomings, Unicode is revised periodically, adding more characters and increasing the number of characters that can potentially be represented.
Origin and development
It is the explicit aim of Unicode to transcend the limitations of traditional character encodings, such as those defined by the ISO 8859 standard, which are used in the various countries of the world but are largely incompatible with each other. One problem with traditional character encodings is that they allow for bilingual computer processing (usually using Latin characters and the local script), but not for multilingual computer processing (processing of arbitrary languages mixed with each other).
Unicode, by intent, encodes the underlying characters and not variant glyphs of those characters. In the case of Chinese characters, this sometimes leads to controversies over what is the underlying character and what is a variant glyph (see Han unification).
Unicode aims to provide a code point for each character, but not for each glyph—or to put this in more common (but less accurate) terms, Unicode aims to provide a unique number for each letter, without regard to typographic variations used by printers.
This simple aim is greatly complicated by another aim, which is to provide lossless conversion amongst different existing encodings in order to ease the transition.
The Unicode standard also includes a number of related items, such as character properties, text normalisation forms, and bidirectional display order (for the correct display of text containing both right-to-left scripts, such as Arabic or Hebrew, and left-to-right scripts).
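As a minimal illustration of the per-character data the standard defines, the following sketch (in Python, which is used here purely as an example and is not part of the standard) queries a few of these properties through the standard unicodedata module; the exact values assume a reasonably recent Unicode Character Database.

 import unicodedata

 # Query a few of the character properties defined by the Unicode standard.
 for ch in ("A", "\u0663", "\u05d0"):
     print(ch,
           hex(ord(ch)),
           unicodedata.name(ch),
           unicodedata.category(ch),       # general category, e.g. Lu, Nd, Lo
           unicodedata.bidirectional(ch))  # bidirectional class, e.g. L, AN, R

 # A  0x41   LATIN CAPITAL LETTER A    Lu  L
 # ٣  0x663  ARABIC-INDIC DIGIT THREE  Nd  AN
 # א  0x5d0  HEBREW LETTER ALEF        Lo  R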
In 1997 a proposal was made by Michael Everson to encode the characters of the Klingon language in Plane 1 of ISO/IEC 10646-2. The proposal was rejected in 2001 as "inappropriate for encoding" — not because the proposal was technically faulty, but because users of Klingon normally read and write and exchange data in Latin transliteration. The elvish scripts Tengwar and Cirth from J. R. R. Tolkien's Middle-earth setting were proposed for inclusion in Plane 1 in 1993. The draft was withdrawn to incorporate changes suggested by Tolkienists, and is as of 2004 still under consideration.
Unicode revision history
Mapping and encodings
So far, it has only been said that Unicode is a means of assigning a unique number to each character used by humans in written language. How these numbers are stored in text processing is another matter; problems arise from the fact that much software in the West has so far been written to deal only with 8-bit character encodings, and Unicode support has been added only slowly in recent years.
The internal logic of much 8-bit legacy software typically permits only 8 bits for each character, making it impossible to use more than 256 code points without special processing. Several mechanisms have therefore been suggested to implement Unicode; which one is chosen depends on available storage space, source code compatibility, and interoperability with other systems.
The mapping methods are called Unicode Transformation Formats (UTF); among them are UTF-32, UTF-16, UTF-8 and UTF-7. The numbers indicate the number of bits in one unit. In UTF-32, one unit is enough for any character; in the other encodings, a variable number of units is used per character.
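A minimal sketch (in Python, purely as an illustration) of how the same character maps to different unit sequences under three of these encodings:

 # The euro sign, U+20AC, encoded under three Unicode Transformation Formats.
 char = "\u20ac"  # €
 for encoding in ("utf-8", "utf-16-be", "utf-32-be"):
     units = " ".join(f"{b:02X}" for b in char.encode(encoding))
     print(f"{encoding:10s} {units}")

 # utf-8      E2 82 AC      (three 8-bit units)
 # utf-16-be  20 AC         (one 16-bit unit)
 # utf-32-be  00 00 20 AC   (one 32-bit unit)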
Unicode byte order marks are often used at the beginning of text documents. The byte order mark is code point U+FEFF, which has the useful property of being unambiguously interpretable regardless of which Unicode encoding is used: the bytes FE and FF never appear in UTF-8, U+FFFE (the result of byte-swapping U+FEFF) is not a legal character, and U+FEFF is a zero-width no-break space (essentially a character with no effect and no appearance, except that it prevents the formation of ligatures). The same character converted to UTF-8 becomes the byte sequence EF BB BF.
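As an illustrative sketch (the helper function is hypothetical, written here in Python), the encoding of a byte stream can be guessed from such a byte order mark when one is present:

 def guess_encoding_from_bom(data):
     """Guess a Unicode encoding from a leading byte order mark (U+FEFF), if any."""
     # The UTF-32 marks must be tested first, since FF FE is a prefix of FF FE 00 00.
     if data.startswith(b"\x00\x00\xfe\xff"):
         return "utf-32-be"
     if data.startswith(b"\xff\xfe\x00\x00"):
         return "utf-32-le"
     if data.startswith(b"\xef\xbb\xbf"):
         return "utf-8"
     if data.startswith(b"\xfe\xff"):
         return "utf-16-be"
     if data.startswith(b"\xff\xfe"):
         return "utf-16-le"
     return None  # no BOM: the encoding has to be known from other context

 print(guess_encoding_from_bom("\ufeffhello".encode("utf-8")))      # utf-8
 print(guess_encoding_from_bom("\ufeffhello".encode("utf-16-be")))  # utf-16-be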
See also: Mapping of Unicode characters
Ready-made vs. composite characters
Unicode includes a mechanism for modifying character shape that greatly extends the supported glyph repertoire: combining diacritical marks. These are inserted after the main character, and several combining diacritics may be stacked over the same character. However, for reasons of compatibility, Unicode also includes a large quantity of precomposed characters, so in many cases there are several ways of encoding what is conceptually the same character. To deal with this, Unicode provides the mechanism of canonical equivalence.
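A minimal sketch of canonical equivalence in practice, using Python's standard unicodedata module to normalize a precomposed character and its base-plus-combining-mark spelling to a common form:

 import unicodedata

 precomposed = "\u00e9"   # é, LATIN SMALL LETTER E WITH ACUTE
 combining   = "e\u0301"  # e followed by COMBINING ACUTE ACCENT

 print(precomposed == combining)                                # False: different code point sequences
 print(unicodedata.normalize("NFC", combining) == precomposed)  # True: composed normalization form
 print(unicodedata.normalize("NFD", precomposed) == combining)  # True: decomposed normalization form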
The situation is similar with Hangul: Unicode provides a mechanism for composing Hangul syllables from Hangul Jamo, but the precomposed Hangul syllables (11,172 of them) are also provided.
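A short sketch of the arithmetic behind this composition, following the formula given in the Unicode standard (the helper name is made up for illustration): a leading consonant, a vowel and an optional trailing consonant Jamo map onto one syllable in the precomposed block starting at U+AC00.

 S_BASE, L_BASE, V_BASE, T_BASE = 0xAC00, 0x1100, 0x1161, 0x11A7
 V_COUNT, T_COUNT = 21, 28

 def compose_hangul(l, v, t=None):
     """Compose a leading Jamo, a vowel Jamo and an optional trailing Jamo into one syllable."""
     l_index = ord(l) - L_BASE
     v_index = ord(v) - V_BASE
     t_index = ord(t) - T_BASE if t else 0
     return chr(S_BASE + (l_index * V_COUNT + v_index) * T_COUNT + t_index)

 # U+1112 (hieuh) + U+1161 (a) + U+11AB (nieun) compose to 한 (U+D55C).
 print(compose_hangul("\u1112", "\u1161", "\u11ab"))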
The CJK ideographs are currently encoded only in their precomposed form. Still, most of these ideographs are evidently made up of simpler elements, so in principle it would be possible to decompose them just as is done with Hangul. This would greatly reduce the number of required code points, while allowing virtually every conceivable ideograph to be displayed (and so doing away with the problems of Han unification). A similar idea is used by some input methods, such as Cangjie and Wubi. However, attempts to do this for character encoding have stumbled over the fact that ideographs do not decompose as simply or as regularly as they seem to.
Combining marks, like the complex script shaping required to properly render Arabic text and many other scripts, are usually dependent on complex font technologies, like OpenType (by Adobe and Microsoft), Graphite (by SIL International), and AAT (by Apple), by which a font designer includes instructions in a font telling software how to properly output different character sequences. Another method sometimes employed in fixed-width fonts is to place the combining mark's glyph before its own left sidebearing; this method, however, only works for some diacritics and stacking will not occur properly.
As of 2004, most software still cannot reliably handle many features not supported by older font formats, so combining characters generally do not work correctly. In theory, ḗ (precomposed e with macron and acute above) and ḗ (e followed by the combining macron above and combining acute above) should be identical in appearance, both giving an e with macron and acute accent, but in practice their appearance can vary greatly across software applications.
Similarly, combining underdots, as needed in the romanization of Indic scripts, are often placed incorrectly.
Of course, this is in fact not a weakness in Unicode itself, but only uncovers gaps in rendering technology and fonts.
Process and issues
The Unicode Consortium, based in California, is the organization that develops the Unicode standard. Membership is open to any company or individual willing to pay the dues. Members include virtually all of the main computer software and hardware companies with any interest in text-processing standards, such as Apple Computer, Microsoft, IBM, Xerox, HP, Adobe Systems and many others.
The Consortium first published "The Unicode Standard" (ISBN 0321185781) in 1991, and continues to develop standards based on that original work. Unicode was developed in conjunction with the International Organization for Standardization and it shares its character repertoire with ISO/IEC 10646. Unicode and ISO/IEC 10646 are equivalent as character encodings, but The Unicode Standard contains much more information for implementers, covering, in depth, topics such as bitwise encoding, collation, and rendering, and enumerating a multitude of character properties, including those needed for BiDi support. The two standards also have slightly different terminology.
A number of issues have been raised concerning Unicode. Some users, particularly in Japan, oppose Unicode in general, citing technical limitations (http://www.hastingsresearch.com/net/04-unicode-limitations.shtml) (see also the response at http://slashdot.org/features/01/06/06/0132203.shtml) and political problems in its process. Unicode is also criticized for failing to allow for older and alternative forms of kanji, which complicates the processing of ancient Japanese text and uncommon Japanese names. There have in fact been several attempts to create alternatives to Unicode (http://www-106.ibm.com/developerworks/unicode/library/u-secret.html). Among them are TRON (which, although not widely adopted in Japan, is favoured by some who need to handle historical Japanese text), UTF-2000 and the Giga Character Set (GCS).
Some observers note that these complaints may stem in part from the fact that the consortium was initially organized mostly by US manufacturers such as Microsoft. Among the most controversial issues is Han unification: a Chinese character that was adopted into Japanese or Korean, and changed slightly there, is treated by Unicode as a single character whose variations are a matter of font style.
Thai language support has been criticized for its illogical ordering of Thai characters. This complication arises because Unicode inherited the ordering of the Thai Industrial Standard, which worked in the same way, and it complicates the Unicode collation process (http://www-106.ibm.com/developerworks/unicode/library/u-secret.html).
Unicode in use
Despite technical problems, limitations and criticism of its process, Unicode has emerged as the dominant encoding scheme. Microsoft Windows NT and its descendants Windows 2000 and Windows XP make extensive use of Unicode, more specifically UTF-16, as the internal representation of text. UNIX-like operating systems such as GNU/Linux, BSD and Mac OS X have adopted Unicode, more specifically UTF-8, as the basis for representing multilingual text.
MIME defines two different mechanisms for encoding non-ASCII characters in e-mail, depending on whether the characters appear in e-mail headers, such as the "Subject:", or in the text body of the message. In both cases, the original character set is identified as well as a transfer encoding. For e-mail transmission of Unicode, the UTF-8 character set and the Base64 transfer encoding are recommended. The details of the two mechanisms are specified in the MIME standards and are generally hidden from users of e-mail software.
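A brief sketch of both mechanisms, using Python's standard email package (the sample strings are arbitrary, and the exact encoded output depends on the library's choice between quoted-printable and Base64 for headers):

 from email.header import Header
 from email.mime.text import MIMEText

 # Non-ASCII text in a header becomes an RFC 2047 "encoded word".
 subject = Header("Grüße aus Zürich", charset="utf-8")
 print(subject.encode())   # e.g. =?utf-8?q?...?= or =?utf-8?b?...?=

 # Non-ASCII text in the body is declared as UTF-8 with a transfer encoding.
 msg = MIMEText("Многоязычный текст", _charset="utf-8")
 print(msg["Content-Transfer-Encoding"])   # base64
 print(msg.get_payload()[:32])             # Base64-encoded body text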
The adoption of Unicode in e-mail has been very slow. Most East-Asian text is still encoded in a local encoding such as Shift-JIS, and many commonly used e-mail programs still cannot handle Unicode data correctly. The situation is not expected to change in the foreseeable future.
Although syntax rules may affect the order in which characters are allowed to appear, both HTML 4.0 and XML 1.0 documents are, by definition, composed of characters from the entire range of Unicode code points, minus only a handful of disallowed control characters and the permanently unassignable code points D800-DFFF, FFFE-FFFF and 110000 and above. These characters are written either directly as bytes according to the document's encoding, if the encoding supports them, or as numeric character references based on the character's Unicode code point, as long as the document's encoding supports the digits and symbols required to write the references (all encodings approved for use on the Internet do). For example, the numeric character references &#916; and &#x394; both denote the Greek capital letter delta, Δ, regardless of the document's own encoding.
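A small sketch (Python, for illustration only) of how such references map onto code points independently of the document's byte encoding:

 import html

 # Decimal and hexadecimal references to the same code point, U+0394.
 print(html.unescape("&#916; &#x394;"))    # Δ Δ

 # Producing references for an arbitrary character, here U+4E2D.
 ch = "\u4e2d"
 print(f"&#{ord(ch)};  &#x{ord(ch):X};")   # &#20013;  &#x4E2D;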
There are thousands of fonts on the market, but fewer than a dozen attempt to support the majority of Unicode's character repertoire; these fonts are sometimes described as pan-Unicode. Instead, Unicode-based fonts typically focus on supporting only basic ASCII and particular scripts or sets of characters or symbols. There are several reasons for this: applications and documents rarely need to render characters from more than one or two writing systems; fonts tend to be resource hogs in computing environments; and operating systems and applications are becoming increasingly intelligent about obtaining glyph information from separate font files as needed. Furthermore, it is a monumental task to design a consistent set of rendering instructions for tens of thousands of glyphs; such a venture passes the point of diminishing returns for most typefaces.
Unicode characters that cannot be rendered are most often displayed as an open rectangle, to indicate the position of the unrecognized character. Some attempts have been made to provide more information about such characters: the Apple LastResort font displays an ersatz glyph indicating the Unicode range of the character, and the SIL Unicode fallback font (http://scripts.sil.org/cms/scripts/page.php?site_id=nrsi&item_id=UnicodeBMPFallbackFont) displays a box showing the hexadecimal scalar value of the character.
See also: multilingual text rendering engine