
1 Murray Sargent III, Microsoft Corporation, Text Services Group, Word: Tips & Tricks on Editing and Displaying Unicode Text

2 What's RichEdit?
- RichEdit 3.0 is a set of plain/rich-text, single/multiline, Unicode/ANSI edit controls in a single worldwide binary
- Multilevel undo, message & COM interfaces, Word compatibility, pretty rich text
- Outline view, zoom, font binding, latest in IME support, and rich complex-script support (BiDi, Indic, and Thai)
- Next version: pagination, nested tables, tight wrap, 2D math (maybe!)…
- Clients: Office dialogs, WordPad, Outlook RTF editor, Pocket Word, …

3 Introduction
Discuss some problems in manipulating multilingual Unicode text:
- Multiple fonts to display Unicode plain text
- Neutral characters; deunifying characters that look different in different scripts
- Working with complex scripts, like Arabic
- Using keyboards to enter Unicode characters conveniently
- Maintaining backward compatibility with previous character sets
- Navigating through text that includes multicode characters
- Implementing glyph variants and surrogate pairs

4 Font Binding
- Most Unicode characters belong to scripts
- Associate a font bundle with each position in a document
- When inserting characters, assign each one to a script (see the sketch below)
- For CJK, check surrounding characters for Kana and Hangul as clues to use Japanese or Korean fonts instead of Chinese
- Assign scripts to neutrals and digits
- Keyboard language, and especially IMEs, provide strong binding clues
- Format inserted characters with the fonts assigned to their scripts; check the current font to see if it supports the required script
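To make the binding step concrete, here is a minimal sketch of the idea on this slide. The names (ScriptId, FontBundle, ClassifyChar, BindFont) and the range boundaries are illustrative assumptions, not RichEdit's actual interfaces.

```cpp
// Minimal font-binding sketch (illustrative names, not RichEdit's API).
// Each inserted character is classified into a script; the run is then
// formatted with the face assigned to that script in the font bundle.
#include <map>
#include <string>

enum ScriptId { LATIN, GREEK, CYRILLIC, HAN, KANA, HANGUL, NEUTRAL };

// Hypothetical per-position font bundle: one face name per script.
using FontBundle = std::map<ScriptId, std::wstring>;

static ScriptId ClassifyChar(wchar_t ch) {
    if (ch < 0x0370)                  return LATIN;    // Basic Latin/Latin-1/Extended
    if (ch < 0x0400)                  return GREEK;
    if (ch < 0x0530)                  return CYRILLIC;
    if (ch >= 0x3040 && ch < 0x3100)  return KANA;     // clue: prefer a Japanese font
    if (ch >= 0xAC00 && ch < 0xD7A4)  return HANGUL;   // clue: prefer a Korean font
    if (ch >= 0x4E00 && ch < 0xA000)  return HAN;      // ambiguous: check neighbors for Kana/Hangul
    return NEUTRAL;                                    // take script from surrounding text
}

// Pick the face to format an inserted character with.
std::wstring BindFont(wchar_t ch, ScriptId surrounding, const FontBundle& bundle) {
    ScriptId sid = ClassifyChar(ch);
    if (sid == NEUTRAL) sid = surrounding;   // neutrals inherit the surrounding script
    auto it = bundle.find(sid);
    return it != bundle.end() ? it->second : L"Arial";  // fall back to the current font
}
```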

5 Font Binding Problems
- Character not in any script, e.g., mathematical symbols, arrows, dingbats: use the current font or bind to a font whose font signature covers the appropriate Unicode range. Or invent a new script ID
- Font signature may be zero, i.e., unsupported. Call EnumFontFamiliesEx() to enumerate all charsets for the facename (sketched below)
- Font signature may claim support for Unicode ranges, but miss some characters. The cmap reveals support on a per-code basis (slow to access)
- Ironically, charset or codepage is a good script ID
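A hedged sketch of the EnumFontFamiliesEx() fallback mentioned above: when the font signature is unhelpful, enumerate every charset registered for the face name. The helper names and the std::set collection are illustrative.

```cpp
// Sketch: enumerate all charsets available for a given face name when the
// font signature is unhelpful (e.g., reported as zero).
#include <windows.h>
#include <set>

static int CALLBACK CharsetProc(const LOGFONTW* lf, const TEXTMETRICW*,
                                DWORD /*fontType*/, LPARAM lParam) {
    reinterpret_cast<std::set<BYTE>*>(lParam)->insert(lf->lfCharSet);
    return 1;  // continue enumeration
}

std::set<BYTE> EnumerateCharsets(HDC hdc, const wchar_t* faceName) {
    std::set<BYTE> charsets;
    LOGFONTW lf = {};
    lf.lfCharSet = DEFAULT_CHARSET;            // ask for every charset of this face
    lstrcpynW(lf.lfFaceName, faceName, LF_FACESIZE);
    EnumFontFamiliesExW(hdc, &lf, CharsetProc, reinterpret_cast<LPARAM>(&charsets), 0);
    return charsets;                           // e.g., {ANSI_CHARSET, SHIFTJIS_CHARSET, ...}
}
```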

6 Language Detection & Font Binding
- Korean and Japanese are often easy to spot because of Hangul and Kana characters, respectively
- For CJK, can convert back to a codepage and see if errors occur (Ken Lunde's suggestion; see the sketch below)
- For proofing purposes, accurate language identification is needed. For font binding, script identification is usually sufficient
- Typically more than one language corresponds to a script, e.g., Latin script. Essentially only one uses the Korean script
- Natural-language-processing techniques allow good language identification if more than a few words are involved, e.g., a sentence
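Ken Lunde's round-trip check can be approximated with WideCharToMultiByte() and its "default character used" flag. The sketch below is an assumption about how one might code it; the choice of codepages to probe (932, 936, 949, 950) is left to the caller.

```cpp
// Sketch: test whether Unicode text survives a round trip to a legacy CJK
// codepage (e.g., 932 Shift-JIS, 936 GBK, 949 UHC, 950 Big5).  If the default
// character was substituted, the text probably doesn't belong to that codepage.
#include <windows.h>
#include <vector>

bool FitsInCodepage(UINT codepage, const wchar_t* text, int len) {
    // Size query first, then convert into a real buffer so lpUsedDefaultChar
    // reflects the actual conversion.
    int cb = WideCharToMultiByte(codepage, 0, text, len, nullptr, 0, nullptr, nullptr);
    if (cb <= 0) return false;
    std::vector<char> buf(cb);
    BOOL usedDefault = FALSE;
    cb = WideCharToMultiByte(codepage, 0, text, len, buf.data(), cb, nullptr, &usedDefault);
    return cb > 0 && !usedDefault;
}

// Usage: if FitsInCodepage(932, s, n) succeeds but FitsInCodepage(936, s, n)
// fails, the text is more likely Japanese than Simplified Chinese.
```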

7 Big Fonts
- Bitstream Cyberbit has most Unicode characters (a "big" font)
- Some big fonts have CJK glyph variants for Japanese vs Simplified Chinese vs Traditional Chinese vs Korean
- Font-binding code needs to avoid unnecessary (and unwanted) font binding with such fonts
- Recognize such fonts by using the font signature Unicode ranges and script (codepage) information
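One way to recognize such fonts, sketched under assumptions: read the FONTSIGNATURE with GetTextCharsetInfo() and treat a font whose Unicode-subset bits claim many ranges as a big font. The bit-count threshold is invented for illustration.

```cpp
// Sketch: heuristic test for a "big" font.  GetTextCharsetInfo() fills the
// FONTSIGNATURE of the font selected into the DC; if its Unicode-subset bits
// claim many ranges, treat it as a big font and avoid rebinding away from it
// unnecessarily.
#include <windows.h>

static int CountBits(DWORD v) {
    int n = 0;
    for (; v; v &= v - 1) ++n;
    return n;
}

bool LooksLikeBigFont(HDC hdcWithFontSelected) {
    FONTSIGNATURE fs = {};
    GetTextCharsetInfo(hdcWithFontSelected, &fs, 0);   // fills Unicode/codepage bitfields
    int ranges = CountBits(fs.fsUsb[0]) + CountBits(fs.fsUsb[1]) +
                 CountBits(fs.fsUsb[2]) + CountBits(fs.fsUsb[3]);
    return ranges > 40;   // assumed threshold: claims far more than one script
}
```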

8 Font Sizing
- In dialogs, 8-pt Latin characters are commonly used
- 8-pt Chinese characters are hard to read, so it's better to use 9-pt Chinese in combination with 8-pt Latin characters
- Latin characters have bigger descenders than Chinese characters, since the latter only need room for an underline
- Combining 8-pt Latin characters with 9-pt Chinese characters and keeping the same baseline increases the line height to 9 pts plus extra height for the Latin descender
- The result is more like 10 points: it shifts text too high in a dialog box originally designed to handle one language

9 Complex Scripts
- Unicode covers many complex scripts, e.g., Arabic, Thai
- Complex scripts require a layout engine that translates character codes to glyph indices (often referencing ligatures)
- A general Unicode text engine has to have access to a complex-script layout engine
- At the previous Unicode conference David Brown discussed such an engine, Uniscribe, which runs on all Windows platforms and is shipped with recent versions of Internet Explorer
- For performance: only use the complex-script engine if needed (see the sketch below)
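A sketch of the "only use the complex-script engine if needed" point, assuming Uniscribe's ScriptIsComplex()/ScriptItemize() entry points; the shaping and rendering calls are elided.

```cpp
// Sketch: fall back to plain ExtTextOutW when the run contains no complex
// script, and hand it to Uniscribe (usp10) only when needed.
#include <windows.h>
#include <usp10.h>
#pragma comment(lib, "usp10.lib")

void DrawRun(HDC hdc, int x, int y, const wchar_t* text, int len) {
    // S_OK means the run needs complex-script processing; S_FALSE means it doesn't.
    if (ScriptIsComplex(text, len, SIC_COMPLEX) != S_OK) {
        ExtTextOutW(hdc, x, y, 0, nullptr, text, len, nullptr);
        return;
    }
    // Complex path: itemize into script runs, then shape/place/draw each one
    // (ScriptShape/ScriptPlace/ScriptTextOut, omitted here for brevity).
    SCRIPT_ITEM items[64];   // ScriptItemize writes up to cMaxItems + 1 entries
    int cItems = 0;
    if (SUCCEEDED(ScriptItemize(text, len, 63, nullptr, nullptr, items, &cItems))) {
        // ... shape and render each items[i] ...
    }
}
```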

10 Neutrals
- Many characters are neutral or multiscript and can be rendered with many different fonts
- E.g., blank, ASCII punctuation, ASCII in general, other punctuation, and decimal digits
- Some scripts render neutrals very differently than others, and Unicode's occasional over-unification has complicated the choice of font to use
- E.g., a Western ellipsis consists of three dots on the baseline, while a Japanese ellipsis has three raised dots
- The Unicode Standard gives detailed rules for neutrals in BiDi text
- Simple rule: neutrals surrounded by nonneutral characters of the same kind should be rendered with the font of the nonneutrals (sketched below)
- Compatibility characters, such as ASCII fullwidth characters, reveal which script they belong to
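A minimal sketch of that simple rule, with IsNeutral(), the Script enum, and the ScriptOf callback as illustrative stand-ins for a real classifier:

```cpp
// Sketch of the simple neutral rule: a run of neutrals bracketed by nonneutral
// characters of the same script is rendered with that script's font.
#include <cwctype>

enum Script { SCRIPT_NONE, SCRIPT_LATIN, SCRIPT_JAPANESE /* ... */ };

static bool IsNeutral(wchar_t ch) {
    return ch == L' ' || iswpunct(ch) || (ch >= L'0' && ch <= L'9');
}

// Resolve the script to use for the neutral at position i of text[0..len).
Script ResolveNeutral(const wchar_t* text, int len, int i, Script (*ScriptOf)(wchar_t)) {
    Script before = SCRIPT_NONE, after = SCRIPT_NONE;
    for (int j = i - 1; j >= 0; --j)
        if (!IsNeutral(text[j])) { before = ScriptOf(text[j]); break; }
    for (int j = i + 1; j < len; ++j)
        if (!IsNeutral(text[j])) { after = ScriptOf(text[j]); break; }
    if (before != SCRIPT_NONE && before == after) return before;  // both sides agree
    return before != SCRIPT_NONE ? before : after;  // else fall back to the nearer side
}
```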

11 Backward Compatibility
- A Unicode text engine has to be able to import and export text in other standards, which are defined by their codepages
- Given non-Unicode plain text, which codepage should one use to convert to/from Unicode?
- On localized systems, the system codepage is a good bet
- In multilingual text, you can enter text using keyboards in a variety of languages that need either Unicode or multiple codepages
- For searching text, the best choice seems to be the current keyboard codepage
- If text begins with a UTF-8 BOM, use UTF-8 conversion
- If text begins with a rich-text header, e.g., {\rtf or <!doctype html, use the appropriate conversion routine (sketched below)
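A sketch of the import decision order just described (BOM, then rich-text headers, then codepage fallback); the enum and helper are illustrative, and real sniffing would handle more headers and encodings.

```cpp
// Sketch of the import decision: check for a UTF-8 BOM, then for rich-text
// headers, then fall back to a codepage guess (keyboard codepage for typed or
// searched text, system codepage otherwise).  Names are illustrative.
#include <cstring>

enum ImportFormat { FMT_UTF8, FMT_RTF, FMT_HTML, FMT_CODEPAGE };

ImportFormat DetectFormat(const char* data, size_t len) {
    if (len >= 3 && memcmp(data, "\xEF\xBB\xBF", 3) == 0)
        return FMT_UTF8;                          // UTF-8 byte order mark
    if (len >= 5 && _strnicmp(data, "{\\rtf", 5) == 0)
        return FMT_RTF;                           // rich-text header
    if (len >= 9 && _strnicmp(data, "<!doctype", 9) == 0)
        return FMT_HTML;                          // HTML header
    return FMT_CODEPAGE;                          // convert via keyboard/system codepage
}
```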

12 Backward Compatibility (cont)
- Need a little rich-text functionality (minimal language tagging) to display Unicode plain text unambiguously in some CJK scenarios
- This functionality handles font choices and language-dependent glyph variants
- There can be a disparity between typed text and set text
- When a user types in text using a keyboard charset, the edit engine knows the charset and therefore can insert accurate Unicode text, including which CJK glyph variant to use
- The client gets the text as pure ANSI (or Unicode) text without script clues
- It would be handy to have script tags. Language tags also work, but are overkill unless proofing tools are to be supported

13 Unicode on Win95/98
- Win95/98 supports a limited subset of Unicode text functions
- ExtTextOutW() works in most cases, but not on Win95J or with metafiles, so convert back to ANSI whenever possible
- Device drivers may not handle Unicode text
- With TrueType it's possible to force downloading of fonts and use Unicode more reliably
- A number of GDI text APIs aren't implemented, e.g., GetGlyphOutlineW()
- GetStringTypeExW is stubbed out, so all references to character property tables have to go through a codepage translation (WideCharToMultiByte()); see the sketch below
- Text boxes, list boxes, and comboboxes are all ANSI; use RichEdit for Unicode
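A sketch of the codepage-translation workaround for the stubbed GetStringTypeExW(), reduced to a single character for clarity; real code would batch the conversion and map DBCS byte positions back to wide-character positions.

```cpp
// Sketch of the Win95/98 workaround: since GetStringTypeExW() is a stub there,
// translate the character to the active codepage first and call the ANSI
// version to get its C1 properties.  Single-character case only, for clarity.
#include <windows.h>

bool IsUnicodeCharAlpha(wchar_t ch) {
    char mb[4];
    int cb = WideCharToMultiByte(CP_ACP, 0, &ch, 1, mb, sizeof(mb), nullptr, nullptr);
    if (cb <= 0)
        return false;                       // not representable in the ANSI codepage
    WORD types[4] = {};
    if (!GetStringTypeExA(LOCALE_USER_DEFAULT, CT_CTYPE1, mb, cb, types))
        return false;
    return (types[0] & C1_ALPHA) != 0;      // first element covers the (lead) byte
}
```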

14 Unicode Keyboard Input
- National keyboards provide ways to input many Unicode characters, e.g., Greek, Russian, and all ordinary European text
- IMEs (input method editors) let you type phonetic characters to get a partially composed character sequence. Then type a blank to request composition. If the composition is reasonably unique, you get a fully composed character; else you get a menu of possible resolutions
- Unicode hex input: type a Unicode hexadecimal code into the text, then type a special hot key, e.g., Alt+x, to convert the hex to a Unicode character (sketched below)
- Type Alt+X to replace a character by its hexadecimal number
- Input sequence checking: Vietnamese, Thai, and Indic languages don't allow all Unicode sequences to be valid and utilize special input-sequence-checking code to disallow illegal sequences. For example, Vietnamese only allows tone marks on vowels
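A sketch of the Alt+x hex-to-character conversion, assuming the caller supplies the text and caret position and performs the actual replacement; the 6-digit limit and the helper name are assumptions.

```cpp
// Sketch of Alt+x handling: scan hex digits immediately before the caret,
// convert them to a code point, and return the UTF-16 form that should replace
// them.  Buffer/selection plumbing is omitted.
#include <string>
#include <cwctype>
#include <cstdlib>

std::wstring HexToUtf16(const std::wstring& text, size_t caret, size_t* digitsUsed) {
    size_t start = caret;
    while (start > 0 && iswxdigit(text[start - 1]) && caret - start < 6)
        --start;                                          // at most 6 hex digits (U+10FFFF)
    if (start == caret) { *digitsUsed = 0; return L""; }  // no hex digits before the caret
    unsigned long cp = wcstoul(text.substr(start, caret - start).c_str(), nullptr, 16);
    if (cp > 0x10FFFF) { *digitsUsed = 0; return L""; }   // not a valid code point
    *digitsUsed = caret - start;
    std::wstring out;
    if (cp < 0x10000) {
        out += static_cast<wchar_t>(cp);
    } else {                                              // encode as a surrogate pair
        cp -= 0x10000;
        out += static_cast<wchar_t>(0xD800 + (cp >> 10));
        out += static_cast<wchar_t>(0xDC00 + (cp & 0x3FF));
    }
    return out;
}
```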

15 Unicode Surrogates
- Discuss 3 display models that could enable Win9x/WinNTx-based applications to display higher-plane characters (those in the 16 planes above the BMP). Ideas are still under development...
- The first uses a plane index and a 16-bit offset
- The second uses a flat 32-bit index
- The third uses surrogate-pair ligatures
- The models aren't mutually exclusive, since they involve different cmaps (compressed tables used to convert codepoints to glyphs)
- All assume higher-plane characters are stored as standard Unicode surrogate pairs (see the arithmetic below)
- Alternative representations include straight 32-bit characters and UTF-8, but they aren't as practical
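For reference, the surrogate-pair arithmetic that all three models assume for the backing store:

```cpp
// Sketch of the surrogate-pair arithmetic: the backing store holds UTF-16, and
// a higher-plane code point is recovered from its lead (0xD800-0xDBFF) and
// trail (0xDC00-0xDFFF) surrogates.
inline bool IsLeadSurrogate(wchar_t c)  { return c >= 0xD800 && c <= 0xDBFF; }
inline bool IsTrailSurrogate(wchar_t c) { return c >= 0xDC00 && c <= 0xDFFF; }

// Combine a surrogate pair into a code point in planes 1-16.
inline unsigned long CodePointFromPair(wchar_t lead, wchar_t trail) {
    return 0x10000 + ((static_cast<unsigned long>(lead) - 0xD800) << 10)
                   + (trail - 0xDC00);
}
// Example: U+20000 is stored as the pair 0xD840 0xDC00.
```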

16 Unicode Surrogates (cont)
- Using two 16-bit surrogates to represent a single character complicates more than measurement and display of characters:
- Arrow-key handlers and other methods that change character position must avoid ending up between lead and trail surrogates (sketched below)
- Input methods need to map to surrogate pairs
- Case changes, line-breaking rules, sorting, file formats, and backing-store manipulations in general have to recognize and deal with pairs
- Surrogate code ranges make them easy to work with relative to multibyte encoding systems
- All three display models assume that GDI remains unchanged (need to be able to run on OSs already in the field)
- Also assume that 16-bit glyph indices are sufficient, so the TrueType rasterizer doesn't need to be revised
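A sketch of the arrow-key adjustment described above, assuming a UTF-16 backing store addressed by code-unit position:

```cpp
// Sketch: keep the caret from landing between a lead and a trail surrogate.
// After any positional move, snap forward or backward off a trail surrogate.
inline bool IsLead(wchar_t c)  { return c >= 0xD800 && c <= 0xDBFF; }
inline bool IsTrail(wchar_t c) { return c >= 0xDC00 && c <= 0xDFFF; }

// pos is a tentative caret position in text[0..len]; movingRight tells which
// way the user was going, so we snap in that direction.
size_t SnapToCharBoundary(const wchar_t* text, size_t len, size_t pos, bool movingRight) {
    if (pos > 0 && pos < len && IsTrail(text[pos]) && IsLead(text[pos - 1]))
        pos = movingRight ? pos + 1 : pos - 1;   // step over the whole pair
    return pos;
}
```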

17 Surrogate Planar Model
- The characters in a font all belong to a particular plane
- No changes required to the OS. Applications extend font-binding logic to handle font switches to the appropriate planes
- Character indices remain 16-bit, which allows the ExtTextOutW family to be used directly (see the split below)
- The model is easy for apps to use today in a platform-independent way if no complex scripts are involved
- Complex scripts need a layout engine. Then applications can ignore the model issue, since the layout engine handles OS/font interactions
- Truncated 16-bit code indices may map codes in higher planes to common control or neutral codes
- For surrogate-unaware text-processing code, some ranges would have to be reserved in the upper planes
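The planar split itself is just shift-and-mask arithmetic; a small sketch, with the struct name as an illustrative assumption:

```cpp
// Sketch of the planar model's split: the plane index drives font binding
// (one font per plane) and the low 16 bits are what get passed to the
// ExtTextOutW family as a "character" in that plane's font.
struct PlanarChar {
    int     plane;    // 0 = BMP, 1-16 = supplementary planes
    wchar_t offset;   // 16-bit index within the plane
};

inline PlanarChar SplitCodePoint(unsigned long cp) {
    return { static_cast<int>(cp >> 16), static_cast<wchar_t>(cp & 0xFFFF) };
}
// Example: U+20000 -> plane 2, offset 0x0000.
```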

18 Surrogate Flat and Ligature Models
- The flat 32-bit model uses a 32-bit code to index into a new 32-bit cmap in the font file to translate the codes to 16-bit glyph indices
- The glyph indices are used to access the TextOut family
- The method is too tricky for most applications to handle directly: need a surrogate-aware version of Uniscribe
- Font binding is done using the font signature
- Alternatively, an application could use 32-bit character strings with a 32-bit TextOut family housed in a platform-independent component
- The ligature model requires use of a complex-script engine to access ligature tables

19 Comparison of Surrogate Models
- Ease of implementation: for simple scripts, the planar model is easiest. In a worldwide-binary environment, need Uniscribe, which can handle OS/font interactions
- Performance: code-to-glyph mapping has to be done at some point. Uniscribe is slower and more RAM-intensive than the planar model or a 32-bit TextOut component
- Flexibility: the flat and ligature models can access characters in all 17 planes, even in the same font; the planar model allows one plane per font
- Backward compatibility: the planar model only needs appropriate fonts and surrogate-aware apps to work on all Windows platforms
- The flat and ligature models require a complex-script engine or a 32-bit TextOut component to run on all Win9x/WinNTx platforms

20 Nonspacing Combining Marks
- Multicode characters (surrogate pairs, CRLFs, combining-mark and variant-tag sequences) require special display/navigation handling
- Render combining-mark sequences with standard system calls and fonts that support combining marks. Better display needs a layout engine that talks to OpenType
- Simple caret movement across combining-mark sequences prevents stopping inside a sequence (sketched below). The Backspace key deletes one mark at a time
- Mouse-cursor hit testing leaves the selection at the beginning/end of a combining-mark sequence (a more elegant model allows selection and editing of individual marks)
- Cool thing: if you can navigate past CRLF combinations, you can modify the corresponding code to handle surrogate pairs and combining-mark sequences quite easily
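A sketch of the simple caret-movement rule for combining-mark sequences; the combining-mark test is a deliberately simplified range check rather than the full Unicode property lookup:

```cpp
// Sketch of caret movement over a combining-mark sequence: moving right steps
// past the base character plus any trailing combining marks, so the caret
// never stops inside the sequence.
inline bool IsCombiningMark(wchar_t c) {
    return (c >= 0x0300 && c <= 0x036F) ||   // combining diacritical marks
           (c >= 0x0591 && c <= 0x05C7) ||   // Hebrew points (partial)
           (c >= 0x20D0 && c <= 0x20FF);     // combining marks for symbols
}

size_t MoveCaretRight(const wchar_t* text, size_t len, size_t pos) {
    if (pos >= len) return len;
    ++pos;                                   // step over the base character
    while (pos < len && IsCombiningMark(text[pos]))
        ++pos;                               // and over any combining marks on it
    return pos;
}
// Backspace, by contrast, deletes one mark at a time (per the slide), so it
// would not use this clustering.
```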

21 Glyph Variants
- Character variant: 1) different character open to future coding, 2) prescribed variant (Mongolian), 3) systematic semantic variation (different forms like italic, bold, script, Fraktur in math expressions)
- Glyph variant: 1) artistic variant: free variation (57 ampersands in the Poetica font), 2) context-preferred style (CJK language-based variants), 3) overloaded code points (U+005C: \ vs ¥), 4) historical variant: glyph changed over time
- Identity variant: two external characters map to the same Unicode character

22 Handling Glyph Variants
- A character variant is open to separate encoding. But if already used, it complicates search algorithms (Ş with cedilla vs Romanian S with comma below)
- Two approaches: inline variant marks and out-of-plane annotations
- Inline variant marks need to be ignored in some searches (sketched below)
- An out-of-plane annotation is invisible in plain text and requires more memory than an inline variant mark
- Semantically different characters, e.g., math italic b and math script b, need to be distinguishable in searches, so separate encoding or use of inline variant marks is desirable
- The current proposal for inline variant marks defines 256 standard variant codes in Plane 14 as well as 256 codes for user-defined variant codes
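A sketch of a variant-insensitive comparison. The Plane-14 range used for inline variant marks is an assumption for illustration (the proposal's exact code assignments aren't given here), and the decoding assumes a UTF-16 backing store where Plane-14 codes appear as surrogate pairs.

```cpp
// Sketch: a comparison that ignores inline variant marks during search.
// The range below is an assumed placeholder for the 256 standard plus 256
// user-defined variant codes in Plane 14 described above.
inline bool IsVariantMark(unsigned long cp) {
    return cp >= 0xE0100 && cp <= 0xE02FF;   // assumed inline-variant range
}

// Decode the code point at text[i], advancing i past it (handles pairs).
inline unsigned long NextCodePoint(const wchar_t* text, size_t len, size_t& i) {
    wchar_t c = text[i++];
    if (c >= 0xD800 && c <= 0xDBFF && i < len && text[i] >= 0xDC00 && text[i] <= 0xDFFF)
        return 0x10000 + ((static_cast<unsigned long>(c) - 0xD800) << 10)
                       + (text[i++] - 0xDC00);
    return c;
}

// Return the next code point that is not a variant mark, or 0xFFFFFFFF when
// the buffer is exhausted.
inline unsigned long NextSignificant(const wchar_t* text, size_t len, size_t& i) {
    while (i < len) {
        unsigned long cp = NextCodePoint(text, len, i);
        if (!IsVariantMark(cp)) return cp;
    }
    return 0xFFFFFFFFul;
}

// Variant-insensitive equality of two buffers.
bool EqualIgnoringVariants(const wchar_t* a, size_t lenA, const wchar_t* b, size_t lenB) {
    size_t i = 0, j = 0;
    for (;;) {
        unsigned long ca = NextSignificant(a, lenA, i);
        unsigned long cb = NextSignificant(b, lenB, j);
        if (ca != cb) return false;
        if (ca == 0xFFFFFFFFul) return true;  // both exhausted
    }
}
```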

23 Conclusions
- Have addressed issues encountered in creating Unicode editors. Issues include:
- Automatic choice of fonts for Unicode plain text
- Handling non-Unicode documents in Unicode text engines
- Ways to input Unicode text
- Combining-mark sequences, surrogate pairs, navigation in multicode text, and glyph variants
- Some ideas have been implemented in the RichEdit 3.0 control and other text engines
- Unicode surrogate pairs and glyph variants need decisions...

