[Israel.pm] Perl unicode question

Gaal Yahas gaal at forum2.org
Mon Feb 13 04:27:33 PST 2012


On Mon, Feb 13, 2012 at 1:12 PM, Issac Goldstand <margol at beamartyr.net>wrote:

>  On 13/02/2012 12:54, Gaal Yahas wrote:
>
>
> On Mon, Feb 13, 2012 at 12:30 PM, Issac Goldstand <margol at beamartyr.net>wrote:
>
>> If there's one thing I can never seem to get straight, it's character
>> encodings...
>>
>> I'm trying to parse some data from the web which can come in different
>> encodings, and write unit tests which come from static files.
>>
>> One of the strings that I'm trying to test for is "Forex Trading Avec
>> 100€"  The string is originally encoded (supposedly) in ISO-8859-1 based
>> on the header Content-Type: text/html; charset=ISO-8859-1 and presence
>> of the following META tag <meta http-equiv="Content-Type"
>> content="text/html; charset=ISO-8859-1">
>>
>>
>  When dealing with encoding problems, it's helpful to isolate the problem
> as much as you can. Every piece that reports on an encoding can get it
> wrong, and the fact that both the server and the document claim it's 8859-1
> doesn't mean they aren't lying. So start by fetching the document in raw
> form with curl or wget, and open that with "od -t x1a".
>
> That gave me hex output, but I don't see how that really helps
> (unless I got lucky and found a BOM at the start)...
>

The hex dump gives you the truth about the data on the wire. There's some
abstract stream of text which somebody encoded in some way, which is
probably cp-1252, and labeled another way, iso-8859-1. Then the web
server was either configured to use 8859-1 or trusted the meta tag, or
something, and sent that over to the client with the wrong label again in
the HTTP header.
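For instance, a minimal sketch of reading such a dump (the string literal here just stands in for the raw bytes you'd actually fetch with curl or wget):

```perl
use strict;
use warnings;

# Stand-in for the raw, undecoded bytes fetched from the server.
# A cp-1252 encoder emits 0x80 for the Euro sign; iso-8859-1 has
# no printable character at that position.
my $bytes = "Forex Trading Avec 100\x80";

# Poor man's "od -t x1": one hex pair per byte.
print join(" ", map { sprintf "%02x", ord } split(//, $bytes)), "\n";
```

The dump ends in "31 30 30 80": the ASCII digits "100" followed by the telltale 0x80 byte that iso-8859-1 can't explain.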

You shouldn't expect to see a BOM here, because a BOM is a Unicode feature,
and neither cp-1252 nor iso-8859-1 is a Unicode encoding. They're 8-bit
encodings, not self-documenting, and notorious for exactly this kind of
trouble.

cp-1252 encodes the Euro sign as the single byte 0x80. In Unicode, the code
point is U+20AC, which encodes as 0xE2 0x82 0xAC in UTF-8, or 0x20 0xAC /
0xAC 0x20 in UTF-16 depending on endianness. Looking at the hex dump lets
you find the specific character, see how it was actually encoded, and then
infer what the document "really" is. You have the benefit here of knowing
what symbol you're expecting.
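You can check those byte sequences yourself with the core Encode module (a quick sketch):

```perl
use strict;
use warnings;
use Encode qw(encode);

my $euro = "\x{20AC}";    # EURO SIGN, the abstract code point

# One character, three different byte sequences on the wire:
printf "cp1252:   %s\n", unpack("H*", encode("cp1252",   $euro));  # 80
printf "utf-8:    %s\n", unpack("H*", encode("UTF-8",    $euro));  # e282ac
printf "utf-16be: %s\n", unpack("H*", encode("UTF-16BE", $euro));  # 20ac
```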


>
>
>
>> (N.B. I'm a bit confused by that as IIRC, ISO-8859-1 doesn't contain the
>> EUR character...)
>>
>>
>  The standard predates the currency.
>
> I know - I meant it seemed odd that the document could *be* ISO-8859-1
> given that fact.
>
>
Well, obviously it *isn't*. It's just labeled that way :-)

>
>
>> When opening the source code in a text editor as either ISO-8859-1 or
>> ISO-8859-15 (or even UTF-8), I can't see the character.  I *do* see the
>> character when viewing it as CP1255, which kinda worries me, as I get the
>> feeling I'm a lot farther from the source than I think when I see that...
>>
>>
>  Sounds like you actually have the problem in your hands: somebody
> misencoded the data.
>
>
>> My unit test for the above is as follows:
>>
>> use utf8; # String literals contain UTF-8 in this file
>> binmode STDOUT, ":utf8";
>> ...
>> open($fh, "<:encoding(ISO-8859-1)", "t/html0004.html") || die "...: $!";
>> $parser->parse_file($fh); # Subclassed HTML::Parser
>> ...
>> is($test->{top}, "Forex Trading Avec 100€", "Correct headline text");
>>
>
>  If you tweak your code to use cp1255 (which encodes Euro as 0x80), does
> it pass? I expect it should, confirming the problem.
>
>
> It failed some other tests, yielding Hebrew chars instead of accents.
> CP1252 seemed to work, but this bothers me as I'm still doing human
> guess-work, and this would (and, indeed, does) still cause problems in the
> production code which has only LWP's output to work with.  And LWP goes by
> the character codes presented by the document from what I can see:
>
> (Message.pm line 359 from HTTP::Message)
>     if ($self->content_is_text || (my $is_xml = $self->content_is_xml)) {
>         my $charset = lc(
>             $opt{charset} ||
>             $self->content_type_charset ||
>             $opt{default_charset} ||
>             $self->content_charset ||
>             "ISO-8859-1"
>         );
>
> Do you know a better way to guess the real content-type?  The browsers do
> it somehow...
>

If you're dealing with 8-bit encodings, you'll have to use some sort of
probabilistic method, unless you have some special knowledge about your
documents.

"Of course, this is a heuristic, which is a fancy way of saying that it
doesn't work." --mjd <http://www.perl.com/pub/2000/02/spamfilter.html>
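One simple such heuristic (a sketch, not what LWP does): since well-formed UTF-8 rarely occurs by accident, try a strict UTF-8 decode first and fall back to cp-1252, which in practice is what most web content labeled iso-8859-1 really is:

```perl
use strict;
use warnings;
use Encode qw(decode FB_CROAK);

binmode STDOUT, ":encoding(UTF-8)";

sub guess_decode {
    my ($bytes) = @_;
    # Work on a copy: decode with a fallback may modify its input.
    my $copy = $bytes;
    # FB_CROAK makes decode die on the first malformed sequence
    # instead of silently substituting -- valid UTF-8 is a strong signal.
    my $text = eval { decode("UTF-8", $copy, FB_CROAK) };
    return defined $text ? $text : decode("cp1252", $bytes);
}

print guess_decode("100\x80"), "\n";          # cp-1252 bytes
print guess_decode("100\xE2\x82\xAC"), "\n";  # UTF-8 bytes
```

Both lines come out as "100€". It's still guess-work, of course, just automated guess-work.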



> _______________________________________________
> Perl mailing list
> Perl at perl.org.il
> http://mail.perl.org.il/mailman/listinfo/perl
>



-- 
Gaal Yahas <gaal at forum2.org>
http://gaal.livejournal.com/

