Lingua::Identify(3) User Contributed Perl Documentation Lingua::Identify(3)

NAME

Lingua::Identify - Language identification

SYNOPSIS

  use Lingua::Identify qw(:language_identification);
  $a = langof($textstring); # gives the most probable language

or the complete way:

  @a = langof($textstring); # gives pairs of languages / probabilities
                            # sorted from most to least probable

  %a = langof($textstring); # gives a hash of language / probability

or the expert way (see section OPTIONS, under HOW TO PERFORM IDENTIFICATION)

  $a = langof( { method => [qw/smallwords prefixes2 suffixes2/] }, $text);

  $a = langof( { 'max-size' => 3_000_000 }, $text);

  $a = langof( { 'extract_from' => { 'head' => 1, 'tail' => 2 } }, $text);

STARTING WITH VERSION 0.25, Lingua::Identify IS UNICODE BY DEFAULT!

"Lingua::Identify" identifies the language a given string or file is written in.

See section WHY LINGUA::IDENTIFY for a list of "Lingua::Identify"'s strong points.

See section KNOWN LANGUAGES for a list of available languages and HOW TO PERFORM IDENTIFICATION to know how to really use this module.

If you're in a hurry, jump to section EXAMPLES, way down below.

Also, don't forget to read the following section, IMPORTANT WARNING.

IMPORTANT WARNING

Take a word that exists in two different languages, take a good look at it and answer this question: "What language does this word belong to?"

You can't give an answer like "Language X", right? You can only say it looks like any of a set of languages.

Similarly, it isn't always easy to identify the language of a text if the only two active languages are very similar.

Now that the warning that language identification is not 100% accurate is out of the way, please keep reading the documentation.

WHY LINGUA::IDENTIFY

You might be wondering why you should use Lingua::Identify instead of any other tool for language identification.

Here's a list of Lingua::Identify's strong points:

  • it's free and it's open-source;
  • it's portable (it's Perl, which means it will work on lots of different platforms);
  • it has Unicode support;
  • it has 4 different methods of language identification, and growing (see METHODS OF LANGUAGE IDENTIFICATION for more details);
  • it's a module, which means you can easily write your own application (be it CGI, Tk, whatever) around it;
  • it comes with langident, which means you don't actually need to write your own application around it;
  • it's flexible (at the moment, you can choose the methods to use and their relevance, the maximum size of input to analyze each time, and which part(s) of the input to analyze);
  • it supports big inputs (through the 'max-size' and 'extract_from' options);
  • it's easy to deal with languages (you can activate and deactivate the ones you choose whenever you want, which can improve your speed and accuracy);
  • it's maintained.

HOW TO PERFORM IDENTIFICATION

langof

To identify the language a given text is written in, use the langof function. To get a single value, do:

  $language = langof($text);

To get the most probable language and also the percentage of its probability, do:

  ($language, $probability) = langof($text);

If you want a hash where each active language is mapped into its percentage, use this:

  %languages = langof($text);
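
For instance, to list every active language together with the value langof assigned to it (a minimal sketch; $text is assumed to already hold the text to identify):

  my %languages = langof($text);
  for my $lang ( sort { $languages{$b} <=> $languages{$a} } keys %languages ) {
      print "$lang => $languages{$lang}\n";
  }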

OPTIONS

langof can also be given some configuration parameters, in this way:

  $language = langof(\%config, $text);

These parameters are detailed here; a combined example putting several of them together follows the list:

  • extract_from

    When the size of the input exceeds 'max-size', "langof" analyzes only the beginning of the input. You can specify which part of the input is analyzed with the 'extract_from' option:

      langof( { 'extract_from' => 'tail' } , $text );
        

    Possible values are 'head' and 'tail' (for now).

    You can also specify more than one part of the file, so that text is extracted from those parts:

      langof( { 'extract_from' => [ 'head', 'tail' ] } , $text );
        

    (this will be useful when more than two possibilities exist)

    You can also specify different values for each part of the file (not necessarily for all of them):

     langof( { 'extract_from' => { head => 40, tail => 60 } } , $text);
        

    The line above, for instance, retrieves 40% of the text from the beginning and 60% from the end. Note, however, that those values are ratios, not percentages; you'd get the same behavior with:

     langof( { 'extract_from' => { head => 80, tail => 120 } } , $text);
        

    The proportions would be the same (80 and 120 are still in a 40:60 ratio).

  • max-size

    By default, "langof" analyzes only 1,000,000 bytes. You can specify how many bytes (at the most) can be analyzed (if not enough exist, the whole input is still analyzed).

      langof( { 'max-size' => 2000 }, $text);
        

    If you want all the text to be analyzed, set max-size to 0:

      langof( { 'max-size' => 0 }, $text);
        

    See also "set_max_size".

  • method

    You can choose which method or methods to use, and also the relevance of each of them.

    To choose a single method to use:

      langof( {method => 'smallwords' }, $text);
        

    To choose several methods:

      langof( {method => [qw/prefixes2 suffixes2/]}, $text);
        

    To choose several methods and give them different weight:

      langof( {method => {smallwords => 0.5, ngrams3 => 1.5} }, $text);
        

    To see the list of available methods, see section METHODS OF LANGUAGE IDENTIFICATION.

    If no method is specified, the configuration for this parameter is the following (this might change in the future):

      method => {
        smallwords => 0.5,
        prefixes2  => 1,
        suffixes3  => 1,
        ngrams3    => 1.3
      };
        
  • mode

    By default, "Lingua::Identify" assumes "normal" mode, but others are available.

    In "dummy" mode, instead of actually calculating anything, "Lingua::Identify" only does the preparation it has to and then returns a bunch of information, including the list of the active languages, the selected methods, etc. It also returns the text meant to be analised.

    Do be warned that, with langof_file, the dummy mode still reads the files; it simply doesn't identify the language.

      langof( { 'mode' => 'dummy' }, $text);
        

    This returns something like this:

      { 'methods'          => {   'smallwords' => '0.5',
                                  'prefixes2'  => '1',
                              },
        'config'           => {   'mode' => 'dummy' },
        'max-size'         => 1000000,
        'active-languages' => [ 'es', 'pt' ],
        'text'             => $text,
        'mode'             => 'dummy',
      }
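
A combined example, putting several of the options above together (just a sketch; the particular values are arbitrary):

  $language = langof( {
      method       => { smallwords => 0.5, ngrams3 => 1.5 },
      'max-size'   => 500_000,
      extract_from => { head => 40, tail => 60 },
  }, $text );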
        

langof_file

langof_file works just like langof, with the exception that it receives filenames instead of text. It reads those files (if they exist and are readable, of course) and parses their content.

Currently, langof_file assumes the files are regular text. This may change in the future: the files might be scanned to check their filetype and then parsed to extract only their textual content (which should be pretty useful for performing language identification on, say, HTML files or PDFs).

To identify the language a file is written in:

  $language = langof_file($path);

To get the most probable language and also the percentage of its probability, do:

  ($language, $probability) = langof_file($path);

If you want a hash where each active language is mapped into its percentage, use this:

  %languages = langof_file($path);

If you pass more than one file to langof_file, they will all be read and their content merged and then parsed for language identification.
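
For example, to identify the language of several files taken together (a sketch; the filenames are just placeholders):

  my $language = langof_file('chapter1.txt', 'chapter2.txt');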

OPTIONS

langof_file accepts all the options langof does, so refer to those first (up in this document).

  $language = langof_file(\%config, $path);

langof_file currently only reads the first 10,000 bytes of each file.

You can force an input encoding with "{ encoding => 'ISO-8859-1' }" in the configuration hash.
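
For example (a sketch, assuming a Latin-1 encoded file at a placeholder path):

  my $language = langof_file( { encoding => 'ISO-8859-1' }, 'latin1-document.txt' );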

confidence

After getting the results into an array, its first element is the most probable language. That alone doesn't tell you whether it is very probable or not.

You can find out how likely the results are to be accurate by computing their confidence level.

  use Lingua::Identify qw/:language_identification/;
  my @results = langof($text);
  my $confidence_level = confidence(@results);
  # $confidence_level now holds a value between 0.5 and 1; the higher that
  # value, the more accurate the results seem to be

The formula used is pretty simple: p1 / (p1 + p2), where p1 is the probability of the most likely language and p2 is the probability of the language that came in second. A few examples to illustrate this:

  English     50%
  Portuguese  10%
  ...

  confidence level: 50 / (50 + 10) = 0.83

Another example:

  Spanish     30%
  Portuguese  25%
  ...

  confidence level: 30 / (30 + 25) = 0.55

A third example:

  French      10%
  German       5%
  ...

  confidence level: 10 / (10 + 5) = 0.67

As you can see, the first example is probably the most accurate one. Are there any doubts? The English language has five times the probability of the second language.

The second example is a bit trickier: 55% confidence. The confidence level is always at least 50%, for obvious reasons (the most probable language is, by definition, at least as probable as the one that came in second). 55% doesn't make anyone confident in the results, and one shouldn't be, with results such as these.

Notice the third example. The confidence level goes up to 67%, but the probability of French is a mere 10%. So what? It's still twice the probability of the second language. The low probability may well be caused by a large number of languages being in play.
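
The same calculation can be done by hand from the sorted language/probability pairs that langof returns in list context; this is just a sketch of the documented formula, not the module's internal code:

  my @results = langof($text);      # ( lang1, p1, lang2, p2, ... ), most probable first
  my ($p1, $p2) = @results[1, 3];   # probabilities of the two best candidates
  my $confidence_level = $p1 / ($p1 + $p2);   # assumes at least two active languages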

get_all_methods

Returns a list of all the available methods of language identification.
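
For example (assuming get_all_methods has been imported, as with the other functions in this document):

  my @available_methods = get_all_methods();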

METHODS OF LANGUAGE IDENTIFICATION

Language identification is based on patterns.

In order to identify the language a given text is written in, we repeat a given process for each active language (see section LANGUAGES MANIPULATION); in that process, we look for common patterns of that language. Those patterns can be prefixes, suffixes, common words, ngrams or even sequences of words.

After repeating the process for each language, the total score for each of them is then used to compute the probability (in percentage) for each language to be the one of that text.

"Lingua::Identify" currently comprises four different ways for language identification, in a total of thirteen variations of those.

The available methods are the following: smallwords, prefixes1, prefixes2, prefixes3, prefixes4, suffixes1, suffixes2, suffixes3, suffixes4, ngrams1, ngrams2, ngrams3 and ngrams4.

Here's a more detailed explanation of each of those techniques and their methods.

Small Words (smallwords)

The "Small Word Technique" searches the text for the most common words of each active language. These words are usually articles, pronouns, etc., which happen to be (usually) the shortest words of the language; hence the method's name.

This is usually a good method for big texts, especially if you happen to have few languages active.

Prefix Analysis (prefixes1, prefixes2, prefixes3, prefixes4)

This method analyzes the text for the common prefixes of each active language.

The methods are, respectively, for prefixes of size 1, 2, 3 and 4.

Suffix Analysis (suffixes1, suffixes2, suffixes3, suffixes4)

Similar to the Prefix Analysis (see above), but analyzing common suffixes instead.

The methods are, respectively, for suffixes of size 1, 2, 3 and 4.

Ngram Categorization (ngrams1, ngrams2, ngrams3, ngrams4)

Ngrams are sequences of tokens. You can think of them as syllables, but they are also more than that, as they are composed not only of characters but also of spaces (delimiting or separating words).

Ngrams are a very good way of identifying languages, given that the most common ones in each language are generally not very common in others.

This is usually the best method for small amounts of text or when many languages are active.

The methods are, respectively, for ngrams of size 1, 2, 3 and 4.
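
For instance, to rely only on ngram methods, using the method option described above (a sketch; the particular sizes chosen are arbitrary):

  $language = langof( { method => [qw/ngrams2 ngrams3/] }, $text );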

LANGUAGES MANIPULATION

When trying to perform language identification, "Lingua::Identify" works not with all available languages, but instead with the ones that are active.

By default, all available languages are active, but that can be changed by the user.

For your convenience, several methods regarding language manipulation were created. In order to use them, load the module with the tag :language_manipulation.

These methods work with the two-letter codes for languages.
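
For instance (a minimal sketch; the functions used here are documented below):

  use Lingua::Identify qw(:language_manipulation);

  set_active_languages('pt', 'es');     # consider only Portuguese and Spanish
  my @active = get_active_languages();  # now lists only those two codes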

activate_language
Activates a language

  activate_language('en');

  # or

  activate_language($_) for get_all_languages();
    
activate_all_languages
Activates all languages

  activate_all_languages();
    
deactivate_language
Deactivates a language

  deactivate_language('en');
    
deactivate_all_languages
Deactivates all languages

  deactivate_all_languages();
    
get_all_languages
Returns the names of all available languages

  my @all_languages = get_all_languages();
    
get_active_languages
Returns the names of all active languages

  my @active_languages = get_active_languages();
    
get_inactive_languages
Returns the names of all inactive languages

  my @inactive_languages = get_inactive_languages();
    
is_active
Returns the name of the language if it is active, an empty list otherwise

  if (is_active('en')) {
    # YOUR CODE HERE
  }
    
is_valid_language
Returns the name of the language if it exists, an empty list otherwise

  if (is_valid_language('en')) {
    # YOUR CODE HERE
  }
    
set_active_languages
Sets the active languages

  set_active_languages('en', 'pt');

  # or

  set_active_languages(get_all_languages());
    
name_of
Given the two-letter code of a language, returns its name

  my $language_name = name_of('pt');
    

Currently, "Lingua::Identify" knows the following languages (33 total):
AF - Afrikaans
BG - Bulgarian
BR - Breton
BS - Bosnian
CY - Welsh
DA - Danish
DE - German
EN - English
EO - Esperanto
ES - Spanish
FI - Finnish
FR - French
FY - Frisian
GA - Irish
HR - Croatian
HU - Hungarian
ID - Indonesian
IS - Icelandic
IT - Italian
LA - Latin
MS - Malay
NL - Dutch
NO - Norwegian
PL - Polish
PT - Portuguese
RO - Romanian
RU - Russian
SL - Slovene
SO - Somali
SQ - Albanian
SV - Swedish
SW - Swahili
TR - Turkish

Please do not contribute language modules you made yourself. It's easier to contribute unprocessed text, because that way new versions of Lingua::Identify won't have to drop languages just because I can't contact you at the time.

Use make-lingua-identify-language to create a new module for your own personal use, if you must, but try to contribute unprocessed text rather than those modules.

EXAMPLES

Check the language a given text file is written in:

  use Lingua::Identify qw/langof/;

  my $text = join "", <>;   # slurp all input

  # identify the language by letting the module decide on the best way
  # to do so
  my $language = langof($text);

Check the language a given text file is written in, supposing you happen to know it's either Portuguese or English:

  use Lingua::Identify qw/langof set_active_languages/;
  set_active_languages(qw/pt en/);

  my $text = join "", <>;   # slurp all input

  # identify the language by letting the module decide on the best way
  # to do so
  my $language = langof($text);
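
A variation of the above that reads a file directly and also reports how confident the result looks (a sketch combining functions documented earlier; the filename is a placeholder):

  use Lingua::Identify qw/:language_identification/;

  my @results = langof_file('document.txt');
  my ($language, $probability) = @results[0, 1];

  print "Most probable language: $language ($probability)\n";
  print "Confidence level: ", confidence(@results), "\n";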

TO DO

  • WordNgrams-based methods;
  • More languages (always);
  • File recognition and treatment;
  • Deal with different encodings;
  • Create sets of languages and allow their activation/deactivation;
  • There should be a way of knowing the default configuration (other than using the dummy mode, of course, or accessing the variables directly);
  • Add a section about other similar tools.

THANKS

The following people and/or projects helped during the development of this tool:

   * The EuroParl v5 corpus was used to train Dutch, German, English,
     Spanish, Finnish, French, Italian, Portuguese, Danish and Swedish.

SEE ALSO

langident(1), Text::ExtractWords(3), Text::Ngram(3), Text::Affixes(3).

ISO 639 Language Codes, at http://www.w3.org/WAI/ER/IG/ert/iso639.htm

AUTHOR

Alberto Simoes, "<ambs@cpan.org>"

Jose Castro, "<cog@cpan.org>"

COPYRIGHT & LICENSE

Copyright 2008-2010 Alberto Simoes, All Rights Reserved. Copyright 2004-2008 Jose Castro, All Rights Reserved.

This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

2013-08-17 perl v5.32.1
