Is there a semi-automated way to perform string extraction for i18n?

Front-end · open · 8 answers · 1806 views

死守一世寂寞 2021-02-07 17:15

We have a Java project which contains a large number of English-language strings for user prompts, error messages and so forth. We want to extract all the translatable strings into an external resource so they can be translated.

8 answers
  •  予麋鹿 (OP)
    2021-02-07 18:11

    What you want is a tool that replaces every expression involving string concatenations with a library call, with the obvious special case of expressions involving just a single literal string.

    A program transformation system in which you can express your desired patterns can do this. Such a system accepts rules in the form of:

             lhs_pattern -> rhs_pattern  if condition ;
    

    where patterns are code fragments with syntax-category constraints on the pattern variables. This causes the tool to look for syntax matching the lhs_pattern and, if found, replace it with the rhs_pattern, where the pattern matching is over language structures rather than text. So it works regardless of code formatting, indentation, comments, etc.

    Sketching a few rules (and oversimplifying to keep this short) following the style of your example:

      domain Java;
    
      nationalize_literal(s1:literal_string):
        " \s1 " -> "Language.getString1(\s1 )";
    
      nationalize_single_concatenation(s1:literal_string,s2:term):
        " \s1 + \s2 " -> "Language.getString1(\s1) + \s2"; 
    
      nationalize_double_concatenation(s1:literal_string,s2:term,s3:literal_string): 
          " \s1 + \s2 + \s3 " -> 
          "Language.getString3(\generate_template1\(\s1 + "{1}" + \s3\), \s2)"
       if IsNotLiteral(s2);
    

    The patterns are themselves enclosed in "..."; these aren't Java string literals, but rather a way of saying to the multi-computer-lingual pattern matching engine that the stuff inside the "..." is (domain) Java code. Meta-stuff is marked with \, e.g., the metavariables \s1, \s2, \s3 and the embedded pattern call \generate_template1, with \( and \) denoting its meta-parameter list :-}

    Note the use of the syntax-category constraints on the metavariables s1 and s3 to ensure matching only of string literals. Whatever the metavariables match in the left-hand-side pattern is substituted into the right-hand side.

    The sub-pattern generate_template is a procedure that, at transformation time (i.e., when the rule fires), evaluates its known-to-be-constant first argument into the template string you suggested, inserts it into your library, and returns a library string index. Note that the first argument to generate_template in this example is composed entirely of concatenated literal strings.
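    To make the effect concrete, here is a sketch in plain Java of the runtime half such rules would target. The Language class, the getString1/getString3 signatures, and the string index 3 are all hypothetical choices for illustration, not part of DMS or any real library:

```java
import java.text.MessageFormat;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the runtime library the rewritten code would call.
public class Language {
    // Stand-in for the generated string library; generate_template1 would
    // populate this at transformation time and return the index.
    private static final Map<Integer, String> TEMPLATES = new HashMap<>();
    static {
        TEMPLATES.put(3, "Found {1} errors"); // from: "Found " + n + " errors"
    }

    // Rule 1: a bare literal is looked up by its English text.
    public static String getString1(String s) {
        // A real implementation would consult a translation table;
        // here we just return the source string.
        return s;
    }

    // Rule 3: a templated string is looked up by library index and filled
    // in with the runtime argument ({1} is the second MessageFormat slot).
    public static String getString3(int index, Object arg) {
        return MessageFormat.format(TEMPLATES.get(index), null, arg);
    }

    public static void main(String[] args) {
        int count = 7;
        String before = "Found " + count + " errors"; // original expression
        String after = Language.getString3(3, count); // transformed expression
        System.out.println(before + " == " + after);
    }
}
```

    The point of the sketch is that the transformed expression is observably equivalent to the original concatenation, while the English text now lives in one place.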

    Obviously, somebody will have to hand-process the templated strings that end up in the library to produce the foreign language equivalents.
    You're right that this may over-templatize the code, because some strings shouldn't be placed in the nationalized string library. To the extent that you can write programmatic checks for those cases, they can be included as conditions in the rules to prevent them from firing. (With a little bit of effort, you could place the untransformed text into a comment, making individual transformations easier to undo later.)
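    On the Java side, the hand-translated equivalents typically end up in locale-keyed resources. A minimal sketch using the standard ResourceBundle and MessageFormat classes (the bundle names, the numeric keys, and the French text are illustrative assumptions):

```java
import java.text.MessageFormat;
import java.util.ListResourceBundle;
import java.util.Locale;
import java.util.ResourceBundle;

public class BundleDemo {
    // Base bundle: the templates extracted from the code.
    public static class Strings extends ListResourceBundle {
        protected Object[][] getContents() {
            return new Object[][] { { "3", "Found {1} errors" } };
        }
    }

    // Hand-produced French equivalents of the same templates.
    public static class Strings_fr extends ListResourceBundle {
        protected Object[][] getContents() {
            return new Object[][] { { "3", "{1} erreurs trouv\u00e9es" } };
        }
    }

    // Locale-aware lookup for a templated string by library index.
    public static String getString3(int index, Object arg, Locale locale) {
        ResourceBundle bundle =
            ResourceBundle.getBundle("BundleDemo$Strings", locale);
        // {1} is the second MessageFormat slot, so pass a dummy first arg.
        return MessageFormat.format(
            bundle.getString(Integer.toString(index)), null, arg);
    }

    public static void main(String[] args) {
        System.out.println(getString3(3, 7, Locale.ROOT));   // base template
        System.out.println(getString3(3, 7, Locale.FRENCH)); // translated
    }
}
```

    Only the library contents change per language; the transformed call sites stay identical across locales.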

    Realistically, I'd guess you have to code ~100 rules like this to cover the combinatorics and special cases of interest. The payoff is that your code gets automatically enhanced. If done right, you could apply this transformation to your code repeatedly as it goes through multiple releases; it would leave previously nationalized expressions alone and just revise the new ones inserted by the happy-go-lucky programmers.

    A system which can do this is the DMS Software Reengineering Toolkit. DMS can parse/pattern match/transform/prettyprint many languages, including Java and C#.
