How do I tokenize input using Java's Scanner class and regular expressions?

Frontend · Unresolved · 4 answers · 1555 views
北海茫月 · 2021-01-01 03:46

Just for my own purposes, I'm trying to build a tokenizer in Java where I can define a regular grammar and have it tokenize input based on that. The StringTokenizer class i

4 Answers
  • 2021-01-01 04:06

    If this is for a simple project (for learning how things work), then go with what Balint Pato said.

    If this is for a larger project, consider using a scanner generator like JFlex instead. Somewhat more complicated, but faster and more powerful.

  • 2021-01-01 04:15

    Most of the answers here are already excellent, but I would be remiss if I didn't point out ANTLR. I've created entire compilers around this excellent tool. Version 3 has some amazing features, and I'd recommend it for any project that requires you to parse input based on a well-defined grammar.

  • 2021-01-01 04:16

    If I understand your question correctly, here are two example ways to tokenize a string. You don't even need the Scanner class unless you want to read the tokens as typed values, or to iterate through them in a more sophisticated way than an array allows. If an array is enough, just use String.split(), as shown below.

    Please give more requirements to enable more precise answers.

    import java.util.Scanner;

    public class Main {

        public static void main(String[] args) {
            String textToTokenize = "This is a text that will be tokenized. I will use 1-2 methods.";

            // Method 1: Scanner with a regex delimiter ("i" followed by any character)
            Scanner scanner = new Scanner(textToTokenize);
            scanner.useDelimiter("i.");
            while (scanner.hasNext()) {
                System.out.println(scanner.next());
            }
            scanner.close();

            System.out.println(" **************** ");

            // Method 2: String.split() with the same regex
            String[] sSplit = textToTokenize.split("i.");
            for (String token : sSplit) {
                System.out.println(token);
            }
        }
    }
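    The "pre-cast the tokens" option mentioned above refers to Scanner's typed-token methods, which classify each token as it is read instead of splitting first and parsing afterwards. A minimal sketch (the input string and class name here are invented for illustration):

    ```java
    import java.util.Scanner;

    public class TypedTokens {
        public static void main(String[] args) {
            Scanner scanner = new Scanner("width 42 height 7");
            while (scanner.hasNext()) {
                if (scanner.hasNextInt()) {
                    // Token is consumed and parsed as an int in one step
                    System.out.println("INT: " + scanner.nextInt());
                } else {
                    System.out.println("WORD: " + scanner.next());
                }
            }
            scanner.close();
        }
    }
    ```

    With split() you would get plain strings back and still have to call Integer.parseInt() yourself.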
    
  • 2021-01-01 04:26

    The name "Scanner" is a bit misleading, because the word is often used to mean a lexical analyzer, and that's not what Scanner is for. It's essentially a substitute for the scanf() function you find in C, Perl, et al. Like StringTokenizer and split(), it's designed to scan ahead until it finds a match for a given pattern, and whatever it skipped over on the way is returned as a token.
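    That scan-ahead behavior is easy to see with Scanner's findInLine() method, which discards everything before the match rather than classifying it (a small sketch; the input and class name are invented for illustration):

    ```java
    import java.util.Scanner;
    import java.util.regex.MatchResult;

    public class ScanAhead {
        public static void main(String[] args) {
            // findInLine() skips ahead until the pattern matches; the text
            // it skipped over is simply discarded -- scanf-like, not lexer-like.
            Scanner in = new Scanner("noise noise id=42 trailing");
            String found = in.findInLine("id=(\\d+)");   // scans past "noise noise "
            MatchResult m = in.match();                  // details of that last match
            System.out.println(found + " -> group(1) = " + m.group(1));
            in.close();
        }
    }
    ```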

    A lexical analyzer, on the other hand, has to examine and classify every character, even if only to decide whether it can safely be ignored. That means that, after each match, it may have to apply several patterns until it finds one that matches starting at that point. Otherwise, it might find the sequence "//" and think it's found the beginning of a comment, when it's really inside a string literal and it just failed to notice the opening quotation mark.

    It's actually much more complicated than that, of course, but I'm just illustrating why the built-in tools like StringTokenizer, split() and Scanner aren't suitable for this kind of task. It is, however, possible to use Java's regex classes for a limited form of lexical analysis. In fact, the addition of the Scanner class made it much easier, because of the new Matcher API that was added to support it, i.e., regions and the usePattern() method. Here's an example of a rudimentary scanner built on top of Java's regex classes.

    import java.util.*;
    import java.util.regex.*;
    
    public class RETokenizer
    {
      static List<Token> tokenize(String source, List<Rule> rules)
      {
        List<Token> tokens = new ArrayList<Token>();
        int pos = 0;
        final int end = source.length();
        Matcher m = Pattern.compile("dummy").matcher(source);
        m.useTransparentBounds(true).useAnchoringBounds(false);
        while (pos < end)
        {
          m.region(pos, end);
          boolean matched = false;
          for (Rule r : rules)
          {
            if (m.usePattern(r.pattern).lookingAt())
            {
              tokens.add(new Token(r.name, m.start(), m.end()));
              pos = m.end();
              matched = true;
              break;
            }
          }
          if (!matched)
          {
            pos++;  // bump along only when no rule matched at this position
          }
        }
        return tokens;
      }
    
      static class Rule
      {
        final String name;
        final Pattern pattern;
    
        Rule(String name, String regex)
        {
          this.name = name;
          pattern = Pattern.compile(regex);
        }
      }
    
      static class Token
      {
        final String name;
        final int startPos;
        final int endPos;
    
        Token(String name, int startPos, int endPos)
        {
          this.name = name;
          this.startPos = startPos;
          this.endPos = endPos;
        }
    
        @Override
        public String toString()
        {
          return String.format("Token [%2d, %2d, %s]", startPos, endPos, name);
        }
      }
    
      public static void main(String[] args) throws Exception
      {
        List<Rule> rules = new ArrayList<Rule>();
        rules.add(new Rule("WORD", "[A-Za-z]+"));
        rules.add(new Rule("QUOTED", "\"[^\"]*+\""));
        rules.add(new Rule("COMMENT", "//.*"));
        rules.add(new Rule("WHITESPACE", "\\s+"));
    
        String str = "foo //in \"comment\"\nbar \"no //comment\" end";
        List<Token> result = RETokenizer.tokenize(str, rules);
        for (Token t : result)
        {
          System.out.println(t);
        }
      }
    }
    

    This, by the way, is the only good use I've ever found for the lookingAt() method. :D
