Set up a tokenizing benchmark for ANTLR lexer (wikitext, bdc7a11)

This function doesn't actually do anything with the tokens; it just asks the lexer for tokens as fast as it can dish them out.

The intention is to measure the raw speed of the tokenization step alone (i.e. the part attributable to ANTLR).
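A benchmark of this shape can be sketched as a tight loop that drains tokens without using them. This is a minimal Ruby sketch, not the actual benchmark from the commit; `FakeLexer` and its `next_token` method are hypothetical stand-ins for the ANTLR-generated lexer interface:

```ruby
require 'benchmark'

# Hypothetical stand-in for an ANTLR-generated lexer: returns one token
# per call to next_token, then nil at end of input (names are assumptions).
class FakeLexer
  def initialize(input)
    @tokens = input.split
    @index = 0
  end

  def next_token
    token = @tokens[@index]
    @index += 1
    token
  end
end

input = "some wikitext input " * 10_000

elapsed = Benchmark.realtime do
  lexer = FakeLexer.new(input)
  # Drain tokens as fast as possible without inspecting them,
  # so only raw tokenization cost is measured.
  while lexer.next_token; end
end

puts format('tokenized in %.4fs', elapsed)
```

Because the loop discards each token immediately, the measured time reflects lexer throughput rather than any downstream parsing or token handling.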

Signed-off-by: Greg Hurrell <greg@hurrell.net>
