dparse.lexer

Types 19

IdType
    Token ID type for the D lexer.

str
    Function used for converting an IdType to a string.

    Examples:

        IdType c = tok!"case";
        assert(str(c) == "case");

Token
    The token type in the D lexer.

TriviaToken
    Same as Token, but doesn't contain child TriviaTokens.

WhitespaceBehavior
    Configure whitespace handling.

StringBehavior
    Configure string lexing behavior.

    StringBehavior compiler
        Do not include quote characters, process escape sequences.
    StringBehavior includeQuoteChars
        Opening quotes, closing quotes, and string suffixes are included in the string token.
    StringBehavior notEscaped
        String escape sequences are not replaced.
    StringBehavior source
        Not modified at all. Useful for formatters or highlighters.
    ubyte behavior

LexerConfig
    Lexer configuration struct.

    string fileName
    StringBehavior stringBehavior
    WhitespaceBehavior whitespaceBehavior
    CommentBehavior commentBehavior

Basic type token types.
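A LexerConfig is plain data: default-construct it, set the fields you need, and pass it to one of the lexing entry points described under Functions below. A minimal sketch, not a definitive recipe; the file name and behavior choices here are arbitrary examples:

```d
import dparse.lexer;
import std.stdio : writeln;

void main()
{
    ubyte[] source = cast(ubyte[]) "int x = 42;".dup;

    LexerConfig config;
    config.fileName = "example.d";                 // reported in lexer messages
    config.stringBehavior = StringBehavior.source; // leave string tokens unmodified

    // getTokensForParser attaches trivia (whitespace, comments, special
    // token sequences) to the nearest token; the StringCache de-duplicates
    // identifier strings.
    auto cache = StringCache(StringCache.defaultBucketCount);
    const(Token)[] tokens = getTokensForParser(source, config, &cache);
    foreach (t; tokens)
        writeln(str(t.type), " -> ", t.text);
}
```

The byToken overloads provide a lazy token range as an alternative when a fully materialized token array is not needed.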
Number literal token types.
Integer literal token types.
Operator token types.
Keyword token types.
String literal token types
Protection token types.
Lexer
    The D lexer struct.

    tokenStart
    Message[] _messages
    StringCache * cache
    LexerConfig config
    bool haveSSE42
    IstringState[] istringStack
    const(Message[]) messages() const @property
        Returns: An array of all of the warnings and errors generated so far during lexing. It may make sense to only check this when `empty` returns `true`.
    bool isWhitespace()
    void popFrontWhitespaceAware()
    void lexWhitespace(ref Token token) @trusted
    void lexDecimal(ref Token token)
    void lexDecimal(ref Token token, size_t mark, size_t line, size_t column, size_t index) @trusted
    void lexScriptLine(ref Token token)
    void lexSpecialTokenSequence(ref Token token)
    void lexSlashStarComment(ref Token token) @trusted
    void lexSlashSlashComment(ref Token token) @trusted
    void lexSlashPlusComment(ref Token token) @trusted
    void lexStringLiteral(ref Token token) @trusted
    void lexWysiwygString(ref Token token) @trusted
    void lexInterpolatedString(ref Token token)
    void _popFrontIstringContent()
    void _popFrontIstringPlain()
    bool isAtIstringExpression()
    void lexDelimitedString(ref Token token)
    void lexNormalDelimitedString(ref Token token, size_t mark, size_t line, size_t column, size_t index, ubyte open, ubyte close)
    void lexHeredocString(ref Token token, size_t mark, size_t line, size_t column, size_t index)
    void lexTokenString(ref Token token)
    void lexHexString(ref Token token)
    bool lexNamedEntity()
    bool lexEscapeSequence()
    void lexCharacterLiteral(ref Token token)
    void lexIdentifier(ref Token token, const bool silent = false) @trusted
    void lexLongNewline(ref Token token) @nogc
    bool isNewline() @nogc
    bool isSeparating(size_t offset) @nogc
    void error(string message)
    void warning(string message)
    this()
    this(R range, const LexerConfig config, StringCache * cache, bool haveSSE42 = sse42())
        Params:
            range = the bytes that compose the source code that will be lexed.
            config = the lexer configuration to use.
            cache = the string interning cache for de-duplicating identifiers and other token...

Message
    Lexer error/warning message.

IstringState

StringCache
    The string cache is used for string interning.
It will only store a single copy of any string that it is asked to hold. Interned strings can be compared for equality by comparing their .ptr field.
Default and postblit constructors are disabled. When a StringCache goes out of scope, the memory held by it is freed.
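A sketch of the interning contract described above. This assumes a public intern method wrapping the listed _intern; if your libdparse version exposes interning differently, adjust accordingly:

```d
import dparse.lexer;

void main()
{
    // Bucket count must be a power of two; defaultBucketCount satisfies this.
    auto cache = StringCache(StringCache.defaultBucketCount);

    // Interning the same text twice yields the same stored copy, so
    // interned strings can be compared for equality via their .ptr field.
    string a = cache.intern("lexer");
    string b = cache.intern("lexer");
    assert(a.ptr is b.ptr);
}
```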
    defaultBucketCount
        The default bucket count for the string cache.
    BLOCK_SIZE
    BIG_STRING
    Node *[] buckets
    Block * rootBlock
    string _intern(const(ubyte)[] bytes) @trusted
    this()
    this(size_t bucketCount)
        Params:
            bucketCount = the initial number of buckets. Must be a power of two.
    Node
    Block

Functions 17
bool isBasicType(IdType type) nothrow pure @safe @nogc
    Returns: true if the given ID is for a basic type.

bool isNumberLiteral(IdType type) nothrow pure @safe @nogc
    Returns: true if the given ID type is for a number literal.

bool isIntegerLiteral(IdType type) nothrow pure @safe @nogc
    Returns: true if the given ID type is for an integer literal.

bool isOperator(IdType type) nothrow pure @safe @nogc
    Returns: true if the given ID type is for an operator.

bool isKeyword(IdType type) pure nothrow @safe @nogc
    Returns: true if the given ID type is for a keyword.

bool isStringLiteral(IdType type) pure nothrow @safe @nogc
    Returns: true if the given ID type is for a string literal.

bool isProtection(IdType type) pure nothrow @safe @nogc
    Returns: true if the given ID type is for a protection attribute.

Token[] getTokensForParser(R)(R sourceCode, LexerConfig config, StringCache * cache) if (is(Unqual!(ElementEncodingType!R) : ubyte) && isDynamicArray!R)
    Returns: an array of tokens lexed from the given source code to the output range. All whitespace, comment and specialTokenSequence tokens (trivia) are attached to the token nearest to them.

auto byToken(R)(R range) if (is(Unqual!(ElementEncodingType!R) : ubyte) && isDynamicArray!R)
    Creates a token range from the given source code. Creates a default lexer configuration and a GC-managed string cache.

auto byToken(R)(R range, StringCache * cache) if (is(Unqual!(ElementEncodingType!R) : ubyte) && isDynamicArray!R)
    Creates a token range from the given source code. Uses the given string cache.

auto byToken(R)(R range, const LexerConfig config, StringCache * cache) if (is(Unqual!(ElementEncodingType!R) : ubyte) && isDynamicArray!R)
    Creates a token range from the given source code. Uses the provided lexer configuration and string cache.

size_t optimalBucketCount(size_t size)
    Helper function used to avoid too many allocations while lexing.

Variables 7
operators = [
",", ".", "..", "...", "/", "/=", "!", "!<", "!<=", "!<>", "!<>=", "!=",
"!>", "!>=", "$", "%", "%=", "&", "&&", "&=", "(", ")", "*", "*=", "+", "++",
"+=", "-", "--", "-=", ":", ";", "<", "<<", "<<=", "<=", "<>", "<>=", "=",
"==", "=>", ">", ">=", ">>", ">>=", ">>>", ">>>=", "?", "@", "[", "]", "^",
"^=", "^^", "^^=", "{", "|", "|=", "||", "}", "~", "~="
]
    Operators
keywords = [
"abstract", "alias", "align", "asm", "assert", "auto", "bool",
"break", "byte", "case", "cast", "catch", "cdouble", "cent", "cfloat",
"char", "class", "const", "continue", "creal", "dchar", "debug", "default",
"delegate", "delete", "deprecated", "do", "double", "else", "enum",
"export", "extern", "false", "final", "finally", "float", "for", "foreach",
"foreach_reverse", "function", "goto", "idouble", "if", "ifloat",
"immutable", "import", "in", "inout", "int", "interface", "invariant",
"ireal", "is", "lazy", "long", "macro", "mixin", "module", "new", "nothrow",
"null", "out", "override", "package", "pragma", "private", "protected",
"public", "pure", "real", "ref", "return", "scope", "shared", "short",
"static", "struct", "super", "switch", "synchronized", "template", "this",
"throw", "true", "try", "typedef", "typeid", "typeof", "ubyte", "ucent",
"uint", "ulong", "union", "unittest", "ushort", "version", "void",
"wchar", "while", "with", "__DATE__", "__EOF__", "__FILE__",
"__FILE_FULL_PATH__", "__FUNCTION__", "__gshared", "__LINE__", "__MODULE__",
"__parameters", "__PRETTY_FUNCTION__", "__TIME__", "__TIMESTAMP__", "__traits",
"__vector", "__VENDOR__", "__VERSION__"
]
    Keywords
dynamicTokens = [
"specialTokenSequence", "comment", "identifier", "scriptLine",
"whitespace", "doubleLiteral", "floatLiteral", "idoubleLiteral",
"ifloatLiteral", "intLiteral", "longLiteral", "realLiteral",
"irealLiteral", "uintLiteral", "ulongLiteral", "characterLiteral",
"dstringLiteral", "stringLiteral", "wstringLiteral", "istringLiteralStart",
"istringLiteralText", "istringLiteralEnd"
]
    Other tokens
pseudoTokenHandlers = [
"\"", "lexStringLiteral",
"`", "lexWysiwygString",
"//", "lexSlashSlashComment",
"/*", "lexSlashStarComment",
"/+", "lexSlashPlusComment",
".", "lexDot",
"'", "lexCharacterLiteral",
"0", "lexNumber",
"1", "lexDecimal",
"2", "lexDecimal",
"3", "lexDecimal",
"4", "lexDecimal",
"5", "lexDecimal",
"6", "lexDecimal",
"7", "lexDecimal",
"8", "lexDecimal",
"9", "lexDecimal",
"i\"", "lexInterpolatedString",
"i`", "lexInterpolatedString",
"iq{", "lexInterpolatedString",
"q\"", "lexDelimitedString",
"q{", "lexTokenString",
"r\"", "lexWysiwygString",
"x\"", "lexHexString",
" ", "lexWhitespace",
"\t", "lexWhitespace",
"\r", "lexWhitespace",
"\n", "lexWhitespace",
"\v", "lexWhitespace",
"\f", "lexWhitespace",
"\u2028", "lexLongNewline",
"\u2029", "lexLongNewline",
"#!", "lexScriptLine",
"#line", "lexSpecialTokenSequence"
]

extraFields = "import dparse.lexer:TokenTriviaFields,TriviaToken; mixin TokenTriviaFields;"

extraFieldsBare = q{
import dparse.lexer : Token;
this(Token token) pure nothrow @safe @nogc {
this(token.type, token.text, token.line, token.column, token.index);
}
int opCmp(size_t i) const pure nothrow @safe @nogc {
if (index < i) return -1;
if (index > i) return 1;
return 0;
}
int opCmp(ref const typeof(this) other) const pure nothrow @safe @nogc {
return opCmp(other.index);
}
string toString() const @safe pure
{
import std.array : appender;
auto sink = appender!string;
toString(sink);
return sink.data;
}
void toString(R)(auto ref R sink) const
{
import std.conv : to;
import dparse.lexer : str;
sink.put(`trivia!"`);
sink.put(str(type));
sink.put(`"(`);
sink.put("text: ");
sink.put([text].to!string[1 .. $ - 1]); // escape hack
sink.put(", index: ");
sink.put(index.to!string);
sink.put(", line: ");
sink.put(line.to!string);
sink.put(", column: ");
sink.put(column.to!string);
sink.put(")");
}
}

stringBehaviorNotWorking = "Automatic string parsing is not "
    ~ "supported and was previously not working. To unescape strings use the "
    ~ "`dparse.strings : unescapeString` function on the token texts instead."

Templates 1
tok
    Template used to refer to D token types.

    See the operators, keywords, and dynamicTokens enums for values that can be passed to this template.

    Example:

        import dparse.lexer;
        IdType t = tok!"floatLiteral";
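Token IDs produced by tok can also be fed to the classification helpers listed under Functions above; a short sketch under those signatures:

```d
import dparse.lexer;

void main()
{
    IdType t = tok!"while";
    assert(isKeyword(t));       // "while" is listed under the keywords values
    assert(str(t) == "while");  // str maps the ID back to its text
    assert(!isOperator(t));     // keyword IDs are not operator IDs
}
```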