Documentation
Overview
Package tokenizer scans SQL source code into tokens.
Index
Constants
This section is empty.
Variables
This section is empty.
Functions
func NormalizeIdentifier
NormalizeIdentifier removes optional quoting from an identifier and unescapes the quoted content.
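The documentation does not show NormalizeIdentifier's signature or quoting rules, so the following is a minimal standalone sketch of the behavior described above, assuming SQL-style double-quoted identifiers with doubled quotes as the escape sequence:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeIdentifier is a hypothetical stand-in for NormalizeIdentifier:
// if the identifier is wrapped in double quotes, strip them and collapse
// doubled quotes ("" -> ") inside; bare identifiers pass through unchanged.
func normalizeIdentifier(s string) string {
	if len(s) >= 2 && s[0] == '"' && s[len(s)-1] == '"' {
		inner := s[1 : len(s)-1]
		return strings.ReplaceAll(inner, `""`, `"`)
	}
	return s
}

func main() {
	fmt.Println(normalizeIdentifier(`"my ""quoted"" name"`)) // my "quoted" name
	fmt.Println(normalizeIdentifier("plain_name"))           // plain_name
}
```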
func ScanSeq
ScanSeq returns an iterator over tokens in the source. This is memory-efficient for large files and enables early termination. Use this when you only need to process tokens sequentially. For random access, use Scan() instead.
Example:

	for tok := range tokenizer.ScanSeq(path, src, false) {
		if tok.Kind == tokenizer.KindEOF {
			break
		}
		process(tok)
	}
Types
type Kind
type Kind int
Kind represents the classification of a scanned token.
const (
	// KindInvalid represents an unrecognized or placeholder token.
	KindInvalid Kind = iota
	// KindIdentifier represents bare or quoted identifiers.
	KindIdentifier
	// KindKeyword represents SQL keywords normalized to uppercase.
	KindKeyword
	// KindNumber represents numeric literals.
	KindNumber
	// KindString represents string literals using single quotes.
	KindString
	// KindBlob represents blob literals of the form X'...'.
	KindBlob
	// KindSymbol represents punctuation or operator symbols.
	KindSymbol
	// KindParam represents PostgreSQL-style positional parameters ($1, $2, etc.).
	KindParam
	// KindDocComment represents a documentation comment captured for a following statement.
	KindDocComment
	// KindEOF marks the logical end of the input.
	KindEOF
)
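A consumer typically branches on Kind. Here is a self-contained sketch with the constants mirrored locally (values follow from the iota ordering above); the isLiteral helper is hypothetical and not part of the package:

```go
package main

import "fmt"

type Kind int

// Mirrors the documented const block; the values follow from iota ordering.
const (
	KindInvalid Kind = iota
	KindIdentifier
	KindKeyword
	KindNumber
	KindString
	KindBlob
	KindSymbol
	KindParam
	KindDocComment
	KindEOF
)

// isLiteral is a hypothetical helper: it reports whether a token kind is
// one of the literal classes (numbers, strings, blobs).
func isLiteral(k Kind) bool {
	switch k {
	case KindNumber, KindString, KindBlob:
		return true
	}
	return false
}

func main() {
	fmt.Println(isLiteral(KindString))  // true
	fmt.Println(isLiteral(KindKeyword)) // false
}
```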
type Scanner
type Scanner struct {
// contains filtered or unexported fields
}
Scanner maintains scanning state over a schema source.
type Span
Span represents a best-effort start and end position within a source file.
func SpanBetween
SpanBetween returns a span that covers both the start and end tokens, inclusive.
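Span's fields are not shown in this documentation, so the following is a hedged sketch assuming simple byte-offset fields; the real SpanBetween takes tokens rather than spans, and the field names here are assumptions:

```go
package main

import "fmt"

// Span is a stand-in for the documented Span type; the field names and
// position representation (byte offsets vs line/column) are assumptions.
type Span struct {
	Start, End int
}

// spanBetween mirrors the documented behavior of SpanBetween: the result
// covers both endpoints inclusively, from the first item's start position
// to the last item's end position.
func spanBetween(first, last Span) Span {
	return Span{Start: first.Start, End: last.End}
}

func main() {
	a := Span{Start: 0, End: 6}   // e.g. an opening keyword
	b := Span{Start: 10, End: 12} // e.g. a later identifier
	fmt.Println(spanBetween(a, b)) // {0 12}
}
```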