
ddn.data.csv — High‑performance CSV reader/writer

This module will provide a fast, RFC 4180–compliant CSV reader and writer with configurable dialect options and performance features such as buffered I/O and zero‑copy parsing.

RFC 4180 Compliance
-------------------

  • Record delimiters: CRLF per RFC; reader will detect CRLF, LF, and legacy CR in detect mode.
  • Header row: optional; same field count as data rows when enabled.
  • Field delimiter: comma by default; configurable to other single‑byte delimiters.
  • Quoted fields: fields containing delimiter, quote, or newline are quoted.
  • Quote escaping: doubled quotes within quoted fields.
  • Embedded newlines: supported inside quoted fields.
  • Spaces: spaces are data; optional trimming for unquoted fields.
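Taken together, these rules determine when the writer quotes and how it escapes. A minimal sketch against the planned API (the `Sink` class is an illustrative output range with `put()`):

```d
import ddn.data.csv;

void quotingExample()
{
    static class Sink { string data; void put(const(char)[] s) { data ~= s; } }
    auto s = new Sink();
    auto w = CsvWriter!Sink(s); // RFC 4180 defaults

    // "Doe, Jane" contains the delimiter; `He said "hi"` contains quotes.
    w.writeRow(["Doe, Jane", `He said "hi"`, "plain"]);
    w.flush();
    // Per the rules above, the emitted record should read:
    //   "Doe, Jane","He said ""hi""",plain
}
```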

Dialect Options
---------------

  • delimiter: char — default `','`
  • quote: char — default `'"'`
  • doubleQuote: bool — default true
  • trimWhitespace: bool — default false (unquoted fields only)
  • newlinePolicy: NewlinePolicy — default DETECT (recognize CRLF/LF); values: DETECT, FORCE_CRLF, FORCE_LF
  • escapeStyle: EscapeStyle — default NONE; values: NONE, BACKSLASH (extension for non‑RFC datasets)
  • header: bool — default false

Additional toggles:

  • strictFieldCount: bool — strict mode vs permissive
  • acceptUtf8BOM: bool — accept/skip leading UTF‑8 BOM
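A strict-mode configuration sketch using these toggles, assuming they live on CsvDialect as the field list later in this document indicates (acceptUtf8BOM's exact home is not shown there, so it is omitted):

```d
import ddn.data.csv;

auto d = CsvDialect.init;
d.strictFieldCount = true;         // reject rows with inconsistent field counts
d.errorMode = ErrorMode.FAIL_FAST; // stop at the first error
d.collectDiagnostics = true;       // record per-error details in CsvReader.stats
```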

Notes and Examples
------------------

  • A comprehensive compliance and options document is available at docs/rfc4180-compliance.md in this repository.
  • Performance tips and tuning guidance are collected in docs/performance-notes.md.
  • The writer will quote any field that contains the delimiter, quote character, or newline, and will double internal quotes.
  • The reader will support embedded CRLF within quoted fields and mixed line endings under newlinePolicy = NewlinePolicy.DETECT.

Example:

import ddn.data.csv;

void example()
{
    auto dialect = CsvDialect(
        delimiter: ',',
        quote: '"',
        doubleQuote: true,
        trimWhitespace: false,
        newlinePolicy: NewlinePolicy.DETECT,
        escapeStyle: EscapeStyle.NONE,
        header: true
    );
}

Types (24)

enum NewlinePolicy

Newline policy controls how record boundaries are detected and, for the writer, how newlines are emitted.

Planned values match RFC 4180 defaults while allowing practical overrides.

Examples

import ddn.data.csv;

// Reader detects CRLF/LF by default
auto d1 = CsvDialect.init; // newlinePolicy = DETECT

// Force writer to emit LF newlines
auto d2 = CsvDialect.init; d2.newlinePolicy = NewlinePolicy.FORCE_LF;
DETECT — Detect CRLF and LF during reading; the writer uses CRLF by default (subject to dialect).
FORCE_CRLF — Force CRLF (\r\n) handling; the writer emits CRLF.
FORCE_LF — Force LF (\n) handling; the writer emits LF.

enum EscapeStyle

Escape style for non‑RFC datasets (optional extension). NONE — only RFC 4180 double‑quote escaping inside quoted fields. BACKSLASH — treat backslash as an escape for delimiter/quote/newline in unquoted fields.

Example:

import ddn.data.csv;
auto d = CsvDialect.init;
d.escapeStyle = EscapeStyle.NONE; // RFC 4180 behavior (default)

NONE
BACKSLASH

enum ErrorMode

Reader error handling mode.

  • PERMISSIVE (default): malformed rows are skipped and errors are counted; iteration continues. Optionally collect diagnostics.
  • FAIL_FAST: stop iteration at the first error and surface it via CsvReader.stats.

PERMISSIVE
FAIL_FAST

Convenience aliases for common public types.

These aliases have no runtime cost and exist purely for readability in user code and documentation.

Examples

import ddn.data.csv;

CsvField f;          // alias for FieldView
CsvRow row;          // alias for RowView
CsvResultT!int ri;   // alias for CsvResult!int

// Parametric reader/writer helpers
alias Reader = CsvReaderOf!(const(char)[]);
alias Writer = CsvWriterTo!(typeof((const(char)[] s){}) ); // any sink with put()

alias CsvField = FieldView

Shorthand for a parsed CSV field view.

alias CsvRow = RowView

Shorthand for a parsed CSV row view.

alias CsvResultT(T) = CsvResult!T

Shorthand for CsvResult!T.

alias CsvReaderOf(R) = CsvReader!R

Shorthand for CsvReader!R.

alias CsvWriterTo(S) = CsvWriter!S

Shorthand for CsvWriter!S.

struct FieldView

Lightweight non‑owning view over a CSV field.

Fields
const(char)[] data — Slice pointing into an underlying buffer (non-owning, zero-copy).
bool wasQuoted — True when the source field was quoted.
bool needsUnescape — True when the field contains doubled quotes that must be unescaped to get the logical text.
Methods
const(char)[] toString() const @safe pure nothrow @nogc — Returns the field slice; callers may copy if they require immutability.
string unescaped() const @safe — Returns the logical text of the field with doubled quotes resolved when needed.
size_t length() @property const @safe pure nothrow @nogc — Length in bytes (UTF‑8 code units).
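A sketch of how these accessors are meant to compose against the planned API: zero-copy by default, with allocation only when unescaping is actually required (the helper name `logicalText` is illustrative):

```d
import ddn.data.csv;

const(char)[] logicalText(FieldView f, ref string storage)
{
    if (f.needsUnescape)
    {
        // Doubled quotes present: materialize the unescaped text (allocates).
        storage = f.unescaped();
        return storage;
    }
    // Fast path: borrow the underlying slice, zero-copy.
    return f.toString();
}
```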

struct HeaderIndex

Header index providing fast name -> column index lookup.

Built once from a set of header fields (typically the first CSV record when CsvDialect.header == true) and attached to subsequent RowView instances via RowView.attachHeader to enable row.byName("col") without per-row overhead.

Policy for duplicates: the first occurrence wins; hasDuplicates is set to true when the header contains duplicate (normalized) names.

Fields
private string[] _names
private size_t[string] _map
private bool _caseSensitive
private bool _hasDuplicates
Methods
size_t length() @property const @safe pure nothrow @nogc — Number of columns in the header.
bool hasDuplicates() @property const @safe pure nothrow @nogc — Whether the header contains duplicate names (after normalization).
inout(string) nameAt(size_t i) inout @safe pure nothrow @nogc — Return the original, unescaped name at index `i`.
CsvResult!size_t indexOf(scope const(char)[] name) const @safe pure nothrow @nogc — Look up the index of a column by `name`. Returns `unknownColumn` on miss.
string _normalize(scope const(char)[] s) const @safe — Normalize a name for lookup depending on the case-sensitivity policy.
Constructors
this(const(FieldView)[] headerFields, bool caseSensitive = true) — Build an index from parsed header fields.
struct RowView

Lightweight non‑owning view over a parsed CSV row.

Fields
FieldView[] fields — Fields comprising this row.
private const(HeaderIndex)* _header
Methods
size_t length() @property const @safe pure nothrow @nogc — Number of fields in the row.
inout(FieldView) opIndex(size_t i) inout @safe pure nothrow @nogc — Random access to a field.
void attachHeader(const(HeaderIndex)* header) @safe pure nothrow @nogc — Attach a header index to this row to enable name-based field access.
CsvResult!FieldView byName(scope const(char)[] name) const @safe pure nothrow @nogc — Look up a field by column name using the attached header index.
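The header/row split above (index built once, attached per row) might be used like this; a sketch against the planned API, where `"price"` is a hypothetical column name:

```d
import ddn.data.csv;

void lookupByName(RowView headerRow, RowView dataRow)
{
    // Build the index once from the header row, then attach it to data rows.
    auto idx = makeHeaderIndex(headerRow, /*caseSensitive:*/ false);
    dataRow.attachHeader(&idx);

    auto cell = dataRow.byName("price"); // hypothetical column name
    if (cell.isOk)
    {
        auto v = fromCsv!double(cell.value);
        // use v.value when v.isOk ...
    }
}
```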

enum CsvErrorCode

Error codes describing common CSV parsing and configuration issues.

NONE = 0
UNEXPECTED_EOF
INVALID_QUOTE
INVALID_ESCAPE
INCONSISTENT_FIELD_COUNT
INVALID_DIALECT
IO_FAILURE
INVALID_CONVERSION — Conversion from `FieldView` to the requested type failed.
INVALID_COLUMN — Column name not found in the header index.
FIELD_OVERFLOW — Row field count exceeds the scanner's fixed capacity.
OVERFLOW — Numeric value overflows the target type's range.
struct CsvError

Structured error information returned in CsvResult!T.

Fields
CsvErrorCode code — Error code.
size_t line — 1‑based line counter if available (0 when unknown).
size_t column — 1‑based column counter if available (0 when unknown).
string message — Optional human‑readable message.
class CsvException : Exception

Exception type used by optional throwing helpers.

This exception wraps a CsvError and is thrown by convenience APIs such as CsvResult!T.valueOrThrow() for users who prefer exception‑driven control flow instead of explicit result checking. Hot paths should prefer CsvResult without throwing for performance.

Fields
CsvError error — The underlying structured error information.
Constructors
this(CsvError e) — Construct the exception from a `CsvError`.
struct CsvResult(T)

Result container for error‑aware APIs without throwing.

Holds either a value of type T (when isOk is true) or an error. For compile‑time friendliness we store both value and err; users should consult isOk before accessing value.

Fields
bool isOk — True when the operation succeeded and `value` is valid.
T value — The result value (meaningful when `isOk == true`).
CsvError err — The error (meaningful when `isOk == false`).
Methods
CsvResult ok(T v) @safe pure nothrow — Construct a success result.
CsvResult error(CsvError e) @safe pure nothrow — Construct an error result.
T valueOrThrow() @safe — Return the contained value, or throw `CsvException` if this result represents an error.
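Both control-flow styles can be sketched against the planned API, assuming `ok`/`error` are static factory methods as the signatures above suggest:

```d
import ddn.data.csv;

// Result-checking style (hot path, no exceptions):
CsvResult!int parseCount(FieldView f)
{
    auto r = fromCsv!int(f);
    if (!r.isOk)
        return CsvResult!int.error(r.err); // propagate the structured error
    return CsvResult!int.ok(r.value);
}

// Exception-driven alternative for convenience code:
//   int n = fromCsv!int(f).valueOrThrow(); // throws CsvException on error
```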

struct CsvDialect

CSV dialect configuration.

Represents the set of options that control how CSV is parsed and written. Defaults follow RFC 4180: comma delimiter, double‑quote for quoting, CRLF/LF detection on read, RFC‑only escaping, and no header row by default.

Fields (all public):

  • delimiter: Single‑byte field separator (default: `,`).
  • quote: Quote character (default: `"`).
  • doubleQuote: When true (default), two consecutive quotes inside quoted fields represent a literal quote character.
  • trimWhitespace: When true, trim leading/trailing whitespace for unquoted fields. Default: false.
  • newlinePolicy: Controls record boundary handling and writer emission policy. Default: NewlinePolicy.DETECT.
  • escapeStyle: Optional extension for non‑RFC datasets. Default: NONE.
  • header: When true, the first record is interpreted as a header row.

Construction:

  • The type can be used with its .init value for RFC defaults, e.g. auto d = CsvDialect.init; and then mutate fields.
  • A convenience constructor allows specifying any prefix of options (positional with defaults) while the rest fall back to defaults, e.g.: auto d = CsvDialect(';'); // semicolon delimiter

Validation:

  • isValid() returns true if the options are self‑consistent. At minimum it ensures delimiter != quote.

Examples

import ddn.data.csv;

// Customize only the delimiter (semicolon-separated values)
auto d1 = CsvDialect(';');

// Fully configure a dialect
auto d2 = CsvDialect(',', '\'', true, false, NewlinePolicy.FORCE_LF);
assert(d1.isValid && d2.isValid);
Fields
char delimiter — Field delimiter (single‑byte), default `,`.
char quote — Quote character, default `"`.
bool doubleQuote — Whether doubled quotes inside quoted fields represent a literal quote.
bool trimWhitespace — Trim leading/trailing whitespace in unquoted fields.
NewlinePolicy newlinePolicy — Newline detection/emission policy.
EscapeStyle escapeStyle — Optional non‑RFC escape policy.
bool header — Interpret the first record as a header row.
ErrorMode errorMode — Error handling mode for the reader.
bool strictFieldCount — When `true`, enforce a consistent number of fields per row.
bool collectDiagnostics — When `true`, the reader will collect per-error diagnostics in `stats`.
Methods
bool isValid() const @safe pure nothrow @nogc — Returns `true` when the dialect options are self‑consistent.
Constructors
this(char delimiter, char quote = DEFAULT_QUOTE, bool doubleQuote = DEFAULT_DOUBLE_QUOTE, bool trimWhitespace = DEFAULT_TRIM_WHITESPACE, NewlinePolicy newlinePolicy = DEFAULT_NEWLINE_POLICY, EscapeStyle escapeStyle = DEFAULT_ESCAPE_STYLE, bool header = DEFAULT_HEADER) — Convenience constructor. Specify any prefix of arguments to customize options and rely on defaults for the rest.
struct CsvReader(Range) if (isInputRange!Range)

High‑throughput CSV reader over an input range of bytes.

Models a forward input range over RowView. Instances created via the provided constructors yield an empty range.

Examples

import ddn.data.csv;
// Parse two rows, sum first column lazily (no allocations on hot path)
const csv = "x,y\n10,20\n30,40\n";
long sum = 0;
auto r = byRows(csv);
// Skip header
r.popFront();
while (!r.empty)
{
   auto row = r.front;
   auto a = fromCsv!long(row[0]);
   auto b = fromCsv!long(row[1]);
   if (a.isOk && b.isOk) sum += a.value + b.value;
   r.popFront();
}
assert(sum == 100);
Fields
private CsvDialect _dialect
private CsvReadStats _stats — Reading statistics and diagnostics (see `CsvReadStats`).
private size_t _line — Number of successfully yielded rows so far (1-based line numbers use `line + 1` for the next row).
private size_t _expectedFieldCount — Expected field count when `strictFieldCount` is enabled.
private bool _expectedSet
private Range _source
Methods
inout(CsvDialect) dialect() @property ref return inout @safe pure nothrow — Access the dialect (read/write).
inout(CsvReadStats) stats() @property ref return inout @safe pure nothrow — Access read statistics and, optionally, diagnostics collected during iteration.
bool empty() @property const @safe pure nothrow @nogc — Whether the reader has no more rows to read.
inout(RowView) front() @property inout @safe pure nothrow @nogc — Current row view. Valid until the next call to `popFront`.
void popFront() @safe nothrow @nogc — Advance to the next row.
private void _recordError(CsvError e) @safe nothrow @nogc — Record an error into the reader statistics and diagnostics as configured.
Constructors
this(Range source, CsvDialect dialect = CsvDialect.init) — Construct a reader over `source` with an optional `dialect`.
struct CsvWriter(OutputRange) if (isOutputRange!(OutputRange, const(char)[]))

Efficient CSV writer to an output range of bytes.

Implements a buffered writer core that minimizes calls to the underlying sink by batching output into a large internal buffer. The buffer size is configurable via the constructor; a sensible default is used when not specified.

Notes:

  • Fields are written as provided. Quoting rules follow RFC 4180.
  • Newline emission follows dialect.newlinePolicy: DETECT and FORCE_CRLF emit CRLF; FORCE_LF emits LF.

Examples

import ddn.data.csv;
import std.algorithm.searching : startsWith;
static class Sink { string data; void put(const(char)[] s) { data ~= s; } }

auto s = new Sink();
auto w = CsvWriter!(Sink)(s); // default RFC dialect
assert(w.writeRow(["id", "name"]).isOk);
assert(w.writeRow(["1", "Doe, Jane"]).isOk); // quoted due to comma
w.flush();
assert(s.data.startsWith("id,name"));
Fields
private OutputRange _sink
private CsvDialect _dialect
private char[] _buf
private size_t _pos
private size_t _flushes
size_t DEFAULT_BUFFER_SIZE — Default buffer size (64 KiB).
Methods
inout(CsvDialect) dialect() @property ref return inout @safe pure nothrow — Access the dialect (read/write).
size_t flushes() @property const @safe pure nothrow @nogc — Number of flushes performed so far (diagnostic/testing aid).
void flush() — Ensure pending buffered data is sent to the underlying sink.
private void writeRaw(scope const(char)[] s) — Write raw bytes through the buffer, flushing as necessary.
private void writeChar(char c) — Write a single character (usually the delimiter).
private void writeNewline() — Write the dialect-specific newline (CRLF for DETECT/FORCE_CRLF, LF for FORCE_LF).
private bool needsQuoting(scope const(char)[] s) const @safe pure nothrow @nogc — Determine whether `s` must be quoted given the current dialect.
private CsvResult!bool writeField(scope const(char)[] s) — Write a single field, applying quoting/escaping per RFC 4180 and the dialect.
CsvResult!bool writeRow(const FieldView[] fields) — Write a row given as an array of fields.
CsvResult!bool writeRow(const string[] fields) — Convenience overload for string fields.
CsvResult!bool writeRow(T)(auto ref T t) if (isTuple!T) — Write a row from a `std.typecons.Tuple` of arbitrary values.
CsvResult!bool writeRow(T)(auto ref T s) if (isAggregateType!T && !isTuple!T && !isSomeString!T && !is(T == RowView)) — Write a row from a user `struct` using field declaration order.
CsvResult!bool writeRow(RowView row) — Write a `RowView` directly by emitting its fields.
CsvResult!bool writeRow(Args...)(auto ref Args args) if (!(Args.length == 1 && (is(Args[0] == string[]) || is(Args[0] == const(string)[]) || is(Args[0] == FieldView[]) || is(Args[0] == RowView)))) — Variadic overload to write a row from heterogeneous values.
CsvResult!bool writeValue(T)(auto ref T v) — Emit a single value as a CSV field using the appropriate conversion.
Constructors
this(OutputRange sink, CsvDialect dialect = CsvDialect.init, size_t bufferSize = DEFAULT_BUFFER_SIZE) — Construct a writer over `sink` with an optional `dialect` and `bufferSize`.
Destructors
~this — The destructor flushes any remaining bytes.
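The tuple, struct, and variadic overloads above suggest usage along these lines; a sketch against the planned API, where `Point` is an illustrative user type:

```d
import ddn.data.csv;
import std.typecons : tuple;

struct Point { int x; int y; string label; } // illustrative user type

void rowOverloads(W)(ref W w) // W: some CsvWriter instantiation
{
    w.writeRow(Point(1, 2, "origin"));    // struct: field declaration order
    w.writeRow(tuple(3, 4, "via tuple")); // std.typecons.Tuple overload
    w.writeRow("id", 42, true);           // variadic heterogeneous values
}
```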
struct BufferManager(Reader) if (hasRead!Reader)

Package-internal buffer manager that batches read() calls into large contiguous buffers to reduce syscall overhead and improve throughput.

Usage pattern (illustrative):

auto bm = BufferManager!MyReader(reader, 64 * 1024);
for (;;)
{
   auto chunk = bm.nextChunk();
   if (chunk.length == 0) break; // EOF
   // Consume data from `chunk` (do not hold past next call).
   bm.popFront(chunk.length);
}

Notes:

  • The buffer size is configurable via the constructor; it defaults to 64 KiB.
  • nextChunk returns a slice of the internal buffer; its content becomes invalid after the next nextChunk call.
  • The metrics readCalls and bytesRead are available for tests/diagnostics.
Fields
private Reader _reader
private ubyte[] _buf
private size_t _fill
private size_t _pos
private bool _eof
private size_t _readCalls
private size_t _bytesRead
size_t DEFAULT_BUFFER_SIZE — Default buffer size (64 KiB).
Methods
size_t readCalls() @property const @safe pure nothrow @nogc — Number of low-level `read()` calls performed so far.
size_t bytesRead() @property const @safe pure nothrow @nogc — Number of bytes read from the underlying reader so far.
bool empty() @property const @safe pure nothrow @nogc — Returns `true` when EOF has been reached and no data remains in the buffer.
size_t available() @property const @safe pure nothrow @nogc — Bytes currently available in the buffer (without more I/O).
const(ubyte)[] nextChunk() — Obtain the next available chunk. If the internal buffer is exhausted, this method refills it by calling the underlying `read()` once.
void popFront(size_t n) @safe pure nothrow — Advance the current position by `n` bytes (consumed by the caller).
const(ubyte)[] peek() const @safe pure nothrow — Return the current chunk without triggering a refill.
Constructors
this(Reader reader, size_t bufferSize = DEFAULT_BUFFER_SIZE) — Construct a buffer manager over `reader` with an optional `bufferSize`.

struct ScanResult

Package‑internal scan result for a single CSV row.

  • row: A RowView referencing the provided buffer slice; valid until the next scan or buffer reuse.
  • consumed: Number of bytes consumed from the start of the buffer, including the record terminator (CRLF/LF) when found.
  • hasRow: true when a complete row was found; false otherwise (no terminator encountered in the provided chunk per newlinePolicy).

Fields
size_t consumed
bool hasRow
bool badRow — True when the scanned row exhibits a parsing anomaly (e.g., an invalid quote sequence).
CsvErrorCode errCode — Error code describing the anomaly when `badRow == true`.
size_t totalFields — Total number of fields in the row (may exceed row.length if truncated).

Zero‑copy row scanner that splits a single row into fields without allocations and without quote/escape handling.

Notes:

  • Handles CRLF and/or LF per CsvDialect.newlinePolicy.
  • The delimiter is taken from CsvDialect.delimiter.
  • trimWhitespace is applied to all fields (only unquoted fields are relevant at this stage).
  • Produces FieldView slices that reference the input buffer; callers must ensure the buffer remains valid until they finish using the row.
  • The scanner maintains an internal fixed array to avoid GC allocations in hot paths. If the number of fields exceeds the fixed capacity, a dynamic-array fallback is used for that row (rare); @nogc tests should keep field counts below the fixed capacity to avoid allocations.

Fields
private CsvDialect _dialect
size_t FIXED_CAPACITY
private FieldView[FIXED_CAPACITY] _fixed
Methods
inout(CsvDialect) dialect() @property ref return inout @safe pure nothrow — Access the dialect in use.
private bool isTrimChar(char c) @safe pure nothrow @nogc — Returns true if character `c` is considered whitespace for trimming.
private bool isTerminator(const(char)[] buf, size_t i, NewlinePolicy p, ref size_t termLen) @safe pure nothrow @nogc — Detects a record terminator at position `i` and returns its length via `termLen`.
ScanResult scan(const(char)[] chunk, const bool eofIsTerminator = false) @trusted pure nothrow @nogc — Scan the provided `chunk` for a single CSV row using the scanner's dialect. Returns a `ScanResult`. When `hasRow` is false, the caller should provide more data (append the next chunk) and retry.
Constructors
this(CsvDialect dialect)
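The scan-and-retry contract above can be sketched as a feed loop; the generic parameter `S` stands in for the scanner type, and the chunk-appending path is elided:

```d
import ddn.data.csv;

void scanAll(S)(ref S scanner, const(char)[] buf)
{
    size_t off = 0;
    while (off < buf.length)
    {
        auto res = scanner.scan(buf[off .. $], /*eofIsTerminator:*/ true);
        if (!res.hasRow)
            break; // incomplete row: append more data and retry
        if (res.badRow)
        {
            // res.errCode describes the anomaly; skip or abort per policy.
        }
        off += res.consumed; // includes the record terminator when present
    }
}
```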

struct CsvReadStats

Reading statistics and optional diagnostics collected by CsvReader.

Fields:

  • rows: number of successfully yielded rows.
  • badRows: number of malformed rows encountered.
  • errors: total error count (equals badRows in current implementation).
  • lastError: the most recent error encountered.
  • diagnostics: per-error list (populated only when collectDiagnostics is enabled).
Fields
size_t rows
size_t badRows
size_t errors
CsvError lastError
private size_t DIAG_CAPACITY
private CsvError[DIAG_CAPACITY] _diagnostics
private size_t _diagLen
private size_t _diagDropped
Methods
size_t diagnosticsCount() @property const @safe pure nothrow @nogc — Number of stored diagnostics.
inout(CsvError)[] diagnostics() @property return inout @safe pure nothrow @nogc — View over collected diagnostics (non-owning slice).
size_t diagnosticsDropped() @property const @safe pure nothrow @nogc — Number of diagnostics dropped due to capacity limits.
void _pushDiagnostic(CsvError e) @safe nothrow @nogc — Internal: push a diagnostic if capacity allows, otherwise count it as dropped.
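A small sketch of inspecting these statistics after iteration (the `report` helper is illustrative; `R` stands in for a concrete CsvReader instantiation):

```d
import ddn.data.csv;
import std.stdio : writefln;

void report(R)(ref R reader)
{
    auto st = reader.stats;
    writefln("rows=%s badRows=%s errors=%s", st.rows, st.badRows, st.errors);
    foreach (e; st.diagnostics) // non-empty only with collectDiagnostics = true
        writefln("line %s, col %s: %s", e.line, e.column, e.message);
    if (st.diagnosticsDropped)
        writefln("(%s diagnostics dropped)", st.diagnosticsDropped);
}
```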

enum CsvOpenMode

Mode for opening a CsvFile.

read
write
append
struct CsvFile

Convenience wrapper for reading/writing CSV files.

Examples

import ddn.data.csv;
import std.file : remove, exists;

// Write a small CSV file
CsvFile outFile = CsvFile("./t23_sample.csv", CsvOpenMode.write); // `out` is a D keyword
string[][] rows = [["a","b"],["1","x,y"],["2","z"]];
assert(outFile.writeRows(rows).isOk);
assert(outFile.close().isOk);

// Read it back (buffered path)
CsvFile f = CsvFile("./t23_sample.csv", CsvOpenMode.read);
auto rr = f.reader();
size_t cnt = 0; while (!rr.empty) { ++cnt; rr.popFront(); }
assert(cnt == rows.length);
// Cleanup
if (exists("./t23_sample.csv")) remove("./t23_sample.csv");
Fields
string path — Filesystem path to the CSV.
CsvOpenMode mode — Open mode (read/write/append).
CsvDialect dialect — Dialect used for parsing/formatting.
bool memoryMapped — When true, memory mapping is used for reading where supported.
private File _file
private MmFile _mm
private const(char)[] _mapped
Methods
CsvResult!bool open() — Open the file according to the selected `mode`.
CsvResult!bool close() — Close the file if it is open. Safe to call multiple times. Returns success, or `ioFailure` if the OS close operation fails.
bool isOpen() @property const @safe nothrow — Whether the underlying file handle is currently open.
private CsvWriter!(FileSink) _makeWriter()
CsvResult!bool writeRows(R)(R rows) — Write a sequence of rows to the file using a `CsvWriter` under the hood.
CsvResult!bool appendRow(const string[] fields) — Append a single row given as `string[]`. See `CsvWriter.writeRow`.
CsvResult!bool appendRow(RowView row) — Append a single row given as a parsed `RowView`.
CsvResult!bool appendRow(Args...)(auto ref Args args) if (!(Args.length == 1 && (is(Args[0] == string[]) || is(Args[0] == const(string)[]) || is(Args[0] == FieldView[]) || is(Args[0] == RowView)))) — Variadic convenience to append a heterogeneous row, e.g., `appendRow("id", 42, true)`.
CsvResult!(CsvReader!(const(char)[])) reader() — Create a `CsvReader` over this file's contents.
CsvResult!(CsvReader!(const(char)[])) _bufferedReader() — Buffered, non-mapped reading using `BufferManager`.
Constructors
this(string path, CsvOpenMode mode = CsvOpenMode.read, CsvDialect dialect = CsvDialect.init, bool memoryMapped = false) — Construct a `CsvFile` with the given parameters.
Destructors
~this — Destructor: best-effort close of the file handle.
Nested Templates
FileSink

Functions (4)

bool ddnCanMemoryMapSize(size_t fileSize) @safe pure nothrow @nogc — Decide whether memory mapping should be attempted for a file of `fileSize` bytes on the current platform.
HeaderIndex makeHeaderIndex(RowView headerRow, bool caseSensitive = true) @safe — Convenience: build a `HeaderIndex` from a parsed header `RowView`.
CsvResult!T fromCsv(T)(FieldView f) @safe nothrow @nogc — Convert a `FieldView` to the requested type `T` lazily, without allocations.
auto byRows(Source)(Source source, CsvDialect dialect = CsvDialect.init) @safe nothrow @nogc — Helper to construct a `CsvReader` range over rows for common sources.

Variables (8)

enum DDN_MAX_MAP_SIZE32 = cast(size_t) 1_500_000_000UL

Maximum file size to attempt memory mapping on 32‑bit platforms (heuristic).

Mapping very large files on 32‑bit OSes can fail due to limited virtual address space. We apply a conservative cap (≈1.5 GB) and fall back to buffered reading beyond this size.
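The heuristic described above might look like the following; this is an illustrative sketch of the decision, not the actual `ddnCanMemoryMapSize` implementation (the `canMap` name is hypothetical):

```d
import ddn.data.csv;

bool canMap(size_t fileSize)
{
    static if (size_t.sizeof == 4)
        return fileSize <= DDN_MAX_MAP_SIZE32; // ~1.5 GB cap on 32-bit
    else
        return true; // 64-bit: address space is rarely the constraint
}
```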

enum DEFAULT_DELIMITER = ','

Default delimiter (RFC 4180).

enum DEFAULT_QUOTE = '"'

Default quote character (RFC 4180).

enum DEFAULT_DOUBLE_QUOTE = true

Default: interpret doubled quotes inside quoted fields.

enum DEFAULT_TRIM_WHITESPACE = false

Default: do not trim whitespace in unquoted fields.

enum DEFAULT_NEWLINE_POLICY = NewlinePolicy.DETECT

Default newline policy: detect CRLF and LF on read.

enum DEFAULT_ESCAPE_STYLE = EscapeStyle.NONE

Default escape style: RFC only.

enum DEFAULT_HEADER = false

Default: header row absent.

Templates (1)

template hasRead(R)

Trait that evaluates to true when R provides a read(ubyte[]) method returning the number of bytes read (size_t). This matches common D I/O patterns (e.g., file descriptors, custom sources) and enables our buffered reader to minimize read calls.
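A minimal source satisfying this trait might look like the following sketch; `MemReader` is an illustrative in-memory type, not part of the module:

```d
import ddn.data.csv;
import std.algorithm.comparison : min;

// An in-memory source providing read(ubyte[]) -> size_t.
struct MemReader
{
    const(ubyte)[] data;

    size_t read(ubyte[] dst)
    {
        const n = min(dst.length, data.length);
        dst[0 .. n] = data[0 .. n]; // copy out as much as fits
        data = data[n .. $];        // consume from the source
        return n;                   // number of bytes read
    }
}

static assert(hasRead!MemReader);
```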