upgrade to zig 0.16.0
All checks were successful
Generic zig build / build (push) Successful in 2m20s
Generic zig build / deploy (push) Successful in 27s

IO-as-an-interface refactor across the codebase. The big shifts:
- std.io → std.Io, std.fs → std.Io.Dir/File, std.process.Child → spawn/run.
- Juicy Main: pub fn main(init: std.process.Init) gives gpa, io, arena,
  environ_map up front. main.zig + the build/ scripts use it directly.
- Threading io through everywhere that touches the outside world (HTTP,
  files, stderr, sleep, terminal detection). Functions taking `io` now
  announce side effects at the call site — the smell is the feature.
- date math takes `as_of: Date`, not `today: Date`. Caller resolves
  `--as-of` flag vs wall-clock at the boundary; the function operates
  on whatever date it's given. Every "today" parameter renamed and
  the as_of: ?Date + today: Date pattern collapsed.
- now_s: i64 (or before_s/after_s pairs) for second-precision metadata
  fields like snapshot captured_at, audit cadence, formatAge/fmtTimeAgo.
  Also pure and testable.
- legitimate Timestamp.now callers (cache TTL math, FetchResult
  timestamps, rate limiter, per-frame TUI "now" captures) gain
  `// wall-clock required: ...` comments justifying the read.
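
The now_s pattern from the last two bullets, sketched in Zig. Function
names mirror the ones listed above; the exact fmtTimeAgo signature and
the render helper are illustrative assumptions, with the Timestamp call
shaped per the 0.16 API described in this commit:

```zig
const std = @import("std");

// Pure: both instants are values, so tests need no clock.
fn fmtTimeAgo(buf: []u8, before_s: i64, after_s: i64) ![]const u8 {
    const delta = after_s - before_s;
    if (delta < 60) return std.fmt.bufPrint(buf, "{d}s ago", .{delta});
    if (delta < 3600) return std.fmt.bufPrint(buf, "{d}m ago", .{@divTrunc(delta, 60)});
    return std.fmt.bufPrint(buf, "{d}h ago", .{@divTrunc(delta, 3600)});
}

// Boundary: the wall clock is read exactly once, with a justifying comment.
fn render(io: std.Io, captured_at_s: i64) !void {
    // wall-clock required: per-frame "now" capture for relative-time display
    const now_s = std.Io.Timestamp.now(io, .real).toSeconds();
    var buf: [32]u8 = undefined;
    std.log.info("snapshot {s}", .{try fmtTimeAgo(&buf, captured_at_s, now_s)});
}
```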

Test discovery: replaced the local refAllDeclsRecursive with bare
std.testing.refAllDecls(@This()). Sema-pulling main.zig's top-level
decls reaches every test file transitively through the import graph;
no explicit _ = @import(...) lines needed.
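
Concretely, the whole discovery mechanism is now one block at the bottom
of src/main.zig (shape per the description above):

```zig
test {
    // Sema-touches every top-level decl of main.zig. Each decl that is
    // an @import of a source file pulls that file into the test binary,
    // and its `test` blocks along with it, transitively.
    std.testing.refAllDecls(@This());
}
```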

Cleanup along the way:
- Dropped DataService.allocator()/io() accessor methods; renamed the
  fields to drop the base_ prefix. Callers use self.allocator and
  self.io directly.
- Dropped now-vestigial io parameters from buildSnapshot,
  analyzePortfolio, compareSchwabSummary, compareAccounts,
  buildPortfolioData, divs.display, quote.display, parsePortfolioOpts,
  aggregateLiveStocks, renderEarningsLines, capitalGainsIndicator,
  aggregateDripLots, printLotRow, portfolio.display, printSnapNote.
- Dropped the unused contributions.computeAttribution date-form
  wrapper (only computeAttributionSpec is called).
- formatAge/fmtTimeAgo take (before_s, after_s) instead of io and
  reading the clock internally.
- parseProjectionsConfig uses an internal stack-buffer
  FixedBufferAllocator instead of an allocator parameter.
- ThreadSafeAllocator wrappers in cache concurrency tests dropped
  (0.16's DebugAllocator is thread-safe by default).
- Fixed an analyzePortfolio bug surfaced by the rename: snapshot.zig was
  passing wall-clock today instead of as_of, mis-valuing cash/CDs
  for historical backfills.

83 new unit tests added, made possible by removing IO from formerly
side-effecting functions; coverage rises from 58% → 64%.
This commit is contained in:
Emil Lerch 2026-05-09 22:40:33 -07:00
parent b75381a9bd
commit fad9be6ce8
Signed by: lobo
GPG key ID: A7B62D657EF764F8
72 changed files with 2975 additions and 1486 deletions

.gitignore (vendored, 1 change)

@ -1,5 +1,6 @@
.zig-cache/
zig-out/
zig-pkg/
coverage/
.env
*.srf


@ -1,5 +1,5 @@
[tools]
prek = "0.3.1"
zig = "0.15.2"
zls = "0.15.1"
zig = "0.16.0"
zls = "0.16.0"
"ubi:DonIsaac/zlint" = "0.7.9"


@ -29,7 +29,7 @@ repos:
- id: test
name: Run zig build test
entry: zig
args: ["build", "coverage", "-Dcoverage-threshold=60"]
args: ["build", "coverage", "-Dcoverage-threshold=62"]
language: system
types: [file]
pass_filenames: false

AGENTS.md (154 changes)

@ -2,6 +2,112 @@
## ⛔ ABSOLUTE PROHIBITIONS — READ FIRST ⛔
### Zig 0.16.0 reference — read the release notes
This codebase is on Zig 0.16.0. The 0.16 release was a major
I/O-as-an-interface refactor that reshaped the standard library.
Before making non-trivial changes (especially anything touching
`std.Io`, `std.fs`, `std.process`, `std.http`, `std.Thread`,
allocators, or `std.time`), **read the release notes** at:
https://ziglang.org/download/0.16.0/release-notes.html
Key migrations that bit us repeatedly during the 0.16 upgrade and
will bite future work too:
- `std.io` → `std.Io` (namespace rename, no deprecation alias).
- `std.fs.cwd()` → `std.Io.Dir.cwd()`. All file ops take `io`.
- `std.process.Child.init(argv, alloc)` → `std.process.spawn(io, .{...})`
or `std.process.run(gpa, io, .{...})`.
- `std.time.timestamp()` → `std.Io.Timestamp.now(io, .real).toSeconds()`.
- `std.Thread.sleep(ns)` → `std.Io.sleep(io, duration, clock)`.
- `pub fn main` gains a `std.process.Init` parameter ("Juicy Main");
provides pre-built `gpa`, `io`, `arena`, `environ_map`.
- `std.heap.GeneralPurposeAllocator` → `std.heap.DebugAllocator`.
- `std.heap.ThreadSafeAllocator` removed; `ArenaAllocator` is now
lock-free thread-safe, `DebugAllocator` is thread-safe by default.
- `std.mem.trimRight`/`trimLeft` → `trimEnd`/`trimStart`.
- `std.mem.indexOf*` → `find*` (deprecation aliases still present,
so old names work but warn).
- `std.testing.refAllDeclsRecursive` removed. Only `refAllDecls`
remains. We use the bare `std.testing.refAllDecls(@This())` in
`src/main.zig`'s test block — it sema-touches every top-level decl,
which transitively pulls in every imported file's `test` blocks.
No local reimplementation is needed. See "Test discovery" below.
- `std.fs.File.readToEndAlloc(alloc, N)` → two-step:
`file.reader(io, &.{})` then `.interface.allocRemaining(alloc, .limited(N))`.
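For example, the last migration above looks like this at a call site (a
sketch; `path`, `alloc`, and the 10 MiB limit are placeholders):

```zig
// 0.15:
//   const file = try std.fs.cwd().openFile(path, .{});
//   const bytes = try file.readToEndAlloc(alloc, 10 * 1024 * 1024);

// 0.16:
const file = try std.Io.Dir.cwd().openFile(io, path, .{});
defer file.close(io);
var reader = file.reader(io, &.{});
const bytes = try reader.interface.allocRemaining(alloc, .limited(10 * 1024 * 1024));
defer alloc.free(bytes);
```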
For anything not on this list, **read the release notes first.**
The notes are long but they're organized by section; searching for
the specific symbol you're migrating is fast.
### `io` vs `today` / `now_s` — design rule
The 0.16 upgrade made a deliberate choice about which Zig-0.16 `Io`
calls to thread through and which to sidestep. **This rule is
load-bearing**; please read before adding new code that needs the
current time.
- **`io: std.Io` is threaded through anything that actually does
I/O** — file reads/writes, stderr, HTTP, process spawn, terminal
detection. A function taking `io` is announcing that it touches
the outside world. The "code smell" is a feature.
- **`today: Date` is passed as a value** for functions that need
"what day is it" but don't otherwise do I/O. Captured once at
the top of the unit of work (`runCli` for CLI, `App.init` for
TUI) and threaded through. Render output stays deterministic
within a frame even if the clock ticks over mid-render.
- **`now_s: i64` (or similar `before_s`/`after_s` pairs) is passed
as a value** for second-precision metadata fields like
snapshot `captured_at`, rollup `#!created=`, audit cadence
staleness math. Same single-capture-and-thread pattern as
`today`.
When adding a new function that needs the current time, do NOT
reach for `std.Io.Timestamp.now(io, .real)` inside the function.
Take `today: Date` or `now_s: i64` instead. Only take `io` if the
function genuinely needs to do I/O for other reasons.
**Legitimate `Timestamp.now` callers** (each must have a
`// wall-clock required: <why>` comment justifying the read):
- `cache/store.zig` — cache entry timestamps and TTL math
- `service.zig` — per-fetch `FetchResult.timestamp`
- `net/RateLimiter.zig` — token-bucket refill
- TUI per-frame "now" captures for relative-time display
- The single `Timestamp.now` capture in `main.zig`'s dispatch
entry that produces `today` and `now_s` for the rest of the
invocation
- The `format.todayDate(io)` helper itself (the one legitimate
capture function for unit-of-work entry points)
If you find yourself writing `Timestamp.now(io, ...)` somewhere
not on that list, either add a justifying comment or refactor
the function to take a value parameter.
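A minimal before/after of the rule (hypothetical `isStale` helper;
`max_age_s` is an assumed constant):

```zig
// DON'T: reading the clock inside hides a side effect and makes the
// function untestable without a real Io.
fn isStaleBad(io: std.Io, captured_at_s: i64) bool {
    const now_s = std.Io.Timestamp.now(io, .real).toSeconds();
    return now_s - captured_at_s > max_age_s;
}

// DO: take the instant as a value; the caller captures now_s once at
// the unit-of-work entry (or it comes from an --as-of style flag).
fn isStale(now_s: i64, captured_at_s: i64) bool {
    return now_s - captured_at_s > max_age_s;
}
```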
### NEVER invoke ripgrep. EVER.
**Do not run `rg` in the Bash tool.** Not for open-ended search, not for
counting matches, not for "just this one quick check", not ever. Running
ripgrep on this machine hammers the filesystem badly enough to degrade the
whole system — this is a recurring, reproducible problem, not a hunch.
**Use instead:**
- **Grep tool** (built-in) for content search. It handles regex, file
globs, and output shaping without spawning `rg`.
- **Glob tool** (built-in) for finding files by name pattern.
- **Read tool** for reading files (with `offset`/`limit` for large ones).
- Plain `grep` via the Bash tool is acceptable when the built-in Grep
tool can't express what you need — but prefer the built-in first.
**If you catch yourself typing `rg` in a Bash command:** stop, delete it,
use the Grep tool instead. The fact that `rg` is faster in the abstract
does NOT matter here. This machine's filesystem + ripgrep's parallelism
is a bad combination, full stop.
**This applies to every variant:** `rg`, `ripgrep`, piping through
`rg`, backgrounded `rg`, `rg --files`, etc. All banned.
### NEVER delete or modify build caches. EVER.
**This means:**
@ -77,15 +183,15 @@ Ask the user instead.**
```bash
zig build # build the zfin binary (output: zig-out/bin/zfin)
zig build test # run all tests (single binary, discovers all tests via refAllDeclsRecursive)
zig build test # run all tests (single binary, discovers all tests via refAllDecls + the import graph)
zig build run -- <args> # build and run CLI
zig build docs # generate library documentation
zig build coverage -Dcoverage-threshold=60 # run tests with kcov coverage (Linux only)
```
**Tooling** (managed via `.mise.toml`):
- Zig 0.15.2 (minimum)
- ZLS 0.15.1
- Zig 0.16.0 (minimum)
- ZLS 0.16.0
- zlint 0.7.9
**Linting**: `zlint --deny-warnings --fix` (runs via pre-commit on staged `.zig` files).
@ -116,7 +222,7 @@ User input → main.zig (CLI dispatch) or tui.zig (TUI event loop)
### Key design decisions
- **Internal imports use file paths, not module names.** Only external dependencies (`srf`, `vaxis`, `z2d`) use `@import("name")`. Internal code uses relative paths like `@import("models/date.zig")`. This is intentional — it lets `refAllDeclsRecursive` in the test binary discover all tests across the entire source tree.
- **Internal imports use file paths, not module names.** Only external dependencies (`srf`, `vaxis`, `z2d`) use `@import("name")`. Internal code uses relative paths like `@import("models/date.zig")`. This is intentional — it lets `refAllDecls` in the test binary discover all tests across the entire source tree.
- **DataService is the sole data source.** Both CLI and TUI go through `DataService` for all fetched data. Never call provider APIs directly from commands or TUI tabs.
@ -172,7 +278,7 @@ Each provider in `src/providers/` follows the same structure:
### Test pattern
All tests are inline (in `test` blocks within source files). There is a single test binary rooted at `src/main.zig` which uses `refAllDeclsRecursive(@This())` to discover all tests transitively via file imports. The `tests/` directory exists but fixtures are empty — all test data is defined inline.
All tests are inline (in `test` blocks within source files). There is a single test binary rooted at `src/main.zig` which uses `std.testing.refAllDecls(@This())` to sema-touch every top-level decl in main.zig. Each decl that's a `@import(...)` of a source file pulls that file into compilation, which causes its `test` blocks to be collected by the test runner. The `tests/` directory exists but fixtures are empty — all test data is defined inline.
Tests use `std.testing.allocator` (which detects leaks) and are structured as unit tests that verify individual functions. Network-dependent code is not tested (no mocking infrastructure).
@ -181,23 +287,21 @@ Tests use `std.testing.allocator` (which detects leaks) and are structured as un
**This gets fucked up every single session. Read it. Do what it says.**
`zig build test` runs tests from `test` blocks in files that are part of the
test binary's compilation unit AND are reachable from `src/main.zig`'s
import graph in a way that `refAllDeclsRecursive` actually visits the file
struct itself (not just a type extracted from it).
test binary's compilation unit AND get sema-pulled by the import graph from
`src/main.zig`. With the bare `std.testing.refAllDecls(@This())` we use, a
file's tests are collected as long as the file is imported (directly or
transitively) from main.zig.
**The failure mode:** you add `src/models/foo.zig` with 20 tests. You wire
it into `src/service.zig` via `const foo = @import("models/foo.zig");` and
re-export a type from `root.zig` as `pub const foo = @import("models/foo.zig");`.
You run `zig build test` and the test count does NOT go up. The file
**compiles** (because `foo.Bar` is referenced as a function return type),
but the `test` blocks inside it are never run.
it into `src/service.zig` only as a *type extraction*, e.g.
`const Bar = @import("models/foo.zig").Bar;` (assigning the type, not the
file struct). The file **compiles** because `Bar` is referenced, but the
file struct itself was never sema-touched as a struct, so its `test`
blocks are not collected.
**Why:** `root.zig` is imported into `main.zig` via
`const zfin = @import("root.zig");` — non-pub. `refAllDeclsRecursive` walks
`@typeInfo(@This()).@"struct".decls` at main.zig, which only surfaces some
of the decl graph. The `pub const foo = @import(...)` in root.zig is not
reliably traversed from main.zig's test root, so `foo.zig`'s test blocks
aren't collected even though the file is compiled.
**The fix:** ensure at least one importer assigns the file struct to a
`const`, like `const foo = @import("models/foo.zig");`. Even if you only
use a type from it, the `const foo` form pulls in the file's `test` blocks.
**How to verify a new file's tests are discovered:**
@ -209,15 +313,17 @@ aren't collected even though the file is compiled.
```
2. Run `zig build test --summary all 2>&1 | grep -E "tests passed|error:"`.
3. If the canary test appears in failures → discovery works, remove canary.
4. If the canary does NOT appear and total count is unchanged → see fix below.
4. If the canary does NOT appear and total count is unchanged → ensure
the file is imported via a `const x = @import(...)` form somewhere
reachable from main.zig.
**Fix:** add an explicit import in the `test` block at the bottom of
`src/main.zig`:
**Fallback fix:** if you can't fix the import shape, add an explicit
import in the `test` block at the bottom of `src/main.zig`:
```zig
test {
std.testing.refAllDeclsRecursive(@This());
_ = @import("models/foo.zig"); // ← new entry for each orphaned file
std.testing.refAllDecls(@This());
_ = @import("models/foo.zig"); // ← orphaned file
}
```


@ -162,14 +162,6 @@ server starts compressing response bodies, Content-Length reflects
the compressed byte count, not the decoded payload, so it's not a
reliable integrity check.)
## Upgrade to 0.16.0
Pending dependencies:
* SRF: complete (use 0.15.2 tag if needed)
* VAxis: work seems to be in this PR: https://github.com/rockorager/libvaxis/pull/316
* z2d: implemented in 0.11.0 of z2d
## Market-aware cache TTL for daily candles
Daily candle TTL is currently 23h45m, but candle data only becomes meaningful


@ -191,28 +191,20 @@ fn gitHeadTimestamp(b: *std.Build) ?i64 {
/// dupe it through `b.allocator`. Returns null on any error (git
/// missing, non-zero exit, empty output).
fn gitCapture(b: *std.Build, argv: []const []const u8) ?[]const u8 {
var child = std.process.Child.init(argv, b.allocator);
child.cwd = b.build_root.path;
child.stdout_behavior = .Pipe;
child.stderr_behavior = .Ignore;
child.spawn() catch return null;
const io = b.graph.io;
const result = std.process.run(b.allocator, io, .{
.argv = argv,
.cwd = .{ .path = b.build_root.path orelse "." },
}) catch return null;
defer b.allocator.free(result.stdout);
defer b.allocator.free(result.stderr);
const stdout_file = child.stdout orelse {
_ = child.wait() catch {};
return null;
};
const stdout_bytes = stdout_file.readToEndAlloc(b.allocator, 4096) catch {
_ = child.wait() catch {};
return null;
};
const term = child.wait() catch return null;
switch (term) {
.Exited => |code| if (code != 0) return null,
switch (result.term) {
.exited => |code| if (code != 0) return null,
else => return null,
}
const trimmed = std.mem.trim(u8, stdout_bytes, " \t\r\n");
const trimmed = std.mem.trim(u8, result.stdout, " \t\r\n");
if (trimmed.len == 0) return null;
return b.dupe(trimmed);
}


@ -2,19 +2,19 @@
.name = .zfin,
.version = "0.0.0",
.fingerprint = 0x77a9b4c7d676e027,
.minimum_zig_version = "0.15.2",
.minimum_zig_version = "0.16.0",
.dependencies = .{
.vaxis = .{
.url = "git+https://github.com/rockorager/libvaxis.git#67bbc1ee072aa390838c66caf4ed47edee282dc4",
.hash = "vaxis-0.5.1-BWNV_IxJCQC5OGNaXQfNnqgn9_Vku0PMgey-dplubcQK",
.url = "git+https://github.com/rockorager/libvaxis.git?ref=main#1dbbe575dff4586fe51e3217aa5c3fecdcbb6089",
.hash = "vaxis-0.6.0-BWNV_CrbCQCscGpzsAlR402rYQ_tV3aAl081c2iRRkka",
},
.z2d = .{
.url = "git+https://github.com/vancluever/z2d?ref=v0.10.0#6d1d7bda6b696c0941d204e6042f1e8ee900e001",
.hash = "z2d-0.10.0-j5P_Hu-6FgBsZNgwphIqh17jDnj8_yPtD8yzjO6PpHRQ",
.url = "git+https://github.com/vancluever/z2d?ref=v0.11.0#5184a79622dce6b885c45ef6666f8c92385bed10",
.hash = "z2d-0.11.0-j5P_HtLzDwBGyQt49DrT0v4BuVqI_SRs6CXsuj7eBVhR",
},
.srf = .{
.url = "git+https://git.lerch.org/lobo/srf.git#353f8bca359d35872c1869dca906f34f9579d073",
.hash = "srf-0.0.0-qZj577GyAQBpIS3e1hiOb6Gi-4KUmFxaNsk3jzZMszoO",
.url = "git+https://git.lerch.org/lobo/srf.git?ref=master#512eab0db082f1679af4de77b1f1713409766fcf",
.hash = "srf-0.0.0-qZj57-7CAQBdAFgdiSB2bE5Socq8QNId8PFzynVQbSUN",
},
},
.paths = .{


@ -174,13 +174,15 @@ fn make(step: *Build.Step, options: Build.Step.MakeOptions) !void {
_ = options;
const check: *Coverage = @fieldParentPtr("step", step);
const allocator = step.owner.allocator;
const io = step.owner.graph.io;
const file = std.fs.cwd().openFile(check.json_path, .{}) catch |err| {
const file = std.Io.Dir.cwd().openFile(io, check.json_path, .{}) catch |err| {
return step.fail("Failed to open coverage report {s}: {}", .{ check.json_path, err });
};
defer file.close();
defer file.close(io);
const content = try file.readToEndAlloc(allocator, 10 * 1024 * 1024);
var file_reader = file.reader(io, &.{});
const content = try file_reader.interface.allocRemaining(allocator, .limited(10 * 1024 * 1024));
defer allocator.free(content);
const json = std.json.parseFromSlice(CoverageReport, allocator, content, .{
@ -214,7 +216,7 @@ fn make(step: *Build.Step, options: Build.Step.MakeOptions) !void {
std.mem.sort(File, file_list.items, {}, File.coverageLessThanDesc);
var stdout_buffer: [1024]u8 = undefined;
var stdout_writer = std.fs.File.stdout().writer(&stdout_buffer);
var stdout_writer = std.Io.File.stdout().writer(io, &stdout_buffer);
const stdout = &stdout_writer.interface;
if (step.owner.verbose) {
for (file_list.items) |f| {


@ -1,11 +1,14 @@
const std = @import("std");
pub fn main() !void {
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
const allocator = gpa.allocator();
pub fn main(init: std.process.Init) !void {
// Build-time helper: short-lived process that downloads a single
// file. Arena lets us skip per-allocation `defer free(...)` and
// amortizes the allocation cost across the run via the arena's
// exponential block growth. Process exit reclaims everything.
const allocator = init.arena.allocator();
const io = init.io;
const args = try std.process.argsAlloc(allocator);
defer std.process.argsFree(allocator, args);
const args = try init.minimal.args.toSlice(allocator);
if (args.len != 3) return error.InvalidArgs;
@ -13,20 +16,20 @@ pub fn main() !void {
const arch_name = args[2];
// Check to see if file exists. If it does, we have nothing more to do
const stat = std.fs.cwd().statFile(kcov_path) catch |err| blk: {
const stat = std.Io.Dir.cwd().statFile(io, kcov_path, .{}) catch |err| blk: {
if (err == error.FileNotFound) break :blk null else return err;
};
// This might be better checking whether it's executable and >= 7MB, but
// for now, we'll do a simple exists check
if (stat != null) return;
var stdout_buffer: [1024]u8 = undefined;
var stdout_writer = std.fs.File.stdout().writer(&stdout_buffer);
var stdout_writer = std.Io.File.stdout().writer(io, &stdout_buffer);
const stdout = &stdout_writer.interface;
try stdout.writeAll("Determining latest kcov version\n");
try stdout.flush();
var client = std.http.Client{ .allocator = allocator };
var client = std.http.Client{ .allocator = allocator, .io = io };
defer client.deinit();
// Get redirect to find latest version
@ -41,7 +44,7 @@ pub fn main() !void {
if (response.head.status != .see_other) return error.UnexpectedResponse;
const location = response.head.location orelse return error.NoLocation;
const version_start = std.mem.lastIndexOf(u8, location, "/") orelse return error.InvalidLocation;
const version_start = std.mem.lastIndexOfScalar(u8, location, '/') orelse return error.InvalidLocation;
const version = location[version_start + 1 ..];
try stdout.print(
@ -55,20 +58,20 @@ pub fn main() !void {
"https://git.lerch.org/api/packages/lobo/generic/kcov/{s}/kcov-{s}",
.{ version, arch_name },
);
defer allocator.free(binary_url);
const cache_dir = std.fs.path.dirname(kcov_path) orelse return error.InvalidPath;
std.fs.cwd().makeDir(cache_dir) catch |e| switch (e) {
std.Io.Dir.cwd().createDir(io, cache_dir, std.Io.File.Permissions.default_dir) catch |e| switch (e) {
error.PathAlreadyExists => {},
else => return e,
};
const uri = try std.Uri.parse(binary_url);
const file = try std.fs.cwd().createFile(kcov_path, .{ .mode = 0o755 });
defer file.close();
const file = try std.Io.Dir.cwd().createFile(io, kcov_path, .{});
defer file.close(io);
file.setPermissions(io, @enumFromInt(0o755)) catch {};
var buffer: [8192]u8 = undefined;
var writer = file.writer(&buffer);
var writer = file.writer(io, &buffer);
const result = try client.fetch(.{
.location = .{ .uri = uri },
.response_writer = &writer.interface,


@ -10,28 +10,27 @@
const std = @import("std");
const ShillerYear = @import("shiller").ShillerYear;
pub fn main() !void {
var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
defer arena.deinit();
const allocator = arena.allocator();
pub fn main(init: std.process.Init) !void {
const allocator = init.arena.allocator();
const io = init.io;
const args = try std.process.argsAlloc(allocator);
const args = try init.minimal.args.toSlice(allocator);
if (args.len < 3) {
std.debug.print("Usage: gen_shiller <ie_data.csv> <output.zig>\n", .{});
std.process.exit(1);
}
const csv_data = try std.fs.cwd().readFileAlloc(allocator, args[1], 10 * 1024 * 1024);
const csv_data = try std.Io.Dir.cwd().readFileAlloc(io, args[1], allocator, .limited(10 * 1024 * 1024));
var results: [200]ShillerYear = undefined;
// Write output .zig file just raw parallel arrays, no type dependencies.
const out_file = try std.fs.cwd().createFile(args[2], .{});
defer out_file.close();
const out_file = try std.Io.Dir.cwd().createFile(io, args[2], .{});
defer out_file.close(io);
const parsed = try parseCsv(csv_data, &results);
var out_buf: [1024]u8 = undefined;
var file_writer = out_file.writer(&out_buf);
var file_writer = out_file.writer(io, &out_buf);
const writer = &file_writer.interface;
try writer.writeAll(
\\// Auto-generated from ie_data.csv — do not edit.


@ -43,21 +43,23 @@ zfin_home: ?[]const u8 = null,
allocator: ?std.mem.Allocator = null,
/// Raw .env file contents (keys/values in env_map point into this).
env_buf: ?[]const u8 = null,
/// Parsed KEY=VALUE pairs from .env file.
/// Parsed KEY=VALUE pairs from .env file (fallback when process env is missing a key).
env_map: ?EnvMap = null,
/// Strings allocated by resolve() from process environment variables.
env_owned: std.ArrayList([]const u8) = .empty,
/// Process-level environment variable map (from Juicy Main). First-priority
/// lookup source before falling back to the .env file.
environ_map: ?*const std.process.Environ.Map = null,
// Construction / teardown
pub fn fromEnv(allocator: std.mem.Allocator) @This() {
pub fn fromEnv(io: std.Io, allocator: std.mem.Allocator, environ_map: *const std.process.Environ.Map) @This() {
var self = @This(){
.cache_dir = undefined,
.allocator = allocator,
.environ_map = environ_map,
};
// Try loading .env file from the current working directory
self.env_buf = std.fs.cwd().readFileAlloc(allocator, ".env", 4096) catch null;
self.env_buf = std.Io.Dir.cwd().readFileAlloc(io, ".env", allocator, .limited(4096)) catch null;
if (self.env_buf) |buf| {
self.env_map = parseEnvFile(allocator, buf);
}
@ -70,7 +72,7 @@ pub fn fromEnv(allocator: std.mem.Allocator) @This() {
const env_path = std.fs.path.join(allocator, &.{ home, ".env" }) catch null;
if (env_path) |p| {
defer allocator.free(p);
self.env_buf = std.fs.cwd().readFileAlloc(allocator, p, 4096) catch null;
self.env_buf = std.Io.Dir.cwd().readFileAlloc(io, p, allocator, .limited(4096)) catch null;
if (self.env_buf) |buf| {
self.env_map = parseEnvFile(allocator, buf);
}
@ -110,8 +112,6 @@ pub fn deinit(self: *@This()) void {
map.deinit();
}
if (self.env_buf) |buf| a.free(buf);
for (self.env_owned.items) |s| a.free(s);
self.env_owned.deinit(a);
if (self.cache_dir_owned) {
a.free(self.cache_dir);
}
@ -131,14 +131,14 @@ pub const ResolvedPath = struct {
/// Resolve a user file, trying cwd first then ZFIN_HOME.
/// Returns the path to use; caller must call `deinit()` on the result.
pub fn resolveUserFile(self: @This(), allocator: std.mem.Allocator, rel_path: []const u8) ?ResolvedPath {
if (std.fs.cwd().access(rel_path, .{})) |_| {
pub fn resolveUserFile(self: @This(), io: std.Io, allocator: std.mem.Allocator, rel_path: []const u8) ?ResolvedPath {
if (std.Io.Dir.cwd().access(io, rel_path, .{})) |_| {
return .{ .path = rel_path, .owned = false };
} else |_| {}
if (self.zfin_home) |home| {
const full = std.fs.path.join(allocator, &.{ home, rel_path }) catch return null;
if (std.fs.cwd().access(full, .{})) |_| {
if (std.Io.Dir.cwd().access(io, full, .{})) |_| {
return .{ .path = full, .owned = true };
} else |_| {
allocator.free(full);
@ -161,11 +161,8 @@ pub fn hasAnyKey(self: @This()) bool {
/// Look up a key: process environment first, then .env file fallback.
fn resolve(self: *@This(), key: []const u8) ?[]const u8 {
if (self.allocator) |a| {
if (std.process.getEnvVarOwned(a, key)) |v| {
self.env_owned.append(a, v) catch {};
return v;
} else |_| {}
if (self.environ_map) |em| {
if (em.get(key)) |v| return v;
}
if (self.env_map) |m| return m.get(key);
return null;
@ -309,8 +306,8 @@ test "ResolvedPath.deinit: frees when owned, no-op when not owned" {
// (If this leaked, the test allocator would fail the test.)
}
test "resolve: env_map fallback when allocator is null (skips process env)" {
// Setting allocator=null disables the getEnvVarOwned branch, so the
test "resolve: env_map fallback when environ_map is null (skips process env)" {
// Setting environ_map=null disables the process-env branch, so the
// lookup must come from env_map alone. This exercises the .env-only
// code path without depending on host environment variables.
var map = EnvMap.init(testing.allocator);
@ -326,7 +323,7 @@ test "resolve: env_map fallback when allocator is null (skips process env)" {
try testing.expect(c.resolve("MISSING") == null);
}
test "resolve: allocator=null and env_map=null returns null" {
test "resolve: environ_map=null and env_map=null returns null" {
var c: @This() = .{ .cache_dir = "/tmp" };
try testing.expect(c.resolve("ANYTHING") == null);
}


@ -261,7 +261,7 @@ pub fn analyzePortfolio(
portfolio: Portfolio,
total_portfolio_value: f64,
account_map: ?AccountMap,
as_of: ?Date,
as_of: Date,
) !AnalysisResult {
// Accumulators: label -> dollar amount
var ac_map = std.StringHashMap(f64).init(allocator);
@ -327,12 +327,11 @@ pub fn analyzePortfolio(
}
// Account breakdown from individual lots (avoids "Multiple" aggregation issue).
// Use `lotIsOpenAsOf(as_of)` when provided so backfilled snapshots
// correctly include/exclude lots based on the target date rather
// than wall-clock today. `isOpen()` = `lotIsOpenAsOf(today)`.
const reference_date = as_of orelse Date.fromEpoch(std.time.timestamp());
// Use `lotIsOpenAsOf(as_of)` so backfilled snapshots correctly include/
// exclude lots based on the target date. For "live" callers the right
// thing is to pass today; the resolution happens at the call site.
for (portfolio.lots) |lot| {
if (!lot.lotIsOpenAsOf(reference_date)) continue;
if (!lot.lotIsOpenAsOf(as_of)) continue;
const acct = lot.account orelse continue;
const value: f64 = switch (lot.security_type) {
.stock => blk: {
@ -353,8 +352,8 @@ pub fn analyzePortfolio(
}
// Add non-stock asset classes (combine Cash + CDs)
const cash_total = portfolio.totalCash();
const cd_total = portfolio.totalCdFaceValue();
const cash_total = portfolio.totalCash(as_of);
const cd_total = portfolio.totalCdFaceValue(as_of);
const cash_cd_total = cash_total + cd_total;
if (cash_cd_total > 0) {
const prev = ac_map.get("Cash & CDs") orelse 0;
@ -362,7 +361,7 @@ pub fn analyzePortfolio(
const gprev = geo_map.get("US") orelse 0;
geo_map.put("US", gprev + cash_cd_total) catch {};
}
const opt_total = portfolio.totalOptionCost();
const opt_total = portfolio.totalOptionCost(as_of);
if (opt_total > 0) {
const prev = ac_map.get("Options") orelse 0;
ac_map.put("Options", prev + opt_total) catch {};
@ -608,7 +607,7 @@ test "account breakdown applies price_ratio" {
portfolio,
142_500,
null,
null,
Date.fromYmd(2024, 6, 1),
);
defer result.deinit(allocator);


@ -206,11 +206,11 @@ pub fn trailingReturnsWithDividends(
/// End date = last calendar day of prior month. Start date = that month-end minus N years.
/// Both dates snap backward to the last trading day on or before, matching
/// Morningstar's "last business day of the month" convention.
pub fn trailingReturnsMonthEnd(candles: []const Candle, today: Date) TrailingReturns {
pub fn trailingReturnsMonthEnd(candles: []const Candle, as_of: Date) TrailingReturns {
if (candles.len == 0) return .{};
// End reference = last day of the prior month (snaps backward to last trading day)
const month_end = today.lastDayOfPriorMonth();
const month_end = as_of.lastDayOfPriorMonth();
return .{
.one_year = totalReturnFromAdjCloseBackward(candles, month_end.subtractYears(1), month_end),
@ -224,11 +224,11 @@ pub fn trailingReturnsMonthEnd(candles: []const Candle, today: Date) TrailingRet
pub fn trailingReturnsMonthEndWithDividends(
candles: []const Candle,
dividends: []const Dividend,
today: Date,
as_of: Date,
) TrailingReturns {
if (candles.len == 0) return .{};
const month_end = today.lastDayOfPriorMonth();
const month_end = as_of.lastDayOfPriorMonth();
return .{
.one_year = totalReturnWithDividendsBackward(candles, dividends, month_end.subtractYears(1), month_end),


@ -153,13 +153,9 @@ pub const UserConfig = struct {
return self.events[0..self.event_count];
}
/// Compute current ages (in whole years) from birthdates.
pub fn currentAges(self: *const UserConfig) [max_persons]u16 {
return currentAgesAsOf(self, Date.fromEpoch(std.time.timestamp()));
}
/// Compute ages as of a specific date (for testing).
pub fn currentAgesAsOf(self: *const UserConfig, as_of: Date) [max_persons]u16 {
/// Compute ages (in whole years) as of `as_of`. Pass today's date
/// for "current ages"; pass a historical date for backfill.
pub fn currentAges(self: *const UserConfig, as_of: Date) [max_persons]u16 {
var ages: [max_persons]u16 = @splat(0);
for (0..self.birthdate_count) |i| {
const years = Date.yearsBetween(self.birthdates[i], as_of);
@ -170,7 +166,7 @@ pub const UserConfig = struct {
/// Resolve age-based horizons (`horizon_ages`) into year counts and
/// append them to `horizons`. For each target age, computes
`target_age - max(currentAgesAsOf(as_of))`, the number of years
`target_age - max(currentAges(as_of))`, the number of years
/// until the oldest configured person hits that age. Targets that are
/// already in the past (oldest age target) are silently skipped.
///
@ -181,7 +177,7 @@ pub const UserConfig = struct {
if (self.horizon_age_count == 0) return;
if (self.birthdate_count == 0) return error.HorizonAgeWithoutBirthdate;
const ages = self.currentAgesAsOf(as_of);
const ages = self.currentAges(as_of);
var oldest: u16 = 0;
for (0..self.birthdate_count) |i| {
if (ages[i] > oldest) oldest = ages[i];
@ -217,8 +213,8 @@ pub const UserConfig = struct {
/// Resolve all events into ResolvedEvents for the simulation.
/// Skips events with invalid person indices.
pub fn resolveEvents(self: *const UserConfig) [max_events]ResolvedEvent {
const ages = self.currentAges();
pub fn resolveEvents(self: *const UserConfig, as_of: Date) [max_events]ResolvedEvent {
const ages = self.currentAges(as_of);
return resolveEventsWithAges(self, &ages);
}
@ -272,6 +268,14 @@ const SrfProjection = union(enum) {
/// Parse a projections.srf file into a UserConfig.
/// Returns default config if data is null or unparseable.
///
/// Uses an internal stack-backed FixedBufferAllocator for the SRF
/// iterator's scratch (`alloc_strings = false` keeps strings borrowing
/// from `data`, so the iterator only needs scratch for field-row
/// bookkeeping). The 8 KB buffer comfortably fits any realistic
/// projections.srf: a handful of config + birthdate + event records.
/// On overflow the parse aborts and we return the default config,
/// matching the existing "unparseable → defaults" contract.
///
/// Format (union-tagged SRF records):
/// type::config,target_stock_pct:num:80
/// type::config,horizon:num:30
@ -282,8 +286,12 @@ pub fn parseProjectionsConfig(data: ?[]const u8) UserConfig {
const raw = data orelse return config;
if (raw.len == 0) return config;
var scratch_buf: [8 * 1024]u8 = undefined;
var fba = std.heap.FixedBufferAllocator.init(&scratch_buf);
const scratch = fba.allocator();
var reader = std.Io.Reader.fixed(raw);
var it = srf.iterator(&reader, std.heap.smp_allocator, .{ .alloc_strings = false }) catch return config;
var it = srf.iterator(&reader, scratch, .{ .alloc_strings = false }) catch return config;
defer it.deinit();
var saw_horizon = false;

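The overflow contract the new doc comment describes, as a standalone sketch (a `std.testing` block, separate from the parse path above):

```zig
const std = @import("std");

test "fixed scratch overflows to OutOfMemory, not a crash" {
    var buf: [16]u8 = undefined;
    var fba = std.heap.FixedBufferAllocator.init(&buf);
    const a = fba.allocator();
    _ = try a.alloc(u8, 8); // fits in the fixed buffer
    // Exhausting scratch surfaces as an ordinary error that the parse
    // path can `catch return config`; no heap allocation, no panic.
    try std.testing.expectError(error.OutOfMemory, a.alloc(u8, 64));
}
```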
View file

@ -411,7 +411,7 @@ pub fn computeWindowSet(
allocator: std.mem.Allocator,
points: []const TimelinePoint,
metric: Metric,
today: Date,
as_of: Date,
) !WindowSet {
if (points.len == 0) {
return .{ .rows = &.{}, .allocator = allocator };
@ -425,7 +425,7 @@ pub fn computeWindowSet(
const end_value = extractValue(end_point, metric);
for (windows, 0..) |period, i| {
const target = period.targetDate(today);
const target = period.targetDate(as_of);
const anchor_opt = pointAtOrBefore(points, target);
rows[i] = if (anchor_opt) |a| .{

View file

@ -27,10 +27,10 @@ pub const PortfolioSummary = struct {
/// Options add at cost basis (no live pricing).
/// This keeps unrealized_gain_loss correct (only stocks contribute market gains)
/// but dilutes the return% against the full portfolio cost base.
fn adjustForNonStockAssets(self: *PortfolioSummary, portfolio: portfolio_mod.Portfolio) void {
const cash_total = portfolio.totalCash();
const cd_total = portfolio.totalCdFaceValue();
const opt_total = portfolio.totalOptionCost();
fn adjustForNonStockAssets(self: *PortfolioSummary, as_of: Date, portfolio: portfolio_mod.Portfolio) void {
const cash_total = portfolio.totalCash(as_of);
const cd_total = portfolio.totalCdFaceValue(as_of);
const opt_total = portfolio.totalOptionCost(as_of);
const non_stock = cash_total + cd_total + opt_total;
self.total_value += non_stock;
self.total_cost += non_stock;
@ -150,8 +150,8 @@ pub const Allocation = struct {
/// model already exposes. Every display site (CLI `portfolio` command,
/// TUI portfolio tab, planned snapshot writer) should call this instead
/// of re-summing inline.
pub fn netWorth(portfolio: portfolio_mod.Portfolio, summary: PortfolioSummary) f64 {
return summary.total_value + portfolio.totalIlliquid();
pub fn netWorth(as_of: Date, portfolio: portfolio_mod.Portfolio, summary: PortfolioSummary) f64 {
return summary.total_value + portfolio.totalIlliquid(as_of);
}
/// `netWorth` evaluated against an arbitrary date used by historical
@ -352,6 +352,7 @@ fn mergeAllocsBySymbol(allocs: *std.ArrayList(Allocation), allocator: std.mem.Al
/// Automatically adjusts for covered calls (ITM sold calls capped at strike) and
/// non-stock assets (cash, CDs, options added to totals).
pub fn portfolioSummary(
as_of: Date,
allocator: std.mem.Allocator,
portfolio: portfolio_mod.Portfolio,
positions: []const portfolio_mod.Position,
@ -421,7 +422,7 @@ pub fn portfolioSummary(
};
summary.adjustForCoveredCalls(portfolio.lots, prices);
summary.adjustForNonStockAssets(portfolio);
summary.adjustForNonStockAssets(as_of, portfolio);
return summary;
}
@ -517,27 +518,27 @@ pub const HistoricalPeriod = enum {
};
}
/// Compute the target date by subtracting this period from `today`.
/// Compute the target date by subtracting this period from `as_of`.
///
/// `1D` subtracts one calendar day. Downstream snap-backward logic
/// will then pick the latest available data point on or before that
/// date, so a Saturday-run view with no Saturday snapshot naturally
/// compares today against Friday's close.
/// compares as_of against Friday's close.
///
/// `ytd` resolves to Jan 1 of today's year. Jan 1 is always a market
/// `ytd` resolves to Jan 1 of `as_of`'s year. Jan 1 is always a market
/// holiday; the snap primitive will fall back to the prior year's
/// final trading day, which is exactly the brokerage YTD convention.
pub fn targetDate(self: HistoricalPeriod, today: Date) Date {
pub fn targetDate(self: HistoricalPeriod, as_of: Date) Date {
return switch (self) {
.@"1D" => today.addDays(-1),
.@"1W" => today.addDays(-7),
.@"1M" => today.subtractMonths(1),
.@"3M" => today.subtractMonths(3),
.ytd => Date.fromYmd(today.year(), 1, 1),
.@"1Y" => today.subtractYears(1),
.@"3Y" => today.subtractYears(3),
.@"5Y" => today.subtractYears(5),
.@"10Y" => today.subtractYears(10),
.@"1D" => as_of.addDays(-1),
.@"1W" => as_of.addDays(-7),
.@"1M" => as_of.subtractMonths(1),
.@"3M" => as_of.subtractMonths(3),
.ytd => Date.fromYmd(as_of.year(), 1, 1),
.@"1Y" => as_of.subtractYears(1),
.@"3Y" => as_of.subtractYears(3),
.@"5Y" => as_of.subtractYears(5),
.@"10Y" => as_of.subtractYears(10),
};
}
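Illustrative use of `targetDate` showing the YTD convention from the doc comment (the date is hypothetical):

```zig
// YTD from any 2026 date targets Jan 1, 2026. Jan 1 is always a
// market holiday, so the downstream snap-backward lookup
// (pointAtOrBefore / findPriceAtDate) resolves it to the last
// trading day of 2025, which is exactly the brokerage YTD baseline.
const target = HistoricalPeriod.ytd.targetDate(Date.fromYmd(2026, 5, 8));
std.debug.assert(target.eql(Date.fromYmd(2026, 1, 1)));
```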
@ -596,7 +597,7 @@ fn findPriceAtDate(candles: []const Candle, target: Date) ?f64 {
/// `current_prices` maps symbol -> current price.
/// Only equity positions are considered.
pub fn computeHistoricalSnapshots(
today: Date,
as_of: Date,
positions: []const portfolio_mod.Position,
current_prices: std.StringHashMap(f64),
candle_map: std.StringHashMap([]const Candle),
@ -604,7 +605,7 @@ pub fn computeHistoricalSnapshots(
var result: [HistoricalPeriod.all.len]HistoricalSnapshot = undefined;
for (HistoricalPeriod.all, 0..) |period, pi| {
const target = period.targetDate(today);
const target = period.targetDate(as_of);
var hist_value: f64 = 0;
var curr_value: f64 = 0;
var count: usize = 0;
@ -891,7 +892,7 @@ test "adjustForNonStockAssets" {
.realized_gain_loss = 0,
.allocations = &allocs,
};
summary.adjustForNonStockAssets(pf);
summary.adjustForNonStockAssets(Date.fromYmd(2026, 5, 8), pf);
// non_stock = 5000 + 10000 + (2 * 5 * 100) = 16000
try std.testing.expectApproxEqAbs(@as(f64, 18200), summary.total_value, 0.01);
try std.testing.expectApproxEqAbs(@as(f64, 18000), summary.total_cost, 0.01);
@ -945,7 +946,7 @@ test "portfolioSummary applies price_ratio" {
try prices.put("AAPL", 175.0);
const empty_pf = portfolio_mod.Portfolio{ .lots = &.{}, .allocator = alloc };
var summary = try portfolioSummary(alloc, empty_pf, &positions, prices, null);
var summary = try portfolioSummary(Date.fromYmd(2026, 5, 8), alloc, empty_pf, &positions, prices, null);
defer summary.deinit(alloc);
try std.testing.expectEqual(@as(usize, 2), summary.allocations.len);
@ -982,7 +983,7 @@ test "portfolioSummary skips price_ratio for manual/fallback prices" {
defer manual.deinit();
try manual.put("VTTHX", {});
var summary = try portfolioSummary(alloc, .{ .lots = &.{}, .allocator = alloc }, &positions, prices, manual);
var summary = try portfolioSummary(Date.fromYmd(2026, 5, 8), alloc, .{ .lots = &.{}, .allocator = alloc }, &positions, prices, manual);
defer summary.deinit(alloc);
try std.testing.expectEqual(@as(usize, 1), summary.allocations.len);
@ -1166,7 +1167,7 @@ test "netWorth / netWorthAsOf: illiquid respects target date" {
// illiquid is excluded. Asserts the no-arg form delegates correctly.
try std.testing.expectApproxEqAbs(
@as(f64, 100_000.0),
netWorth(portfolio, summary),
netWorth(Date.fromYmd(2026, 5, 8), portfolio, summary),
0.01,
);
}

View file

@ -31,6 +31,7 @@ pub const tmp_suffix = ".tmp";
/// The allocator is used for a short-lived temp-path buffer
/// (`path.len + tmp_suffix.len` bytes) and freed before return.
pub fn writeFileAtomic(
io: std.Io,
allocator: std.mem.Allocator,
path: []const u8,
bytes: []const u8,
@ -39,25 +40,25 @@ pub fn writeFileAtomic(
defer allocator.free(tmp_path);
{
var tmp_file = try std.fs.cwd().createFile(tmp_path, .{
var tmp_file = try std.Io.Dir.cwd().createFile(io, tmp_path, .{
.truncate = true,
.exclusive = false,
});
errdefer {
tmp_file.close();
std.fs.cwd().deleteFile(tmp_path) catch {};
tmp_file.close(io);
std.Io.Dir.cwd().deleteFile(io, tmp_path) catch {};
}
try tmp_file.writeAll(bytes);
try tmp_file.writeStreamingAll(io, bytes);
// fsync so the kernel flushes data to disk before the rename
// appears. Without this, a crash between rename() and the data
// hitting disk could leave an empty-but-present file at `path`.
try tmp_file.sync();
tmp_file.close();
try tmp_file.sync(io);
tmp_file.close(io);
}
std.fs.cwd().rename(tmp_path, path) catch |err| {
std.fs.cwd().deleteFile(tmp_path) catch {};
std.Io.Dir.cwd().rename(tmp_path, std.Io.Dir.cwd(), path, io) catch |err| {
std.Io.Dir.cwd().deleteFile(io, tmp_path) catch {};
return err;
};
}
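A usage sketch for the new signature (the path and payload names are illustrative):

```zig
// Readers racing this call observe either the old file or the new
// one, never a truncated intermediate: the data is written to a .tmp
// sibling, fsync'd, and only then renamed into place.
try writeFileAtomic(io, allocator, "cache/AAPL/candles.srf", srf_bytes);
```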
@ -65,52 +66,56 @@ pub fn writeFileAtomic(
// Tests
test "writeFileAtomic creates new file" {
const io = std.testing.io;
var tmp_dir = std.testing.tmpDir(.{});
defer tmp_dir.cleanup();
var path_buf: [std.fs.max_path_bytes]u8 = undefined;
const dir_path = try tmp_dir.dir.realpath(".", &path_buf);
const dir_len = try tmp_dir.dir.realPathFile(io, ".", &path_buf);
const dir_path = path_buf[0..dir_len];
const file_path = try std.fs.path.join(std.testing.allocator, &.{ dir_path, "atomic_new.txt" });
defer std.testing.allocator.free(file_path);
try writeFileAtomic(std.testing.allocator, file_path, "hello world\n");
try writeFileAtomic(io, std.testing.allocator, file_path, "hello world\n");
const contents = try std.fs.cwd().readFileAlloc(std.testing.allocator, file_path, 4096);
const contents = try std.Io.Dir.cwd().readFileAlloc(io, file_path, std.testing.allocator, .limited(4096));
defer std.testing.allocator.free(contents);
try std.testing.expectEqualStrings("hello world\n", contents);
// Tmp file should have been consumed by rename.
const tmp_path = try std.fmt.allocPrint(std.testing.allocator, "{s}{s}", .{ file_path, tmp_suffix });
defer std.testing.allocator.free(tmp_path);
try std.testing.expectError(error.FileNotFound, std.fs.cwd().access(tmp_path, .{}));
try std.testing.expectError(error.FileNotFound, std.Io.Dir.cwd().access(io, tmp_path, .{}));
// Clean up for the next test run.
std.fs.cwd().deleteFile(file_path) catch {};
std.Io.Dir.cwd().deleteFile(io, file_path) catch {};
}
test "writeFileAtomic overwrites existing file" {
const io = std.testing.io;
var tmp_dir = std.testing.tmpDir(.{});
defer tmp_dir.cleanup();
var path_buf: [std.fs.max_path_bytes]u8 = undefined;
const dir_path = try tmp_dir.dir.realpath(".", &path_buf);
const dir_len = try tmp_dir.dir.realPathFile(io, ".", &path_buf);
const dir_path = path_buf[0..dir_len];
const file_path = try std.fs.path.join(std.testing.allocator, &.{ dir_path, "atomic_over.txt" });
defer std.testing.allocator.free(file_path);
// Seed with old content.
{
var f = try std.fs.cwd().createFile(file_path, .{});
try f.writeAll("old contents");
f.close();
var f = try std.Io.Dir.cwd().createFile(io, file_path, .{});
try f.writeStreamingAll(io, "old contents");
f.close(io);
}
try writeFileAtomic(std.testing.allocator, file_path, "new contents");
try writeFileAtomic(io, std.testing.allocator, file_path, "new contents");
const contents = try std.fs.cwd().readFileAlloc(std.testing.allocator, file_path, 4096);
const contents = try std.Io.Dir.cwd().readFileAlloc(io, file_path, std.testing.allocator, .limited(4096));
defer std.testing.allocator.free(contents);
try std.testing.expectEqualStrings("new contents", contents);
std.fs.cwd().deleteFile(file_path) catch {};
std.Io.Dir.cwd().deleteFile(io, file_path) catch {};
}
test "writeFileAtomic: missing parent directory surfaces FileNotFound" {
@ -118,11 +123,13 @@ test "writeFileAtomic: missing parent directory surfaces FileNotFound" {
// itself exists (so the filesystem is fine), but the "missing"
// subdirectory does not; createFile on the .tmp file must fail
// with FileNotFound regardless of platform.
const io = std.testing.io;
var tmp_dir = std.testing.tmpDir(.{});
defer tmp_dir.cleanup();
var path_buf: [std.fs.max_path_bytes]u8 = undefined;
const dir_path = try tmp_dir.dir.realpath(".", &path_buf);
const dir_len = try tmp_dir.dir.realPathFile(io, ".", &path_buf);
const dir_path = path_buf[0..dir_len];
const bad_path = try std.fs.path.join(
std.testing.allocator,
&.{ dir_path, "missing", "file.txt" },
@ -131,6 +138,6 @@ test "writeFileAtomic: missing parent directory surfaces FileNotFound" {
try std.testing.expectError(
error.FileNotFound,
writeFileAtomic(std.testing.allocator, bad_path, "x"),
writeFileAtomic(io, std.testing.allocator, bad_path, "x"),
);
}

src/cache/store.zig vendored
View file

@ -13,6 +13,16 @@ const ReportTime = @import("../models/earnings.zig").ReportTime;
const EtfProfile = @import("../models/etf_profile.zig").EtfProfile;
const Holding = @import("../models/etf_profile.zig").Holding;
const SectorWeight = @import("../models/etf_profile.zig").SectorWeight;
// Wall-clock policy
//
// Every `std.Io.Timestamp.now(...)` call in this file is intentional:
// the cache layer's job is to record *when data landed on disk* and
// compute expiry relative to that. Threading a `now_s: i64` in from the
// caller wouldn't buy anything; we'd just push the clock read up one
// frame. The torn-SRF diagnostic filenames (line ~473) additionally
// require millisecond precision to avoid collisions, which a
// caller-provided second-resolution `now_s` couldn't give us.
const Lot = @import("../models/portfolio.zig").Lot;
const LotType = @import("../models/portfolio.zig").LotType;
const Portfolio = @import("../models/portfolio.zig").Portfolio;
@ -83,13 +93,15 @@ pub const DataType = enum {
pub const Store = struct {
cache_dir: []const u8,
allocator: std.mem.Allocator,
io: std.Io,
/// Optional post-processing callback applied to each record during deserialization.
/// Used to dupe strings that outlive the SRF iterator, or apply domain-specific transforms.
pub const PostProcessFn = fn (*anyopaque, std.mem.Allocator) anyerror!void;
pub fn init(allocator: std.mem.Allocator, cache_dir: []const u8) Store {
pub fn init(io: std.Io, allocator: std.mem.Allocator, cache_dir: []const u8) Store {
return .{
.io = io,
.cache_dir = cache_dir,
.allocator = allocator,
};
@ -152,9 +164,9 @@ pub const Store = struct {
if (freshness == .fresh_only) {
// Negative entries are always fresh; return empty data
if (T == EtfProfile)
return .{ .data = EtfProfile{ .symbol = "" }, .timestamp = std.time.timestamp() };
return .{ .data = EtfProfile{ .symbol = "" }, .timestamp = std.Io.Timestamp.now(self.io, .real).toSeconds() };
if (T == OptionsChain)
return .{ .data = &.{}, .timestamp = std.time.timestamp() };
return .{ .data = &.{}, .timestamp = std.Io.Timestamp.now(self.io, .real).toSeconds() };
}
return null;
}
@ -165,9 +177,9 @@ pub const Store = struct {
if (freshness == .fresh_only) {
if (it.expires == null) return null;
if (!it.isFresh()) return null;
if (!it.isFresh(self.io)) return null;
}
const timestamp = it.created orelse std.time.timestamp();
const timestamp = it.created orelse std.Io.Timestamp.now(self.io, .real).toSeconds();
if (T == EtfProfile) {
const profile = deserializeEtfProfile(self.allocator, &it) catch return null;
@ -179,7 +191,7 @@ pub const Store = struct {
}
}
return readSlice(T, self.allocator, data, postProcess, freshness);
return readSlice(T, self.io, self.allocator, data, postProcess, freshness);
}
/// Serialize data and write to cache with the given TTL.
@ -191,10 +203,10 @@ pub const Store = struct {
items: DataFor(T),
ttl: i64,
) void {
const expires = std.time.timestamp() + ttl;
const expires = std.Io.Timestamp.now(self.io, .real).toSeconds() + ttl;
const data_type = dataTypeFor(T);
if (T == EtfProfile) {
const srf_data = serializeEtfProfile(self.allocator, items, .{ .expires = expires }) catch |err| {
const srf_data = serializeEtfProfile(self.io, self.allocator, items, .{ .expires = expires }) catch |err| {
log.warn("{s}: failed to serialize ETF profile: {s}", .{ symbol, @errorName(err) });
return;
};
@ -205,7 +217,7 @@ pub const Store = struct {
return;
}
if (T == OptionsChain) {
const srf_data = serializeOptions(self.allocator, items, .{ .expires = expires }) catch |err| {
const srf_data = serializeOptions(self.io, self.allocator, items, .{ .expires = expires }) catch |err| {
log.warn("{s}: failed to serialize options: {s}", .{ symbol, @errorName(err) });
return;
};
@ -215,7 +227,7 @@ pub const Store = struct {
};
return;
}
const srf_data = serializeWithMeta(T, self.allocator, items, .{ .expires = expires }) catch |err| {
const srf_data = serializeWithMeta(T, self.io, self.allocator, items, .{ .expires = expires }) catch |err| {
log.warn("{s}: failed to serialize {s}: {s}", .{ symbol, @tagName(data_type), @errorName(err) });
return;
};
@ -282,14 +294,14 @@ pub const Store = struct {
/// Write (or refresh) candle metadata with a specific provider source.
pub fn updateCandleMeta(self: *Store, symbol: []const u8, last_close: f64, last_date: Date, provider: CandleProvider, fail_count: u8) void {
const expires = std.time.timestamp() + Ttl.candles_latest;
const expires = std.Io.Timestamp.now(self.io, .real).toSeconds() + Ttl.candles_latest;
const meta = CandleMeta{
.last_close = last_close,
.last_date = last_date,
.provider = provider,
.fail_count = fail_count,
};
if (serializeCandleMeta(self.allocator, meta, .{ .expires = expires })) |meta_data| {
if (serializeCandleMeta(self.io, self.allocator, meta, .{ .expires = expires })) |meta_data| {
defer self.allocator.free(meta_data);
self.writeRaw(symbol, .candles_meta, meta_data) catch |err| {
log.warn("{s}: failed to write candle metadata: {s}", .{ symbol, @errorName(err) });
@ -305,7 +317,7 @@ pub const Store = struct {
pub fn ensureSymbolDir(self: *Store, symbol: []const u8) !void {
const path = try self.symbolPath(symbol, "");
defer self.allocator.free(path);
std.fs.cwd().makePath(path) catch |err| switch (err) {
std.Io.Dir.cwd().createDirPath(self.io, path) catch |err| switch (err) {
error.PathAlreadyExists => {},
else => return err,
};
@ -315,7 +327,7 @@ pub const Store = struct {
pub fn clearSymbol(self: *Store, symbol: []const u8) !void {
const path = try self.symbolPath(symbol, "");
defer self.allocator.free(path);
std.fs.cwd().deleteTree(path) catch {};
std.Io.Dir.cwd().deleteTree(self.io, path) catch {};
}
/// Content of a negative cache entry (fetch failed, don't retry until --refresh).
@ -446,6 +458,7 @@ pub const Store = struct {
/// detection path's return value. Callers log.debug the outcome and
/// move on.
pub fn archiveTornBody(
io: std.Io,
allocator: std.mem.Allocator,
cache_dir: []const u8,
symbol: []const u8,
@ -456,7 +469,7 @@ pub const Store = struct {
// Ensure the _torn/ directory exists.
const torn_dir = try std.fs.path.join(allocator, &.{ cache_dir, "_torn" });
defer allocator.free(torn_dir);
std.fs.cwd().makePath(torn_dir) catch |err| switch (err) {
std.Io.Dir.cwd().createDirPath(io, torn_dir) catch |err| switch (err) {
error.PathAlreadyExists => {},
else => return err,
};
@ -467,8 +480,8 @@ pub const Store = struct {
// produce distinct archive entries rather than overwriting each
// other; two back-to-back tears from a refresh retry are the
// most valuable forensic signal we can capture.
const ts = std.time.timestamp();
const ts_ms = std.time.milliTimestamp();
const ts = std.Io.Timestamp.now(io, .real).toSeconds();
const ts_ms = @divTrunc(std.Io.Timestamp.now(io, .real).nanoseconds, std.time.ns_per_ms);
const bin_name = try std.fmt.allocPrint(
allocator,
@ -490,7 +503,7 @@ pub const Store = struct {
// Write the raw body first; if this fails we don't bother with
// the sidecar, since the sidecar is only useful paired with bytes.
try atomic.writeFileAtomic(allocator, bin_path, bytes);
try atomic.writeFileAtomic(io, allocator, bin_path, bytes);
// Compute sha256 of the body for the sidecar.
var hash: [std.crypto.hash.sha2.Sha256.digest_length]u8 = undefined;
@ -546,7 +559,7 @@ pub const Store = struct {
const records = [_]TearRecord{record};
try aw.writer.print("{f}", .{srf.fmtFrom(TearRecord, allocator, &records, .{})});
try atomic.writeFileAtomic(allocator, meta_path, aw.writer.buffered());
try atomic.writeFileAtomic(io, allocator, meta_path, aw.writer.buffered());
}
/// Read-path self-heal for candle data. On detecting a torn
@ -560,6 +573,7 @@ pub const Store = struct {
/// the read path recoverable; diagnostics are a bonus.
fn selfHealTornCandles(self: *Store, symbol: []const u8, data: []const u8) void {
archiveTornBody(
self.io,
self.allocator,
self.cache_dir,
symbol,
@ -586,11 +600,12 @@ pub const Store = struct {
const path = self.symbolPath(symbol, data_type.fileName()) catch return false;
defer self.allocator.free(path);
const file = std.fs.cwd().openFile(path, .{}) catch return false;
defer file.close();
const file = std.Io.Dir.cwd().openFile(self.io, path, .{}) catch return false;
defer file.close(self.io);
var buf: [negative_cache_content.len]u8 = undefined;
const n = file.readAll(&buf) catch return false;
var file_reader = file.reader(self.io, &.{});
const n = file_reader.interface.readSliceShort(&buf) catch return false;
return n == negative_cache_content.len and
std.mem.eql(u8, buf[0..n], negative_cache_content);
}
@ -599,7 +614,7 @@ pub const Store = struct {
pub fn clearData(self: *Store, symbol: []const u8, data_type: DataType) void {
const path = self.symbolPath(symbol, data_type.fileName()) catch return;
defer self.allocator.free(path);
std.fs.cwd().deleteFile(path) catch {};
std.Io.Dir.cwd().deleteFile(self.io, path) catch {};
}
/// Read the close price from the candle metadata file.
@ -623,7 +638,7 @@ pub const Store = struct {
var it = srf.iterator(&reader, self.allocator, .{ .alloc_strings = false }) catch return null;
defer it.deinit();
const created = it.created orelse std.time.timestamp();
const created = it.created orelse std.Io.Timestamp.now(self.io, .real).toSeconds();
const fields = (it.next() catch return null) orelse return null;
const meta = fields.to(CandleMeta) catch return null;
return .{ .meta = meta, .created = created };
@ -642,12 +657,12 @@ pub const Store = struct {
defer it.deinit();
if (it.expires == null) return false;
return it.isFresh();
return it.isFresh(self.io);
}
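The TTL contract the write and read sides share, as a standalone sketch (the `std.Io.Timestamp` API is as used in this diff; function names are illustrative):

```zig
fn stampExpiry(io: std.Io, ttl: i64) i64 {
    // wall-clock required: expiry is an absolute point in real time
    return std.Io.Timestamp.now(io, .real).toSeconds() + ttl;
}

fn isFreshAt(io: std.Io, expires: ?i64) bool {
    const e = expires orelse return false; // no expiry directive => stale
    // wall-clock required: compare against the same real clock
    return std.Io.Timestamp.now(io, .real).toSeconds() < e;
}
```

Both clock reads stay inside the cache layer, which is what the wall-clock policy comment at the top of the file justifies.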
/// Clear all cached data.
pub fn clearAll(self: *Store) !void {
std.fs.cwd().deleteTree(self.cache_dir) catch {};
std.Io.Dir.cwd().deleteTree(self.io, self.cache_dir) catch {};
}
// Public types
@ -685,7 +700,7 @@ pub const Store = struct {
const path = try self.symbolPath(symbol, data_type.fileName());
defer self.allocator.free(path);
return std.fs.cwd().readFileAlloc(self.allocator, path, 50 * 1024 * 1024) catch |err| switch (err) {
return std.Io.Dir.cwd().readFileAlloc(self.io, path, self.allocator, .limited(50 * 1024 * 1024)) catch |err| switch (err) {
error.FileNotFound => return null,
else => return err,
};
@ -713,7 +728,7 @@ pub const Store = struct {
const path = try self.symbolPath(symbol, data_type.fileName());
defer self.allocator.free(path);
try atomic.writeFileAtomic(self.allocator, path, data);
try atomic.writeFileAtomic(self.io, self.allocator, path, data);
}
/// Append raw bytes to an existing cache file.
@ -733,7 +748,7 @@ pub const Store = struct {
const path = try self.symbolPath(symbol, data_type.fileName());
defer self.allocator.free(path);
const existing = std.fs.cwd().readFileAlloc(self.allocator, path, 50 * 1024 * 1024) catch |err| switch (err) {
const existing = std.Io.Dir.cwd().readFileAlloc(self.io, path, self.allocator, .limited(50 * 1024 * 1024)) catch |err| switch (err) {
error.FileNotFound => return error.FileNotFound,
else => return err,
};
@ -744,7 +759,7 @@ pub const Store = struct {
@memcpy(combined[0..existing.len], existing);
@memcpy(combined[existing.len..], data);
try atomic.writeFileAtomic(self.allocator, path, combined);
try atomic.writeFileAtomic(self.io, self.allocator, path, combined);
}
fn symbolPath(self: *Store, symbol: []const u8, file_name: []const u8) ![]const u8 {
@ -761,6 +776,7 @@ pub const Store = struct {
/// `#!created=` timestamp, and deserializes all records.
fn readSlice(
comptime T: type,
io: std.Io,
allocator: std.mem.Allocator,
data: []const u8,
comptime postProcess: ?*const fn (*T, std.mem.Allocator) anyerror!void,
@ -775,11 +791,11 @@ pub const Store = struct {
const is_negative = std.mem.eql(u8, data, negative_cache_content);
if (!is_negative) {
if (it.expires == null) return null;
if (!it.isFresh()) return null;
if (!it.isFresh(io)) return null;
}
}
const timestamp: i64 = it.created orelse std.time.timestamp();
const timestamp: i64 = it.created orelse std.Io.Timestamp.now(io, .real).toSeconds();
var items: std.ArrayList(T) = .empty;
defer {
@ -813,6 +829,7 @@ pub const Store = struct {
/// Generic SRF serializer: emit directives (including `#!created=`) then data records.
fn serializeWithMeta(
comptime T: type,
io: std.Io,
allocator: std.mem.Allocator,
items: []const T,
options: srf.FormatOptions,
@ -820,7 +837,7 @@ pub const Store = struct {
var aw: std.Io.Writer.Allocating = .init(allocator);
errdefer aw.deinit();
var opts = options;
opts.created = std.time.timestamp();
opts.created = std.Io.Timestamp.now(io, .real).toSeconds();
try aw.writer.print("{f}", .{srf.fmtFrom(T, allocator, items, opts)});
return aw.toOwnedSlice();
}
@ -834,12 +851,12 @@ pub const Store = struct {
return aw.toOwnedSlice();
}
fn serializeCandleMeta(allocator: std.mem.Allocator, meta: CandleMeta, options: srf.FormatOptions) ![]const u8 {
fn serializeCandleMeta(io: std.Io, allocator: std.mem.Allocator, meta: CandleMeta, options: srf.FormatOptions) ![]const u8 {
var aw: std.Io.Writer.Allocating = .init(allocator);
errdefer aw.deinit();
const items = [_]CandleMeta{meta};
var opts = options;
opts.created = std.time.timestamp();
opts.created = std.Io.Timestamp.now(io, .real).toSeconds();
try aw.writer.print("{f}", .{srf.fmtFrom(CandleMeta, allocator, &items, opts)});
return aw.toOwnedSlice();
}
@ -868,7 +885,7 @@ pub const Store = struct {
put: OptionContract,
};
fn serializeOptions(allocator: std.mem.Allocator, chains: []const OptionsChain, options: srf.FormatOptions) ![]const u8 {
fn serializeOptions(io: std.Io, allocator: std.mem.Allocator, chains: []const OptionsChain, options: srf.FormatOptions) ![]const u8 {
var records: std.ArrayList(OptionsRecord) = .empty;
defer records.deinit(allocator);
@ -887,7 +904,7 @@ pub const Store = struct {
var aw: std.Io.Writer.Allocating = .init(allocator);
errdefer aw.deinit();
var opts = options;
opts.created = std.time.timestamp();
opts.created = std.Io.Timestamp.now(io, .real).toSeconds();
try aw.writer.print("{f}", .{srf.fmtFrom(OptionsRecord, allocator, records.items, opts)});
return aw.toOwnedSlice();
}
@ -971,7 +988,7 @@ pub const Store = struct {
holding: Holding,
};
fn serializeEtfProfile(allocator: std.mem.Allocator, profile: EtfProfile, options: srf.FormatOptions) ![]const u8 {
fn serializeEtfProfile(io: std.Io, allocator: std.mem.Allocator, profile: EtfProfile, options: srf.FormatOptions) ![]const u8 {
var records: std.ArrayList(EtfRecord) = .empty;
defer records.deinit(allocator);
@ -986,7 +1003,7 @@ pub const Store = struct {
var aw: std.Io.Writer.Allocating = .init(allocator);
errdefer aw.deinit();
var opts = options;
opts.created = std.time.timestamp();
opts.created = std.Io.Timestamp.now(io, .real).toSeconds();
try aw.writer.print("{f}", .{srf.fmtFrom(EtfRecord, allocator, records.items, opts)});
return aw.toOwnedSlice();
}
@ -1114,17 +1131,18 @@ pub fn deserializePortfolio(allocator: std.mem.Allocator, data: []const u8) !Por
}
test "dividend serialize/deserialize round-trip" {
const io = std.testing.io;
const allocator = std.testing.allocator;
const divs = [_]Dividend{
.{ .ex_date = Date.fromYmd(2024, 3, 15), .amount = 0.8325, .pay_date = Date.fromYmd(2024, 3, 28), .frequency = 4, .type = .regular },
.{ .ex_date = Date.fromYmd(2024, 6, 14), .amount = 0.9148, .type = .special },
};
const data = try Store.serializeWithMeta(Dividend, allocator, &divs, .{});
const data = try Store.serializeWithMeta(Dividend, io, allocator, &divs, .{});
defer allocator.free(data);
// No postProcess needed test data has no currency strings to dupe
const result = Store.readSlice(Dividend, allocator, data, null, .any) orelse return error.TestUnexpectedResult;
const result = Store.readSlice(Dividend, io, allocator, data, null, .any) orelse return error.TestUnexpectedResult;
const parsed = result.data;
defer allocator.free(parsed);
@ -1144,16 +1162,17 @@ test "dividend serialize/deserialize round-trip" {
}
test "split serialize/deserialize round-trip" {
const io = std.testing.io;
const allocator = std.testing.allocator;
const splits = [_]Split{
.{ .date = Date.fromYmd(2020, 8, 31), .numerator = 4, .denominator = 1 },
.{ .date = Date.fromYmd(2014, 6, 9), .numerator = 7, .denominator = 1 },
};
const data = try Store.serializeWithMeta(Split, allocator, &splits, .{});
const data = try Store.serializeWithMeta(Split, io, allocator, &splits, .{});
defer allocator.free(data);
const result = Store.readSlice(Split, allocator, data, null, .any) orelse return error.TestUnexpectedResult;
const result = Store.readSlice(Split, io, allocator, data, null, .any) orelse return error.TestUnexpectedResult;
const parsed = result.data;
defer allocator.free(parsed);
@ -1169,6 +1188,9 @@ test "split serialize/deserialize round-trip" {
test "portfolio serialize/deserialize round-trip" {
const allocator = std.testing.allocator;
// Today is after the lots' open_dates and after the one close_date,
// so "open" means "no close_date and not matured".
const today = Date.fromYmd(2024, 6, 1);
const lots = [_]Lot{
.{ .symbol = "AMZN", .shares = 10, .open_date = Date.fromYmd(2022, 3, 15), .open_price = 150.25 },
.{ .symbol = "AMZN", .shares = 5, .open_date = Date.fromYmd(2023, 6, 1), .open_price = 125.00, .close_date = Date.fromYmd(2024, 1, 15), .close_price = 185.50 },
@ -1185,11 +1207,11 @@ test "portfolio serialize/deserialize round-trip" {
try std.testing.expectEqualStrings("AMZN", portfolio.lots[0].symbol);
try std.testing.expectApproxEqAbs(@as(f64, 10), portfolio.lots[0].shares, 0.01);
try std.testing.expect(portfolio.lots[0].isOpen());
try std.testing.expect(portfolio.lots[0].isOpen(today));
try std.testing.expectEqualStrings("AMZN", portfolio.lots[1].symbol);
try std.testing.expectApproxEqAbs(@as(f64, 5), portfolio.lots[1].shares, 0.01);
try std.testing.expect(!portfolio.lots[1].isOpen());
try std.testing.expect(!portfolio.lots[1].isOpen(today));
try std.testing.expect(portfolio.lots[1].close_date.?.eql(Date.fromYmd(2024, 1, 15)));
try std.testing.expectApproxEqAbs(@as(f64, 185.50), portfolio.lots[1].close_price.?, 0.01);
@ -1343,10 +1365,11 @@ test "looksCompleteSrf: well-formed body accepted" {
}
test "archiveTornBody writes .bin + .meta pair with expected SRF content" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
const dir_path = try tmp.dir.realpathAlloc(testing.allocator, ".");
const dir_path = try tmp.dir.realPathFileAlloc(io, ".", testing.allocator);
defer testing.allocator.free(dir_path);
// Shape mirrors the canonical FRDM torn body: a header plus a
@ -1355,6 +1378,7 @@ test "archiveTornBody writes .bin + .meta pair with expected SRF content" {
const torn_bytes = "#!srfv1\ndate::2026-04-22,open:num:62.82,close:num:63.23\ndate::2026-04";
try Store.archiveTornBody(
std.testing.io,
testing.allocator,
dir_path,
"FRDM",
@@ -1376,8 +1400,8 @@ test "archiveTornBody writes .bin + .meta pair with expected SRF content" {
const torn_dir_path = try std.fs.path.join(testing.allocator, &.{ dir_path, "_torn" });
defer testing.allocator.free(torn_dir_path);
var torn_dir = try std.fs.cwd().openDir(torn_dir_path, .{ .iterate = true });
defer torn_dir.close();
var torn_dir = try std.Io.Dir.cwd().openDir(std.testing.io, torn_dir_path, .{ .iterate = true });
defer torn_dir.close(io);
var bin_name_buf: [128]u8 = undefined;
var meta_name_buf: [128]u8 = undefined;
@@ -1385,7 +1409,7 @@ test "archiveTornBody writes .bin + .meta pair with expected SRF content" {
var meta_len: usize = 0;
var it = torn_dir.iterate();
while (try it.next()) |entry| {
while (try it.next(io)) |entry| {
if (std.mem.endsWith(u8, entry.name, ".bin")) {
@memcpy(bin_name_buf[0..entry.name.len], entry.name);
bin_len = entry.name.len;
@@ -1407,14 +1431,14 @@ test "archiveTornBody writes .bin + .meta pair with expected SRF content" {
// .bin round-trips verbatim.
const bin_path = try std.fs.path.join(testing.allocator, &.{ torn_dir_path, bin_name_buf[0..bin_len] });
defer testing.allocator.free(bin_path);
const bin_contents = try std.fs.cwd().readFileAlloc(testing.allocator, bin_path, 1024 * 1024);
const bin_contents = try std.Io.Dir.cwd().readFileAlloc(std.testing.io, bin_path, testing.allocator, .limited(1024 * 1024));
defer testing.allocator.free(bin_contents);
try std.testing.expectEqualStrings(torn_bytes, bin_contents);
// .meta is valid SRF and carries the fields we care about.
const meta_path = try std.fs.path.join(testing.allocator, &.{ torn_dir_path, meta_name_buf[0..meta_len] });
defer testing.allocator.free(meta_path);
const meta_contents = try std.fs.cwd().readFileAlloc(testing.allocator, meta_path, 1024 * 1024);
const meta_contents = try std.Io.Dir.cwd().readFileAlloc(std.testing.io, meta_path, testing.allocator, .limited(1024 * 1024));
defer testing.allocator.free(meta_contents);
try std.testing.expect(std.mem.startsWith(u8, meta_contents, "#!srfv1\n"));
@@ -1444,13 +1468,14 @@ test "archiveTornBody writes .bin + .meta pair with expected SRF content" {
}
test "Store.read self-heals torn candles_daily and wipes the pair" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
const dir_path = try tmp.dir.realpathAlloc(testing.allocator, ".");
const dir_path = try tmp.dir.realPathFileAlloc(io, ".", testing.allocator);
defer testing.allocator.free(dir_path);
var store = Store.init(testing.allocator, dir_path);
var store = Store.init(std.testing.io, testing.allocator, dir_path);
try store.ensureSymbolDir("FRDM");
// Seed a torn daily and an intact meta: the exact state we saw
@@ -1469,18 +1494,18 @@ test "Store.read self-heals torn candles_daily and wipes the pair" {
defer testing.allocator.free(daily_path);
const meta_path = try std.fs.path.join(testing.allocator, &.{ dir_path, "FRDM", "candles_meta.srf" });
defer testing.allocator.free(meta_path);
try std.testing.expectError(error.FileNotFound, std.fs.cwd().access(daily_path, .{}));
try std.testing.expectError(error.FileNotFound, std.fs.cwd().access(meta_path, .{}));
try std.testing.expectError(error.FileNotFound, std.Io.Dir.cwd().access(std.testing.io, daily_path, .{}));
try std.testing.expectError(error.FileNotFound, std.Io.Dir.cwd().access(std.testing.io, meta_path, .{}));
// And the torn body was archived under _torn/.
const torn_dir_path = try std.fs.path.join(testing.allocator, &.{ dir_path, "_torn" });
defer testing.allocator.free(torn_dir_path);
var torn_dir = try std.fs.cwd().openDir(torn_dir_path, .{ .iterate = true });
defer torn_dir.close();
var torn_dir = try std.Io.Dir.cwd().openDir(std.testing.io, torn_dir_path, .{ .iterate = true });
defer torn_dir.close(io);
var found_bin = false;
var found_meta = false;
var it = torn_dir.iterate();
while (try it.next()) |entry| {
while (try it.next(io)) |entry| {
if (std.mem.startsWith(u8, entry.name, "FRDM_candles_daily_")) {
if (std.mem.endsWith(u8, entry.name, ".bin")) found_bin = true;
if (std.mem.endsWith(u8, entry.name, ".meta")) found_meta = true;
@@ -1491,13 +1516,14 @@ test "Store.read self-heals torn candles_daily and wipes the pair" {
}
test "Store.read does not self-heal an intact candles_daily" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
const dir_path = try tmp.dir.realpathAlloc(testing.allocator, ".");
const dir_path = try tmp.dir.realPathFileAlloc(io, ".", testing.allocator);
defer testing.allocator.free(dir_path);
var store = Store.init(testing.allocator, dir_path);
var store = Store.init(std.testing.io, testing.allocator, dir_path);
try store.ensureSymbolDir("OK");
// Seed a complete (well-formed) candles_daily.srf: a single
@@ -1519,13 +1545,13 @@ test "Store.read does not self-heal an intact candles_daily" {
const daily_path = try std.fs.path.join(testing.allocator, &.{ dir_path, "OK", "candles_daily.srf" });
defer testing.allocator.free(daily_path);
const daily_stat = try std.fs.cwd().statFile(daily_path);
const daily_stat = try std.Io.Dir.cwd().statFile(std.testing.io, daily_path, .{});
try std.testing.expect(daily_stat.size > 0);
// And no _torn/ directory was created.
const torn_dir_path = try std.fs.path.join(testing.allocator, &.{ dir_path, "_torn" });
defer testing.allocator.free(torn_dir_path);
try std.testing.expectError(error.FileNotFound, std.fs.cwd().access(torn_dir_path, .{}));
try std.testing.expectError(error.FileNotFound, std.Io.Dir.cwd().access(std.testing.io, torn_dir_path, .{}));
}
test "Store.dataTypeFor maps model types correctly" {
@@ -1562,7 +1588,7 @@ test "CandleProvider.fromString parses provider names" {
test "Store init creates valid store" {
const allocator = std.testing.allocator;
const store = Store.init(allocator, "/tmp/zfin-test");
const store = Store.init(std.testing.io, allocator, "/tmp/zfin-test");
try std.testing.expectEqualStrings("/tmp/zfin-test", store.cache_dir);
}
@@ -1592,6 +1618,7 @@ test "CandleMeta default provider is tiingo" {
const testing = std.testing;
test "writeRaw atomicity: concurrent readers never observe a truncated file" {
const io = std.testing.io;
// Two SRF blobs with dates at different byte offsets. A non-atomic
// writer that truncates + writes would leak partial bytes of
// whichever blob is mid-write; that would show up as either a
@@ -1617,15 +1644,12 @@ test "writeRaw atomicity: concurrent readers never observe a truncated file" {
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
const dir_path = try tmp.dir.realpathAlloc(testing.allocator, ".");
const dir_path = try tmp.dir.realPathFileAlloc(io, ".", testing.allocator);
defer testing.allocator.free(dir_path);
// Use a thread-safe allocator because multiple worker threads
// allocate through Store/writeFileAtomic concurrently.
var thread_safe: std.heap.ThreadSafeAllocator = .{ .child_allocator = testing.allocator };
const alloc = thread_safe.allocator();
var store = Store.init(alloc, dir_path);
// 0.16's DebugAllocator (which testing.allocator uses) is
// thread-safe by default, so no ThreadSafeAllocator wrapper needed.
var store = Store.init(std.testing.io, testing.allocator, dir_path);
// Make sure the symbol subdir exists once up-front (writeRaw will
// also call ensureSymbolDir, but doing it here keeps the writer
@@ -1655,7 +1679,7 @@ test "writeRaw atomicity: concurrent readers never observe a truncated file" {
defer self.store.allocator.free(path);
while (!self.stop.load(.acquire)) {
const bytes = std.fs.cwd().readFileAlloc(self.store.allocator, path, 1 * 1024 * 1024) catch |err| switch (err) {
const bytes = std.Io.Dir.cwd().readFileAlloc(self.store.io, path, self.store.allocator, .limited(1 * 1024 * 1024)) catch |err| switch (err) {
error.FileNotFound => continue, // pre-first-write race
else => continue,
};
@@ -1688,7 +1712,7 @@ test "writeRaw atomicity: concurrent readers never observe a truncated file" {
// Run the stress for a fixed duration. 200ms is plenty for thousands
// of iterations on any reasonable machine, enough to reliably catch
// a non-atomic write window at the scheduler granularity.
std.Thread.sleep(200 * std.time.ns_per_ms);
try io.sleep(.fromMilliseconds(200), .boot);
ctx.stop.store(true, .release);
writer_thread.join();
@@ -1704,6 +1728,7 @@ test "writeRaw atomicity: concurrent readers never observe a truncated file" {
}
test "appendRaw atomicity: concurrent readers see either pre- or post-append, never mid" {
const io = std.testing.io;
// Seed an initial file with a complete SRF doc, then have one thread
// append more records repeatedly while readers race to read it. Every
// successful read must parse cleanly and have a valid termination
@@ -1722,13 +1747,12 @@ test "appendRaw atomicity: concurrent readers see either pre- or post-append, ne
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
const dir_path = try tmp.dir.realpathAlloc(testing.allocator, ".");
const dir_path = try tmp.dir.realPathFileAlloc(io, ".", testing.allocator);
defer testing.allocator.free(dir_path);
var thread_safe: std.heap.ThreadSafeAllocator = .{ .child_allocator = testing.allocator };
const alloc = thread_safe.allocator();
var store = Store.init(alloc, dir_path);
// 0.16's DebugAllocator (which testing.allocator uses) is
// thread-safe by default, so no ThreadSafeAllocator wrapper needed.
var store = Store.init(std.testing.io, testing.allocator, dir_path);
try store.ensureSymbolDir("SYM");
// Write seed atomically up front.
@@ -1752,7 +1776,7 @@ test "appendRaw atomicity: concurrent readers see either pre- or post-append, ne
defer self.store.allocator.free(path);
while (!self.stop.load(.acquire)) {
const bytes = std.fs.cwd().readFileAlloc(self.store.allocator, path, 4 * 1024 * 1024) catch continue;
const bytes = std.Io.Dir.cwd().readFileAlloc(self.store.io, path, self.store.allocator, .limited(4 * 1024 * 1024)) catch continue;
defer self.store.allocator.free(bytes);
// Invariants for a valid appended file:
@@ -1783,7 +1807,7 @@ test "appendRaw atomicity: concurrent readers see either pre- or post-append, ne
var reader_threads: [3]std.Thread = undefined;
for (&reader_threads) |*t| t.* = try std.Thread.spawn(.{}, Ctx.reader, .{&ctx});
std.Thread.sleep(200 * std.time.ns_per_ms);
try io.sleep(.fromMilliseconds(200), .boot);
ctx.stop.store(true, .release);
appender_thread.join();
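
The same mechanical rewrite repeats through every test in this file: `std.fs.cwd()` becomes `std.Io.Dir.cwd()`, the `io` handle is passed explicitly, reads take an allocator plus a `.limited(...)` byte cap, and sleeps go through `io.sleep`. Distilled into a minimal sketch — signatures are copied from the call sites in this diff, not from independent 0.16 documentation, so treat the exact shapes as assumptions:

```zig
const std = @import("std");

// Hypothetical helper illustrating the 0.15 -> 0.16 shape change
// seen above. Taking `io` as a parameter is the point: the side
// effect is announced at every call site.
fn readSrf(io: std.Io, allocator: std.mem.Allocator, path: []const u8) ![]u8 {
    // 0.15: std.fs.cwd().readFileAlloc(allocator, path, 1024 * 1024)
    // 0.16: io first, allocator after the path, size cap as a Limit.
    return std.Io.Dir.cwd().readFileAlloc(io, path, allocator, .limited(1024 * 1024));
}
```

In tests, `std.testing.io` supplies the handle, which is why each migrated test now opens with `const io = std.testing.io;`.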

View file

@@ -4,8 +4,8 @@ const cli = @import("common.zig");
const fmt = cli.fmt;
/// CLI `analysis` command: show portfolio analysis breakdowns.
pub fn run(allocator: std.mem.Allocator, svc: *zfin.DataService, file_path: []const u8, color: bool, out: *std.Io.Writer) !void {
var loaded = cli.loadPortfolio(allocator, file_path) orelse return;
pub fn run(io: std.Io, allocator: std.mem.Allocator, svc: *zfin.DataService, file_path: []const u8, as_of: zfin.Date, color: bool, out: *std.Io.Writer) !void {
var loaded = cli.loadPortfolio(io, allocator, file_path, as_of) orelse return;
defer loaded.deinit(allocator);
const portfolio = loaded.portfolio;
@@ -26,9 +26,9 @@ pub fn run(allocator: std.mem.Allocator, svc: *zfin.DataService, file_path: []co
}
// Build summary via shared pipeline
var pf_data = cli.buildPortfolioData(allocator, portfolio, positions, syms, &prices, svc) catch |err| switch (err) {
var pf_data = cli.buildPortfolioData(allocator, portfolio, positions, syms, &prices, svc, as_of) catch |err| switch (err) {
error.NoAllocations, error.SummaryFailed => {
try cli.stderrPrint("Error computing portfolio summary.\n");
try cli.stderrPrint(io, "Error computing portfolio summary.\n");
return;
},
else => return err,
@@ -40,14 +40,14 @@ pub fn run(allocator: std.mem.Allocator, svc: *zfin.DataService, file_path: []co
const meta_path = std.fmt.allocPrint(allocator, "{s}metadata.srf", .{file_path[0..dir_end]}) catch return;
defer allocator.free(meta_path);
const meta_data = std.fs.cwd().readFileAlloc(allocator, meta_path, 1024 * 1024) catch {
try cli.stderrPrint("Error: No metadata.srf found. Run: zfin enrich <portfolio.srf> > metadata.srf\n");
const meta_data = std.Io.Dir.cwd().readFileAlloc(io, meta_path, allocator, .limited(1024 * 1024)) catch {
try cli.stderrPrint(io, "Error: No metadata.srf found. Run: zfin enrich <portfolio.srf> > metadata.srf\n");
return;
};
defer allocator.free(meta_data);
var cm = zfin.classification.parseClassificationFile(allocator, meta_data) catch {
try cli.stderrPrint("Error: Cannot parse metadata.srf\n");
try cli.stderrPrint(io, "Error: Cannot parse metadata.srf\n");
return;
};
defer cm.deinit();
@@ -63,9 +63,9 @@ pub fn run(allocator: std.mem.Allocator, svc: *zfin.DataService, file_path: []co
portfolio,
pf_data.summary.total_value,
acct_map_opt,
null, // null => use wall-clock today (interactive, not backfill)
as_of,
) catch {
try cli.stderrPrint("Error computing analysis.\n");
try cli.stderrPrint(io, "Error computing analysis.\n");
return;
};
defer result.deinit(allocator);
@@ -75,8 +75,8 @@ pub fn run(allocator: std.mem.Allocator, svc: *zfin.DataService, file_path: []co
pf_data.summary.allocations,
cm.entries,
pf_data.summary.total_value,
portfolio.totalCash(),
portfolio.totalCdFaceValue(),
portfolio.totalCash(as_of),
portfolio.totalCdFaceValue(as_of),
);
try display(result, split.stock_pct, split.bond_pct, pf_data.summary.total_value, file_path, color, out);
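
The `as_of` threading visible in this file (and in the audit code below) is the date-purity half of the refactor: `isOpen`, `totalForAccount`, `totalCash`, and friends no longer consult the wall clock; they answer relative to whatever date they are handed, and the caller resolves `--as-of` vs. wall clock once at the CLI boundary. A deliberately simplified sketch of the convention — the real `Lot` also tracks maturity, accounts, and prices, and `Date` here is assumed to expose the `.days` field used elsewhere in the diff:

```zig
// Simplified illustration, not the project's actual Lot definition.
const Lot = struct {
    close_date: ?Date = null,

    // Pure date math: no hidden "today" read, so tests and
    // backfills can pin any reference date.
    fn isOpen(self: Lot, as_of: Date) bool {
        const cd = self.close_date orelse return true;
        return cd.days > as_of.days; // closed once as_of reaches close_date
    }
};
```

This is why the round-trip test above pins `const today = Date.fromYmd(2024, 6, 1);` and passes it to `isOpen(today)` instead of relying on the machine's clock.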

View file

@@ -102,7 +102,7 @@ pub fn parseFidelityCsv(allocator: std.mem.Allocator, data: []const u8) ![]Broke
// Validate header row
const header_line = lines.next() orelse return error.EmptyFile;
const header_trimmed = std.mem.trimRight(u8, header_line, &.{ '\r', ' ' });
const header_trimmed = std.mem.trimEnd(u8, header_line, &.{ '\r', ' ' });
if (header_trimmed.len == 0) return error.EmptyFile;
if (!std.mem.startsWith(u8, header_trimmed, "Account Number")) {
return error.UnexpectedHeader;
@@ -110,7 +110,7 @@ pub fn parseFidelityCsv(allocator: std.mem.Allocator, data: []const u8) ![]Broke
// Parse data rows
while (lines.next()) |line| {
const trimmed = std.mem.trimRight(u8, line, &.{ '\r', ' ' });
const trimmed = std.mem.trimEnd(u8, line, &.{ '\r', ' ' });
if (trimmed.len == 0) break;
// Skip lines starting with " (disclaimer text)
@@ -338,7 +338,7 @@ fn parseSchwabTitle(line: []const u8) ?struct { name: []const u8, number: []cons
// Find "..." which separates name from account number
const dots_idx = std.mem.indexOf(u8, rest, "...") orelse return null;
const name = std.mem.trimRight(u8, rest[0..dots_idx], &.{' '});
const name = std.mem.trimEnd(u8, rest[0..dots_idx], &.{' '});
// Account number: after "..." until " as of" or end
const after_dots = rest[dots_idx + 3 ..];
@@ -365,7 +365,7 @@ pub fn parseSchwabCsv(allocator: std.mem.Allocator, data: []const u8) !struct {
// Data rows
while (lines.next()) |line| {
const trimmed = std.mem.trimRight(u8, line, &.{ '\r', ' ' });
const trimmed = std.mem.trimEnd(u8, line, &.{ '\r', ' ' });
if (trimmed.len == 0) continue;
var cols: [schwab_expected_columns][]const u8 = undefined;
@@ -580,6 +580,7 @@ pub fn compareSchwabSummary(
schwab_accounts: []const SchwabAccountSummary,
account_map: analysis.AccountMap,
prices: std.StringHashMap(f64),
as_of: Date,
) ![]SchwabAccountComparison {
var results = std.ArrayList(SchwabAccountComparison).empty;
errdefer results.deinit(allocator);
@ -592,7 +593,7 @@ pub fn compareSchwabSummary(
if (portfolio_acct) |pa| {
pf_cash = portfolio.cashForAccount(pa);
pf_total = portfolio.totalForAccount(allocator, pa, prices);
pf_total = portfolio.totalForAccount(as_of, allocator, pa, prices);
}
const cash_delta = if (sa.cash) |sc| sc - pf_cash else null;
@@ -777,6 +778,7 @@ pub fn compareAccounts(
account_map: analysis.AccountMap,
institution: []const u8,
prices: std.StringHashMap(f64),
as_of: Date,
) ![]AccountComparison {
var results = std.ArrayList(AccountComparison).empty;
errdefer results.deinit(allocator);
@@ -855,7 +857,7 @@ pub fn compareAccounts(
pf_shares = portfolio.cashForAccount(portfolio_acct_name.?);
pf_value = pf_shares;
} else {
const acct_positions = portfolio.positionsForAccount(allocator, portfolio_acct_name.?) catch &.{};
const acct_positions = portfolio.positionsForAccount(as_of, allocator, portfolio_acct_name.?) catch &.{};
defer allocator.free(acct_positions);
var found_stock = false;
@@ -876,7 +878,7 @@ pub fn compareAccounts(
for (portfolio.lots) |lot| {
const lot_acct = lot.account orelse continue;
if (!std.mem.eql(u8, lot_acct, portfolio_acct_name.?)) continue;
if (!lot.isOpen()) continue;
if (!lot.isOpen(as_of)) continue;
// Match by exact symbol, or by parsed option components
// (Fidelity uses compact OCC format like "-AMZN260515C220"
// while portfolio uses "AMZN 05/15/2026 220.00 C")
@@ -942,7 +944,7 @@ pub fn compareAccounts(
// Find portfolio-only positions (in portfolio but not in brokerage)
if (portfolio_acct_name) |pa| {
const acct_positions = portfolio.positionsForAccount(allocator, pa) catch &.{};
const acct_positions = portfolio.positionsForAccount(as_of, allocator, pa) catch &.{};
defer allocator.free(acct_positions);
for (acct_positions) |pos| {
@@ -977,7 +979,7 @@ pub fn compareAccounts(
for (portfolio.lots) |lot| {
const lot_acct = lot.account orelse continue;
if (!std.mem.eql(u8, lot_acct, pa)) continue;
if (!lot.isOpen()) continue;
if (!lot.isOpen(as_of)) continue;
if (lot.security_type != .cd and lot.security_type != .option) continue;
if (matched_symbols.contains(lot.symbol)) continue;
@@ -992,7 +994,7 @@ pub fn compareAccounts(
for (portfolio.lots) |lot2| {
const la2 = lot2.account orelse continue;
if (!std.mem.eql(u8, la2, pa)) continue;
if (!lot2.isOpen()) continue;
if (!lot2.isOpen(as_of)) continue;
if (!std.mem.eql(u8, lot2.symbol, lot.symbol)) continue;
switch (lot2.security_type) {
.cd => {
@@ -1510,27 +1512,28 @@ fn detectBrokerFileKind(data: []const u8) ?BrokerFileKind {
/// Discover brokerage files in a directory. Filters by recency (< 24h)
/// and applies size limits for non-CSV files.
fn discoverBrokerFiles(
io: std.Io,
allocator: std.mem.Allocator,
dir_path: []const u8,
dir_label: []const u8,
now_s: i64,
) ![]DiscoveredFile {
var results = std.ArrayList(DiscoveredFile).empty;
defer results.deinit(allocator);
var dir = std.fs.cwd().openDir(dir_path, .{ .iterate = true }) catch return try results.toOwnedSlice(allocator);
defer dir.close();
var dir = std.Io.Dir.cwd().openDir(io, dir_path, .{ .iterate = true }) catch return try results.toOwnedSlice(allocator);
defer dir.close(io);
const now_ts = std.time.timestamp();
const max_age_s: i128 = audit_file_max_age_hours * 3600;
var it = dir.iterate();
while (try it.next()) |entry| {
while (try it.next(io)) |entry| {
if (entry.kind != .file) continue;
// Check file modification time
const stat = dir.statFile(entry.name) catch continue;
const mtime_s: i128 = @divFloor(stat.mtime, std.time.ns_per_s);
const age_s = now_ts - mtime_s;
const stat = dir.statFile(io, entry.name, .{}) catch continue;
const mtime_s: i128 = @divFloor(stat.mtime.nanoseconds, std.time.ns_per_s);
const age_s = now_s - mtime_s;
if (age_s > max_age_s) continue;
// Check if it's a CSV (no size limit) or non-CSV (size limit applies)
@@ -1538,7 +1541,7 @@ fn discoverBrokerFiles(
if (!is_csv and stat.size > audit_file_max_size_non_csv) continue;
// Read and detect content type
const data = dir.readFileAlloc(allocator, entry.name, 10 * 1024 * 1024) catch continue;
const data = dir.readFileAlloc(io, entry.name, allocator, .limited(10 * 1024 * 1024)) catch continue;
defer allocator.free(data);
const kind = detectBrokerFileKind(data) orelse continue;
@@ -1730,30 +1733,33 @@ fn printLargeLotWarning(
/// Run the flagless portfolio hygiene check.
fn runHygieneCheck(
io: std.Io,
allocator: std.mem.Allocator,
svc: *zfin.DataService,
portfolio_path: []const u8,
stale_days: u32,
verbose: bool,
as_of: Date,
now_s: i64,
color: bool,
out: *std.Io.Writer,
) !void {
// Load portfolio
const pf_data = std.fs.cwd().readFileAlloc(allocator, portfolio_path, 10 * 1024 * 1024) catch {
try cli.stderrPrint("Error: Cannot read portfolio file\n");
const pf_data = std.Io.Dir.cwd().readFileAlloc(io, portfolio_path, allocator, .limited(10 * 1024 * 1024)) catch {
try cli.stderrPrint(io, "Error: Cannot read portfolio file\n");
return;
};
defer allocator.free(pf_data);
var portfolio = zfin.cache.deserializePortfolio(allocator, pf_data) catch {
try cli.stderrPrint("Error: Cannot parse portfolio file\n");
try cli.stderrPrint(io, "Error: Cannot parse portfolio file\n");
return;
};
defer portfolio.deinit();
// Load accounts.srf
var account_map = svc.loadAccountMap(portfolio_path) orelse {
try cli.stderrPrint("Error: Cannot read/parse accounts.srf (needed for account mapping)\n");
try cli.stderrPrint(io, "Error: Cannot read/parse accounts.srf (needed for account mapping)\n");
return;
};
defer account_map.deinit();
@@ -1762,7 +1768,6 @@ fn runHygieneCheck(
// Section 1: Stale manual prices
const today = fmt.todayDate();
var stale_count: usize = 0;
// Collect and display stale manual prices
@@ -1771,7 +1776,7 @@ fn runHygieneCheck(
for (portfolio.lots) |lot| {
if (lot.price == null) continue;
const pd = lot.price_date orelse continue;
const age_days = today.days - pd.days;
const age_days = as_of.days - pd.days;
const threshold: i32 = @intCast(stale_days);
if (age_days <= threshold) continue;
@@ -1808,7 +1813,7 @@ fn runHygieneCheck(
{
// Try to get committed version via git
const git = @import("../git.zig");
const repo_info: ?git.RepoInfo = git.findRepo(allocator, portfolio_path) catch null;
const repo_info: ?git.RepoInfo = git.findRepo(io, allocator, portfolio_path) catch null;
defer if (repo_info) |ri| {
allocator.free(ri.root);
allocator.free(ri.rel_path);
@@ -1822,7 +1827,7 @@ fn runHygieneCheck(
defer if (committed_data) |d| allocator.free(d);
if (repo_info) |ri| {
committed_data = git.show(allocator, ri.root, "HEAD", ri.rel_path) catch null;
committed_data = git.show(io, allocator, ri.root, "HEAD", ri.rel_path) catch null;
if (committed_data) |cd| {
committed_portfolio = zfin.cache.deserializePortfolio(allocator, cd) catch null;
}
@@ -1866,7 +1871,7 @@ fn runHygieneCheck(
var since_buf: [32]u8 = undefined;
const since = std.fmt.bufPrint(&since_buf, "{d} days ago", .{max_threshold}) catch "30 days ago";
const commits = git.listCommitsTouching(allocator, ri.root, ri.rel_path, since) catch &.{};
const commits = git.listCommitsTouching(io, allocator, ri.root, ri.rel_path, since) catch &.{};
defer git.freeCommitTouches(allocator, commits);
var prev_data: ?[]const u8 = null;
@@ -1876,7 +1881,7 @@ fn runHygieneCheck(
// Stop early if every account already has a timestamp
if (last_update_ts.count() >= all_accounts.count()) break;
const rev_data = git.show(allocator, ri.root, ct.commit, ri.rel_path) catch continue;
const rev_data = git.show(io, allocator, ri.root, ct.commit, ri.rel_path) catch continue;
if (ci > 0) {
if (prev_data) |pd| {
@@ -1942,10 +1947,9 @@ fn runHygieneCheck(
const threshold_days = cadence.thresholdDays() orelse continue; // skip 'none'
// Find last update time
const now_ts = std.time.timestamp();
var age_days: ?i32 = null;
if (last_update_ts.get(acct_name)) |ts| {
const age_s = now_ts - ts;
const age_s = now_s - ts;
age_days = @intCast(@divFloor(age_s, std.time.s_per_day));
}
@@ -1990,9 +1994,9 @@ fn runHygieneCheck(
}
// Check $ZFIN_AUDIT_FILES first
const env_audit_dir = std.posix.getenv("ZFIN_AUDIT_FILES");
const env_audit_dir = if (svc.config.environ_map) |em| em.get("ZFIN_AUDIT_FILES") else null;
if (env_audit_dir) |edir| {
const env_files = try discoverBrokerFiles(allocator, edir, "$ZFIN_AUDIT_FILES");
const env_files = try discoverBrokerFiles(io, allocator, edir, "$ZFIN_AUDIT_FILES", now_s);
defer allocator.free(env_files);
for (env_files) |f| try all_files.append(allocator, f);
}
@@ -2002,7 +2006,7 @@ fn runHygieneCheck(
defer if (default_audit_dir) |d| allocator.free(d);
if (default_audit_dir) |adir| {
const dir_files = try discoverBrokerFiles(allocator, adir, "audit/");
const dir_files = try discoverBrokerFiles(io, allocator, adir, "audit/", now_s);
defer allocator.free(dir_files);
for (dir_files) |f| try all_files.append(allocator, f);
}
@@ -2031,7 +2035,7 @@ fn runHygieneCheck(
const pos_syms = try portfolio.stockSymbols(allocator);
defer allocator.free(pos_syms);
if (pos_syms.len > 0) {
var load_result = cli.loadPortfolioPrices(svc, pos_syms, &.{}, false, color);
var load_result = cli.loadPortfolioPrices(io, svc, pos_syms, &.{}, false, color);
defer load_result.deinit();
var pit = load_result.prices.iterator();
while (pit.next()) |entry| {
@@ -2051,7 +2055,7 @@ fn runHygieneCheck(
try cli.printBold(out, color, " Reconciliation\n", .{});
for (all_files.items) |f| {
const file_data = std.fs.cwd().readFileAlloc(allocator, f.path, 10 * 1024 * 1024) catch continue;
const file_data = std.Io.Dir.cwd().readFileAlloc(io, f.path, allocator, .limited(10 * 1024 * 1024)) catch continue;
defer allocator.free(file_data);
switch (f.kind) {
@@ -2059,7 +2063,7 @@ fn runHygieneCheck(
const schwab_accounts = parseSchwabSummary(allocator, file_data) catch continue;
defer allocator.free(schwab_accounts);
const results = compareSchwabSummary(allocator, portfolio, schwab_accounts, account_map, prices) catch continue;
const results = compareSchwabSummary(allocator, portfolio, schwab_accounts, account_map, prices, as_of) catch continue;
defer allocator.free(results);
if (verbose or hasSchwabDiscrepancies(results)) {
@@ -2082,7 +2086,7 @@ fn runHygieneCheck(
const brokerage_positions = parseFidelityCsv(allocator, file_data) catch continue;
defer allocator.free(brokerage_positions);
const results = compareAccounts(allocator, portfolio, brokerage_positions, account_map, "fidelity", prices) catch continue;
const results = compareAccounts(allocator, portfolio, brokerage_positions, account_map, "fidelity", prices, as_of) catch continue;
defer {
for (results) |r| allocator.free(r.comparisons);
allocator.free(results);
@@ -2102,7 +2106,7 @@ fn runHygieneCheck(
const parsed = parseSchwabCsv(allocator, file_data) catch continue;
defer allocator.free(parsed.positions);
const results = compareAccounts(allocator, portfolio, parsed.positions, account_map, "schwab", prices) catch continue;
const results = compareAccounts(allocator, portfolio, parsed.positions, account_map, "schwab", prices, as_of) catch continue;
defer {
for (results) |r| allocator.free(r.comparisons);
allocator.free(results);
@@ -2133,7 +2137,7 @@ fn runHygieneCheck(
// there are no new lots at all, or when the pipeline can't run
// (not in a git repo). Threshold is a judgment call; see
// `audit_large_lot_threshold`.
if (contributions.findUnmatchedLargeLots(allocator, svc, portfolio_path, audit_large_lot_threshold, color)) |found| {
if (contributions.findUnmatchedLargeLots(io, allocator, svc, portfolio_path, audit_large_lot_threshold, as_of, color)) |found| {
var found_mut = found;
defer found_mut.deinit();
@@ -2167,7 +2171,7 @@ fn hasAccountDiscrepancies(results: []const AccountComparison) bool {
// CLI entry point
pub fn run(allocator: std.mem.Allocator, svc: *zfin.DataService, portfolio_path: []const u8, args: []const []const u8, color: bool, out: *std.Io.Writer) !void {
pub fn run(io: std.Io, allocator: std.mem.Allocator, svc: *zfin.DataService, portfolio_path: []const u8, args: []const []const u8, as_of: Date, now_s: i64, color: bool, out: *std.Io.Writer) !void {
var fidelity_csv: ?[]const u8 = null;
var schwab_csv: ?[]const u8 = null;
var schwab_summary = false;
@@ -2194,25 +2198,25 @@ pub fn run(allocator: std.mem.Allocator, svc: *zfin.DataService, portfolio_path:
// Flagless mode: run portfolio hygiene check
if (fidelity_csv == null and schwab_csv == null and !schwab_summary) {
return runHygieneCheck(allocator, svc, portfolio_path, stale_days, verbose, color, out);
return runHygieneCheck(io, allocator, svc, portfolio_path, stale_days, verbose, as_of, now_s, color, out);
}
// Load portfolio
const pf_data = std.fs.cwd().readFileAlloc(allocator, portfolio_path, 10 * 1024 * 1024) catch {
try cli.stderrPrint("Error: Cannot read portfolio file\n");
const pf_data = std.Io.Dir.cwd().readFileAlloc(io, portfolio_path, allocator, .limited(10 * 1024 * 1024)) catch {
try cli.stderrPrint(io, "Error: Cannot read portfolio file\n");
return;
};
defer allocator.free(pf_data);
var portfolio = zfin.cache.deserializePortfolio(allocator, pf_data) catch {
try cli.stderrPrint("Error: Cannot parse portfolio file\n");
try cli.stderrPrint(io, "Error: Cannot parse portfolio file\n");
return;
};
defer portfolio.deinit();
// Load accounts.srf
var account_map = svc.loadAccountMap(portfolio_path) orelse {
try cli.stderrPrint("Error: Cannot read/parse accounts.srf (needed for account number mapping)\n");
try cli.stderrPrint(io, "Error: Cannot read/parse accounts.srf (needed for account number mapping)\n");
return;
};
defer account_map.deinit();
@@ -2232,7 +2236,7 @@ pub fn run(allocator: std.mem.Allocator, svc: *zfin.DataService, portfolio_path:
defer allocator.free(pos_syms);
if (pos_syms.len > 0) {
var load_result = cli.loadPortfolioPrices(svc, pos_syms, &.{}, false, color);
var load_result = cli.loadPortfolioPrices(io, svc, pos_syms, &.{}, false, color);
defer load_result.deinit();
var it = load_result.prices.iterator();
while (it.next()) |entry| {
@@ -2254,20 +2258,22 @@ pub fn run(allocator: std.mem.Allocator, svc: *zfin.DataService, portfolio_path:
// Schwab summary from stdin
if (schwab_summary) {
try cli.stderrPrint("Paste Schwab account summary, then press Ctrl+D:\n");
const stdin_data = std.fs.File.stdin().readToEndAlloc(allocator, 1024 * 1024) catch {
try cli.stderrPrint("Error: Cannot read stdin\n");
try cli.stderrPrint(io, "Paste Schwab account summary, then press Ctrl+D:\n");
var stdin_reader_buf: [4096]u8 = undefined;
var stdin_reader = std.Io.File.stdin().reader(io, &stdin_reader_buf);
const stdin_data = stdin_reader.interface.allocRemaining(allocator, .limited(1024 * 1024)) catch {
try cli.stderrPrint(io, "Error: Cannot read stdin\n");
return;
};
defer allocator.free(stdin_data);
const schwab_accounts = parseSchwabSummary(allocator, stdin_data) catch {
try cli.stderrPrint("Error: Cannot parse Schwab summary (no 'Account number ending in' lines found)\n");
try cli.stderrPrint(io, "Error: Cannot parse Schwab summary (no 'Account number ending in' lines found)\n");
return;
};
defer allocator.free(schwab_accounts);
const results = try compareSchwabSummary(allocator, portfolio, schwab_accounts, account_map, prices);
const results = try compareSchwabSummary(allocator, portfolio, schwab_accounts, account_map, prices, as_of);
defer allocator.free(results);
try displaySchwabResults(results, color, out);
@@ -2276,21 +2282,21 @@ pub fn run(allocator: std.mem.Allocator, svc: *zfin.DataService, portfolio_path:
// Fidelity CSV
if (fidelity_csv) |csv_path| {
const csv_data = std.fs.cwd().readFileAlloc(allocator, csv_path, 10 * 1024 * 1024) catch {
const csv_data = std.Io.Dir.cwd().readFileAlloc(io, csv_path, allocator, .limited(10 * 1024 * 1024)) catch {
var msg_buf: [256]u8 = undefined;
const msg = std.fmt.bufPrint(&msg_buf, "Error: Cannot read CSV file: {s}\n", .{csv_path}) catch "Error: Cannot read CSV file\n";
try cli.stderrPrint(msg);
try cli.stderrPrint(io, msg);
return;
};
defer allocator.free(csv_data);
const brokerage_positions = parseFidelityCsv(allocator, csv_data) catch {
try cli.stderrPrint("Error: Cannot parse Fidelity CSV (unexpected format?)\n");
try cli.stderrPrint(io, "Error: Cannot parse Fidelity CSV (unexpected format?)\n");
return;
};
defer allocator.free(brokerage_positions);
const results = try compareAccounts(allocator, portfolio, brokerage_positions, account_map, "fidelity", prices);
const results = try compareAccounts(allocator, portfolio, brokerage_positions, account_map, "fidelity", prices, as_of);
defer {
for (results) |r| allocator.free(r.comparisons);
allocator.free(results);
@@ -2302,21 +2308,21 @@ pub fn run(allocator: std.mem.Allocator, svc: *zfin.DataService, portfolio_path:
// Schwab per-account CSV
if (schwab_csv) |csv_path| {
const csv_data = std.fs.cwd().readFileAlloc(allocator, csv_path, 10 * 1024 * 1024) catch {
const csv_data = std.Io.Dir.cwd().readFileAlloc(io, csv_path, allocator, .limited(10 * 1024 * 1024)) catch {
var msg_buf: [256]u8 = undefined;
const msg = std.fmt.bufPrint(&msg_buf, "Error: Cannot read CSV file: {s}\n", .{csv_path}) catch "Error: Cannot read CSV file\n";
try cli.stderrPrint(msg);
try cli.stderrPrint(io, msg);
return;
};
defer allocator.free(csv_data);
const parsed = parseSchwabCsv(allocator, csv_data) catch {
try cli.stderrPrint("Error: Cannot parse Schwab CSV (unexpected format?)\n");
try cli.stderrPrint(io, "Error: Cannot parse Schwab CSV (unexpected format?)\n");
return;
};
defer allocator.free(parsed.positions);
const results = try compareAccounts(allocator, portfolio, parsed.positions, account_map, "schwab", prices);
const results = try compareAccounts(allocator, portfolio, parsed.positions, account_map, "schwab", prices, as_of);
defer {
for (results) |r| allocator.free(r.comparisons);
allocator.free(results);
@@ -2784,7 +2790,7 @@ test "option delta tracking in compareAccounts" {
var prices = std.StringHashMap(f64).init(allocator);
defer prices.deinit();
const results = try compareAccounts(allocator, portfolio, &brokerage, acct_map, "schwab", prices);
const results = try compareAccounts(allocator, portfolio, &brokerage, acct_map, "schwab", prices, Date.fromYmd(2026, 5, 8));
defer {
for (results) |r| allocator.free(r.comparisons);
allocator.free(results);
@@ -3032,6 +3038,7 @@ test "UpdateCadence label" {
}
test "discoverBrokerFiles: finds files in temp directory" {
const io = std.testing.io;
const allocator = std.testing.allocator;
// Create a temp directory with test files
@@ -3039,28 +3046,32 @@ test "discoverBrokerFiles: finds files in temp directory" {
defer tmp.cleanup();
// Write a fidelity CSV
tmp.dir.writeFile(.{
tmp.dir.writeFile(io, .{
.sub_path = "fidelity.csv",
.data = "Account Number,Account Name,Symbol,Description,Quantity,Last Price,Current Value\nZ123,Test,AAPL,Apple,100,200,20000\n",
}) catch unreachable;
// Write a schwab summary (non-CSV)
tmp.dir.writeFile(.{
tmp.dir.writeFile(io, .{
.sub_path = "schwab.txt",
.data = "Brokerage ...1234\nAccount number ending in 1234\n$500,000.00\n",
}) catch unreachable;
// Write a random non-matching file
tmp.dir.writeFile(.{
tmp.dir.writeFile(io, .{
.sub_path = "notes.txt",
.data = "Just some random notes",
}) catch unreachable;
// Get the temp dir path
const tmp_path = tmp.dir.realpathAlloc(allocator, ".") catch unreachable;
const tmp_path = tmp.dir.realPathFileAlloc(io, ".", allocator) catch unreachable;
defer allocator.free(tmp_path);
const files = try discoverBrokerFiles(allocator, tmp_path, "test/");
// wall-clock required: test writes real files and verifies they're
// treated as fresh. A fixed synthetic `now_s` would drift relative
// to the file mtime and produce flaky results.
const now_s = std.Io.Timestamp.now(io, .real).toSeconds();
const files = try discoverBrokerFiles(io, allocator, tmp_path, "test/", now_s);
defer {
for (files) |f| allocator.free(f.path);
allocator.free(files);
@@ -3083,24 +3094,28 @@ test "discoverBrokerFiles: finds files in temp directory" {
}
test "discoverBrokerFiles: empty directory returns empty" {
const io = std.testing.io;
const allocator = std.testing.allocator;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
const tmp_path = tmp.dir.realpathAlloc(allocator, ".") catch unreachable;
const tmp_path = tmp.dir.realPathFileAlloc(io, ".", allocator) catch unreachable;
defer allocator.free(tmp_path);
const files = try discoverBrokerFiles(allocator, tmp_path, "test/");
const now_s = std.Io.Timestamp.now(io, .real).toSeconds();
const files = try discoverBrokerFiles(io, allocator, tmp_path, "test/", now_s);
defer allocator.free(files);
try std.testing.expectEqual(@as(usize, 0), files.len);
}
test "discoverBrokerFiles: nonexistent directory returns empty" {
const io = std.testing.io;
const allocator = std.testing.allocator;
const files = try discoverBrokerFiles(allocator, "/nonexistent/path/audit", "test/");
const now_s = std.Io.Timestamp.now(io, .real).toSeconds();
const files = try discoverBrokerFiles(io, allocator, "/nonexistent/path/audit", "test/", now_s);
defer allocator.free(files);
try std.testing.expectEqual(@as(usize, 0), files.len);
@@ -3149,3 +3164,291 @@ test "printLargeLotWarning: stock destination emits dest_lot::SYM@DATE template"
try std.testing.expect(std.mem.indexOf(u8, output, "+$25,000.00") != null);
try std.testing.expect(std.mem.indexOf(u8, output, "transfer::2026-05-03,type::cash,amount:num:25000,from::<SOURCE>,to::Acct B,dest_lot::SYM@2026-05-03") != null);
}
test "isUnitPriceCash: $1.00 + $1.00 returns true" {
try std.testing.expect(isUnitPriceCash("$1.00", "$1.00"));
try std.testing.expect(isUnitPriceCash("1.00", "1.00"));
try std.testing.expect(isUnitPriceCash("$1", "$1"));
}
test "isUnitPriceCash: non-$1 price returns false" {
try std.testing.expect(!isUnitPriceCash("$1.01", "$1.00"));
try std.testing.expect(!isUnitPriceCash("$1.00", "$1.01"));
try std.testing.expect(!isUnitPriceCash("$150.00", "$120.00"));
try std.testing.expect(!isUnitPriceCash("$0.99", "$1.00"));
}
test "isUnitPriceCash: unparseable inputs return false" {
try std.testing.expect(!isUnitPriceCash("", "$1.00"));
try std.testing.expect(!isUnitPriceCash("$1.00", ""));
try std.testing.expect(!isUnitPriceCash("N/A", "$1.00"));
}
test "strLessThan: orders strings lexicographically" {
try std.testing.expect(strLessThan({}, "AAPL", "MSFT"));
try std.testing.expect(!strLessThan({}, "MSFT", "AAPL"));
try std.testing.expect(!strLessThan({}, "AAPL", "AAPL"));
try std.testing.expect(strLessThan({}, "AAPL", "AAPLE"));
}
test "lotToString: stock lot includes symbol, shares, date" {
const allocator = std.testing.allocator;
const lot = portfolio_mod.Lot{
.symbol = "AAPL",
.shares = 100,
.open_date = Date.fromYmd(2024, 3, 15),
.open_price = 150.50,
};
const s = try lotToString(allocator, lot);
defer allocator.free(s);
try std.testing.expect(std.mem.indexOf(u8, s, "AAPL") != null);
try std.testing.expect(std.mem.indexOf(u8, s, "100") != null);
try std.testing.expect(std.mem.indexOf(u8, s, "2024-03-15") != null);
}
test "compareSchwabSummary: matching account → no discrepancy" {
const allocator = std.testing.allocator;
const today = Date.fromYmd(2026, 5, 8);
// Portfolio: $5000 cash + 10 AAPL @ open_price 150 = $1500 cost basis.
// With AAPL price=200, total = 5000 + 10*200 = 7000.
const lots = [_]portfolio_mod.Lot{
.{
.symbol = "CASH",
.shares = 5000,
.open_date = Date.fromYmd(2024, 1, 1),
.open_price = 1.0,
.security_type = .cash,
.account = "Emil Brokerage",
},
.{
.symbol = "AAPL",
.shares = 10,
.open_date = Date.fromYmd(2024, 1, 1),
.open_price = 150,
.account = "Emil Brokerage",
},
};
const portfolio = portfolio_mod.Portfolio{ .lots = @constCast(&lots), .allocator = allocator };
const schwab_accounts = [_]SchwabAccountSummary{
.{
.account_name = "Emil Brokerage",
.account_number = "1234",
.cash = 5000.0,
.total_value = 7000.0,
},
};
var entries = [_]analysis.AccountTaxEntry{
.{
.account = "Emil Brokerage",
.tax_type = .taxable,
.institution = "schwab",
.account_number = "1234",
},
};
const acct_map = analysis.AccountMap{ .entries = &entries, .allocator = allocator };
var prices = std.StringHashMap(f64).init(allocator);
defer prices.deinit();
try prices.put("AAPL", 200.0);
const results = try compareSchwabSummary(allocator, portfolio, &schwab_accounts, acct_map, prices, today);
defer allocator.free(results);
try std.testing.expectEqual(@as(usize, 1), results.len);
try std.testing.expectEqualStrings("Emil Brokerage", results[0].account_name);
try std.testing.expectApproxEqAbs(@as(f64, 5000), results[0].portfolio_cash, 0.01);
try std.testing.expectApproxEqAbs(@as(f64, 7000), results[0].portfolio_total, 0.01);
try std.testing.expectApproxEqAbs(@as(f64, 0), results[0].cash_delta.?, 0.01);
try std.testing.expectApproxEqAbs(@as(f64, 0), results[0].total_delta.?, 0.01);
try std.testing.expect(!results[0].has_discrepancy);
}
test "compareSchwabSummary: cash mismatch → has_discrepancy true" {
const allocator = std.testing.allocator;
const today = Date.fromYmd(2026, 5, 8);
// Portfolio cash = 5000, Schwab reports 5500: a $500 delta.
const lots = [_]portfolio_mod.Lot{
.{
.symbol = "CASH",
.shares = 5000,
.open_date = Date.fromYmd(2024, 1, 1),
.open_price = 1.0,
.security_type = .cash,
.account = "Brokerage",
},
};
const portfolio = portfolio_mod.Portfolio{ .lots = @constCast(&lots), .allocator = allocator };
const schwab_accounts = [_]SchwabAccountSummary{
.{
.account_name = "Brokerage",
.account_number = "1234",
.cash = 5500.0,
.total_value = 5500.0,
},
};
var entries = [_]analysis.AccountTaxEntry{
.{
.account = "Brokerage",
.tax_type = .taxable,
.institution = "schwab",
.account_number = "1234",
},
};
const acct_map = analysis.AccountMap{ .entries = &entries, .allocator = allocator };
var prices = std.StringHashMap(f64).init(allocator);
defer prices.deinit();
const results = try compareSchwabSummary(allocator, portfolio, &schwab_accounts, acct_map, prices, today);
defer allocator.free(results);
try std.testing.expectEqual(@as(usize, 1), results.len);
try std.testing.expectApproxEqAbs(@as(f64, 500), results[0].cash_delta.?, 0.01);
try std.testing.expect(results[0].has_discrepancy);
}
test "compareSchwabSummary: account_number with no match → empty account_name" {
const allocator = std.testing.allocator;
const today = Date.fromYmd(2026, 5, 8);
const lots = [_]portfolio_mod.Lot{};
const portfolio = portfolio_mod.Portfolio{ .lots = @constCast(&lots), .allocator = allocator };
const schwab_accounts = [_]SchwabAccountSummary{
.{
.account_name = "Unknown Acct",
.account_number = "9999",
.cash = 1000.0,
.total_value = 1000.0,
},
};
var entries = [_]analysis.AccountTaxEntry{};
const acct_map = analysis.AccountMap{ .entries = &entries, .allocator = allocator };
var prices = std.StringHashMap(f64).init(allocator);
defer prices.deinit();
const results = try compareSchwabSummary(allocator, portfolio, &schwab_accounts, acct_map, prices, today);
defer allocator.free(results);
try std.testing.expectEqual(@as(usize, 1), results.len);
try std.testing.expectEqualStrings("", results[0].account_name);
try std.testing.expectEqualStrings("Unknown Acct", results[0].schwab_name);
// No portfolio match: cash and total are zero, schwab values become deltas
try std.testing.expectApproxEqAbs(@as(f64, 0), results[0].portfolio_cash, 0.01);
try std.testing.expectApproxEqAbs(@as(f64, 1000), results[0].cash_delta.?, 0.01);
}
test "compareSchwabSummary: null cash/total fields produce null deltas (within tolerance)" {
const allocator = std.testing.allocator;
const today = Date.fromYmd(2026, 5, 8);
const lots = [_]portfolio_mod.Lot{
.{
.symbol = "CASH",
.shares = 5000,
.open_date = Date.fromYmd(2024, 1, 1),
.open_price = 1.0,
.security_type = .cash,
.account = "X",
},
};
const portfolio = portfolio_mod.Portfolio{ .lots = @constCast(&lots), .allocator = allocator };
// Schwab summary missing cash + total fields (.cash = null, .total_value = null).
const schwab_accounts = [_]SchwabAccountSummary{
.{
.account_name = "X",
.account_number = "1234",
.cash = null,
.total_value = null,
},
};
var entries = [_]analysis.AccountTaxEntry{
.{
.account = "X",
.tax_type = .taxable,
.institution = "schwab",
.account_number = "1234",
},
};
const acct_map = analysis.AccountMap{ .entries = &entries, .allocator = allocator };
var prices = std.StringHashMap(f64).init(allocator);
defer prices.deinit();
const results = try compareSchwabSummary(allocator, portfolio, &schwab_accounts, acct_map, prices, today);
defer allocator.free(results);
try std.testing.expectEqual(@as(usize, 1), results.len);
try std.testing.expect(results[0].cash_delta == null);
try std.testing.expect(results[0].total_delta == null);
// Null deltas are treated as "ok" (no discrepancy possible to assert).
try std.testing.expect(!results[0].has_discrepancy);
}
test "compareSchwabSummary: today affects valuation of held assets" {
const allocator = std.testing.allocator;
// Lot opens 2024-06-01 with 10 shares. With today=2024-01-01 (before
// open), it's not held, so portfolio_total excludes it. With
// today=2025-01-01 (after open), portfolio_total includes 10 * price.
const lots = [_]portfolio_mod.Lot{
.{
.symbol = "AAPL",
.shares = 10,
.open_date = Date.fromYmd(2024, 6, 1),
.open_price = 150,
.account = "Acct",
},
};
const portfolio = portfolio_mod.Portfolio{ .lots = @constCast(&lots), .allocator = allocator };
const schwab_accounts = [_]SchwabAccountSummary{
.{
.account_name = "Acct",
.account_number = "1234",
.cash = 0,
.total_value = 2000,
},
};
var entries = [_]analysis.AccountTaxEntry{
.{
.account = "Acct",
.tax_type = .taxable,
.institution = "schwab",
.account_number = "1234",
},
};
const acct_map = analysis.AccountMap{ .entries = &entries, .allocator = allocator };
var prices = std.StringHashMap(f64).init(allocator);
defer prices.deinit();
try prices.put("AAPL", 200.0);
// Before open: portfolio holds nothing for this account.
{
const results = try compareSchwabSummary(allocator, portfolio, &schwab_accounts, acct_map, prices, Date.fromYmd(2024, 1, 1));
defer allocator.free(results);
try std.testing.expectApproxEqAbs(@as(f64, 0), results[0].portfolio_total, 0.01);
}
// After open: portfolio holds 10 * 200 = 2000.
{
const results = try compareSchwabSummary(allocator, portfolio, &schwab_accounts, acct_map, prices, Date.fromYmd(2025, 1, 1));
defer allocator.free(results);
try std.testing.expectApproxEqAbs(@as(f64, 2000), results[0].portfolio_total, 0.01);
// Matches schwab: no discrepancy.
try std.testing.expectApproxEqAbs(@as(f64, 0), results[0].total_delta.?, 0.01);
try std.testing.expect(!results[0].has_discrepancy);
}
}
@@ -25,15 +25,18 @@ const display_labels = [_][]const u8{
"etf_profile",
};
pub fn run(allocator: std.mem.Allocator, config: zfin.Config, subcommand: []const u8, out: *std.Io.Writer) !void {
pub fn run(io: std.Io, allocator: std.mem.Allocator, config: zfin.Config, subcommand: []const u8, out: *std.Io.Writer) !void {
if (std.mem.eql(u8, subcommand, "stats")) {
// Capture wall-clock once per invocation so every "X ago" display
// line in the table is computed against the same reference point.
const now_s = std.Io.Timestamp.now(io, .real).toSeconds();
try out.print("Cache directory: {s}\n\n", .{config.cache_dir});
var dir = std.fs.cwd().openDir(config.cache_dir, .{ .iterate = true }) catch {
var dir = std.Io.Dir.cwd().openDir(io, config.cache_dir, .{ .iterate = true }) catch {
try out.print(" (empty -- no cached data)\n", .{});
return;
};
defer dir.close();
defer dir.close(io);
// Collect and sort symbol names
var symbols: std.ArrayList([]const u8) = .empty;
@@ -43,7 +46,7 @@ pub fn run(allocator: std.mem.Allocator, config: zfin.Config, subcommand: []cons
}
var iter = dir.iterate();
while (iter.next() catch null) |entry| {
while (iter.next(io) catch null) |entry| {
if (entry.kind == .directory) {
const name = allocator.dupe(u8, entry.name) catch continue;
symbols.append(allocator, name) catch {
@@ -79,7 +82,7 @@ pub fn run(allocator: std.mem.Allocator, config: zfin.Config, subcommand: []cons
var symbol_files: usize = 0;
for (display_types, display_labels) |dt, label| {
const info = getFileInfo(allocator, config.cache_dir, symbol, dt);
const info = getFileInfo(io, allocator, config.cache_dir, symbol, dt);
if (info.exists) {
symbol_files += 1;
symbol_size += info.size;
@@ -91,7 +94,7 @@ pub fn run(allocator: std.mem.Allocator, config: zfin.Config, subcommand: []cons
try out.print(" {s:<14} {s:>10} (negative cache)\n", .{ label, size_str });
} else if (info.created) |ts| {
var age_buf: [24]u8 = undefined;
const age_str = formatAge(&age_buf, ts);
const age_str = formatAge(&age_buf, ts, now_s);
const thru = info.lastDate() orelse "";
if (info.expired) {
if (thru.len > 0) {
@@ -113,7 +116,7 @@ pub fn run(allocator: std.mem.Allocator, config: zfin.Config, subcommand: []cons
}
// Also count candles_meta size (not displayed as its own row but is part of total)
const meta_info = getFileInfo(allocator, config.cache_dir, symbol, .candles_meta);
const meta_info = getFileInfo(io, allocator, config.cache_dir, symbol, .candles_meta);
if (meta_info.exists) {
symbol_size += meta_info.size;
symbol_files += 1;
@ -131,7 +134,7 @@ pub fn run(allocator: std.mem.Allocator, config: zfin.Config, subcommand: []cons
const cusip_path = std.fs.path.join(allocator, &.{ config.cache_dir, "cusip_tickers.srf" }) catch null;
if (cusip_path) |path| {
defer allocator.free(path);
if (std.fs.cwd().statFile(path)) |stat| {
if (std.Io.Dir.cwd().statFile(io, path, .{})) |stat| {
total_size += stat.size;
total_files += 1;
} else |_| {}
@@ -144,11 +147,11 @@ pub fn run(allocator: std.mem.Allocator, config: zfin.Config, subcommand: []cons
formatSize(&total_buf, total_size),
});
} else if (std.mem.eql(u8, subcommand, "clear")) {
var store = Store.init(allocator, config.cache_dir);
var store = Store.init(io, allocator, config.cache_dir);
try store.clearAll();
try out.writeAll("Cache cleared.\n");
} else {
try cli.stderrPrint("Unknown cache subcommand. Use 'stats' or 'clear'.\n");
try cli.stderrPrint(io, "Unknown cache subcommand. Use 'stats' or 'clear'.\n");
}
}
@@ -167,13 +170,13 @@ const FileInfo = struct {
}
};
fn getFileInfo(allocator: std.mem.Allocator, cache_dir: []const u8, symbol: []const u8, dt: DataType) FileInfo {
var store = Store.init(allocator, cache_dir);
fn getFileInfo(io: std.Io, allocator: std.mem.Allocator, cache_dir: []const u8, symbol: []const u8, dt: DataType) FileInfo {
var store = Store.init(io, allocator, cache_dir);
// Get file size via stat
const path = std.fs.path.join(allocator, &.{ cache_dir, symbol, dt.fileName() }) catch return .{};
defer allocator.free(path);
const stat = std.fs.cwd().statFile(path) catch return .{};
const stat = std.Io.Dir.cwd().statFile(io, path, .{}) catch return .{};
// Check for negative cache
if (store.isNegative(symbol, dt)) {
@@ -199,7 +202,7 @@ fn getFileInfo(allocator: std.mem.Allocator, cache_dir: []const u8, symbol: []co
}
// For all other types, read the file and use the srf iterator for directives
const data = std.fs.cwd().readFileAlloc(allocator, path, 50 * 1024 * 1024) catch
const data = std.Io.Dir.cwd().readFileAlloc(io, path, allocator, .limited(50 * 1024 * 1024)) catch
return .{ .exists = true, .size = stat.size };
defer allocator.free(data);
@@ -212,7 +215,7 @@ fn getFileInfo(allocator: std.mem.Allocator, cache_dir: []const u8, symbol: []co
.exists = true,
.size = stat.size,
.created = it.created,
.expired = if (it.expires != null) !it.isFresh() else false,
.expired = if (it.expires != null) !it.isFresh(io) else false,
};
}
@@ -228,9 +231,12 @@ fn formatSize(buf: *[10]u8, size: u64) []const u8 {
}
}
fn formatAge(buf: *[24]u8, timestamp: i64) []const u8 {
const now = std.time.timestamp();
const age = now - timestamp;
/// Pure age formatter: renders `after_s - before_s` as a human string
/// (`"5m ago"`, `"2h ago"`, `"3d ago"`, `"just now"`). Caller captures
/// `after_s` via `std.Io.Timestamp.now(io, .real).toSeconds()` once per
/// frame or command and passes it in.
fn formatAge(buf: *[24]u8, before_s: i64, after_s: i64) []const u8 {
const age = after_s - before_s;
if (age < 0) {
return std.fmt.bufPrint(buf, "just now", .{}) catch "?";
@@ -244,3 +250,58 @@ fn formatAge(buf: *[24]u8, timestamp: i64) []const u8 {
return std.fmt.bufPrint(buf, "{d}d ago", .{@as(u64, @intCast(@divTrunc(age, 86400)))}) catch "?";
}
}
// Tests
test "formatAge: future timestamp renders 'just now'" {
var buf: [24]u8 = undefined;
try std.testing.expectEqualStrings("just now", formatAge(&buf, 1_700_000_100, 1_700_000_000));
}
test "formatAge: seconds" {
var buf: [24]u8 = undefined;
try std.testing.expectEqualStrings("0s ago", formatAge(&buf, 1_700_000_000, 1_700_000_000));
try std.testing.expectEqualStrings("5s ago", formatAge(&buf, 1_700_000_000, 1_700_000_005));
try std.testing.expectEqualStrings("59s ago", formatAge(&buf, 1_700_000_000, 1_700_000_059));
}
test "formatAge: minutes" {
var buf: [24]u8 = undefined;
// 1m at exactly 60s
try std.testing.expectEqualStrings("1m ago", formatAge(&buf, 1_700_000_000, 1_700_000_060));
// 59m at 59*60+59 seconds
try std.testing.expectEqualStrings("59m ago", formatAge(&buf, 1_700_000_000, 1_700_000_000 + 59 * 60 + 59));
}
test "formatAge: hours" {
var buf: [24]u8 = undefined;
try std.testing.expectEqualStrings("1h ago", formatAge(&buf, 1_700_000_000, 1_700_000_000 + 3600));
// Just under a day
try std.testing.expectEqualStrings("23h ago", formatAge(&buf, 1_700_000_000, 1_700_000_000 + 23 * 3600 + 59 * 60));
}
test "formatAge: days" {
var buf: [24]u8 = undefined;
try std.testing.expectEqualStrings("1d ago", formatAge(&buf, 1_700_000_000, 1_700_000_000 + 86_400));
try std.testing.expectEqualStrings("30d ago", formatAge(&buf, 1_700_000_000, 1_700_000_000 + 30 * 86_400));
}
test "formatSize: bytes" {
var buf: [10]u8 = undefined;
try std.testing.expectEqualStrings("0 B", formatSize(&buf, 0));
try std.testing.expectEqualStrings("512 B", formatSize(&buf, 512));
try std.testing.expectEqualStrings("1023 B", formatSize(&buf, 1023));
}
test "formatSize: kilobytes" {
var buf: [10]u8 = undefined;
try std.testing.expectEqualStrings("1.0 KB", formatSize(&buf, 1024));
try std.testing.expectEqualStrings("1.5 KB", formatSize(&buf, 1536));
try std.testing.expectEqualStrings("100.0 KB", formatSize(&buf, 100 * 1024));
}
test "formatSize: megabytes" {
var buf: [10]u8 = undefined;
try std.testing.expectEqualStrings("1.0 MB", formatSize(&buf, 1024 * 1024));
try std.testing.expectEqualStrings("2.5 MB", formatSize(&buf, 2 * 1024 * 1024 + 512 * 1024));
}
@@ -109,23 +109,23 @@ pub fn printGainLoss(
// Stderr helpers
pub fn stderrPrint(msg: []const u8) !void {
pub fn stderrPrint(io: std.Io, msg: []const u8) !void {
// Under `zig build test` these messages are just noise: tests
// that exercise error paths emit the same usage/hint strings on
// every run. Real CLI users always reach the real stderr.
if (builtin.is_test) return;
var buf: [1024]u8 = undefined;
var writer = std.fs.File.stderr().writer(&buf);
var writer = std.Io.File.stderr().writer(io, &buf);
const out = &writer.interface;
try out.writeAll(msg);
try out.flush();
}
/// Print progress line to stderr: " [N/M] SYMBOL (status)"
pub fn stderrProgress(symbol: []const u8, status: []const u8, current: usize, total: usize, color: bool) !void {
pub fn stderrProgress(io: std.Io, symbol: []const u8, status: []const u8, current: usize, total: usize, color: bool) !void {
if (builtin.is_test) return;
var buf: [256]u8 = undefined;
var writer = std.fs.File.stderr().writer(&buf);
var writer = std.Io.File.stderr().writer(io, &buf);
const out = &writer.interface;
if (color) try fmt.ansiSetFg(out, CLR_MUTED[0], CLR_MUTED[1], CLR_MUTED[2]);
try out.print(" [{d}/{d}] ", .{ current, total });
@@ -138,10 +138,10 @@ pub fn stderrProgress(symbol: []const u8, status: []const u8, current: usize, to
}
/// Print rate-limit wait message to stderr
pub fn stderrRateLimitWait(wait_seconds: u64, color: bool) !void {
pub fn stderrRateLimitWait(io: std.Io, wait_seconds: u64, color: bool) !void {
if (builtin.is_test) return;
var buf: [256]u8 = undefined;
var writer = std.fs.File.stderr().writer(&buf);
var writer = std.Io.File.stderr().writer(io, &buf);
const out = &writer.interface;
if (color) try fmt.ansiSetFg(out, CLR_NEGATIVE[0], CLR_NEGATIVE[1], CLR_NEGATIVE[2]);
if (wait_seconds >= 60) {
@@ -162,6 +162,7 @@ pub fn stderrRateLimitWait(wait_seconds: u64, color: bool) !void {
/// Progress callback for loadPrices that prints to stderr.
/// Shared between the CLI portfolio command and TUI pre-fetch.
pub const LoadProgress = struct {
io: std.Io,
svc: *zfin.DataService,
color: bool,
/// Offset added to index for display (e.g. stock count when loading watch symbols).
@@ -176,21 +177,21 @@
.fetching => {
// Show rate-limit wait before the fetch
if (self.svc.estimateWaitSeconds()) |w| {
if (w > 0) stderrRateLimitWait(w, self.color) catch {};
if (w > 0) stderrRateLimitWait(self.io, w, self.color) catch {};
}
stderrProgress(symbol, " (fetching)", display_idx, self.grand_total, self.color) catch {};
stderrProgress(self.io, symbol, " (fetching)", display_idx, self.grand_total, self.color) catch {};
},
.cached => {
stderrProgress(symbol, " (cached)", display_idx, self.grand_total, self.color) catch {};
stderrProgress(self.io, symbol, " (cached)", display_idx, self.grand_total, self.color) catch {};
},
.fetched => {
// Already showed "(fetching)"; no extra line needed
},
.failed_used_stale => {
stderrProgress(symbol, " FAILED (using cached)", display_idx, self.grand_total, self.color) catch {};
stderrProgress(self.io, symbol, " FAILED (using cached)", display_idx, self.grand_total, self.color) catch {};
},
.failed => {
stderrProgress(symbol, " FAILED", display_idx, self.grand_total, self.color) catch {};
stderrProgress(self.io, symbol, " FAILED", display_idx, self.grand_total, self.color) catch {};
},
}
}
@@ -206,6 +207,7 @@ pub const LoadProgress = struct {
/// Aggregate progress callback for parallel loading operations.
/// Displays a single updating line with progress bar.
pub const AggregateProgress = struct {
io: std.Io,
color: bool,
last_phase: ?zfin.DataService.AggregateProgressCallback.Phase = null,
last_completed: usize = 0,
@@ -217,7 +219,7 @@
self.last_phase = phase;
var buf: [256]u8 = undefined;
var writer = std.fs.File.stderr().writer(&buf);
var writer = std.Io.File.stderr().writer(self.io, &buf);
const w = &writer.interface;
switch (phase) {
@@ -255,14 +257,16 @@
/// Handles parallel server sync when ZFIN_SERVER is configured,
/// with sequential provider fallback for failures.
pub fn loadPortfolioPrices(
io: std.Io,
svc: *zfin.DataService,
portfolio_syms: ?[]const []const u8,
watch_syms: []const []const u8,
force_refresh: bool,
color: bool,
) zfin.DataService.LoadAllResult {
var aggregate = AggregateProgress{ .color = color };
var aggregate = AggregateProgress{ .io = io, .color = color };
var symbol_progress = LoadProgress{
.io = io,
.svc = svc,
.color = color,
.index_offset = 0,
@@ -286,7 +290,7 @@ pub fn loadPortfolioPrices(
const stale = result.stale_count;
var buf: [256]u8 = undefined;
var writer = std.fs.File.stderr().writer(&buf);
var writer = std.Io.File.stderr().writer(io, &buf);
const out = &writer.interface;
if (from_cache == total) {
@@ -336,22 +340,22 @@ pub const LoadedPortfolio = struct {
/// Read, deserialize, and extract positions + symbols from a portfolio file.
/// Returns null (with stderr message) on read/parse errors.
pub fn loadPortfolio(allocator: std.mem.Allocator, file_path: []const u8) ?LoadedPortfolio {
const file_data = std.fs.cwd().readFileAlloc(allocator, file_path, 10 * 1024 * 1024) catch {
stderrPrint("Error: Cannot read portfolio file\n") catch {};
pub fn loadPortfolio(io: std.Io, allocator: std.mem.Allocator, file_path: []const u8, as_of: zfin.Date) ?LoadedPortfolio {
const file_data = std.Io.Dir.cwd().readFileAlloc(io, file_path, allocator, .limited(10 * 1024 * 1024)) catch {
stderrPrint(io, "Error: Cannot read portfolio file\n") catch {};
return null;
};
var portfolio = zfin.cache.deserializePortfolio(allocator, file_data) catch {
allocator.free(file_data);
stderrPrint("Error: Cannot parse portfolio file\n") catch {};
stderrPrint(io, "Error: Cannot parse portfolio file\n") catch {};
return null;
};
const positions = portfolio.positions(allocator) catch {
const positions = portfolio.positions(as_of, allocator) catch {
portfolio.deinit();
allocator.free(file_data);
stderrPrint("Error: Cannot compute positions\n") catch {};
stderrPrint(io, "Error: Cannot compute positions\n") catch {};
return null;
};
@@ -359,7 +363,7 @@ pub fn loadPortfolio(allocator: std.mem.Allocator, file_path: []const u8) ?Loade
allocator.free(positions);
portfolio.deinit();
allocator.free(file_data);
stderrPrint("Error: Cannot get stock symbols\n") catch {};
stderrPrint(io, "Error: Cannot get stock symbols\n") catch {};
return null;
};
@@ -403,11 +407,12 @@ pub fn buildPortfolioData(
syms: []const []const u8,
prices: *std.StringHashMap(f64),
svc: *zfin.DataService,
as_of: zfin.Date,
) !PortfolioData {
var manual_price_set = try zfin.valuation.buildFallbackPrices(allocator, portfolio.lots, positions, prices);
defer manual_price_set.deinit();
var summary = zfin.valuation.portfolioSummary(allocator, portfolio, positions, prices.*, manual_price_set) catch
var summary = zfin.valuation.portfolioSummary(as_of, allocator, portfolio, positions, prices.*, manual_price_set) catch
return error.SummaryFailed;
errdefer summary.deinit(allocator);
@@ -424,7 +429,7 @@ pub fn buildPortfolioData(
}
for (syms) |sym| {
if (svc.getCachedCandles(sym)) |cs| {
// cs.data is owned by svc.allocator(), which matches the
// cs.data is owned by svc.allocator, which matches the
// caller's `allocator` in practice (they're wired to the
// same root). Store the raw slice; PortfolioData.deinit
// below frees via the caller's allocator.
@@ -433,7 +438,7 @@ pub fn buildPortfolioData(
}
const snapshots = zfin.valuation.computeHistoricalSnapshots(
fmt.todayDate(),
as_of,
positions,
prices.*,
candle_map,
@@ -475,12 +480,12 @@ pub const AsOfParseError = error{
/// - Q = quarters (3 months)
/// - Y = years (calendar; Feb 29 - 1Y Feb 28)
///
/// `today` is injected rather than read from the clock so tests are
/// deterministic. In production call sites this is `fmt.todayDate()`.
/// `as_of` is injected rather than read from the clock so tests are
/// deterministic. In production call sites this is `fmt.todayDate(io)`.
///
/// Fractional forms like `1.5Y` are not accepted, to keep the parser
/// small and unambiguous.
pub fn parseAsOfDate(input: []const u8, today: zfin.Date) AsOfParseError!?zfin.Date {
pub fn parseAsOfDate(input: []const u8, as_of: zfin.Date) AsOfParseError!?zfin.Date {
const s = std.mem.trim(u8, input, " \t\r\n");
if (s.len == 0) return null;
@@ -510,10 +515,10 @@ pub fn parseAsOfDate(input: []const u8, today: zfin.Date) AsOfParseError!?zfin.D
const unit = std.ascii.toLower(s[i]);
return switch (unit) {
'w' => today.addDays(-@as(i32, n) * 7),
'm' => today.subtractMonths(n),
'q' => today.subtractMonths(n * 3),
'y' => today.subtractYears(n),
'w' => as_of.addDays(-@as(i32, n) * 7),
'm' => as_of.subtractMonths(n),
'q' => as_of.subtractMonths(n * 3),
'y' => as_of.subtractYears(n),
else => error.UnknownUnit,
};
}
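
The relative-shortcut grammar above is easy to pin down with a table-style test. The following is a sketch, not part of the commit: it assumes `zfin.Date.fromYmd` plus the `addDays`/`subtractMonths`/`subtractYears` helpers used in the switch behave as their names suggest, and that `std.testing.expectEqual` deep-compares `Date` values.

```zig
test "parseAsOfDate: relative shortcuts resolve against as_of" {
    const as_of = zfin.Date.fromYmd(2026, 5, 8);
    // "2W" walks back 2 * 7 days: 2026-05-08 -> 2026-04-24.
    try std.testing.expectEqual(zfin.Date.fromYmd(2026, 4, 24), (try parseAsOfDate("2W", as_of)).?);
    // "3M" and "1Q" both subtract three calendar months.
    try std.testing.expectEqual(zfin.Date.fromYmd(2026, 2, 8), (try parseAsOfDate("3M", as_of)).?);
    try std.testing.expectEqual(zfin.Date.fromYmd(2026, 2, 8), (try parseAsOfDate("1Q", as_of)).?);
    // "1Y" subtracts a calendar year.
    try std.testing.expectEqual(zfin.Date.fromYmd(2025, 5, 8), (try parseAsOfDate("1Y", as_of)).?);
    // Empty input means "live": the parser reports null, not an error.
    try std.testing.expectEqual(@as(?zfin.Date, null), try parseAsOfDate("", as_of));
}
```

Because `as_of` is the only anchor, the same table runs identically under `--as-of` and wall-clock callers.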
@@ -538,12 +543,12 @@ pub fn fmtAsOfParseError(buf: []u8, input: []const u8, err: AsOfParseError) []co
/// specific date makes sense but "live" doesn't e.g. `compare`'s
/// positional args, `history --since`/`--until`, `snapshot --as-of`.
///
/// `today` is injected for test determinism. Production callers pass
/// `fmt.todayDate()`.
/// `as_of` is injected for test determinism. Production callers pass
/// `fmt.todayDate(io)`.
pub const RequiredDateError = AsOfParseError || error{LiveNotAllowed};
pub fn parseRequiredDate(input: []const u8, today: zfin.Date) RequiredDateError!zfin.Date {
const parsed = try parseAsOfDate(input, today);
pub fn parseRequiredDate(input: []const u8, as_of: zfin.Date) RequiredDateError!zfin.Date {
const parsed = try parseAsOfDate(input, as_of);
return parsed orelse error.LiveNotAllowed;
}
@@ -553,13 +558,13 @@ pub fn parseRequiredDate(input: []const u8, today: zfin.Date) RequiredDateError!
/// message that tells the user exactly what grammar is accepted
/// including the relative-shortcut syntax.
///
/// `today` is injected for test determinism.
/// `as_of` is injected for test determinism.
pub fn parseRequiredDateOrStderr(
io: std.Io,
input: []const u8,
today: zfin.Date,
as_of: zfin.Date,
arg_label: []const u8,
) error{InvalidDate}!zfin.Date {
return parseRequiredDate(input, today) catch |err| {
return parseRequiredDate(input, as_of) catch |err| {
var ebuf: [256]u8 = undefined;
const msg = switch (err) {
error.LiveNotAllowed => std.fmt.bufPrint(
@@ -577,7 +583,7 @@ pub fn parseRequiredDateOrStderr(
) catch "Error: invalid date\n";
},
};
stderrPrint(msg) catch {};
stderrPrint(io, msg) catch {};
return error.InvalidDate;
};
}
@@ -611,9 +617,9 @@ pub const CommitSpecError = error{
///
/// Anything else is rejected as `UnknownForm`. Trimming applied.
///
/// `today` is injected for test determinism, matching
/// `as_of` is injected for test determinism, matching
/// `parseAsOfDate`'s contract.
pub fn parseCommitSpec(input: []const u8, today: zfin.Date) CommitSpecError!CommitSpec {
pub fn parseCommitSpec(input: []const u8, as_of: zfin.Date) CommitSpecError!CommitSpec {
const s = std.mem.trim(u8, input, " \t\r\n");
if (s.len == 0) return error.Empty;
@@ -639,8 +645,8 @@ pub fn parseCommitSpec(input: []const u8, today: zfin.Date) CommitSpecError!Comm
if (s.len >= 2 and std.ascii.isDigit(s[0])) {
const last = std.ascii.toLower(s[s.len - 1]);
if (last == 'w' or last == 'm' or last == 'q' or last == 'y') {
const as_of = parseAsOfDate(s, today) catch return error.InvalidFormat;
if (as_of) |d| return .{ .date_at_or_before = d };
const resolved = parseAsOfDate(s, as_of) catch return error.InvalidFormat;
if (resolved) |d| return .{ .date_at_or_before = d };
return error.InvalidFormat;
}
}
@@ -783,29 +789,30 @@ test "parseCommitSpec: trims whitespace" {
/// Uses `arena` for the intermediate message strings; pass a
/// short-lived arena.
pub fn resolveSnapshotOrExplain(
io: std.Io,
arena: std.mem.Allocator,
hist_dir: []const u8,
requested: zfin.Date,
) !history.ResolvedSnapshot {
return history.resolveSnapshotDate(arena, hist_dir, requested) catch |err| switch (err) {
return history.resolveSnapshotDate(io, arena, hist_dir, requested) catch |err| switch (err) {
error.NoSnapshotAtOrBefore => {
var req_buf: [10]u8 = undefined;
const req_str = requested.format(&req_buf);
const msg = std.fmt.allocPrint(arena, "No snapshot at or before {s}.\n", .{req_str}) catch "No snapshot at or before the requested date.\n";
stderrPrint(msg) catch {};
stderrPrint(io, msg) catch {};
// Second look at the nearest table for the "later
// available" hint. Cheap (filesystem scan, same dir).
const nearest = history.findNearestSnapshot(hist_dir, requested) catch {
stderrPrint("No snapshots in history/ — run `zfin snapshot` to create one.\n") catch {};
const nearest = history.findNearestSnapshot(io, hist_dir, requested) catch {
stderrPrint(io, "No snapshots in history/ — run `zfin snapshot` to create one.\n") catch {};
return err;
};
if (nearest.later) |later| {
var later_buf: [10]u8 = undefined;
const later_str = later.format(&later_buf);
const later_msg = std.fmt.allocPrint(arena, "Earliest available: {s} (later than requested).\n", .{later_str}) catch "A later snapshot exists but was not used.\n";
stderrPrint(later_msg) catch {};
stderrPrint(io, later_msg) catch {};
} else {
stderrPrint("No snapshots in history/ — run `zfin snapshot` to create one.\n") catch {};
stderrPrint(io, "No snapshots in history/ — run `zfin snapshot` to create one.\n") catch {};
}
return err;
},
@@ -817,8 +824,8 @@ pub fn resolveSnapshotOrExplain(
/// Load a watchlist SRF file containing symbol records.
/// Returns owned symbol strings. Returns null if file missing or empty.
pub fn loadWatchlist(allocator: std.mem.Allocator, path: []const u8) ?[][]const u8 {
const file_data = std.fs.cwd().readFileAlloc(allocator, path, 1024 * 1024) catch return null;
pub fn loadWatchlist(io: std.Io, allocator: std.mem.Allocator, path: []const u8) ?[][]const u8 {
const file_data = std.Io.Dir.cwd().readFileAlloc(io, path, allocator, .limited(1024 * 1024)) catch return null;
defer allocator.free(file_data);
const WatchEntry = struct { symbol: []const u8 };
@@ -1075,3 +1082,131 @@ test "fmtAsOfParseError: no trailing newline" {
try std.testing.expect(msg.len > 0);
try std.testing.expect(msg[msg.len - 1] != '\n');
}
// loadPortfolio / buildPortfolioData tests
test "loadPortfolio: missing file returns null" {
const io = std.testing.io;
const result = loadPortfolio(io, std.testing.allocator, "/nonexistent/portfolio-never-exists.srf", zfin.Date.fromYmd(2026, 5, 8));
try std.testing.expect(result == null);
}
test "loadPortfolio: malformed file returns null" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
try tmp.dir.writeFile(io, .{ .sub_path = "bad.srf", .data = "this is not srf format" });
var path_buf: [std.fs.max_path_bytes]u8 = undefined;
const dir_len = try tmp.dir.realPathFile(io, ".", &path_buf);
const path = try std.fs.path.join(std.testing.allocator, &.{ path_buf[0..dir_len], "bad.srf" });
defer std.testing.allocator.free(path);
const result = loadPortfolio(io, std.testing.allocator, path, zfin.Date.fromYmd(2026, 5, 8));
try std.testing.expect(result == null);
}
test "loadPortfolio: happy path returns LoadedPortfolio with positions and syms" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
const data =
\\#!srfv1
\\symbol::AAPL,shares:num:100,open_date::2024-01-15,open_price:num:150.00
\\symbol::MSFT,shares:num:50,open_date::2024-02-20,open_price:num:300.00
\\
;
try tmp.dir.writeFile(io, .{ .sub_path = "portfolio.srf", .data = data });
var path_buf: [std.fs.max_path_bytes]u8 = undefined;
const dir_len = try tmp.dir.realPathFile(io, ".", &path_buf);
const path = try std.fs.path.join(std.testing.allocator, &.{ path_buf[0..dir_len], "portfolio.srf" });
defer std.testing.allocator.free(path);
var loaded = loadPortfolio(io, std.testing.allocator, path, zfin.Date.fromYmd(2026, 5, 8)) orelse return error.TestUnexpectedResult;
defer loaded.deinit(std.testing.allocator);
try std.testing.expectEqual(@as(usize, 2), loaded.portfolio.lots.len);
try std.testing.expectEqual(@as(usize, 2), loaded.positions.len);
try std.testing.expectEqual(@as(usize, 2), loaded.syms.len);
}
test "loadPortfolio: today value flows through to position computation" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
// Lot opens 2024-06-01. With today=2024-01-01 (before open), the
// position record exists but with 0 open shares. With today=2025-01-01
// (after open), shares = 100.
const data =
\\#!srfv1
\\symbol::AAPL,shares:num:100,open_date::2024-06-01,open_price:num:150.00
\\
;
try tmp.dir.writeFile(io, .{ .sub_path = "portfolio.srf", .data = data });
var path_buf: [std.fs.max_path_bytes]u8 = undefined;
const dir_len = try tmp.dir.realPathFile(io, ".", &path_buf);
const path = try std.fs.path.join(std.testing.allocator, &.{ path_buf[0..dir_len], "portfolio.srf" });
defer std.testing.allocator.free(path);
// today before open_date: position exists but no open shares
var loaded_before = loadPortfolio(io, std.testing.allocator, path, zfin.Date.fromYmd(2024, 1, 1)) orelse return error.TestUnexpectedResult;
defer loaded_before.deinit(std.testing.allocator);
try std.testing.expectEqual(@as(usize, 1), loaded_before.positions.len);
try std.testing.expectApproxEqAbs(@as(f64, 0), loaded_before.positions[0].shares, 0.01);
try std.testing.expectEqual(@as(u32, 0), loaded_before.positions[0].open_lots);
// today after open_date: 100 shares open
var loaded_after = loadPortfolio(io, std.testing.allocator, path, zfin.Date.fromYmd(2025, 1, 1)) orelse return error.TestUnexpectedResult;
defer loaded_after.deinit(std.testing.allocator);
try std.testing.expectEqual(@as(usize, 1), loaded_after.positions.len);
try std.testing.expectApproxEqAbs(@as(f64, 100), loaded_after.positions[0].shares, 0.01);
try std.testing.expectEqual(@as(u32, 1), loaded_after.positions[0].open_lots);
}
test "buildPortfolioData: empty positions returns NoAllocations" {
const config = zfin.Config{ .cache_dir = "/tmp" };
var svc = zfin.DataService.init(std.testing.io, std.testing.allocator, config);
defer svc.deinit();
const lots = [_]zfin.Lot{};
const portfolio: zfin.Portfolio = .{ .lots = @constCast(&lots), .allocator = std.testing.allocator };
const positions: []const zfin.Position = &.{};
const syms: []const []const u8 = &.{};
var prices: std.StringHashMap(f64) = .init(std.testing.allocator);
defer prices.deinit();
const result = buildPortfolioData(std.testing.allocator, portfolio, positions, syms, &prices, &svc, zfin.Date.fromYmd(2026, 5, 8));
try std.testing.expectError(error.NoAllocations, result);
}
test "buildPortfolioData: builds summary + candle_map for stock positions" {
const config = zfin.Config{ .cache_dir = "/tmp" };
var svc = zfin.DataService.init(std.testing.io, std.testing.allocator, config);
defer svc.deinit();
const today = zfin.Date.fromYmd(2026, 5, 8);
const lots = [_]zfin.Lot{
.{ .symbol = "AAPL", .shares = 100, .open_date = zfin.Date.fromYmd(2024, 1, 1), .open_price = 150 },
};
var portfolio: zfin.Portfolio = .{ .lots = @constCast(&lots), .allocator = std.testing.allocator };
const positions = try portfolio.positions(today, std.testing.allocator);
defer std.testing.allocator.free(positions);
const syms = try portfolio.stockSymbols(std.testing.allocator);
defer std.testing.allocator.free(syms);
var prices: std.StringHashMap(f64) = .init(std.testing.allocator);
defer prices.deinit();
try prices.put("AAPL", 200.0);
var pf_data = try buildPortfolioData(std.testing.allocator, portfolio, positions, syms, &prices, &svc, today);
defer pf_data.deinit(std.testing.allocator);
try std.testing.expect(pf_data.summary.allocations.len > 0);
try std.testing.expectApproxEqAbs(@as(f64, 20_000), pf_data.summary.total_value, 1.0);
}


@@ -69,10 +69,12 @@ pub const Error = error{
/// Command entry point.
pub fn run(
io: std.Io,
allocator: std.mem.Allocator,
svc: *zfin.DataService,
portfolio_path: []const u8,
cmd_args: []const []const u8,
as_of: Date,
color: bool,
out: *std.Io.Writer,
) !void {
@@ -112,7 +114,6 @@ pub fn run(
var positional: std.ArrayList([]const u8) = .empty;
defer positional.deinit(allocator);
const today_for_parse = fmt.todayDate();
var arg_i: usize = 0;
while (arg_i < cmd_args.len) : (arg_i += 1) {
const a = cmd_args[arg_i];
@@ -122,20 +123,20 @@ pub fn run(
events_enabled = false;
} else if (std.mem.eql(u8, a, "--snapshot-before") or std.mem.eql(u8, a, "--snapshot-after")) {
if (arg_i + 1 >= cmd_args.len) {
try cli.stderrPrint("Error: ");
try cli.stderrPrint(a);
try cli.stderrPrint(" requires a date (YYYY-MM-DD, relative like 1W, or 'live' for --snapshot-after).\n");
try cli.stderrPrint(io, "Error: ");
try cli.stderrPrint(io, a);
try cli.stderrPrint(io, " requires a date (YYYY-MM-DD, relative like 1W, or 'live' for --snapshot-after).\n");
return error.UnexpectedArg;
}
const value = cmd_args[arg_i + 1];
const is_after = std.mem.eql(u8, a, "--snapshot-after");
// --snapshot-after supports 'live' (= current portfolio,
// not a snapshot file). --snapshot-before does not.
const parsed = cli.parseAsOfDate(value, today_for_parse) catch |err| {
const parsed = cli.parseAsOfDate(value, as_of) catch |err| {
var ebuf: [256]u8 = undefined;
const msg = cli.fmtAsOfParseError(&ebuf, value, err);
try cli.stderrPrint(msg);
try cli.stderrPrint("\n");
try cli.stderrPrint(io, msg);
try cli.stderrPrint(io, "\n");
return error.InvalidDate;
};
if (parsed) |d| {
@@ -143,28 +144,28 @@ pub fn run(
} else if (is_after) {
snapshot_after_live = true;
} else {
try cli.stderrPrint("Error: --snapshot-before cannot be 'live' — the before side must be an actual snapshot.\n");
try cli.stderrPrint(io, "Error: --snapshot-before cannot be 'live' — the before side must be an actual snapshot.\n");
return error.InvalidDate;
}
arg_i += 1;
} else if (std.mem.eql(u8, a, "--commit-before") or std.mem.eql(u8, a, "--commit-after")) {
if (arg_i + 1 >= cmd_args.len) {
try cli.stderrPrint("Error: ");
try cli.stderrPrint(a);
try cli.stderrPrint(" requires a value (working, YYYY-MM-DD, 1W/1M/1Q/1Y, HEAD, HEAD~N, or SHA).\n");
try cli.stderrPrint(io, "Error: ");
try cli.stderrPrint(io, a);
try cli.stderrPrint(io, " requires a value (working, YYYY-MM-DD, 1W/1M/1Q/1Y, HEAD, HEAD~N, or SHA).\n");
return error.UnexpectedArg;
}
const value = cmd_args[arg_i + 1];
const spec = cli.parseCommitSpec(value, today_for_parse) catch |err| {
const spec = cli.parseCommitSpec(value, as_of) catch |err| {
var ebuf: [256]u8 = undefined;
const msg = cli.fmtCommitSpecError(&ebuf, value, err);
try cli.stderrPrint(msg);
try cli.stderrPrint("\n");
try cli.stderrPrint(io, msg);
try cli.stderrPrint(io, "\n");
return error.InvalidDate;
};
if (std.mem.eql(u8, a, "--commit-before")) {
if (spec == .working_copy) {
try cli.stderrPrint("Error: --commit-before cannot be `working` — diffing the working copy against itself is meaningless.\n");
try cli.stderrPrint(io, "Error: --commit-before cannot be `working` — diffing the working copy against itself is meaningless.\n");
return error.InvalidDate;
}
commit_before_override = spec;
@@ -173,13 +174,13 @@ pub fn run(
}
arg_i += 1;
} else if (a.len > 0 and a[0] == '-' and !std.mem.eql(u8, a, "-")) {
try cli.stderrPrint("Error: unknown flag for 'compare': ");
try cli.stderrPrint(a);
try cli.stderrPrint("\nKnown flags: --projections, --no-events, --snapshot-before, --snapshot-after, --commit-before, --commit-after.\n");
try cli.stderrPrint(io, "Error: unknown flag for 'compare': ");
try cli.stderrPrint(io, a);
try cli.stderrPrint(io, "\nKnown flags: --projections, --no-events, --snapshot-before, --snapshot-after, --commit-before, --commit-after.\n");
if (std.mem.eql(u8, a, "-p")) {
try cli.stderrPrint(" (Tip: the projections flag is spelled `--projections` in full.\n");
try cli.stderrPrint(" `-p` is reserved for the global --portfolio option and must appear\n");
try cli.stderrPrint(" before the subcommand, e.g. `zfin -p /path/to/portfolio.srf compare ...`.)\n");
try cli.stderrPrint(io, " (Tip: the projections flag is spelled `--projections` in full.\n");
try cli.stderrPrint(io, " `-p` is reserved for the global --portfolio option and must appear\n");
try cli.stderrPrint(io, " before the subcommand, e.g. `zfin -p /path/to/portfolio.srf compare ...`.)\n");
}
return error.UnexpectedArg;
} else {
@@ -193,32 +194,30 @@ pub fn run(
// override that gives us an anchor.
const have_then_anchor = args.len >= 1 or snapshot_before_override != null or commit_before_override != null;
if (!have_then_anchor) {
try cli.stderrPrint("Error: 'compare' requires a before-side anchor (positional date, --snapshot-before, or --commit-before).\n");
try cli.stderrPrint("Usage:\n");
try cli.stderrPrint(" zfin compare <DATE> (compare date vs current)\n");
try cli.stderrPrint(" zfin compare <DATE1> <DATE2> (compare two dates)\n");
try cli.stderrPrint(" zfin compare --snapshot-before <DATE> [--commit-before <SPEC>] (explicit axes)\n");
try cli.stderrPrint("Dates accept YYYY-MM-DD or relative shortcuts: 1W, 1M, 1Q, 1Y.\n");
try cli.stderrPrint("See `zfin help` for --commit-before/--commit-after/--snapshot-before/--snapshot-after details.\n");
try cli.stderrPrint(io, "Error: 'compare' requires a before-side anchor (positional date, --snapshot-before, or --commit-before).\n");
try cli.stderrPrint(io, "Usage:\n");
try cli.stderrPrint(io, " zfin compare <DATE> (compare date vs current)\n");
try cli.stderrPrint(io, " zfin compare <DATE1> <DATE2> (compare two dates)\n");
try cli.stderrPrint(io, " zfin compare --snapshot-before <DATE> [--commit-before <SPEC>] (explicit axes)\n");
try cli.stderrPrint(io, "Dates accept YYYY-MM-DD or relative shortcuts: 1W, 1M, 1Q, 1Y.\n");
try cli.stderrPrint(io, "See `zfin help` for --commit-before/--commit-after/--snapshot-before/--snapshot-after details.\n");
return error.MissingDateArg;
}
if (args.len > 2) {
try cli.stderrPrint("Error: 'compare' takes at most two positional dates.\n");
try cli.stderrPrint(io, "Error: 'compare' takes at most two positional dates.\n");
return error.UnexpectedArg;
}
const today = fmt.todayDate();
// Parse positional dates (if any). These feed the snapshot axes
// by default; explicit overrides win.
const date1: ?Date = if (args.len >= 1)
(cli.parseRequiredDateOrStderr(args[0], today, "date1") catch |err| switch (err) {
(cli.parseRequiredDateOrStderr(io, args[0], as_of, "date1") catch |err| switch (err) {
error.InvalidDate => return error.InvalidDate,
})
else
null;
const date2: ?Date = if (args.len == 2)
(cli.parseRequiredDateOrStderr(args[1], today, "date2") catch |err| switch (err) {
(cli.parseRequiredDateOrStderr(io, args[1], as_of, "date2") catch |err| switch (err) {
error.InvalidDate => return error.InvalidDate,
})
else
@@ -244,7 +243,7 @@ pub fn run(
switch (cb) {
.date_at_or_before => |d| break :fallback d,
else => {
try cli.stderrPrint("Error: --commit-before with a non-date SPEC requires an explicit --snapshot-before date for the liquid comparison.\n");
try cli.stderrPrint(io, "Error: --commit-before with a non-date SPEC requires an explicit --snapshot-before date for the liquid comparison.\n");
return error.MissingDateArg;
},
}
@@ -253,21 +252,21 @@ };
};
const now_is_live = !snapshot_after_live and snapshot_after_override == null and date2 == null;
const now_requested: Date = if (snapshot_after_override) |d| d else if (date2) |d| d else today;
const now_requested: Date = if (snapshot_after_override) |d| d else if (date2) |d| d else as_of;
// Validate snapshot date ordering.
if (now_is_live) {
if (then_requested.days == today.days) {
try cli.stderrPrint("Error: cannot compare today against today's live portfolio.\n");
if (then_requested.days == as_of.days) {
try cli.stderrPrint(io, "Error: cannot compare today against today's live portfolio.\n");
return error.SameDate;
}
if (then_requested.days > today.days) {
try cli.stderrPrint("Error: cannot compare against a future date.\n");
if (then_requested.days > as_of.days) {
try cli.stderrPrint(io, "Error: cannot compare against a future date.\n");
return error.InvalidDate;
}
} else if (!snapshot_after_live) {
if (then_requested.days == now_requested.days) {
try cli.stderrPrint("Error: before and after dates are the same — nothing to compare.\n");
try cli.stderrPrint(io, "Error: before and after dates are the same — nothing to compare.\n");
return error.SameDate;
}
}
@@ -303,16 +302,22 @@ pub fn run(
const then_date_requested = then_date;
const now_date_requested = now_date;
const then_resolved = cli.resolveSnapshotOrExplain(arena, hist_dir, then_date) catch return error.SnapshotNotFound;
const then_resolved = cli.resolveSnapshotOrExplain(io, arena, hist_dir, then_date) catch return error.SnapshotNotFound;
if (!then_resolved.exact) {
try printSnapNote(color, then_resolved.requested, then_resolved.actual, "then");
var stderr_buf: [256]u8 = undefined;
var stderr_writer = std.Io.File.stderr().writer(io, &stderr_buf);
try printSnapNote(&stderr_writer.interface, color, then_resolved.requested, then_resolved.actual, "then");
try stderr_writer.interface.flush();
}
then_date = then_resolved.actual;
if (!now_is_live) {
const now_resolved = cli.resolveSnapshotOrExplain(arena, hist_dir, now_date) catch return error.SnapshotNotFound;
const now_resolved = cli.resolveSnapshotOrExplain(io, arena, hist_dir, now_date) catch return error.SnapshotNotFound;
if (!now_resolved.exact) {
try printSnapNote(color, now_resolved.requested, now_resolved.actual, "now");
var stderr_buf: [256]u8 = undefined;
var stderr_writer = std.Io.File.stderr().writer(io, &stderr_buf);
try printSnapNote(&stderr_writer.interface, color, now_resolved.requested, now_resolved.actual, "now");
try stderr_writer.interface.flush();
}
now_date = now_resolved.actual;
}
@@ -328,7 +333,7 @@ pub fn run(
// actual snapshot files; FileNotFound here would be a disk race
// (file deleted between the snap check and the load), not a
// missing-snapshot UX problem.
var then_side = try compare_core.loadSnapshotSide(allocator, hist_dir, then_date);
var then_side = try compare_core.loadSnapshotSide(io, allocator, hist_dir, then_date);
defer then_side.deinit(allocator);
// Projections: only computed when --projections/-p flag is set.
@@ -341,22 +346,24 @@ pub fn run(
defer if (projections_result) |r| r.cleanup();
var projections_block: ?ProjectionsBlock = null;
if (with_projections) {
const now_date_for_proj: ?Date = if (now_is_live) null else now_date;
const proj_now_date: Date = if (now_is_live) as_of else now_date;
projections_result = projections.computeKeyComparison(
io,
allocator,
arena,
svc,
portfolio_path,
events_enabled,
then_date,
now_date_for_proj,
proj_now_date,
!now_is_live,
) catch |err| blk: {
// Projections computation failed: fall back to compare
// output without the block. User still gets the core
// Liquid/attribution/per-symbol view.
var ebuf: [160]u8 = undefined;
const msg = std.fmt.bufPrint(&ebuf, "(projections block failed: {s} — continuing without)\n", .{@errorName(err)}) catch "(projections block failed)\n";
cli.stderrPrint(msg) catch {};
cli.stderrPrint(io, msg) catch {};
break :blk null;
};
if (projections_result) |r| {
@@ -378,15 +385,13 @@ pub fn run(
.{ .date_at_or_before = now_date_requested };
if (now_is_live) {
var now_live = try LiveSide.load(allocator, svc, portfolio_path, color);
var now_live = try LiveSide.load(io, allocator, svc, portfolio_path, as_of, color);
defer now_live.deinit(allocator);
// Attribution uses the resolved CommitSpecs so --commit-*
// overrides + date fallbacks share one classifier. The
// spec-form computeAttribution is identical to the date form
// wrapper: same classifier, same windowing, just without
// the Date CommitSpec.date_at_or_before translation.
const attribution = contributions.computeAttributionSpec(allocator, svc, portfolio_path, attr_before, attr_after_opt, color);
// overrides + date fallbacks share one classifier. The caller
// adapts dates to `CommitSpec.date_at_or_before` upstream.
const attribution = contributions.computeAttributionSpec(io, allocator, svc, portfolio_path, attr_before, attr_after_opt, as_of, color);
try renderFromParts(out, color, allocator, .{
.then_date = then_date,
@@ -400,10 +405,10 @@ pub fn run(
.projections = projections_block,
});
} else {
var now_side = try compare_core.loadSnapshotSide(allocator, hist_dir, now_date);
var now_side = try compare_core.loadSnapshotSide(io, allocator, hist_dir, now_date);
defer now_side.deinit(allocator);
const attribution = contributions.computeAttributionSpec(allocator, svc, portfolio_path, attr_before, attr_after_opt, color);
const attribution = contributions.computeAttributionSpec(io, allocator, svc, portfolio_path, attr_before, attr_after_opt, as_of, color);
try renderFromParts(out, color, allocator, .{
.then_date = then_date,
@@ -419,12 +424,11 @@ pub fn run(
}
}
/// Snap a requested date to the nearest-earlier snapshot that exists
/// on disk. On `NoSnapshotAtOrBefore`, prints a user-facing "no
/// snapshot" error to stderr (with the nearest-later suggestion when
/// available) and propagates the underlying error so the caller can
/// map it to its own domain (e.g. `error.SnapshotNotFound`).
fn printSnapNote(color: bool, requested: Date, actual: Date, label: []const u8) !void {
/// Render a muted "(requested X for Y; nearest snapshot: Z, N day(s)
/// earlier)" note explaining that a requested as-of date was snapped
/// backward to the nearest available snapshot. Pure formatter: the
/// caller supplies the writer (typically stderr) and decides when to flush.
fn printSnapNote(out: *std.Io.Writer, color: bool, requested: Date, actual: Date, label: []const u8) !void {
var req_buf: [10]u8 = undefined;
var act_buf: [10]u8 = undefined;
const req_str = requested.format(&req_buf);
@@ -436,13 +440,9 @@ fn printSnapNote(color: bool, requested: Date, actual: Date, label: []const u8)
"(requested {s} for {s}; nearest snapshot: {s}, {d} day{s} earlier)\n",
.{ req_str, label, act_str, days, if (days == 1) "" else "s" },
) catch "(snapped to nearest snapshot)\n";
var stderr_buf: [256]u8 = undefined;
var writer = std.fs.File.stderr().writer(&stderr_buf);
const out = &writer.interface;
if (color) try fmt.ansiSetFg(out, cli.CLR_MUTED[0], cli.CLR_MUTED[1], cli.CLR_MUTED[2]);
try out.writeAll(msg);
if (color) try fmt.ansiReset(out);
try out.flush();
}
/// Inputs needed to build + render a `CompareView`. Bundled into a
@@ -533,16 +533,18 @@ const LiveSide = struct {
liquid: f64,
fn load(
io: std.Io,
allocator: std.mem.Allocator,
svc: *zfin.DataService,
portfolio_path: []const u8,
as_of: Date,
color: bool,
) !LiveSide {
var loaded_pf = cli.loadPortfolio(allocator, portfolio_path) orelse return error.PortfolioLoadFailed;
var loaded_pf = cli.loadPortfolio(io, allocator, portfolio_path, as_of) orelse return error.PortfolioLoadFailed;
errdefer loaded_pf.deinit(allocator);
if (loaded_pf.portfolio.lots.len == 0) {
try cli.stderrPrint("Portfolio is empty.\n");
try cli.stderrPrint(io, "Portfolio is empty.\n");
return error.PortfolioLoadFailed;
}
@@ -550,7 +552,7 @@ const LiveSide = struct {
errdefer prices.deinit();
if (loaded_pf.syms.len > 0) {
var load_result = cli.loadPortfolioPrices(svc, loaded_pf.syms, &.{}, false, color);
var load_result = cli.loadPortfolioPrices(io, svc, loaded_pf.syms, &.{}, false, color);
defer load_result.deinit();
var it = load_result.prices.iterator();
while (it.next()) |entry| prices.put(entry.key_ptr.*, entry.value_ptr.*) catch {};
@@ -563,9 +565,10 @@ const LiveSide = struct {
loaded_pf.syms,
&prices,
svc,
as_of,
) catch |err| switch (err) {
error.NoAllocations, error.SummaryFailed => {
try cli.stderrPrint("Error computing portfolio summary.\n");
try cli.stderrPrint(io, "Error computing portfolio summary.\n");
return error.PortfolioLoadFailed;
},
else => return err,
@@ -574,7 +577,7 @@ const LiveSide = struct {
var map: view.HoldingMap = .init(allocator);
errdefer map.deinit();
try compare_core.aggregateLiveStocks(&loaded_pf.portfolio, &prices, &map);
try compare_core.aggregateLiveStocks(as_of, &loaded_pf.portfolio, &prices, &map);
return .{
.loaded = loaded_pf,
@@ -1162,146 +1165,156 @@ fn makeTestSvc() zfin.DataService {
// Minimal in-memory config. `cache_dir` must be set; "/tmp" is fine
// since these tests never hit the cache.
const config = zfin.Config{ .cache_dir = "/tmp" };
return zfin.DataService.init(testing.allocator, config);
return zfin.DataService.init(std.testing.io, testing.allocator, config);
}
fn makeTestPortfolioPath(tmp: *std.testing.TmpDir, allocator: std.mem.Allocator) ![]u8 {
const dir_path = try tmp.dir.realpathAlloc(allocator, ".");
fn makeTestPortfolioPath(io: std.Io, tmp: *std.testing.TmpDir, allocator: std.mem.Allocator) ![]u8 {
const dir_path = try tmp.dir.realPathFileAlloc(io, ".", allocator);
defer allocator.free(dir_path);
return std.fs.path.join(allocator, &.{ dir_path, "portfolio.srf" });
}
test "run: zero args returns MissingDateArg" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
var svc = makeTestSvc();
defer svc.deinit();
const pf = try makeTestPortfolioPath(&tmp, testing.allocator);
const pf = try makeTestPortfolioPath(io, &tmp, testing.allocator);
defer testing.allocator.free(pf);
var buf: [1024]u8 = undefined;
var stream = std.Io.Writer.fixed(&buf);
const result = run(testing.allocator, &svc, pf, &.{}, false, &stream);
const result = run(io, testing.allocator, &svc, pf, &.{}, Date.fromYmd(2024, 3, 15), false, &stream);
try testing.expectError(error.MissingDateArg, result);
}
test "run: three args returns UnexpectedArg" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
var svc = makeTestSvc();
defer svc.deinit();
const pf = try makeTestPortfolioPath(&tmp, testing.allocator);
const pf = try makeTestPortfolioPath(io, &tmp, testing.allocator);
defer testing.allocator.free(pf);
var buf: [1024]u8 = undefined;
var stream = std.Io.Writer.fixed(&buf);
const args = [_][]const u8{ "2024-01-15", "2024-02-15", "2024-03-15" };
const result = run(testing.allocator, &svc, pf, &args, false, &stream);
const result = run(io, testing.allocator, &svc, pf, &args, Date.fromYmd(2024, 3, 15), false, &stream);
try testing.expectError(error.UnexpectedArg, result);
}
test "run: bad date1 returns InvalidDate" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
var svc = makeTestSvc();
defer svc.deinit();
const pf = try makeTestPortfolioPath(&tmp, testing.allocator);
const pf = try makeTestPortfolioPath(io, &tmp, testing.allocator);
defer testing.allocator.free(pf);
var buf: [1024]u8 = undefined;
var stream = std.Io.Writer.fixed(&buf);
const args = [_][]const u8{"not-a-date"};
const result = run(testing.allocator, &svc, pf, &args, false, &stream);
const result = run(io, testing.allocator, &svc, pf, &args, Date.fromYmd(2024, 3, 15), false, &stream);
try testing.expectError(error.InvalidDate, result);
}
test "run: valid date1 + bad date2 returns InvalidDate" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
var svc = makeTestSvc();
defer svc.deinit();
const pf = try makeTestPortfolioPath(&tmp, testing.allocator);
const pf = try makeTestPortfolioPath(io, &tmp, testing.allocator);
defer testing.allocator.free(pf);
var buf: [1024]u8 = undefined;
var stream = std.Io.Writer.fixed(&buf);
const args = [_][]const u8{ "2024-01-15", "2024/03/15" };
const result = run(testing.allocator, &svc, pf, &args, false, &stream);
const result = run(io, testing.allocator, &svc, pf, &args, Date.fromYmd(2024, 3, 15), false, &stream);
try testing.expectError(error.InvalidDate, result);
}
test "run: same date twice returns SameDate" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
var svc = makeTestSvc();
defer svc.deinit();
const pf = try makeTestPortfolioPath(&tmp, testing.allocator);
const pf = try makeTestPortfolioPath(io, &tmp, testing.allocator);
defer testing.allocator.free(pf);
var buf: [1024]u8 = undefined;
var stream = std.Io.Writer.fixed(&buf);
const args = [_][]const u8{ "2024-01-15", "2024-01-15" };
const result = run(testing.allocator, &svc, pf, &args, false, &stream);
const result = run(io, testing.allocator, &svc, pf, &args, Date.fromYmd(2024, 3, 15), false, &stream);
try testing.expectError(error.SameDate, result);
}
test "run: one date equal to today returns SameDate" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
var svc = makeTestSvc();
defer svc.deinit();
const pf = try makeTestPortfolioPath(&tmp, testing.allocator);
const pf = try makeTestPortfolioPath(io, &tmp, testing.allocator);
defer testing.allocator.free(pf);
var buf: [1024]u8 = undefined;
var stream = std.Io.Writer.fixed(&buf);
var today_buf: [10]u8 = undefined;
const today_str = fmt.todayDate().format(&today_buf);
const today_date = Date.fromYmd(2024, 3, 15);
const today_str = today_date.format(&today_buf);
const args = [_][]const u8{today_str};
const result = run(testing.allocator, &svc, pf, &args, false, &stream);
const result = run(io, testing.allocator, &svc, pf, &args, today_date, false, &stream);
try testing.expectError(error.SameDate, result);
}
test "run: single-date past-date with empty history returns SnapshotNotFound" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
var svc = makeTestSvc();
defer svc.deinit();
const pf = try makeTestPortfolioPath(&tmp, testing.allocator);
const pf = try makeTestPortfolioPath(io, &tmp, testing.allocator);
defer testing.allocator.free(pf);
var buf: [1024]u8 = undefined;
var stream = std.Io.Writer.fixed(&buf);
const args = [_][]const u8{"2020-01-01"};
const result = run(testing.allocator, &svc, pf, &args, false, &stream);
const result = run(io, testing.allocator, &svc, pf, &args, Date.fromYmd(2024, 3, 15), false, &stream);
try testing.expectError(error.SnapshotNotFound, result);
}
test "run: single-date future-date rejected as InvalidDate" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
var svc = makeTestSvc();
defer svc.deinit();
const pf = try makeTestPortfolioPath(&tmp, testing.allocator);
const pf = try makeTestPortfolioPath(io, &tmp, testing.allocator);
defer testing.allocator.free(pf);
var buf: [1024]u8 = undefined;
var stream = std.Io.Writer.fixed(&buf);
const args = [_][]const u8{"2099-01-01"};
const result = run(testing.allocator, &svc, pf, &args, false, &stream);
const result = run(io, testing.allocator, &svc, pf, &args, Date.fromYmd(2024, 3, 15), false, &stream);
try testing.expectError(error.InvalidDate, result);
}
test "run: relative shortcut resolves (1W -> SnapshotNotFound against empty history)" {
+const io = std.testing.io;
// Verifies that `zfin compare 1W` doesn't bail with InvalidDate
// for a non-ISO string; the relative shortcut resolves to an
// absolute date, which then tries to load a snapshot that
@@ -1310,39 +1323,41 @@ test "run: relative shortcut resolves (1W -> SnapshotNotFound against empty hist
defer tmp.cleanup();
var svc = makeTestSvc();
defer svc.deinit();
-const pf = try makeTestPortfolioPath(&tmp, testing.allocator);
+const pf = try makeTestPortfolioPath(io, &tmp, testing.allocator);
defer testing.allocator.free(pf);
var buf: [1024]u8 = undefined;
var stream = std.Io.Writer.fixed(&buf);
const args = [_][]const u8{"1W"};
-const result = run(testing.allocator, &svc, pf, &args, false, &stream);
+const result = run(io, testing.allocator, &svc, pf, &args, Date.fromYmd(2024, 3, 15), false, &stream);
try testing.expectError(error.SnapshotNotFound, result);
}
test "run: 'live' string rejected as InvalidDate (not a real prior date)" {
+const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
var svc = makeTestSvc();
defer svc.deinit();
-const pf = try makeTestPortfolioPath(&tmp, testing.allocator);
+const pf = try makeTestPortfolioPath(io, &tmp, testing.allocator);
defer testing.allocator.free(pf);
var buf: [1024]u8 = undefined;
var stream = std.Io.Writer.fixed(&buf);
const args = [_][]const u8{"live"};
-const result = run(testing.allocator, &svc, pf, &args, false, &stream);
+const result = run(io, testing.allocator, &svc, pf, &args, Date.fromYmd(2024, 3, 15), false, &stream);
try testing.expectError(error.InvalidDate, result);
}
test "run: two-date with empty history returns SnapshotNotFound (auto-swap path)" {
+const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
var svc = makeTestSvc();
defer svc.deinit();
-const pf = try makeTestPortfolioPath(&tmp, testing.allocator);
+const pf = try makeTestPortfolioPath(io, &tmp, testing.allocator);
defer testing.allocator.free(pf);
var buf: [1024]u8 = undefined;
@@ -1351,19 +1366,20 @@ test "run: two-date with empty history returns SnapshotNotFound (auto-swap path)
// Intentionally reversed; verifies the swap happens without
// error (both dates will fail to load with SnapshotNotFound).
const args = [_][]const u8{ "2024-03-15", "2024-01-15" };
-const result = run(testing.allocator, &svc, pf, &args, false, &stream);
+const result = run(io, testing.allocator, &svc, pf, &args, Date.fromYmd(2024, 3, 15), false, &stream);
try testing.expectError(error.SnapshotNotFound, result);
}
test "run: two-date happy path via fixtures" {
+const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
var svc = makeTestSvc();
defer svc.deinit();
-try tmp.dir.makePath("history");
-var hist_dir = try tmp.dir.openDir("history", .{});
-defer hist_dir.close();
+try tmp.dir.createDirPath(io, "history");
+var hist_dir = try tmp.dir.openDir(io, "history", .{});
+defer hist_dir.close(io);
const d1 = Date.fromYmd(2024, 1, 15);
const d2 = Date.fromYmd(2024, 3, 15);
@@ -1398,17 +1414,17 @@ test "run: two-date happy path via fixtures" {
.quote_date = d2,
},
};
-try writeFixtureSnapshot(hist_dir, testing.allocator, "2024-01-15-portfolio.srf", d1, 15_000, 15_000, &lots_then);
-try writeFixtureSnapshot(hist_dir, testing.allocator, "2024-03-15-portfolio.srf", d2, 16_500, 16_500, &lots_now);
+try writeFixtureSnapshot(io, hist_dir, testing.allocator, "2024-01-15-portfolio.srf", d1, 15_000, 15_000, &lots_then);
+try writeFixtureSnapshot(io, hist_dir, testing.allocator, "2024-03-15-portfolio.srf", d2, 16_500, 16_500, &lots_now);
-const pf = try makeTestPortfolioPath(&tmp, testing.allocator);
+const pf = try makeTestPortfolioPath(io, &tmp, testing.allocator);
defer testing.allocator.free(pf);
var buf: [4096]u8 = undefined;
var stream = std.Io.Writer.fixed(&buf);
const args = [_][]const u8{ "2024-01-15", "2024-03-15" };
-try run(testing.allocator, &svc, pf, &args, false, &stream);
+try run(io, testing.allocator, &svc, pf, &args, Date.fromYmd(2024, 3, 15), false, &stream);
const out = stream.buffered();
try testing.expect(std.mem.indexOf(u8, out, "AAPL") != null);
@@ -1416,7 +1432,8 @@
}
fn writeFixtureSnapshot(
-dir: std.fs.Dir,
+io: std.Io,
+dir: std.Io.Dir,
allocator: std.mem.Allocator,
filename: []const u8,
as_of: Date,
@@ -1446,5 +1463,62 @@ fn writeFixtureSnapshot(
};
const rendered = try snapshot.renderSnapshot(allocator, snap);
defer allocator.free(rendered);
-try dir.writeFile(.{ .sub_path = filename, .data = rendered });
+try dir.writeFile(io, .{ .sub_path = filename, .data = rendered });
}
test "printSnapNote: 1 day earlier uses singular 'day'" {
var buf: [512]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
try printSnapNote(&w, false, Date.fromYmd(2024, 3, 15), Date.fromYmd(2024, 3, 14), "then");
const out = w.buffered();
try testing.expect(std.mem.indexOf(u8, out, "requested 2024-03-15 for then") != null);
try testing.expect(std.mem.indexOf(u8, out, "nearest snapshot: 2024-03-14") != null);
try testing.expect(std.mem.indexOf(u8, out, "1 day earlier") != null);
// Singular: must NOT contain "1 days"
try testing.expect(std.mem.indexOf(u8, out, "1 days") == null);
// Trailing newline
try testing.expectEqual(@as(u8, '\n'), out[out.len - 1]);
}
test "printSnapNote: multi-day uses plural 'days'" {
var buf: [512]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
try printSnapNote(&w, false, Date.fromYmd(2024, 3, 15), Date.fromYmd(2024, 3, 8), "now");
const out = w.buffered();
try testing.expect(std.mem.indexOf(u8, out, "requested 2024-03-15 for now") != null);
try testing.expect(std.mem.indexOf(u8, out, "nearest snapshot: 2024-03-08") != null);
try testing.expect(std.mem.indexOf(u8, out, "7 days earlier") != null);
}
test "printSnapNote: label is interpolated verbatim" {
var buf: [512]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
try printSnapNote(&w, false, Date.fromYmd(2024, 3, 15), Date.fromYmd(2024, 3, 12), "vs");
const out = w.buffered();
try testing.expect(std.mem.indexOf(u8, out, "for vs;") != null);
}
test "printSnapNote: color=false emits no ANSI escapes" {
var buf: [512]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
try printSnapNote(&w, false, Date.fromYmd(2024, 3, 15), Date.fromYmd(2024, 3, 12), "then");
try testing.expect(std.mem.indexOf(u8, w.buffered(), "\x1b[") == null);
}
test "printSnapNote: color=true emits muted-fg ANSI escape and reset" {
var buf: [512]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
try printSnapNote(&w, true, Date.fromYmd(2024, 3, 15), Date.fromYmd(2024, 3, 12), "then");
const out = w.buffered();
try testing.expect(std.mem.indexOf(u8, out, "\x1b[38;2;") != null);
// Reset ANSI before newline
try testing.expect(std.mem.indexOf(u8, out, "\x1b[0m") != null);
}
test "printSnapNote: month-boundary day delta computes calendar days" {
// 2024-04-01 requested, 2024-03-30 actual: 2 days earlier.
var buf: [512]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
try printSnapNote(&w, false, Date.fromYmd(2024, 4, 1), Date.fromYmd(2024, 3, 30), "then");
try testing.expect(std.mem.indexOf(u8, w.buffered(), "2 days earlier") != null);
}
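The `printSnapNote` tests above all lean on the same fixed-buffer writer pattern. A minimal standalone sketch of that pattern (illustrative only, not part of this commit; assumes the Zig 0.16 `std.Io.Writer` API used throughout the diff):

```zig
const std = @import("std");

test "fixed-buffer writer sketch" {
    var buf: [64]u8 = undefined;
    // `.fixed` wraps a stack buffer; nothing here touches real I/O.
    var w: std.Io.Writer = .fixed(&buf);
    try w.print("{d} days earlier", .{7});
    // `buffered()` exposes everything written so far for assertions.
    try std.testing.expect(std.mem.indexOf(u8, w.buffered(), "7 days earlier") != null);
}
```

Because the writer never flushes anywhere, formatting code stays pure and the tests need no `io` at all.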


@@ -20,7 +20,7 @@
//!
//! Every lot-level change gets exactly one `ChangeKind` assigned at
//! diff time in `computeReport`. Downstream consumers (section printer,
-//! per-account summary, `computeAttribution` used by `compare`) all
+//! per-account summary, `computeAttributionSpec` used by `compare`) all
//! read the pre-classified kinds; there is no post-hoc reclassification
//! in any consumer. Single point of truth so the grand total in
//! `zfin contributions` and the attribution line in `zfin compare`
@@ -180,11 +180,13 @@ const Endpoints = struct {
};
pub fn run(
+io: std.Io,
allocator: std.mem.Allocator,
svc: *zfin.DataService,
portfolio_path: []const u8,
before: ?git.CommitSpec,
after: ?git.CommitSpec,
+as_of: Date,
color: bool,
out: *std.Io.Writer,
) !void {
@@ -201,18 +203,18 @@ pub fn run(
// can assume the invariant. The legacy no-flag path passes both
// as null and falls through to HEAD~1..HEAD / HEAD..WC.
if (before == null and after != null) {
-try cli.stderrPrint("Error: --until / --commit-after requires --since / --commit-before.\n");
+try cli.stderrPrint(io, "Error: --until / --commit-after requires --since / --commit-before.\n");
return;
}
-var ctx = prepareReport(allocator, arena, svc, portfolio_path, before, after, color, .verbose) catch return;
+var ctx = prepareReport(io, allocator, arena, svc, portfolio_path, before, after, as_of, color, .verbose) catch return;
defer ctx.deinit();
try printReport(out, &ctx.report, ctx.endpoints.label, color);
try out.flush();
}
-/// Shared pipeline context: everything `run` and `computeAttribution`
+/// Shared pipeline context: everything `run` and `computeAttributionSpec`
/// both need from the git-backed diff.
///
/// Owned fields split across two allocators:
@@ -247,72 +249,74 @@ const PrepareError = error{PrepareFailed};
/// path (user sees why things failed); `.silent` is the attribution
/// path (failure just means "no attribution line", don't nag).
fn prepareReport(
+io: std.Io,
allocator: std.mem.Allocator,
arena: std.mem.Allocator,
svc: *zfin.DataService,
portfolio_path: []const u8,
before_spec: ?git.CommitSpec,
after_spec: ?git.CommitSpec,
+as_of: Date,
color: bool,
verbosity: Verbosity,
) PrepareError!ReportContext {
-const repo = git.findRepo(arena, portfolio_path) catch |err| {
+const repo = git.findRepo(io, arena, portfolio_path) catch |err| {
if (verbosity == .verbose) {
switch (err) {
-error.NotInGitRepo => cli.stderrPrint("Error: contributions requires portfolio.srf to be in a git repo.\n") catch {},
-error.GitUnavailable => cli.stderrPrint("Error: could not run 'git'. Is git installed and on PATH?\n") catch {},
-else => cli.stderrPrint("Error locating git repo.\n") catch {},
+error.NotInGitRepo => cli.stderrPrint(io, "Error: contributions requires portfolio.srf to be in a git repo.\n") catch {},
+error.GitUnavailable => cli.stderrPrint(io, "Error: could not run 'git'. Is git installed and on PATH?\n") catch {},
+else => cli.stderrPrint(io, "Error locating git repo.\n") catch {},
}
}
return error.PrepareFailed;
};
-const status = git.pathStatus(arena, repo.root, repo.rel_path) catch {
-if (verbosity == .verbose) cli.stderrPrint("Error: could not determine git status of portfolio.srf.\n") catch {};
+const status = git.pathStatus(io, arena, repo.root, repo.rel_path) catch {
+if (verbosity == .verbose) cli.stderrPrint(io, "Error: could not determine git status of portfolio.srf.\n") catch {};
return error.PrepareFailed;
};
if (status == .untracked) {
-if (verbosity == .verbose) cli.stderrPrint("Error: portfolio.srf is not tracked in git. Add and commit it first.\n") catch {};
+if (verbosity == .verbose) cli.stderrPrint(io, "Error: portfolio.srf is not tracked in git. Add and commit it first.\n") catch {};
return error.PrepareFailed;
}
const dirty = status == .modified;
-const endpoints = resolveEndpoints(arena, repo, before_spec, after_spec, dirty, verbosity) catch return error.PrepareFailed;
+const endpoints = resolveEndpoints(io, arena, repo, before_spec, after_spec, dirty, verbosity) catch return error.PrepareFailed;
// Pull both sides: before is always from git; after is either
// from git (at some revision) or from the working copy.
-const before = git.show(arena, repo.root, endpoints.range.before_rev, repo.rel_path) catch |err| {
+const before = git.show(io, arena, repo.root, endpoints.range.before_rev, repo.rel_path) catch |err| {
if (verbosity == .verbose) {
var buf: [256]u8 = undefined;
const msg = std.fmt.bufPrint(&buf, "Error reading {s}:portfolio.srf from git: {s}\n", .{ endpoints.range.before_rev, @errorName(err) }) catch "Error reading before-side portfolio.\n";
-cli.stderrPrint(msg) catch {};
+cli.stderrPrint(io, msg) catch {};
}
return error.PrepareFailed;
};
const after = if (endpoints.range.after_rev) |rev|
-git.show(arena, repo.root, rev, repo.rel_path) catch |err| {
+git.show(io, arena, repo.root, rev, repo.rel_path) catch |err| {
if (verbosity == .verbose) {
var buf: [256]u8 = undefined;
const msg = std.fmt.bufPrint(&buf, "Error reading {s}:portfolio.srf from git: {s}\n", .{ rev, @errorName(err) }) catch "Error reading after-side portfolio.\n";
-cli.stderrPrint(msg) catch {};
+cli.stderrPrint(io, msg) catch {};
}
return error.PrepareFailed;
}
else
-std.fs.cwd().readFileAlloc(arena, portfolio_path, 10 * 1024 * 1024) catch {
-if (verbosity == .verbose) cli.stderrPrint("Error reading working-copy portfolio file.\n") catch {};
+std.Io.Dir.cwd().readFileAlloc(io, portfolio_path, arena, .limited(10 * 1024 * 1024)) catch {
+if (verbosity == .verbose) cli.stderrPrint(io, "Error reading working-copy portfolio file.\n") catch {};
return error.PrepareFailed;
};
var before_pf = zfin.cache.deserializePortfolio(allocator, before) catch {
-if (verbosity == .verbose) cli.stderrPrint("Error parsing before-snapshot portfolio.\n") catch {};
+if (verbosity == .verbose) cli.stderrPrint(io, "Error parsing before-snapshot portfolio.\n") catch {};
return error.PrepareFailed;
};
errdefer before_pf.deinit();
var after_pf = zfin.cache.deserializePortfolio(allocator, after) catch {
-if (verbosity == .verbose) cli.stderrPrint("Error parsing after-snapshot portfolio.\n") catch {};
+if (verbosity == .verbose) cli.stderrPrint(io, "Error parsing after-snapshot portfolio.\n") catch {};
return error.PrepareFailed;
};
errdefer after_pf.deinit();
@@ -335,7 +339,7 @@ fn prepareReport(
while (sit.next()) |k| syms.append(arena, k.*) catch return error.PrepareFailed;
if (syms.items.len > 0) {
-var load_result = cli.loadPortfolioPrices(svc, syms.items, &.{}, false, color);
+var load_result = cli.loadPortfolioPrices(io, svc, syms.items, &.{}, false, color);
defer load_result.deinit();
var pit = load_result.prices.iterator();
while (pit.next()) |entry| {
@@ -360,15 +364,15 @@ fn prepareReport(
defer if (transfer_log_opt) |*tl| tl.deinit();
const window_start: ?Date = blk: {
-const ts = git.commitTimestamp(arena, repo.root, endpoints.range.before_rev) catch break :blk null;
+const ts = git.commitTimestamp(io, arena, repo.root, endpoints.range.before_rev) catch break :blk null;
break :blk Date.fromEpoch(ts);
};
const window_end: ?Date = blk: {
if (endpoints.range.after_rev) |rev| {
-const ts = git.commitTimestamp(arena, repo.root, rev) catch break :blk fmt.todayDate();
+const ts = git.commitTimestamp(io, arena, repo.root, rev) catch break :blk as_of;
break :blk Date.fromEpoch(ts);
} else {
-break :blk fmt.todayDate();
+break :blk as_of;
}
};
@@ -377,7 +381,7 @@ fn prepareReport(
before_pf.lots,
after_pf.lots,
&prices,
-fmt.todayDate(),
+as_of,
.{
.account_map = if (account_map_opt) |*am| am else null,
.transfer_log = if (transfer_log_opt) |*tl| tl else null,
@@ -385,7 +389,7 @@ fn prepareReport(
.window_end = window_end,
},
) catch {
-if (verbosity == .verbose) cli.stderrPrint("Error computing contributions diff.\n") catch {};
+if (verbosity == .verbose) cli.stderrPrint(io, "Error computing contributions diff.\n") catch {};
return error.PrepareFailed;
};
@@ -400,7 +404,7 @@ fn prepareReport(
/// Whether `resolveEndpoints` / `prepareReport` should print
/// explanatory stderr messages when the window can't be resolved. The
/// main `run` command uses `.verbose` so the user sees why the command
-/// failed; the internal `computeAttribution` helper uses `.silent`
+/// failed; the internal `computeAttributionSpec` helper uses `.silent`
/// because a missing git window is an expected null-return case, not
/// a hard error.
const Verbosity = enum { verbose, silent };
@@ -416,6 +420,7 @@ const Verbosity = enum { verbose, silent };
/// - A friendly "resolved to the same commit" warning when
/// `--since` and `--until` collapse.
fn resolveEndpoints(
+io: std.Io,
arena: std.mem.Allocator,
repo: git.RepoInfo,
before: ?git.CommitSpec,
@@ -423,7 +428,7 @@ fn resolveEndpoints(
dirty: bool,
verbosity: Verbosity,
) !Endpoints {
-const range = git.resolveCommitRangeSpec(arena, repo, before, after, dirty) catch |err| {
+const range = git.resolveCommitRangeSpec(io, arena, repo, before, after, dirty) catch |err| {
if (verbosity == .verbose) {
switch (err) {
error.NoCommitAtOrBefore => {
@@ -434,15 +439,15 @@ fn resolveEndpoints(
const before_str = specDisplayString(before, &before_buf);
var msg_buf: [256]u8 = undefined;
const msg = std.fmt.bufPrint(&msg_buf, "Error: no commit of {s} at or before {s}.\n", .{ repo.rel_path, before_str }) catch "Error: no commit at or before requested date.\n";
-try cli.stderrPrint(msg);
+try cli.stderrPrint(io, msg);
},
error.InvalidArg => {
-try cli.stderrPrint("Error: --commit-before cannot be `working` — diffing the working copy against itself is meaningless.\n");
+try cli.stderrPrint(io, "Error: --commit-before cannot be `working` — diffing the working copy against itself is meaningless.\n");
},
else => {
-try cli.stderrPrint("Error resolving commit range: ");
-try cli.stderrPrint(@errorName(err));
-try cli.stderrPrint("\n");
+try cli.stderrPrint(io, "Error resolving commit range: ");
+try cli.stderrPrint(io, @errorName(err));
+try cli.stderrPrint(io, "\n");
},
}
}
@@ -460,7 +465,7 @@ fn resolveEndpoints(
if (before != null and after != null and verbosity == .verbose) {
if (range.after_rev) |after_rev| {
if (std.mem.eql(u8, range.before_rev, after_rev)) {
-try cli.stderrPrint("Warning: before and after resolve to the same commit; no changes to report.\n");
+try cli.stderrPrint(io, "Warning: before and after resolve to the same commit; no changes to report.\n");
}
}
}
@@ -472,7 +477,7 @@ fn resolveEndpoints(
// `docs/notes/commit-window-edge-case.md` (aka TODO.md) for the
// motivating scenario.
if (verbosity == .verbose) {
-try maybeSnapNote(arena, repo, before, range.before_rev, "before");
+try maybeSnapNote(io, arena, repo, before, range.before_rev, "before");
}
return .{ .range = range, .label = label };
@@ -495,6 +500,7 @@ fn specDisplayString(spec: ?git.CommitSpec, date_buf: *[10]u8) []const u8 {
/// muted hint. Catches the "I committed after my review date" case
/// where `--since 1W` picks up a commit 7+ days before the cutoff.
fn maybeSnapNote(
+io: std.Io,
arena: std.mem.Allocator,
repo: git.RepoInfo,
spec: ?git.CommitSpec,
@@ -509,7 +515,7 @@ fn maybeSnapNote(
// Get the committer-date of the resolved commit. `%ct` gives a
// Unix timestamp.
-const ts = git.commitTimestamp(arena, repo.root, resolved_ref) catch return;
+const ts = git.commitTimestamp(io, arena, repo.root, resolved_ref) catch return;
const commit_date = zfin.Date.fromEpoch(ts);
if (!commit_date.lessThan(requested_date)) return;
@@ -533,7 +539,7 @@ fn maybeSnapNote(
label,
},
) catch return;
-cli.stderrPrint(msg) catch {};
+cli.stderrPrint(io, msg) catch {};
}
/// Abbreviate a commit ref for display. SHAs get shortened to 7
@ -674,7 +680,7 @@ pub const AttributionSummary = struct {
}
};
-/// Run the contributions pipeline over a date window and return the
+/// Run the contributions pipeline over a commit window and return the
/// aggregated "money in" totals. Returns null on any failure;
/// intended callers (e.g. `compare`) surface the attribution line
/// opportunistically; a missing git repo or no resolvable commits
@@ -691,45 +697,14 @@ pub const AttributionSummary = struct {
/// `cash_is_contribution::true`), so the attribution line here and
/// the grand total in the full `zfin contributions` report come out
/// of the same classifier and always agree over the same window.
-pub fn computeAttribution(
-allocator: std.mem.Allocator,
-svc: *zfin.DataService,
-portfolio_path: []const u8,
-since: ?Date,
-until: ?Date,
-color: bool,
-) ?AttributionSummary {
-// `--until` without `--since` is ambiguous; caller is expected to
-// enforce this at the entry point, but guard here too; the
-// prepareReport path sends it through `git.resolveCommitRangeSpec`
-// which asserts the invariant.
-if (since == null and until != null) return null;
-var arena_state = std.heap.ArenaAllocator.init(allocator);
-defer arena_state.deinit();
-const arena = arena_state.allocator();
-// Wrap legacy date-based inputs as CommitSpec.date_at_or_before
-// for the shared prepareReport path.
-const before: ?git.CommitSpec = if (since) |d| .{ .date_at_or_before = d } else null;
-const after: ?git.CommitSpec = if (until) |d| .{ .date_at_or_before = d } else null;
-var ctx = prepareReport(allocator, arena, svc, portfolio_path, before, after, color, .silent) catch return null;
-defer ctx.deinit();
-return summarizeAttribution(ctx);
-}
/// Spec-based variant of `computeAttribution` for callers that
/// already have `CommitSpec`s (e.g. `compare --commit-before HEAD`).
/// Uses exactly the same classifier as the date form; only the
/// argument shape differs.
pub fn computeAttributionSpec(
+io: std.Io,
allocator: std.mem.Allocator,
svc: *zfin.DataService,
portfolio_path: []const u8,
before: ?git.CommitSpec,
after: ?git.CommitSpec,
+as_of: Date,
color: bool,
) ?AttributionSummary {
if (before == null and after != null) return null;
@@ -738,7 +713,7 @@ pub fn computeAttributionSpec(
defer arena_state.deinit();
const arena = arena_state.allocator();
-var ctx = prepareReport(allocator, arena, svc, portfolio_path, before, after, color, .silent) catch return null;
+var ctx = prepareReport(io, allocator, arena, svc, portfolio_path, before, after, as_of, color, .silent) catch return null;
defer ctx.deinit();
return summarizeAttribution(ctx);
@@ -792,10 +767,12 @@ pub const UnmatchedLargeLotSet = struct {
/// `transaction_log.srf` exists; when absent, every large lot
/// surfaces since nothing gets reclassified.
pub fn findUnmatchedLargeLots(
+io: std.Io,
allocator: std.mem.Allocator,
svc: *zfin.DataService,
portfolio_path: []const u8,
threshold: f64,
+as_of: Date,
color: bool,
) ?UnmatchedLargeLotSet {
var arena_state = std.heap.ArenaAllocator.init(allocator);
@@ -810,7 +787,7 @@ pub fn findUnmatchedLargeLots(
//
// Separate allocator here so we can tear the whole thing down
// via `arena_state.deinit` once we've copied out the descriptors.
-var ctx = prepareReport(allocator, arena, svc, portfolio_path, null, null, color, .silent) catch {
+var ctx = prepareReport(io, allocator, arena, svc, portfolio_path, null, null, as_of, color, .silent) catch {
arena_state.deinit();
return null;
};
@@ -1358,7 +1335,7 @@ fn computeReport(
before: []const Lot,
after: []const Lot,
prices: *const std.StringHashMap(f64),
-today: Date,
+as_of: Date,
opts: ReportOptions,
) !Report {
var changes: std.ArrayList(Change) = .empty;
@@ -1543,8 +1520,8 @@ fn computeReport(
var kind: ChangeKind = .lot_removed;
if (lot.security_type == .cd) {
if (lot.maturity_date) |mat| {
-// "matured" if maturity_date <= today (i.e. NOT today.lessThan(mat))
-if (!today.lessThan(mat)) {
+// "matured" if maturity_date <= as_of (i.e. NOT as_of.lessThan(mat))
+if (!as_of.lessThan(mat)) {
kind = .cd_matured;
} else {
kind = .cd_removed_early;
@@ -3495,7 +3472,7 @@ test "resolveEndpoints: legacy dirty → HEAD vs working copy" {
defer arena_state.deinit();
const repo: git.RepoInfo = .{ .root = "/tmp", .rel_path = "portfolio.srf" };
-const eps = try resolveEndpoints(arena_state.allocator(), repo, null, null, true, .verbose);
+const eps = try resolveEndpoints(std.testing.io, arena_state.allocator(), repo, null, null, true, .verbose);
try std.testing.expectEqualStrings("HEAD", eps.range.before_rev);
try std.testing.expect(eps.range.after_rev == null);
try std.testing.expect(std.mem.indexOf(u8, eps.label, "working copy against HEAD") != null);
@@ -3506,7 +3483,7 @@ test "resolveEndpoints: legacy clean → HEAD~1 vs HEAD" {
defer arena_state.deinit();
const repo: git.RepoInfo = .{ .root = "/tmp", .rel_path = "portfolio.srf" };
-const eps = try resolveEndpoints(arena_state.allocator(), repo, null, null, false, .verbose);
+const eps = try resolveEndpoints(std.testing.io, arena_state.allocator(), repo, null, null, false, .verbose);
try std.testing.expectEqualStrings("HEAD~1", eps.range.before_rev);
try std.testing.expectEqualStrings("HEAD", eps.range.after_rev.?);
try std.testing.expect(std.mem.indexOf(u8, eps.label, "HEAD~1 against HEAD") != null);
@@ -4178,3 +4155,333 @@ test "collectUnmatchedLargeLots: partial transfer still flags residual? No — f
const lots = try collectUnmatchedLargeLots(allocator, report.changes, 10_000.0);
try std.testing.expectEqual(@as(usize, 0), lots.len);
}
test "shortSha: HEAD passes through unchanged" {
try std.testing.expectEqualStrings("HEAD", shortSha("HEAD"));
try std.testing.expectEqualStrings("HEAD~", shortSha("HEAD~"));
try std.testing.expectEqualStrings("HEAD~3", shortSha("HEAD~3"));
}
test "shortSha: long SHA truncates to 7 chars" {
try std.testing.expectEqualStrings("abcdef0", shortSha("abcdef0123456789"));
try std.testing.expectEqualStrings("a1b2c3d", shortSha("a1b2c3d4e5f6789012345"));
}
test "shortSha: short input returned as-is" {
try std.testing.expectEqualStrings("abc", shortSha("abc"));
try std.testing.expectEqualStrings("abcdefg", shortSha("abcdefg")); // exactly 7
try std.testing.expectEqualStrings("", shortSha(""));
}
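The behavior these three tests pin down can be sketched as a hypothetical reimplementation (illustrative only; `shortShaSketch` is not the commit's actual `shortSha`, whose body sits above the hunk fold):

```zig
const std = @import("std");

// Hypothetical sketch consistent with the cases above: HEAD-relative
// refs pass through untouched; anything longer than 7 chars is cut to
// the first 7; shorter input is returned as-is.
fn shortShaSketch(ref: []const u8) []const u8 {
    if (std.mem.startsWith(u8, ref, "HEAD")) return ref;
    return if (ref.len > 7) ref[0..7] else ref;
}

test "shortShaSketch agrees with the cases above" {
    try std.testing.expectEqualStrings("HEAD~3", shortShaSketch("HEAD~3"));
    try std.testing.expectEqualStrings("abcdef0", shortShaSketch("abcdef0123456789"));
    try std.testing.expectEqualStrings("abcdefg", shortShaSketch("abcdefg"));
}
```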
test "specDisplayString: null yields '(unset)'" {
var buf: [10]u8 = undefined;
try std.testing.expectEqualStrings("(unset)", specDisplayString(null, &buf));
}
test "specDisplayString: working_copy yields 'working'" {
var buf: [10]u8 = undefined;
try std.testing.expectEqualStrings("working", specDisplayString(.{ .working_copy = {} }, &buf));
}
test "specDisplayString: git_ref returns ref verbatim" {
var buf: [10]u8 = undefined;
try std.testing.expectEqualStrings("HEAD", specDisplayString(.{ .git_ref = "HEAD" }, &buf));
try std.testing.expectEqualStrings("main", specDisplayString(.{ .git_ref = "main" }, &buf));
}
test "specDisplayString: date_at_or_before formats date YYYY-MM-DD" {
var buf: [10]u8 = undefined;
const d = @import("../models/date.zig").Date.fromYmd(2024, 3, 15);
try std.testing.expectEqualStrings("2024-03-15", specDisplayString(.{ .date_at_or_before = d }, &buf));
}
test "specLabel: null spec returns resolved ref dup'd" {
var arena_state = std.heap.ArenaAllocator.init(std.testing.allocator);
defer arena_state.deinit();
const arena = arena_state.allocator();
const result = try specLabel(arena, null, "abc1234");
try std.testing.expectEqualStrings("abc1234", result);
}
test "specLabel: git_ref returns ref dup'd" {
var arena_state = std.heap.ArenaAllocator.init(std.testing.allocator);
defer arena_state.deinit();
const arena = arena_state.allocator();
const result = try specLabel(arena, .{ .git_ref = "main" }, "ignored");
try std.testing.expectEqualStrings("main", result);
}
test "specLabel: date renders 'commit at-or-before YYYY-MM-DD'" {
var arena_state = std.heap.ArenaAllocator.init(std.testing.allocator);
defer arena_state.deinit();
const arena = arena_state.allocator();
const d = @import("../models/date.zig").Date.fromYmd(2024, 3, 15);
const result = try specLabel(arena, .{ .date_at_or_before = d }, "ignored");
try std.testing.expectEqualStrings("commit at-or-before 2024-03-15", result);
}
test "specLabel: working_copy literal" {
var arena_state = std.heap.ArenaAllocator.init(std.testing.allocator);
defer arena_state.deinit();
const arena = arena_state.allocator();
const result = try specLabel(arena, .{ .working_copy = {} }, "ignored");
try std.testing.expectEqualStrings("working copy", result);
}
test "specLabelAfter: null spec + non-null resolved_ref returns resolved_ref" {
var arena_state = std.heap.ArenaAllocator.init(std.testing.allocator);
defer arena_state.deinit();
const arena = arena_state.allocator();
const result = try specLabelAfter(arena, null, "HEAD");
try std.testing.expectEqualStrings("HEAD", result);
}
test "specLabelAfter: null spec + null resolved_ref returns 'working copy'" {
var arena_state = std.heap.ArenaAllocator.init(std.testing.allocator);
defer arena_state.deinit();
const arena = arena_state.allocator();
const result = try specLabelAfter(arena, null, null);
try std.testing.expectEqualStrings("working copy", result);
}
test "specLabelAfter: spec set + null resolved_ref defaults to 'working'" {
var arena_state = std.heap.ArenaAllocator.init(std.testing.allocator);
defer arena_state.deinit();
const arena = arena_state.allocator();
const result = try specLabelAfter(arena, .{ .working_copy = {} }, null);
try std.testing.expectEqualStrings("working copy", result);
}
test "buildLabel: no date window, dirty -> 'Comparing working copy against HEAD'" {
var arena_state = std.heap.ArenaAllocator.init(std.testing.allocator);
defer arena_state.deinit();
const arena = arena_state.allocator();
const range = git.CommitRange{ .before_rev = "abc1234567890", .after_rev = null };
const result = try buildLabel(arena, range, null, null, true);
try std.testing.expectEqualStrings("Comparing working copy against HEAD", result);
}
test "buildLabel: no date window, clean -> HEAD~1 against HEAD" {
var arena_state = std.heap.ArenaAllocator.init(std.testing.allocator);
defer arena_state.deinit();
const arena = arena_state.allocator();
const range = git.CommitRange{ .before_rev = "abc1234567890", .after_rev = null };
const result = try buildLabel(arena, range, null, null, false);
try std.testing.expectEqualStrings("Working tree clean — comparing HEAD~1 against HEAD", result);
}
test "buildLabel: --since only, dirty -> against working copy" {
var arena_state = std.heap.ArenaAllocator.init(std.testing.allocator);
defer arena_state.deinit();
const arena = arena_state.allocator();
const range = git.CommitRange{ .before_rev = "abc1234567890", .after_rev = null };
const since = @import("../models/date.zig").Date.fromYmd(2024, 3, 15);
const result = try buildLabel(arena, range, since, null, true);
try std.testing.expect(std.mem.indexOf(u8, result, "abc1234") != null);
try std.testing.expect(std.mem.indexOf(u8, result, "2024-03-15") != null);
try std.testing.expect(std.mem.indexOf(u8, result, "working copy") != null);
}
test "buildLabel: --since only, clean -> against HEAD" {
var arena_state = std.heap.ArenaAllocator.init(std.testing.allocator);
defer arena_state.deinit();
const arena = arena_state.allocator();
const range = git.CommitRange{ .before_rev = "abc1234567890", .after_rev = null };
const since = @import("../models/date.zig").Date.fromYmd(2024, 3, 15);
const result = try buildLabel(arena, range, since, null, false);
try std.testing.expect(std.mem.indexOf(u8, result, "against HEAD") != null);
try std.testing.expect(std.mem.indexOf(u8, result, "2024-03-15") != null);
}
test "buildLabel: --since + --until renders both dates and short SHAs" {
var arena_state = std.heap.ArenaAllocator.init(std.testing.allocator);
defer arena_state.deinit();
const arena = arena_state.allocator();
const range = git.CommitRange{ .before_rev = "abc1234567890", .after_rev = "def4567890123" };
const since = @import("../models/date.zig").Date.fromYmd(2024, 1, 15);
const until = @import("../models/date.zig").Date.fromYmd(2024, 3, 15);
const result = try buildLabel(arena, range, since, until, false);
try std.testing.expect(std.mem.indexOf(u8, result, "2024-01-15") != null);
try std.testing.expect(std.mem.indexOf(u8, result, "2024-03-15") != null);
try std.testing.expect(std.mem.indexOf(u8, result, "abc1234") != null);
try std.testing.expect(std.mem.indexOf(u8, result, "def4567") != null);
}
test "buildLabelFromSpecs: both date specs -> falls through to buildLabel" {
var arena_state = std.heap.ArenaAllocator.init(std.testing.allocator);
defer arena_state.deinit();
const arena = arena_state.allocator();
const range = git.CommitRange{ .before_rev = "aaaaaaa1234567", .after_rev = "bbbbbbb1234567" };
const before_d = @import("../models/date.zig").Date.fromYmd(2024, 1, 15);
const after_d = @import("../models/date.zig").Date.fromYmd(2024, 3, 15);
const result = try buildLabelFromSpecs(
arena,
range,
.{ .date_at_or_before = before_d },
.{ .date_at_or_before = after_d },
false,
);
// Date-form path: uses buildLabel formatting
try std.testing.expect(std.mem.indexOf(u8, result, "2024-01-15") != null);
try std.testing.expect(std.mem.indexOf(u8, result, "2024-03-15") != null);
}
test "buildLabelFromSpecs: non-date spec -> '<before> vs <after>' format" {
var arena_state = std.heap.ArenaAllocator.init(std.testing.allocator);
defer arena_state.deinit();
const arena = arena_state.allocator();
const range = git.CommitRange{ .before_rev = "main", .after_rev = "feature" };
const result = try buildLabelFromSpecs(
arena,
range,
.{ .git_ref = "main" },
.{ .git_ref = "feature" },
false,
);
try std.testing.expectEqualStrings("main vs feature", result);
}
test "buildLabelFromSpecs: working_copy after -> 'working copy' literal" {
var arena_state = std.heap.ArenaAllocator.init(std.testing.allocator);
defer arena_state.deinit();
const arena = arena_state.allocator();
const range = git.CommitRange{ .before_rev = "abc1234567890", .after_rev = null };
const result = try buildLabelFromSpecs(
arena,
range,
.{ .git_ref = "HEAD~1" },
.{ .working_copy = {} },
true,
);
try std.testing.expectEqualStrings("HEAD~1 vs working copy", result);
}
test "printChangeLine: stock change shows shares × price = value" {
var buf: [256]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
const c = Change{
.kind = .new_stock,
.symbol = "AAPL",
.account = "Roth",
.security_type = .stock,
.delta_shares = 10,
.unit_value = 150.0,
};
try printChangeLine(&w, c, false, cli.CLR_POSITIVE);
const out = w.buffered();
try std.testing.expect(std.mem.indexOf(u8, out, "AAPL") != null);
try std.testing.expect(std.mem.indexOf(u8, out, "Roth") != null);
try std.testing.expect(std.mem.indexOf(u8, out, "shares") != null);
try std.testing.expect(std.mem.indexOf(u8, out, "$150.00") != null);
try std.testing.expect(std.mem.indexOf(u8, out, "$1,500.00") != null);
}
test "printChangeLine: cash change shows value only (no shares × price)" {
var buf: [256]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
const c = Change{
.kind = .new_cash,
.symbol = "CASH",
.account = "Brokerage",
.security_type = .cash,
.delta_shares = 1000,
.unit_value = 1.0,
};
try printChangeLine(&w, c, false, cli.CLR_POSITIVE);
const out = w.buffered();
try std.testing.expect(std.mem.indexOf(u8, out, "CASH") != null);
try std.testing.expect(std.mem.indexOf(u8, out, "Brokerage") != null);
// cash shouldn't show "shares ×"
try std.testing.expect(std.mem.indexOf(u8, out, "shares ×") == null);
try std.testing.expect(std.mem.indexOf(u8, out, "$1,000.00") != null);
}
test "printChangeLine: empty account shown as '(no account)'" {
var buf: [256]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
const c = Change{
.kind = .new_stock,
.symbol = "VTI",
.account = "",
.security_type = .stock,
.delta_shares = 5,
.unit_value = 200.0,
};
try printChangeLine(&w, c, false, cli.CLR_POSITIVE);
try std.testing.expect(std.mem.indexOf(u8, w.buffered(), "(no account)") != null);
}
test "printSummaryCell: zero value renders muted dash" {
var buf: [128]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
try printSummaryCell(&w, "Drip", 0, false);
const out = w.buffered();
try std.testing.expect(std.mem.indexOf(u8, out, "Drip") != null);
try std.testing.expect(std.mem.indexOf(u8, out, "-") != null);
}
test "printSummaryCell: nonzero value renders dollar amount" {
var buf: [128]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
try printSummaryCell(&w, "Drip", 250.50, false);
const out = w.buffered();
try std.testing.expect(std.mem.indexOf(u8, out, "Drip") != null);
try std.testing.expect(std.mem.indexOf(u8, out, "$250.50") != null);
}
test "printSection: emits title with header style" {
var buf: [256]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
try printSection(&w, "Contributions", false, cli.CLR_POSITIVE);
try std.testing.expect(std.mem.indexOf(u8, w.buffered(), "Contributions") != null);
}
test "printNone: emits muted '(none)' line" {
var buf: [128]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
try printNone(&w, false, cli.CLR_MUTED);
const out = w.buffered();
try std.testing.expect(std.mem.indexOf(u8, out, "none") != null or std.mem.indexOf(u8, out, "None") != null);
}
test "printTotalLine: emits label and dollar amount" {
var buf: [256]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
try printTotalLine(&w, "Total:", 12_345.67, false, cli.CLR_POSITIVE);
const out = w.buffered();
try std.testing.expect(std.mem.indexOf(u8, out, "Total") != null);
try std.testing.expect(std.mem.indexOf(u8, out, "$12,345.67") != null);
}
test "printPriceOnlyLine: shows old → new price" {
var buf: [256]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
const c = Change{
.kind = .price_only,
.symbol = "VTI",
.account = "Roth",
.security_type = .stock,
.old_price = 100.0,
.new_price = 110.0,
};
try printPriceOnlyLine(&w, c, false, cli.CLR_MUTED);
const out = w.buffered();
try std.testing.expect(std.mem.indexOf(u8, out, "VTI") != null);
try std.testing.expect(std.mem.indexOf(u8, out, "$100.00") != null);
try std.testing.expect(std.mem.indexOf(u8, out, "$110.00") != null);
}
test "printChangeLine: no ANSI when color=false" {
var buf: [256]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
const c = Change{
.kind = .new_stock,
.symbol = "AAPL",
.account = "Roth",
.security_type = .stock,
.delta_shares = 10,
.unit_value = 150.0,
};
try printChangeLine(&w, c, false, cli.CLR_POSITIVE);
try std.testing.expect(std.mem.indexOf(u8, w.buffered(), "\x1b[") == null);
}
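
All of the `print*` tests above lean on the same 0.16 idiom: `std.Io.Writer.fixed` wraps a stack buffer and `buffered()` exposes what was written, with no filesystem or allocator involved. A minimal standalone sketch of the pattern (the `greet` helper is hypothetical, not from this codebase):

```zig
const std = @import("std");

/// Hypothetical helper: any function that takes `*std.Io.Writer`
/// can be exercised against a fixed in-memory buffer.
fn greet(w: *std.Io.Writer, name: []const u8) !void {
    try w.print("hello, {s}\n", .{name});
}

test "greet writes into a fixed buffer" {
    var buf: [64]u8 = undefined;
    var w: std.Io.Writer = .fixed(&buf);
    try greet(&w, "world");
    try std.testing.expectEqualStrings("hello, world\n", w.buffered());
}
```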


@@ -3,20 +3,20 @@ const zfin = @import("../root.zig");
const cli = @import("common.zig");
const fmt = cli.fmt;
pub fn run(svc: *zfin.DataService, symbol: []const u8, color: bool, out: *std.Io.Writer) !void {
pub fn run(io: std.Io, svc: *zfin.DataService, symbol: []const u8, as_of: zfin.Date, color: bool, out: *std.Io.Writer) !void {
const result = svc.getDividends(symbol) catch |err| switch (err) {
zfin.DataError.NoApiKey => {
try cli.stderrPrint("Error: POLYGON_API_KEY not set. Get a free key at https://polygon.io\n");
try cli.stderrPrint(io, "Error: POLYGON_API_KEY not set. Get a free key at https://polygon.io\n");
return;
},
else => {
try cli.stderrPrint("Error fetching dividend data.\n");
try cli.stderrPrint(io, "Error fetching dividend data.\n");
return;
},
};
defer result.deinit();
if (result.source == .cached) try cli.stderrPrint("(using cached dividend data)\n");
if (result.source == .cached) try cli.stderrPrint(io, "(using cached dividend data)\n");
// Fetch current price for yield calculation via DataService
var current_price: ?f64 = null;
@@ -24,10 +24,10 @@ pub fn run(svc: *zfin.DataService, symbol: []const u8, color: bool, out: *std.Io
current_price = q.close;
} else |_| {}
try display(result.data, symbol, current_price, color, out);
try display(result.data, symbol, current_price, as_of, color, out);
}
pub fn display(dividends: []const zfin.Dividend, symbol: []const u8, current_price: ?f64, color: bool, out: *std.Io.Writer) !void {
pub fn display(dividends: []const zfin.Dividend, symbol: []const u8, current_price: ?f64, as_of: zfin.Date, color: bool, out: *std.Io.Writer) !void {
try cli.printBold(out, color, "\nDividend History for {s}\n", .{symbol});
try out.print("========================================\n", .{});
@@ -45,8 +45,7 @@ pub fn display(dividends: []const zfin.Dividend, symbol: []const u8, current_pri
});
try cli.reset(out, color);
const today = fmt.todayDate();
const one_year_ago = today.subtractYears(1);
const one_year_ago = as_of.subtractYears(1);
var total: f64 = 0;
var ttm: f64 = 0;
@@ -91,7 +90,7 @@ test "display shows dividend data with yield" {
.{ .ex_date = .{ .days = 20000 }, .amount = 0.88, .type = .regular },
.{ .ex_date = .{ .days = 19900 }, .amount = 0.88, .type = .regular },
};
try display(&divs, "VTI", 250.0, false, &w);
try display(&divs, "VTI", 250.0, zfin.Date.fromYmd(2024, 10, 1), false, &w);
const out = w.buffered();
try std.testing.expect(std.mem.indexOf(u8, out, "VTI") != null);
try std.testing.expect(std.mem.indexOf(u8, out, "0.8800") != null);
@@ -104,7 +103,7 @@ test "display shows empty message" {
var buf: [4096]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
const divs = [_]zfin.Dividend{};
try display(&divs, "BRK.A", null, false, &w);
try display(&divs, "BRK.A", null, zfin.Date.fromYmd(2024, 10, 1), false, &w);
const out = w.buffered();
try std.testing.expect(std.mem.indexOf(u8, out, "No dividends found") != null);
}
@@ -115,7 +114,7 @@ test "display without price omits yield" {
const divs = [_]zfin.Dividend{
.{ .ex_date = .{ .days = 20000 }, .amount = 1.50, .type = .regular },
};
try display(&divs, "T", null, false, &w);
try display(&divs, "T", null, zfin.Date.fromYmd(2024, 10, 1), false, &w);
const out = w.buffered();
try std.testing.expect(std.mem.indexOf(u8, out, "yield") == null);
try std.testing.expect(std.mem.indexOf(u8, out, "1 dividends") != null);
@@ -127,7 +126,7 @@ test "display no ANSI without color" {
const divs = [_]zfin.Dividend{
.{ .ex_date = .{ .days = 20000 }, .amount = 0.50, .type = .regular },
};
try display(&divs, "SPY", 500.0, false, &w);
try display(&divs, "SPY", 500.0, zfin.Date.fromYmd(2024, 10, 1), false, &w);
const out = w.buffered();
try std.testing.expect(std.mem.indexOf(u8, out, "\x1b[") == null);
}


@@ -3,14 +3,14 @@ const zfin = @import("../root.zig");
const cli = @import("common.zig");
const fmt = cli.fmt;
pub fn run(svc: *zfin.DataService, symbol: []const u8, color: bool, out: *std.Io.Writer) !void {
pub fn run(io: std.Io, svc: *zfin.DataService, symbol: []const u8, color: bool, out: *std.Io.Writer) !void {
const result = svc.getEarnings(symbol) catch |err| switch (err) {
zfin.DataError.NoApiKey => {
try cli.stderrPrint("Error: FMP_API_KEY not set. Get a free key at https://site.financialmodelingprep.com\n");
try cli.stderrPrint(io, "Error: FMP_API_KEY not set. Get a free key at https://site.financialmodelingprep.com\n");
return;
},
else => {
try cli.stderrPrint("Error fetching earnings data.\n");
try cli.stderrPrint(io, "Error fetching earnings data.\n");
return;
},
};
@@ -28,7 +28,7 @@ pub fn run(svc: *zfin.DataService, symbol: []const u8, color: bool, out: *std.Io
}.f);
}
if (result.source == .cached) try cli.stderrPrint("(using cached earnings data)\n");
if (result.source == .cached) try cli.stderrPrint(io, "(using cached earnings data)\n");
try display(result.data, symbol, color, out);
}


@@ -1,6 +1,7 @@
const std = @import("std");
const zfin = @import("../root.zig");
const cli = @import("common.zig");
const fmt = @import("../format.zig");
const isCusipLike = @import("../models/portfolio.zig").isCusipLike;
const OverviewMeta = struct {
@@ -37,7 +38,7 @@ fn deriveMetadata(overview: zfin.CompanyOverview, sector_buf: []u8) OverviewMeta
/// Reads the portfolio, extracts stock symbols, fetches sector/industry/country for each,
/// and outputs a metadata SRF file to stdout.
/// If the argument looks like a symbol (no path separators, no .srf extension), enrich just that symbol.
pub fn run(allocator: std.mem.Allocator, svc: *zfin.DataService, arg: []const u8, out: *std.Io.Writer) !void {
pub fn run(io: std.Io, allocator: std.mem.Allocator, svc: *zfin.DataService, arg: []const u8, as_of: zfin.Date, out: *std.Io.Writer) !void {
// Determine if arg is a symbol or a file path
const is_file = std.mem.endsWith(u8, arg, ".srf") or
std.mem.indexOfScalar(u8, arg, '/') != null or
@@ -45,28 +46,28 @@ pub fn run(allocator: std.mem.Allocator, svc: *zfin.DataService, arg: []const u8
if (!is_file) {
// Single symbol mode: enrich one symbol, output appendable SRF (no header)
try enrichSymbol(allocator, svc, arg, out);
try enrichSymbol(io, allocator, svc, arg, out);
return;
}
// Portfolio file mode: enrich all symbols
try enrichPortfolio(allocator, svc, arg, out);
try enrichPortfolio(io, allocator, svc, arg, as_of, out);
}
/// Enrich a single symbol and output appendable SRF lines to stdout.
fn enrichSymbol(allocator: std.mem.Allocator, svc: *zfin.DataService, sym: []const u8, out: *std.Io.Writer) !void {
fn enrichSymbol(io: std.Io, allocator: std.mem.Allocator, svc: *zfin.DataService, sym: []const u8, out: *std.Io.Writer) !void {
{
var msg_buf: [128]u8 = undefined;
const msg = std.fmt.bufPrint(&msg_buf, " Fetching {s}...\n", .{sym}) catch " ...\n";
try cli.stderrPrint(msg);
try cli.stderrPrint(io, msg);
}
const overview = svc.getCompanyOverview(sym) catch |err| {
if (err == zfin.DataError.NoApiKey) {
try cli.stderrPrint("Error: ALPHAVANTAGE_API_KEY not set. Add it to .env\n");
try cli.stderrPrint(io, "Error: ALPHAVANTAGE_API_KEY not set. Add it to .env\n");
return;
}
try cli.stderrPrint("Error: Failed to fetch data for symbol\n");
try cli.stderrPrint(io, "Error: Failed to fetch data for symbol\n");
try out.print("# {s} -- fetch failed\n", .{sym});
try out.print("# symbol::{s},sector::TODO,geo::TODO,asset_class::TODO\n", .{sym});
return;
@@ -92,23 +93,23 @@ fn enrichSymbol(allocator: std.mem.Allocator, svc: *zfin.DataService, sym: []con
}
/// Enrich all symbols from a portfolio file.
fn enrichPortfolio(allocator: std.mem.Allocator, svc: *zfin.DataService, file_path: []const u8, out: *std.Io.Writer) !void {
fn enrichPortfolio(io: std.Io, allocator: std.mem.Allocator, svc: *zfin.DataService, file_path: []const u8, as_of: zfin.Date, out: *std.Io.Writer) !void {
// Load portfolio
const file_data = std.fs.cwd().readFileAlloc(allocator, file_path, 10 * 1024 * 1024) catch {
try cli.stderrPrint("Error: Cannot read portfolio file\n");
const file_data = std.Io.Dir.cwd().readFileAlloc(io, file_path, allocator, .limited(10 * 1024 * 1024)) catch {
try cli.stderrPrint(io, "Error: Cannot read portfolio file\n");
return;
};
defer allocator.free(file_data);
var portfolio = zfin.cache.deserializePortfolio(allocator, file_data) catch {
try cli.stderrPrint("Error: Cannot parse portfolio file\n");
try cli.stderrPrint(io, "Error: Cannot parse portfolio file\n");
return;
};
defer portfolio.deinit();
// Get unique stock symbols (using display-oriented names)
const positions = try portfolio.positions(allocator);
const positions = try portfolio.positions(as_of, allocator);
defer allocator.free(positions);
// Get unique price symbols (raw API symbols)
@@ -153,7 +154,7 @@ fn enrichPortfolio(allocator: std.mem.Allocator, svc: *zfin.DataService, file_pa
{
var msg_buf: [128]u8 = undefined;
const msg = std.fmt.bufPrint(&msg_buf, " [{d}/{d}] {s}...\n", .{ i + 1, syms.len, sym }) catch " ...\n";
try cli.stderrPrint(msg);
try cli.stderrPrint(io, msg);
}
const overview = svc.getCompanyOverview(sym) catch {


@@ -3,14 +3,14 @@ const zfin = @import("../root.zig");
const cli = @import("common.zig");
const fmt = cli.fmt;
pub fn run(svc: *zfin.DataService, symbol: []const u8, color: bool, out: *std.Io.Writer) !void {
pub fn run(io: std.Io, svc: *zfin.DataService, symbol: []const u8, color: bool, out: *std.Io.Writer) !void {
const result = svc.getEtfProfile(symbol) catch |err| switch (err) {
zfin.DataError.NoApiKey => {
try cli.stderrPrint("Error: ALPHAVANTAGE_API_KEY not set. Get a free key at https://alphavantage.co\n");
try cli.stderrPrint(io, "Error: ALPHAVANTAGE_API_KEY not set. Get a free key at https://alphavantage.co\n");
return;
},
else => {
try cli.stderrPrint("Error fetching ETF profile.\n");
try cli.stderrPrint(io, "Error fetching ETF profile.\n");
return;
},
};
@@ -18,7 +18,7 @@ pub fn run(svc: *zfin.DataService, symbol: []const u8, color: bool, out: *std.Io
const profile = result.data;
defer result.deinit();
if (result.source == .cached) try cli.stderrPrint("(using cached ETF profile)\n");
if (result.source == .cached) try cli.stderrPrint(io, "(using cached ETF profile)\n");
try printProfile(profile, symbol, color, out);
}
@@ -31,7 +31,7 @@ pub fn printProfile(profile: zfin.EtfProfile, symbol: []const u8, color: bool, o
try out.print(" Expense Ratio: {d:.2}%\n", .{er * 100.0});
}
if (profile.net_assets) |na| {
try out.print(" Net Assets: ${s}\n", .{std.mem.trimRight(u8, &fmt.fmtLargeNum(na), &.{' '})});
try out.print(" Net Assets: ${s}\n", .{std.mem.trimEnd(u8, &fmt.fmtLargeNum(na), &.{' '})});
}
if (profile.dividend_yield) |dy| {
try out.print(" Dividend Yield: {d:.2}%\n", .{dy * 100.0});


@@ -68,10 +68,9 @@ pub const PortfolioOpts = struct {
///
/// `--since` and `--until` accept the same grammar as other commands:
/// `YYYY-MM-DD` or a relative shortcut like `1W`, `1M`, `1Q`, `1Y`.
/// Relative forms resolve against today (from the system clock) at
/// call time; pass explicit ISO dates for test determinism.
pub fn parsePortfolioOpts(args: []const []const u8) Error!PortfolioOpts {
const today = fmt.todayDate();
/// Relative forms resolve against `as_of` (passed in by the caller, so
/// tests can pin it). Pass explicit ISO dates for test determinism.
pub fn parsePortfolioOpts(as_of: zfin.Date, args: []const []const u8) Error!PortfolioOpts {
var opts: PortfolioOpts = .{};
var i: usize = 0;
while (i < args.len) : (i += 1) {
@@ -79,11 +78,11 @@ pub fn parsePortfolioOpts(args: []const []const u8) Error!PortfolioOpts {
if (std.mem.eql(u8, a, "--since")) {
i += 1;
if (i >= args.len) return error.MissingFlagValue;
opts.since = cli.parseRequiredDate(args[i], today) catch return error.InvalidFlagValue;
opts.since = cli.parseRequiredDate(args[i], as_of) catch return error.InvalidFlagValue;
} else if (std.mem.eql(u8, a, "--until")) {
i += 1;
if (i >= args.len) return error.MissingFlagValue;
opts.until = cli.parseRequiredDate(args[i], today) catch return error.InvalidFlagValue;
opts.until = cli.parseRequiredDate(args[i], as_of) catch return error.InvalidFlagValue;
} else if (std.mem.eql(u8, a, "--metric")) {
i += 1;
if (i >= args.len) return error.MissingFlagValue;
@@ -112,61 +111,64 @@ pub fn parsePortfolioOpts(args: []const []const u8) Error!PortfolioOpts {
/// Entry point. Dispatches to symbol mode or portfolio mode based on
/// the first argument.
pub fn run(
io: std.Io,
allocator: std.mem.Allocator,
svc: *zfin.DataService,
portfolio_path: []const u8,
args: []const []const u8,
as_of: zfin.Date,
color: bool,
out: *std.Io.Writer,
) !void {
if (args.len > 0 and args[0].len > 0 and args[0][0] != '-') {
try runSymbol(svc, args[0], color, out);
try runSymbol(io, svc, args[0], as_of, color, out);
return;
}
const opts = parsePortfolioOpts(args) catch |err| {
const opts = parsePortfolioOpts(as_of, args) catch |err| {
switch (err) {
error.UnexpectedArg => try cli.stderrPrint("Error: unknown flag in 'history'. See --help.\n"),
error.MissingFlagValue => try cli.stderrPrint("Error: flag requires a value.\n"),
error.InvalidFlagValue => try cli.stderrPrint("Error: invalid flag value.\n"),
error.UnknownMetric => try cli.stderrPrint("Error: unknown --metric. Valid: net_worth, liquid, illiquid.\n"),
error.UnknownResolution => try cli.stderrPrint("Error: unknown --resolution. Valid: daily, weekly, monthly, auto.\n"),
error.UnexpectedArg => try cli.stderrPrint(io, "Error: unknown flag in 'history'. See --help.\n"),
error.MissingFlagValue => try cli.stderrPrint(io, "Error: flag requires a value.\n"),
error.InvalidFlagValue => try cli.stderrPrint(io, "Error: invalid flag value.\n"),
error.UnknownMetric => try cli.stderrPrint(io, "Error: unknown --metric. Valid: net_worth, liquid, illiquid.\n"),
error.UnknownResolution => try cli.stderrPrint(io, "Error: unknown --resolution. Valid: daily, weekly, monthly, auto.\n"),
}
return err;
};
try runPortfolio(allocator, portfolio_path, opts, color, out);
try runPortfolio(io, allocator, portfolio_path, opts, color, out);
}
// Symbol mode (legacy)
fn runSymbol(
io: std.Io,
svc: *zfin.DataService,
symbol: []const u8,
as_of: zfin.Date,
color: bool,
out: *std.Io.Writer,
) !void {
const result = svc.getCandles(symbol) catch |err| switch (err) {
zfin.DataError.NoApiKey => {
try cli.stderrPrint("Error: No API key configured for candle data.\n");
try cli.stderrPrint(io, "Error: No API key configured for candle data.\n");
return;
},
else => {
try cli.stderrPrint("Error fetching data.\n");
try cli.stderrPrint(io, "Error fetching data.\n");
return;
},
};
defer result.deinit();
if (result.source == .cached) try cli.stderrPrint("(using cached data)\n");
if (result.source == .cached) try cli.stderrPrint(io, "(using cached data)\n");
const all = result.data;
if (all.len == 0) return try cli.stderrPrint("No data available.\n");
if (all.len == 0) return try cli.stderrPrint(io, "No data available.\n");
const today = fmt.todayDate();
const one_month_ago = today.addDays(-30);
const one_month_ago = as_of.addDays(-30);
const c = fmt.filterCandlesFrom(all, one_month_ago);
if (c.len == 0) return try cli.stderrPrint("No data available.\n");
if (c.len == 0) return try cli.stderrPrint(io, "No data available.\n");
try displaySymbol(c, symbol, color, out);
}
@@ -196,17 +198,21 @@ pub fn displaySymbol(candles: []const zfin.Candle, symbol: []const u8, color: bo
// Portfolio mode
fn runPortfolio(
io: std.Io,
allocator: std.mem.Allocator,
portfolio_path: []const u8,
opts: PortfolioOpts,
color: bool,
out: *std.Io.Writer,
) !void {
var tl = try history.loadTimeline(allocator, portfolio_path);
var tl = try history.loadTimeline(io, allocator, portfolio_path);
defer tl.deinit();
if (opts.rebuild_rollup) {
try rebuildRollup(allocator, tl.history_dir, tl.loaded.snapshots, out);
// wall-clock required: the rollup.srf `#!created=` directive
// captures when this rebuild happened. Single read per command.
const now_s = std.Io.Timestamp.now(io, .real).toSeconds();
try rebuildRollup(io, allocator, tl.history_dir, tl.loaded.snapshots, now_s, out);
return;
}
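
The `now_s` convention the hunk above follows — a single justified wall-clock read at the command boundary, plain `i64` seconds everywhere below it — can be sketched like this (the `Timestamp.now` call matches the diff; the surrounding function names are hypothetical):

```zig
// Boundary: the only place that touches the clock, with the
// justifying comment this commit makes mandatory.
fn runCommand(io: std.Io) !void {
    // wall-clock required: stamps the generated file's #!created= directive.
    const now_s = std.Io.Timestamp.now(io, .real).toSeconds();
    try writeRollup(now_s);
}

// Below the boundary everything is pure: tests pass a pinned i64
// (e.g. 1_700_000_000) and need no io parameter at all.
fn writeRollup(now_s: i64) !void {
    _ = now_s; // ... render `#!created=<now_s>` into the output ...
}
```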
@@ -230,10 +236,12 @@ fn runPortfolio(
/// Regenerate `history/rollup.srf` from `snapshots`. Uses
/// `timeline.buildRollupRecords` + `srf.fmtFrom` + atomic write.
pub fn rebuildRollup(
fn rebuildRollup(
io: std.Io,
allocator: std.mem.Allocator,
history_dir: []const u8,
snapshots: []const snapshot_model.Snapshot,
now_s: i64,
out: *std.Io.Writer,
) !void {
const series = try timeline.buildSeries(allocator, snapshots);
@@ -246,19 +254,19 @@ pub fn rebuildRollup(
defer aw.deinit();
try aw.writer.print("{f}", .{srf.fmtFrom(timeline.RollupRow, allocator, rows, .{
.emit_directives = true,
.created = std.time.timestamp(),
.created = now_s,
})});
const rendered = aw.written();
const rollup_path = try std.fs.path.join(allocator, &.{ history_dir, "rollup.srf" });
defer allocator.free(rollup_path);
std.fs.cwd().makePath(history_dir) catch |err| switch (err) {
std.Io.Dir.cwd().createDirPath(io, history_dir) catch |err| switch (err) {
error.PathAlreadyExists => {},
else => return err,
};
try atomic.writeFileAtomic(allocator, rollup_path, rendered);
try atomic.writeFileAtomic(io, allocator, rollup_path, rendered);
try out.print("rollup rebuilt: {s} ({d} rows)\n", .{ rollup_path, rows.len });
}
@@ -294,8 +302,8 @@ pub fn renderPortfolio(
try out.print("========================================\n", .{});
// Windows block
const today = points[points.len - 1].as_of_date;
const ws = try timeline.computeWindowSet(allocator, points, focus_metric, today);
const as_of = points[points.len - 1].as_of_date;
const ws = try timeline.computeWindowSet(allocator, points, focus_metric, as_of);
defer ws.deinit();
try renderWindowsBlock(out, color, ws);
@@ -412,7 +420,10 @@ fn renderTable(
try cli.setFg(out, color, cli.CLR_MUTED);
// Column order: Liquid Illiquid Net Worth (components sum to total).
try out.print(" {s:>10} {s:>28} {s:>28} {s:>28}\n", .{
"Date", "Liquid (Δ)", "Illiquid (Δ)", "Net Worth (Δ)",
"Date",
"Liquid (Δ)",
"Illiquid (Δ)",
"Net Worth (Δ)",
});
try out.print(" {s:->10} {s:->28} {s:->28} {s:->28}\n", .{ "", "", "", "" });
try cli.reset(out, color);
@@ -476,7 +487,7 @@ fn writeTableRow(
const testing = std.testing;
test "parsePortfolioOpts: defaults" {
const o = try parsePortfolioOpts(&.{});
const o = try parsePortfolioOpts(zfin.Date.fromYmd(2026, 5, 8), &.{});
try testing.expect(o.since == null);
try testing.expect(o.until == null);
// Default metric is liquid (matches TUI default).
@@ -488,55 +499,55 @@ test "parsePortfolioOpts: defaults" {
test "parsePortfolioOpts: --since / --until parse ISO dates" {
const args = [_][]const u8{ "--since", "2026-01-01", "--until", "2026-04-30" };
const o = try parsePortfolioOpts(&args);
const o = try parsePortfolioOpts(zfin.Date.fromYmd(2026, 5, 8), &args);
try testing.expect(o.since.?.eql(Date.fromYmd(2026, 1, 1)));
try testing.expect(o.until.?.eql(Date.fromYmd(2026, 4, 30)));
}
test "parsePortfolioOpts: --metric picks the right enum" {
const a1 = [_][]const u8{ "--metric", "illiquid" };
const o1 = try parsePortfolioOpts(&a1);
const o1 = try parsePortfolioOpts(zfin.Date.fromYmd(2026, 5, 8), &a1);
try testing.expectEqual(timeline.Metric.illiquid, o1.metric);
const a2 = [_][]const u8{ "--metric", "net_worth" };
const o2 = try parsePortfolioOpts(&a2);
const o2 = try parsePortfolioOpts(zfin.Date.fromYmd(2026, 5, 8), &a2);
try testing.expectEqual(timeline.Metric.net_worth, o2.metric);
}
test "parsePortfolioOpts: --resolution parses all four forms" {
const ad = [_][]const u8{ "--resolution", "daily" };
try testing.expectEqual(timeline.Resolution.daily, (try parsePortfolioOpts(&ad)).resolution.?);
try testing.expectEqual(timeline.Resolution.daily, (try parsePortfolioOpts(zfin.Date.fromYmd(2026, 5, 8), &ad)).resolution.?);
const aw = [_][]const u8{ "--resolution", "weekly" };
try testing.expectEqual(timeline.Resolution.weekly, (try parsePortfolioOpts(&aw)).resolution.?);
try testing.expectEqual(timeline.Resolution.weekly, (try parsePortfolioOpts(zfin.Date.fromYmd(2026, 5, 8), &aw)).resolution.?);
const am = [_][]const u8{ "--resolution", "monthly" };
try testing.expectEqual(timeline.Resolution.monthly, (try parsePortfolioOpts(&am)).resolution.?);
try testing.expectEqual(timeline.Resolution.monthly, (try parsePortfolioOpts(zfin.Date.fromYmd(2026, 5, 8), &am)).resolution.?);
// "auto" resolves to null (defer to selectResolution at render time).
const aa = [_][]const u8{ "--resolution", "auto" };
try testing.expect((try parsePortfolioOpts(&aa)).resolution == null);
try testing.expect((try parsePortfolioOpts(zfin.Date.fromYmd(2026, 5, 8), &aa)).resolution == null);
}
test "parsePortfolioOpts: --limit parses integer" {
const args = [_][]const u8{ "--limit", "25" };
const o = try parsePortfolioOpts(&args);
const o = try parsePortfolioOpts(zfin.Date.fromYmd(2026, 5, 8), &args);
try testing.expectEqual(@as(usize, 25), o.limit.?);
}
test "parsePortfolioOpts: --rebuild-rollup boolean" {
const args = [_][]const u8{"--rebuild-rollup"};
const o = try parsePortfolioOpts(&args);
const o = try parsePortfolioOpts(zfin.Date.fromYmd(2026, 5, 8), &args);
try testing.expect(o.rebuild_rollup);
}
test "parsePortfolioOpts: unknown flag / value errors" {
try testing.expectError(error.UnexpectedArg, parsePortfolioOpts(&[_][]const u8{"--bogus"}));
try testing.expectError(error.MissingFlagValue, parsePortfolioOpts(&[_][]const u8{"--since"}));
try testing.expectError(error.InvalidFlagValue, parsePortfolioOpts(&[_][]const u8{ "--since", "not-a-date" }));
try testing.expectError(error.UnknownMetric, parsePortfolioOpts(&[_][]const u8{ "--metric", "bogus" }));
try testing.expectError(error.UnknownResolution, parsePortfolioOpts(&[_][]const u8{ "--resolution", "bogus" }));
try testing.expectError(error.InvalidFlagValue, parsePortfolioOpts(&[_][]const u8{ "--limit", "not-a-number" }));
try testing.expectError(error.UnexpectedArg, parsePortfolioOpts(zfin.Date.fromYmd(2026, 5, 8), &[_][]const u8{"--bogus"}));
try testing.expectError(error.MissingFlagValue, parsePortfolioOpts(zfin.Date.fromYmd(2026, 5, 8), &[_][]const u8{"--since"}));
try testing.expectError(error.InvalidFlagValue, parsePortfolioOpts(zfin.Date.fromYmd(2026, 5, 8), &[_][]const u8{ "--since", "not-a-date" }));
try testing.expectError(error.UnknownMetric, parsePortfolioOpts(zfin.Date.fromYmd(2026, 5, 8), &[_][]const u8{ "--metric", "bogus" }));
try testing.expectError(error.UnknownResolution, parsePortfolioOpts(zfin.Date.fromYmd(2026, 5, 8), &[_][]const u8{ "--resolution", "bogus" }));
try testing.expectError(error.InvalidFlagValue, parsePortfolioOpts(zfin.Date.fromYmd(2026, 5, 8), &[_][]const u8{ "--limit", "not-a-number" }));
}
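
These tests pin `as_of` to a fixed date precisely because the refactor moved date resolution to the caller. A sketch of that boundary (flag handling per the commit message; the parser and io-taking `todayDate` variant are hypothetical helpers):

```zig
// Resolve `--as-of` vs the wall clock exactly once; everything
// downstream takes a plain Date and stays deterministic under test.
fn resolveAsOf(io: std.Io, as_of_flag: ?[]const u8) !zfin.Date {
    if (as_of_flag) |s| return zfin.Date.parse(s); // hypothetical parser
    // wall-clock required: no --as-of given, fall back to today.
    return fmt.todayDate(io); // hypothetical io-threaded variant
}
```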
// renderPortfolio (end-to-end)
@@ -731,11 +742,13 @@ fn makeFixtureSnapshot(
}
test "rebuildRollup: writes rollup.srf with one row per snapshot" {
const io = std.testing.io;
var tmp = testing.tmpDir(.{});
defer tmp.cleanup();
var path_buf: [std.fs.max_path_bytes]u8 = undefined;
const tmp_path = try tmp.dir.realpath(".", &path_buf);
const tmp_path_len = try tmp.dir.realPathFile(io, ".", &path_buf);
const tmp_path = path_buf[0..tmp_path_len];
var b1: [3]snapshot_model.TotalRow = undefined;
var b2: [3]snapshot_model.TotalRow = undefined;
@@ -747,14 +760,14 @@ test "rebuildRollup: writes rollup.srf with one row per snapshot" {
var out_buf: [512]u8 = undefined;
var out: std.Io.Writer = .fixed(&out_buf);
try rebuildRollup(testing.allocator, tmp_path, &snaps, &out);
try rebuildRollup(std.testing.io, testing.allocator, tmp_path, &snaps, 1_700_000_000, &out);
const out_str = out.buffered();
try testing.expect(std.mem.indexOf(u8, out_str, "2 rows") != null);
const rollup_path = try std.fs.path.join(testing.allocator, &.{ tmp_path, "rollup.srf" });
defer testing.allocator.free(rollup_path);
const bytes = try std.fs.cwd().readFileAlloc(testing.allocator, rollup_path, 16 * 1024);
const bytes = try std.Io.Dir.cwd().readFileAlloc(io, rollup_path, testing.allocator, .limited(16 * 1024));
defer testing.allocator.free(bytes);
try testing.expect(std.mem.startsWith(u8, bytes, "#!srfv1"));
@@ -765,11 +778,13 @@ test "rebuildRollup: writes rollup.srf with one row per snapshot" {
}
test "rebuildRollup: creates history dir when it doesn't exist" {
const io = std.testing.io;
var tmp = testing.tmpDir(.{});
defer tmp.cleanup();
var path_buf: [std.fs.max_path_bytes]u8 = undefined;
const tmp_path = try tmp.dir.realpath(".", &path_buf);
const tmp_path_len = try tmp.dir.realPathFile(io, ".", &path_buf);
const tmp_path = path_buf[0..tmp_path_len];
const nested = try std.fs.path.join(testing.allocator, &.{ tmp_path, "nested", "history" });
defer testing.allocator.free(nested);
@@ -782,29 +797,31 @@ test "rebuildRollup: creates history dir when it doesn't exist" {
var out_buf: [256]u8 = undefined;
var out: std.Io.Writer = .fixed(&out_buf);
try rebuildRollup(testing.allocator, nested, &snaps, &out);
try rebuildRollup(std.testing.io, testing.allocator, nested, &snaps, 1_700_000_000, &out);
const rollup_path = try std.fs.path.join(testing.allocator, &.{ nested, "rollup.srf" });
defer testing.allocator.free(rollup_path);
try std.fs.cwd().access(rollup_path, .{});
try std.Io.Dir.cwd().access(io, rollup_path, .{});
}
test "rebuildRollup: empty snapshots produces an empty rollup" {
const io = std.testing.io;
var tmp = testing.tmpDir(.{});
defer tmp.cleanup();
var path_buf: [std.fs.max_path_bytes]u8 = undefined;
const tmp_path = try tmp.dir.realpath(".", &path_buf);
const tmp_path_len = try tmp.dir.realPathFile(io, ".", &path_buf);
const tmp_path = path_buf[0..tmp_path_len];
var out_buf: [256]u8 = undefined;
var out: std.Io.Writer = .fixed(&out_buf);
try rebuildRollup(testing.allocator, tmp_path, &.{}, &out);
try rebuildRollup(std.testing.io, testing.allocator, tmp_path, &.{}, 1_700_000_000, &out);
try testing.expect(std.mem.indexOf(u8, out.buffered(), "0 rows") != null);
const rollup_path = try std.fs.path.join(testing.allocator, &.{ tmp_path, "rollup.srf" });
defer testing.allocator.free(rollup_path);
const bytes = try std.fs.cwd().readFileAlloc(testing.allocator, rollup_path, 4 * 1024);
const bytes = try std.Io.Dir.cwd().readFileAlloc(io, rollup_path, testing.allocator, .limited(4 * 1024));
defer testing.allocator.free(bytes);
try testing.expect(std.mem.startsWith(u8, bytes, "#!srfv1"));


@@ -3,16 +3,16 @@ const zfin = @import("../root.zig");
const cli = @import("common.zig");
const isCusipLike = @import("../models/portfolio.zig").isCusipLike;
pub fn run(allocator: std.mem.Allocator, svc: *zfin.DataService, cusip: []const u8, color: bool, out: *std.Io.Writer) !void {
pub fn run(io: std.Io, allocator: std.mem.Allocator, svc: *zfin.DataService, cusip: []const u8, color: bool, out: *std.Io.Writer) !void {
if (!isCusipLike(cusip)) {
try cli.printFg(out, color, cli.CLR_MUTED, "Note: '{s}' doesn't look like a CUSIP (expected 9 alphanumeric chars with digits)\n", .{cusip});
}
try cli.stderrPrint("Looking up via OpenFIGI...\n");
try cli.stderrPrint(io, "Looking up via OpenFIGI...\n");
// Try full batch lookup for richer output
const results = svc.lookupCusips(&.{cusip}) catch {
try cli.stderrPrint("Error: OpenFIGI request failed (network error)\n");
try cli.stderrPrint(io, "Error: OpenFIGI request failed (network error)\n");
return;
};
defer {


@ -3,24 +3,24 @@ const zfin = @import("../root.zig");
const cli = @import("common.zig");
const fmt = cli.fmt;
pub fn run(svc: *zfin.DataService, symbol: []const u8, ntm: usize, color: bool, out: *std.Io.Writer) !void {
pub fn run(io: std.Io, svc: *zfin.DataService, symbol: []const u8, ntm: usize, color: bool, out: *std.Io.Writer) !void {
const result = svc.getOptions(symbol) catch |err| switch (err) {
zfin.DataError.FetchFailed => {
try cli.stderrPrint("Error fetching options data from CBOE.\n");
try cli.stderrPrint(io, "Error fetching options data from CBOE.\n");
return;
},
else => {
try cli.stderrPrint("Error loading options data.\n");
try cli.stderrPrint(io, "Error loading options data.\n");
return;
},
};
const ch = result.data;
defer result.deinit();
if (result.source == .cached) try cli.stderrPrint("(using cached options data)\n");
if (result.source == .cached) try cli.stderrPrint(io, "(using cached options data)\n");
if (ch.len == 0) {
try cli.stderrPrint("No options data found.\n");
try cli.stderrPrint(io, "No options data found.\n");
return;
}


@ -3,26 +3,25 @@ const zfin = @import("../root.zig");
const cli = @import("common.zig");
const fmt = cli.fmt;
pub fn run(allocator: std.mem.Allocator, svc: *zfin.DataService, symbol: []const u8, color: bool, out: *std.Io.Writer) !void {
pub fn run(io: std.Io, allocator: std.mem.Allocator, svc: *zfin.DataService, symbol: []const u8, as_of: zfin.Date, color: bool, out: *std.Io.Writer) !void {
const result = svc.getTrailingReturns(symbol) catch |err| switch (err) {
zfin.DataError.NoApiKey => {
try cli.stderrPrint("Error: No API key set. Get a free key at https://tiingo.com or https://twelvedata.com\n");
try cli.stderrPrint(io, "Error: No API key set. Get a free key at https://tiingo.com or https://twelvedata.com\n");
return;
},
else => {
try cli.stderrPrint("Error fetching data.\n");
try cli.stderrPrint(io, "Error fetching data.\n");
return;
},
};
defer allocator.free(result.candles);
defer if (result.dividends) |d| zfin.Dividend.freeSlice(allocator, d);
if (result.source == .cached) try cli.stderrPrint("(using cached data)\n");
if (result.source == .cached) try cli.stderrPrint(io, "(using cached data)\n");
const c = result.candles;
const end_date = c[c.len - 1].date;
const today = fmt.todayDate();
const month_end = today.lastDayOfPriorMonth();
const month_end = as_of.lastDayOfPriorMonth();
try cli.printBold(out, color, "\nTrailing Returns for {s}\n", .{symbol});
try out.print("========================================\n", .{});
@ -240,3 +239,70 @@ test "printReturnsTable with actual returns" {
// 3-year should still show N/A
try std.testing.expect(std.mem.indexOf(u8, out, "ann.") != null);
}
test "printRiskTable: all-null returns silently (no output)" {
var buf: [2048]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
const empty: zfin.risk.TrailingRisk = .{};
try printRiskTable(&w, empty, false);
try std.testing.expectEqual(@as(usize, 0), w.buffered().len);
}
test "printRiskTable: with one period populated, header + data row appear" {
var buf: [4096]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
const tr: zfin.risk.TrailingRisk = .{
.one_year = .{
.volatility = 0.18,
.sharpe = 1.25,
.max_drawdown = 0.12,
.sample_size = 12,
},
};
try printRiskTable(&w, tr, false);
const out = w.buffered();
try std.testing.expect(std.mem.indexOf(u8, out, "Risk Metrics") != null);
try std.testing.expect(std.mem.indexOf(u8, out, "Volatility") != null);
try std.testing.expect(std.mem.indexOf(u8, out, "Sharpe") != null);
try std.testing.expect(std.mem.indexOf(u8, out, "Max DD") != null);
try std.testing.expect(std.mem.indexOf(u8, out, "1-Year") != null);
// Volatility renders as "18.0%"
try std.testing.expect(std.mem.indexOf(u8, out, "18.0%") != null);
// Sharpe renders as "1.25"
try std.testing.expect(std.mem.indexOf(u8, out, "1.25") != null);
// Max DD renders as "12.0%"
try std.testing.expect(std.mem.indexOf(u8, out, "12.0%") != null);
}
test "printRiskTable: missing periods render as em-dash placeholders" {
var buf: [4096]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
const tr: zfin.risk.TrailingRisk = .{
.three_year = .{
.volatility = 0.20,
.sharpe = 0.80,
.max_drawdown = 0.25,
.sample_size = 36,
},
};
try printRiskTable(&w, tr, false);
const out = w.buffered();
// 1-year, 5-year, 10-year all missing; em-dashes appear
try std.testing.expect(std.mem.indexOf(u8, out, "—") != null);
try std.testing.expect(std.mem.indexOf(u8, out, "3-Year") != null);
}
test "printRiskTable: no ANSI escapes when color=false" {
var buf: [4096]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
const tr: zfin.risk.TrailingRisk = .{
.one_year = .{
.volatility = 0.15,
.sharpe = 1.0,
.max_drawdown = 0.10,
.sample_size = 12,
},
};
try printRiskTable(&w, tr, false);
try std.testing.expect(std.mem.indexOf(u8, w.buffered(), "\x1b[") == null);
}
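The tests above all follow the same shape: render into a fixed buffer, then assert on `buffered()`. A minimal standalone version of that pattern, assuming the `std.Io.Writer.fixed` / `buffered()` API exactly as used in the tests above:

```zig
const std = @import("std");

test "render into a fixed writer and inspect the bytes" {
    var buf: [64]u8 = undefined;
    // `.fixed` wraps a caller-owned buffer; nothing touches the outside world.
    var w: std.Io.Writer = .fixed(&buf);
    try w.print("vol {d:.1}%", .{18.0});
    // `buffered()` exposes exactly the bytes written so far.
    try std.testing.expect(std.mem.indexOf(u8, w.buffered(), "18.0%") != null);
}
```

Because the writer is pure memory, these table-rendering tests need no `io` handle at all, which is why `printRiskTable` keeps its io-free signature.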


@ -4,9 +4,9 @@ const cli = @import("common.zig");
const fmt = cli.fmt;
const views = @import("../views/portfolio_sections.zig");
pub fn run(allocator: std.mem.Allocator, svc: *zfin.DataService, file_path: []const u8, watchlist_path: ?[]const u8, force_refresh: bool, color: bool, out: *std.Io.Writer) !void {
pub fn run(io: std.Io, allocator: std.mem.Allocator, svc: *zfin.DataService, file_path: []const u8, watchlist_path: ?[]const u8, force_refresh: bool, as_of: zfin.Date, color: bool, out: *std.Io.Writer) !void {
// Load portfolio from SRF file
var loaded = cli.loadPortfolio(allocator, file_path) orelse return;
var loaded = cli.loadPortfolio(io, allocator, file_path, as_of) orelse return;
defer loaded.deinit(allocator);
const portfolio = loaded.portfolio;
@ -14,7 +14,7 @@ pub fn run(allocator: std.mem.Allocator, svc: *zfin.DataService, file_path: []co
const syms = loaded.syms;
if (portfolio.lots.len == 0) {
try cli.stderrPrint("Portfolio is empty.\n");
try cli.stderrPrint(io, "Portfolio is empty.\n");
return;
}
@ -44,6 +44,7 @@ pub fn run(allocator: std.mem.Allocator, svc: *zfin.DataService, file_path: []co
if (all_syms_count > 0) {
// Use consolidated parallel loader
var load_result = cli.loadPortfolioPrices(
io,
svc,
syms,
watch_syms.items,
@ -61,9 +62,9 @@ pub fn run(allocator: std.mem.Allocator, svc: *zfin.DataService, file_path: []co
}
// Build portfolio summary, candle map, and historical snapshots
var pf_data = cli.buildPortfolioData(allocator, portfolio, positions, syms, &prices, svc) catch |err| switch (err) {
var pf_data = cli.buildPortfolioData(allocator, portfolio, positions, syms, &prices, svc, as_of) catch |err| switch (err) {
error.NoAllocations, error.SummaryFailed => {
try cli.stderrPrint("Error computing portfolio summary.\n");
try cli.stderrPrint(io, "Error computing portfolio summary.\n");
return;
},
else => return err,
@ -106,7 +107,7 @@ pub fn run(allocator: std.mem.Allocator, svc: *zfin.DataService, file_path: []co
// Separate watchlist file (backward compat)
if (watchlist_path) |wl_path| {
const wl_syms = cli.loadWatchlist(allocator, wl_path);
const wl_syms = cli.loadWatchlist(io, allocator, wl_path);
defer cli.freeWatchlist(allocator, wl_syms);
if (wl_syms) |syms_list| {
for (syms_list) |sym| {
@ -131,6 +132,7 @@ pub fn run(allocator: std.mem.Allocator, svc: *zfin.DataService, file_path: []co
&pf_data,
watch_list.items,
watch_prices,
as_of,
);
}
@ -145,6 +147,7 @@ pub fn display(
pf_data: *const cli.PortfolioData,
watch_symbols: []const []const u8,
watch_prices: std.StringHashMap(f64),
as_of: zfin.Date,
) !void {
const summary = &pf_data.summary;
// Header with summary
@ -171,7 +174,7 @@ pub fn display(
var closed_lots: u32 = 0;
for (portfolio.lots) |lot| {
if (lot.security_type != .stock) continue;
if (lot.isOpen()) open_lots += 1 else closed_lots += 1;
if (lot.isOpen(as_of)) open_lots += 1 else closed_lots += 1;
}
try cli.printFg(out, color, cli.CLR_MUTED, " Lots: {d} open, {d} closed Positions: {d} symbols\n", .{ open_lots, closed_lots, positions.len });
@ -214,7 +217,7 @@ pub fn display(
try lots_for_sym.append(allocator, lot);
}
}
std.mem.sort(zfin.Lot, lots_for_sym.items, {}, fmt.lotSortFn);
std.mem.sort(zfin.Lot, lots_for_sym.items, as_of, fmt.lotSortFn);
const is_multi = lots_for_sym.items.len > 1;
// Position summary row
@ -234,7 +237,7 @@ pub fn display(
const lot = lots_for_sym.items[0];
var pos_date_buf: [10]u8 = undefined;
const ds = lot.open_date.format(&pos_date_buf);
const indicator = fmt.capitalGainsIndicator(lot.open_date);
const indicator = fmt.capitalGainsIndicator(as_of, lot.open_date);
const written = std.fmt.bufPrint(&date_col, "{s} {s}", .{ ds, indicator }) catch "";
date_col_len = written.len;
}
@ -280,18 +283,18 @@ pub fn display(
if (!has_drip) {
// No DRIP: show all individually
for (lots_for_sym.items) |lot| {
try printLotRow(out, color, lot, a.current_price);
try printLotRow(as_of, out, color, lot, a.current_price);
}
} else {
// Show non-DRIP lots individually
for (lots_for_sym.items) |lot| {
if (!lot.drip) {
try printLotRow(out, color, lot, a.current_price);
try printLotRow(as_of, out, color, lot, a.current_price);
}
}
// Summarize DRIP lots as ST/LT
const drip = fmt.aggregateDripLots(lots_for_sym.items);
const drip = fmt.aggregateDripLots(as_of, lots_for_sym.items);
if (!drip.st.isEmpty()) {
var drip_buf: [128]u8 = undefined;
@ -334,7 +337,7 @@ pub fn display(
// Options section
if (portfolio.hasType(.option)) {
var prepared_opts = try views.Options.init(allocator, portfolio.lots, null);
var prepared_opts = try views.Options.init(as_of, allocator, portfolio.lots, null);
defer prepared_opts.deinit();
if (prepared_opts.items.len > 0) {
try out.print("\n", .{});
@ -369,7 +372,7 @@ pub fn display(
// CDs section
if (portfolio.hasType(.cd)) {
var prepared_cds = try views.CDs.init(allocator, portfolio.lots, null);
var prepared_cds = try views.CDs.init(as_of, allocator, portfolio.lots, null);
defer prepared_cds.deinit();
if (prepared_cds.items.len > 0) {
try out.print("\n", .{});
@ -414,7 +417,7 @@ pub fn display(
var sep_buf: [80]u8 = undefined;
try cli.printFg(out, color, cli.CLR_MUTED, "{s}\n", .{fmt.fmtCashSep(&sep_buf)});
var total_buf: [80]u8 = undefined;
try cli.printBold(out, color, "{s}\n", .{fmt.fmtCashTotal(&total_buf, portfolio.totalCash())});
try cli.printBold(out, color, "{s}\n", .{fmt.fmtCashTotal(&total_buf, portfolio.totalCash(as_of))});
}
// Illiquid assets section
@ -437,13 +440,13 @@ pub fn display(
var il_sep_buf2: [80]u8 = undefined;
try cli.printFg(out, color, cli.CLR_MUTED, "{s}\n", .{fmt.fmtIlliquidSep(&il_sep_buf2)});
var il_total_buf: [80]u8 = undefined;
try cli.printBold(out, color, "{s}\n", .{fmt.fmtIlliquidTotal(&il_total_buf, portfolio.totalIlliquid())});
try cli.printBold(out, color, "{s}\n", .{fmt.fmtIlliquidTotal(&il_total_buf, portfolio.totalIlliquid(as_of))});
}
// Net Worth (if illiquid assets exist)
if (portfolio.hasType(.illiquid)) {
const illiquid_total = portfolio.totalIlliquid();
const net_worth = zfin.valuation.netWorth(portfolio.*, summary.*);
const illiquid_total = portfolio.totalIlliquid(as_of);
const net_worth = zfin.valuation.netWorth(as_of, portfolio.*, summary.*);
var nw_buf: [24]u8 = undefined;
var liq_buf: [24]u8 = undefined;
var il_buf: [24]u8 = undefined;
@ -503,12 +506,12 @@ pub fn display(
try out.print("\n", .{});
}
pub fn printLotRow(out: *std.Io.Writer, color: bool, lot: zfin.Lot, current_price: f64) !void {
pub fn printLotRow(as_of: zfin.Date, out: *std.Io.Writer, color: bool, lot: zfin.Lot, current_price: f64) !void {
var lot_price_buf: [24]u8 = undefined;
var lot_date_buf: [10]u8 = undefined;
const date_str = lot.open_date.format(&lot_date_buf);
const indicator = fmt.capitalGainsIndicator(lot.open_date);
const status_str: []const u8 = if (lot.isOpen()) "open" else "closed";
const indicator = fmt.capitalGainsIndicator(as_of, lot.open_date);
const status_str: []const u8 = if (lot.isOpen(as_of)) "open" else "closed";
const acct_col: []const u8 = lot.account orelse "";
const use_price = lot.close_price orelse current_price;
@ -604,7 +607,7 @@ test "display shows header and summary" {
const watch_syms: []const []const u8 = &.{};
try display(testing.allocator, &w, false, "test.srf", &portfolio, &positions, &pf_data, watch_syms, watch_prices);
try display(testing.allocator, &w, false, "test.srf", &portfolio, &positions, &pf_data, watch_syms, watch_prices, zfin.Date.fromYmd(2026, 5, 8));
const out = w.buffered();
// Header present
@ -656,7 +659,7 @@ test "display with watchlist" {
try watch_prices.put("TSLA", 250.50);
try watch_prices.put("NVDA", 800.25);
try display(testing.allocator, &w, false, "test.srf", &portfolio, &positions, &pf_data, watch_syms, watch_prices);
try display(testing.allocator, &w, false, "test.srf", &portfolio, &positions, &pf_data, watch_syms, watch_prices, zfin.Date.fromYmd(2026, 5, 8));
const out = w.buffered();
// Watchlist header and symbols
@ -684,8 +687,8 @@ test "display with options section" {
};
var summary = testSummary(&allocs);
// Include option cost in totals (like run() does)
summary.total_value += portfolio.totalOptionCost();
summary.total_cost += portfolio.totalOptionCost();
summary.total_value += portfolio.totalOptionCost(zfin.Date.fromYmd(2026, 5, 8));
summary.total_cost += portfolio.totalOptionCost(zfin.Date.fromYmd(2026, 5, 8));
var prices = std.StringHashMap(f64).init(testing.allocator);
defer prices.deinit();
@ -698,7 +701,7 @@ test "display with options section" {
defer watch_prices.deinit();
const watch_syms: []const []const u8 = &.{};
try display(testing.allocator, &w, false, "test.srf", &portfolio, &positions, &pf_data, watch_syms, watch_prices);
try display(testing.allocator, &w, false, "test.srf", &portfolio, &positions, &pf_data, watch_syms, watch_prices, zfin.Date.fromYmd(2026, 5, 8));
const out = w.buffered();
// Options section present
@ -726,8 +729,8 @@ test "display with CDs and cash" {
.{ .symbol = "VTI", .display_symbol = "VTI", .shares = 10, .avg_cost = 200.0, .current_price = 220.0, .market_value = 2200.0, .cost_basis = 2000.0, .weight = 1.0, .unrealized_gain_loss = 200.0, .unrealized_return = 0.1 },
};
var summary = testSummary(&allocs);
summary.total_value += portfolio.totalCash() + portfolio.totalCdFaceValue();
summary.total_cost += portfolio.totalCash() + portfolio.totalCdFaceValue();
summary.total_value += portfolio.totalCash(zfin.Date.fromYmd(2026, 5, 8)) + portfolio.totalCdFaceValue(zfin.Date.fromYmd(2026, 5, 8));
summary.total_cost += portfolio.totalCash(zfin.Date.fromYmd(2026, 5, 8)) + portfolio.totalCdFaceValue(zfin.Date.fromYmd(2026, 5, 8));
var prices = std.StringHashMap(f64).init(testing.allocator);
defer prices.deinit();
@ -740,7 +743,7 @@ test "display with CDs and cash" {
defer watch_prices.deinit();
const watch_syms: []const []const u8 = &.{};
try display(testing.allocator, &w, false, "test.srf", &portfolio, &positions, &pf_data, watch_syms, watch_prices);
try display(testing.allocator, &w, false, "test.srf", &portfolio, &positions, &pf_data, watch_syms, watch_prices, zfin.Date.fromYmd(2026, 5, 8));
const out = w.buffered();
// CDs section present
@ -784,7 +787,7 @@ test "display realized PnL shown when nonzero" {
defer watch_prices.deinit();
const watch_syms: []const []const u8 = &.{};
try display(testing.allocator, &w, false, "test.srf", &portfolio, &positions, &pf_data, watch_syms, watch_prices);
try display(testing.allocator, &w, false, "test.srf", &portfolio, &positions, &pf_data, watch_syms, watch_prices, zfin.Date.fromYmd(2026, 5, 8));
const out = w.buffered();
try testing.expect(std.mem.indexOf(u8, out, "Realized P&L") != null);
@ -819,7 +822,7 @@ test "display empty watchlist not shown" {
defer watch_prices.deinit();
const watch_syms: []const []const u8 = &.{};
try display(testing.allocator, &w, false, "test.srf", &portfolio, &positions, &pf_data, watch_syms, watch_prices);
try display(testing.allocator, &w, false, "test.srf", &portfolio, &positions, &pf_data, watch_syms, watch_prices, zfin.Date.fromYmd(2026, 5, 8));
const out = w.buffered();
// Watchlist header should NOT appear when there are no watch symbols
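The `as_of` threading above (`lot.isOpen(as_of)`, `capitalGainsIndicator(as_of, ...)`) reduces to date math that takes its reference date as a parameter instead of reading the wall clock. A minimal sketch with a hypothetical `Date` and `isLongTerm`, not the real zfin types:

```zig
const std = @import("std");

// Toy stand-in for the real Date: just a day count.
const Date = struct { days: i32 };

// Held more than ~one year as of the reference date. Pure: the
// caller decides whether `as_of` is today or a snapshot date.
fn isLongTerm(open: Date, as_of: Date) bool {
    return as_of.days - open.days > 365;
}

test "long-term vs short-term pinned to a fixed as-of date" {
    const open: Date = .{ .days = 0 };
    try std.testing.expect(isLongTerm(open, .{ .days = 400 }));
    try std.testing.expect(!isLongTerm(open, .{ .days = 100 }));
}
```

The payoff is visible in the test diffs above: every `display(...)` call now passes a literal `zfin.Date.fromYmd(2026, 5, 8)` instead of depending on the machine's clock.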


@ -36,12 +36,22 @@ const AsOfResolution = struct {
actual: Date,
};
/// Run projections.
///
/// `as_of` is the reference date for ages, horizons, and snapshot
/// windows. `from_snapshot` selects the data source:
/// - `false`: live mode. Load `file_path` directly. Caller passes
/// today as `as_of`.
/// - `true`: historical mode. Load the snapshot at-or-before
/// `as_of` from the history dir.
pub fn run(
io: std.Io,
allocator: std.mem.Allocator,
svc: *zfin.DataService,
file_path: []const u8,
events_enabled: bool,
as_of: ?Date,
as_of: Date,
from_snapshot: bool,
color: bool,
out: *std.Io.Writer,
) !void {
@ -68,15 +78,16 @@ pub fn run(
var snap_bundle: ?history.LoadedSnapshot = null;
defer if (snap_bundle) |*s| s.deinit(allocator);
if (as_of) |requested_date| {
resolution = resolveAsOfSnapshot(va, file_path, requested_date) catch |err| switch (err) {
if (from_snapshot) {
resolution = resolveAsOfSnapshot(io, va, file_path, as_of) catch |err| switch (err) {
error.NoSnapshot => return,
else => return err,
};
const hist_dir = try history.deriveHistoryDir(va, file_path);
snap_bundle = try history.loadSnapshotAt(allocator, hist_dir, resolution.?.actual);
snap_bundle = try history.loadSnapshotAt(io, allocator, hist_dir, resolution.?.actual);
ctx = try view.loadProjectionContextAsOf(
io,
va,
portfolio_dir,
&snap_bundle.?.snap,
@ -85,7 +96,7 @@ pub fn run(
events_enabled,
);
} else {
var loaded = cli.loadPortfolio(allocator, file_path) orelse return;
var loaded = cli.loadPortfolio(io, allocator, file_path, as_of) orelse return;
defer loaded.deinit(allocator);
const portfolio = loaded.portfolio;
const positions = loaded.positions;
@ -104,9 +115,9 @@ pub fn run(
}
}
var pf_data = cli.buildPortfolioData(allocator, portfolio, positions, syms, &prices, svc) catch |err| switch (err) {
var pf_data = cli.buildPortfolioData(allocator, portfolio, positions, syms, &prices, svc, as_of) catch |err| switch (err) {
error.NoAllocations, error.SummaryFailed => {
try cli.stderrPrint("Error computing portfolio summary.\n");
try cli.stderrPrint(io, "Error computing portfolio summary.\n");
return;
},
else => return err,
@ -114,14 +125,16 @@ pub fn run(
defer pf_data.deinit(allocator);
ctx = try view.loadProjectionContext(
io,
va,
portfolio_dir,
pf_data.summary.allocations,
pf_data.summary.total_value,
portfolio.totalCash(),
portfolio.totalCdFaceValue(),
portfolio.totalCash(as_of),
portfolio.totalCdFaceValue(as_of),
svc,
events_enabled,
as_of,
);
}
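The `as_of: Date` + `from_snapshot: bool` pair above replaces the old `as_of: ?Date`: the caller resolves the `--as-of` flag once at the boundary, and `run` just operates on whatever date it is given. A minimal sketch of that boundary resolution (hypothetical names, plain day counts for brevity):

```zig
const std = @import("std");

const Resolved = struct { as_of: i32, from_snapshot: bool };

// If --as-of was given, use it and read from snapshot history;
// otherwise operate on the live portfolio as of today.
fn resolveAsOfFlag(flag: ?i32, today: i32) Resolved {
    return if (flag) |d|
        .{ .as_of = d, .from_snapshot = true }
    else
        .{ .as_of = today, .from_snapshot = false };
}

test "flag present selects snapshot mode; absent selects live" {
    const with = resolveAsOfFlag(100, 200);
    try std.testing.expect(with.from_snapshot and with.as_of == 100);
    const without = resolveAsOfFlag(null, 200);
    try std.testing.expect(!without.from_snapshot and without.as_of == 200);
}
```

Downstream code never sees an optional; the `if (as_of) |requested_date|` branch becomes `if (from_snapshot)` with `as_of` always valid.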
@ -280,14 +293,14 @@ pub fn run(
try cli.printFg(out, color, cli.CLR_MUTED, "{s}\n", .{wr_rows.rate.text});
}
// Life events summary: as-of mode uses ages-as-of-as_of; live
// mode uses current ages. `currentAgesAsOf(today)` returns the
// current ages, so this unifies both paths.
// Life events summary: both as-of and live modes resolve ages
// against the reference date (`resolution.actual` if a snapshot
// was loaded, otherwise `as_of` directly).
{
const events = ctx.config.getEvents();
if (events.len > 0) {
const ages_ref_date = if (resolution) |r| r.actual else fmt.todayDate();
const ages = ctx.config.currentAgesAsOf(ages_ref_date);
const ages_ref_date = if (resolution) |r| r.actual else as_of;
const ages = ctx.config.currentAges(ages_ref_date);
try out.print("\n", .{});
try cli.printBold(out, color, "Life Events\n", .{});
for (events) |*ev| {
@ -336,6 +349,7 @@ fn extractKeyMetrics(ctx: view.ProjectionContext) KeyMetrics {
/// returned context because allocations borrow symbol strings from
/// the snapshot's backing buffer.
fn loadAsOfContext(
io: std.Io,
allocator: std.mem.Allocator,
va: std.mem.Allocator,
svc: *zfin.DataService,
@ -346,10 +360,11 @@ fn loadAsOfContext(
resolution_out: *AsOfResolution,
snap_bundle_out: *history.LoadedSnapshot,
) !view.ProjectionContext {
resolution_out.* = resolveAsOfSnapshot(va, file_path, requested_date) catch |err| return err;
resolution_out.* = resolveAsOfSnapshot(io, va, file_path, requested_date) catch |err| return err;
const hist_dir = try history.deriveHistoryDir(va, file_path);
snap_bundle_out.* = try history.loadSnapshotAt(allocator, hist_dir, resolution_out.actual);
snap_bundle_out.* = try history.loadSnapshotAt(io, allocator, hist_dir, resolution_out.actual);
return try view.loadProjectionContextAsOf(
io,
va,
portfolio_dir,
&snap_bundle_out.snap,
@ -360,22 +375,24 @@ fn loadAsOfContext(
}
/// `--vs <DATE>` entry point: compare two projections side-by-side
/// with deltas. By default `now` is the live portfolio; when
/// `as_of_now` is non-null, `now` is also a historical snapshot,
/// letting the caller compare any two points in time without
/// intermediate arithmetic.
/// with deltas. The "then" side is always a historical snapshot at
/// `vs_date`; the "now" side is either another historical snapshot
/// (when `now_from_snapshot` is true) or the live portfolio at
/// `now_date`.
///
/// Target audience is the weekly review email's header: the
/// "Projected Return" and "1st Year Withdrawal" rows with Δ columns.
/// For the full benchmark table / SWR grid / percentile bands, run
/// `zfin projections` and `zfin projections --as-of <DATE>` separately.
pub fn runCompare(
io: std.Io,
allocator: std.mem.Allocator,
svc: *zfin.DataService,
file_path: []const u8,
events_enabled: bool,
vs_date: Date,
as_of_now: ?Date,
now_date: Date,
now_from_snapshot: bool,
color: bool,
out: *std.Io.Writer,
) !void {
@ -383,7 +400,7 @@ pub fn runCompare(
defer arena_state.deinit();
const va = arena_state.allocator();
const result = computeKeyComparison(allocator, va, svc, file_path, events_enabled, vs_date, as_of_now) catch |err| switch (err) {
const result = computeKeyComparison(io, allocator, va, svc, file_path, events_enabled, vs_date, now_date, now_from_snapshot) catch |err| switch (err) {
error.NoSnapshot, error.PortfolioLoadFailed => return,
else => return err,
};
@ -397,7 +414,7 @@ pub fn runCompare(
const days_between = if (result.now_resolution) |nr|
nr.actual.days - result.resolution.actual.days
else
fmt.todayDate().days - result.resolution.actual.days;
now_date.days - result.resolution.actual.days;
try cli.printBold(out, color, "Projections comparison: {s} → {s} ({d} day{s})\n", .{
then_str,
@ -445,9 +462,9 @@ pub fn runCompare(
/// rendering, plus the snapshot resolutions for header rendering.
/// Caller must invoke `cleanup()` to release retained snapshots.
///
/// When `as_of_now` is null, the "now" side is the live portfolio.
/// When set, it's loaded as a snapshot; the function then retains
/// two snapshot bundles, so both must be cleaned up.
/// When `now_from_snapshot` is false (live mode), only `retained_then`
/// is populated. When true, both snapshots are retained and must be
/// cleaned up via `cleanup()`.
pub const KeyComparisonResult = struct {
then: KeyMetrics,
now: KeyMetrics,
@ -467,13 +484,15 @@ pub const KeyComparisonResult = struct {
};
pub fn computeKeyComparison(
io: std.Io,
allocator: std.mem.Allocator,
va: std.mem.Allocator,
svc: *zfin.DataService,
file_path: []const u8,
events_enabled: bool,
vs_date: Date,
as_of_now: ?Date,
now_date: Date,
now_from_snapshot: bool,
) !KeyComparisonResult {
const dir_end = if (std.mem.lastIndexOfScalar(u8, file_path, std.fs.path.sep)) |idx| idx + 1 else 0;
const portfolio_dir = file_path[0..dir_end];
@ -483,6 +502,7 @@ pub fn computeKeyComparison(
var then_resolution: AsOfResolution = undefined;
var then_snap: history.LoadedSnapshot = undefined;
const then_ctx = try loadAsOfContext(
io,
allocator,
va,
svc,
@ -495,10 +515,11 @@ pub fn computeKeyComparison(
);
// Now side either another snapshot or the live portfolio.
if (as_of_now) |now_date| {
if (now_from_snapshot) {
var now_resolution: AsOfResolution = undefined;
var now_snap: history.LoadedSnapshot = undefined;
const now_ctx = loadAsOfContext(
io,
allocator,
va,
svc,
@ -525,7 +546,7 @@ pub fn computeKeyComparison(
}
// Live "now" side mirrors `run()`'s live path.
var loaded = cli.loadPortfolio(allocator, file_path) orelse {
var loaded = cli.loadPortfolio(io, allocator, file_path, now_date) orelse {
then_snap.deinit(allocator);
return error.PortfolioLoadFailed;
};
@ -543,10 +564,10 @@ pub fn computeKeyComparison(
}
}
var pf_data = cli.buildPortfolioData(allocator, loaded.portfolio, loaded.positions, loaded.syms, &prices, svc) catch |err| switch (err) {
var pf_data = cli.buildPortfolioData(allocator, loaded.portfolio, loaded.positions, loaded.syms, &prices, svc, now_date) catch |err| switch (err) {
error.NoAllocations, error.SummaryFailed => {
then_snap.deinit(allocator);
try cli.stderrPrint("Error computing portfolio summary.\n");
try cli.stderrPrint(io, "Error computing portfolio summary.\n");
return error.PortfolioLoadFailed;
},
else => {
@ -557,14 +578,16 @@ pub fn computeKeyComparison(
defer pf_data.deinit(allocator);
const now_ctx = try view.loadProjectionContext(
io,
va,
portfolio_dir,
pf_data.summary.allocations,
pf_data.summary.total_value,
loaded.portfolio.totalCash(),
loaded.portfolio.totalCdFaceValue(),
loaded.portfolio.totalCash(now_date),
loaded.portfolio.totalCdFaceValue(now_date),
svc,
events_enabled,
now_date,
);
return .{
@ -635,18 +658,19 @@ fn renderCompareRowMoney(out: *std.Io.Writer, color: bool, label: []const u8, th
/// Arena-allocates the intermediate `hist_dir` + filename strings;
/// pass a short-lived arena as `va`.
fn resolveAsOfSnapshot(
io: std.Io,
va: std.mem.Allocator,
file_path: []const u8,
requested: Date,
) !AsOfResolution {
const hist_dir = try history.deriveHistoryDir(va, file_path);
const resolved = cli.resolveSnapshotOrExplain(va, hist_dir, requested) catch |err| switch (err) {
const resolved = cli.resolveSnapshotOrExplain(io, va, hist_dir, requested) catch |err| switch (err) {
error.NoSnapshotAtOrBefore => return error.NoSnapshot,
else => |e| {
try cli.stderrPrint("Error resolving snapshot: ");
try cli.stderrPrint(@errorName(e));
try cli.stderrPrint("\n");
try cli.stderrPrint(io, "Error resolving snapshot: ");
try cli.stderrPrint(io, @errorName(e));
try cli.stderrPrint(io, "\n");
return error.NoSnapshot;
},
};
@ -688,11 +712,12 @@ const snapshot = @import("snapshot.zig");
fn makeTestSvc() zfin.DataService {
const config = zfin.Config{ .cache_dir = "/tmp" };
return zfin.DataService.init(testing.allocator, config);
return zfin.DataService.init(std.testing.io, testing.allocator, config);
}
fn writeFixtureSnapshot(
dir: std.fs.Dir,
io: std.Io,
dir: std.Io.Dir,
allocator: std.mem.Allocator,
filename: []const u8,
as_of: Date,
@ -734,100 +759,105 @@ fn writeFixtureSnapshot(
};
const rendered = try snapshot.renderSnapshot(allocator, snap);
defer allocator.free(rendered);
try dir.writeFile(.{ .sub_path = filename, .data = rendered });
try dir.writeFile(io, .{ .sub_path = filename, .data = rendered });
}
/// Build a portfolio path inside `tmp` and return the joined string.
/// Caller owns the returned buffer.
fn makeTestPortfolioPath(tmp: *std.testing.TmpDir, allocator: std.mem.Allocator) ![]u8 {
const dir_path = try tmp.dir.realpathAlloc(allocator, ".");
fn makeTestPortfolioPath(io: std.Io, tmp: *std.testing.TmpDir, allocator: std.mem.Allocator) ![]u8 {
const dir_path = try tmp.dir.realPathFileAlloc(io, ".", allocator);
defer allocator.free(dir_path);
return std.fs.path.join(allocator, &.{ dir_path, "portfolio.srf" });
}
test "resolveAsOfSnapshot: exact match returns actual == requested" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
try tmp.dir.makePath("history");
var hist_dir = try tmp.dir.openDir("history", .{});
defer hist_dir.close();
try tmp.dir.createDirPath(io, "history");
var hist_dir = try tmp.dir.openDir(io, "history", .{});
defer hist_dir.close(io);
const d = Date.fromYmd(2026, 3, 13);
try writeFixtureSnapshot(hist_dir, testing.allocator, "2026-03-13-portfolio.srf", d, 1_000_000);
try writeFixtureSnapshot(io, hist_dir, testing.allocator, "2026-03-13-portfolio.srf", d, 1_000_000);
const pf = try makeTestPortfolioPath(&tmp, testing.allocator);
const pf = try makeTestPortfolioPath(io, &tmp, testing.allocator);
defer testing.allocator.free(pf);
var arena = std.heap.ArenaAllocator.init(testing.allocator);
defer arena.deinit();
const res = try resolveAsOfSnapshot(arena.allocator(), pf, d);
const res = try resolveAsOfSnapshot(std.testing.io, arena.allocator(), pf, d);
try testing.expect(res.actual.eql(d));
try testing.expect(res.requested.eql(d));
}
test "resolveAsOfSnapshot: no exact match snaps to earlier" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
try tmp.dir.makePath("history");
var hist_dir = try tmp.dir.openDir("history", .{});
defer hist_dir.close();
try tmp.dir.createDirPath(io, "history");
var hist_dir = try tmp.dir.openDir(io, "history", .{});
defer hist_dir.close(io);
const earlier = Date.fromYmd(2026, 3, 12);
try writeFixtureSnapshot(hist_dir, testing.allocator, "2026-03-12-portfolio.srf", earlier, 1_000_000);
try writeFixtureSnapshot(io, hist_dir, testing.allocator, "2026-03-12-portfolio.srf", earlier, 1_000_000);
const pf = try makeTestPortfolioPath(&tmp, testing.allocator);
const pf = try makeTestPortfolioPath(io, &tmp, testing.allocator);
defer testing.allocator.free(pf);
var arena = std.heap.ArenaAllocator.init(testing.allocator);
defer arena.deinit();
const requested = Date.fromYmd(2026, 3, 13);
const res = try resolveAsOfSnapshot(arena.allocator(), pf, requested);
const res = try resolveAsOfSnapshot(std.testing.io, arena.allocator(), pf, requested);
try testing.expect(res.actual.eql(earlier));
try testing.expect(res.requested.eql(requested));
try testing.expect(!res.actual.eql(res.requested));
}
test "resolveAsOfSnapshot: no earlier snapshot returns NoSnapshot" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
try tmp.dir.makePath("history");
var hist_dir = try tmp.dir.openDir("history", .{});
defer hist_dir.close();
try tmp.dir.createDirPath(io, "history");
var hist_dir = try tmp.dir.openDir(io, "history", .{});
defer hist_dir.close(io);
// Only a later snapshot exists; it can't satisfy an earlier request.
const later = Date.fromYmd(2026, 4, 1);
try writeFixtureSnapshot(hist_dir, testing.allocator, "2026-04-01-portfolio.srf", later, 1_000_000);
try writeFixtureSnapshot(io, hist_dir, testing.allocator, "2026-04-01-portfolio.srf", later, 1_000_000);
const pf = try makeTestPortfolioPath(&tmp, testing.allocator);
const pf = try makeTestPortfolioPath(io, &tmp, testing.allocator);
defer testing.allocator.free(pf);
var arena = std.heap.ArenaAllocator.init(testing.allocator);
defer arena.deinit();
const requested = Date.fromYmd(2026, 3, 13);
const result = resolveAsOfSnapshot(arena.allocator(), pf, requested);
const result = resolveAsOfSnapshot(std.testing.io, arena.allocator(), pf, requested);
try testing.expectError(error.NoSnapshot, result);
}
test "resolveAsOfSnapshot: empty history dir returns NoSnapshot" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
try tmp.dir.makePath("history");
try tmp.dir.createDirPath(io, "history");
const pf = try makeTestPortfolioPath(&tmp, testing.allocator);
const pf = try makeTestPortfolioPath(io, &tmp, testing.allocator);
defer testing.allocator.free(pf);
var arena = std.heap.ArenaAllocator.init(testing.allocator);
defer arena.deinit();
const requested = Date.fromYmd(2026, 3, 13);
const result = resolveAsOfSnapshot(arena.allocator(), pf, requested);
const result = resolveAsOfSnapshot(std.testing.io, arena.allocator(), pf, requested);
try testing.expectError(error.NoSnapshot, result);
}
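The four resolution tests above pin down the at-or-before contract: exact match, snap to the nearest earlier snapshot, and `NoSnapshot` when nothing qualifies. Stripped of the history-dir scan and filename parsing, the core selection is roughly this (hypothetical standalone version on plain day counts):

```zig
const std = @import("std");

// Pick the latest snapshot date that is <= the requested date,
// or null if every snapshot is strictly later (NoSnapshot).
fn resolveAtOrBefore(snapshot_days: []const i32, requested: i32) ?i32 {
    var best: ?i32 = null;
    for (snapshot_days) |d| {
        if (d > requested) continue; // later snapshots never qualify
        if (best == null or d > best.?) best = d;
    }
    return best;
}

test "exact match, snap-to-earlier, and no-candidate cases" {
    const snaps = [_]i32{ 10, 20, 40 };
    try std.testing.expectEqual(@as(?i32, 20), resolveAtOrBefore(&snaps, 20));
    try std.testing.expectEqual(@as(?i32, 20), resolveAtOrBefore(&snaps, 30));
    try std.testing.expectEqual(@as(?i32, null), resolveAtOrBefore(&snaps, 5));
}
```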
test "run: as_of with no snapshots returns without error (stderr-only)" {
const io = std.testing.io;
// No history dir at all. `run` prints a stderr hint via
// `resolveAsOfSnapshot` and returns; it should NOT propagate the
// error to the caller (exit code stays 0 from the CLI dispatch).
@@ -836,14 +866,14 @@ test "run: as_of with no snapshots returns without error (stderr-only)" {
var svc = makeTestSvc();
defer svc.deinit();
const pf = try makeTestPortfolioPath(&tmp, testing.allocator);
const pf = try makeTestPortfolioPath(io, &tmp, testing.allocator);
defer testing.allocator.free(pf);
var buf: [4096]u8 = undefined;
var stream = std.Io.Writer.fixed(&buf);
const d = Date.fromYmd(2026, 3, 13);
try run(testing.allocator, &svc, pf, false, d, false, &stream);
try run(io, testing.allocator, &svc, pf, false, d, true, false, &stream);
// No body output: because the resolution failed, the stderr
// message is swallowed by `cli.stderrPrint` and doesn't land in
@@ -853,6 +883,7 @@ test "run: as_of with no snapshots returns without error (stderr-only)" {
}
test "run: as_of with matching snapshot produces body output" {
const io = std.testing.io;
// End-to-end smoke test. With no cached candles, benchmark rows
// will be `--` and portfolio returns will be empty, but the
// rendering pipeline should still produce a complete header +
@@ -862,19 +893,19 @@ test "run: as_of with matching snapshot produces body output" {
var svc = makeTestSvc();
defer svc.deinit();
try tmp.dir.makePath("history");
var hist_dir = try tmp.dir.openDir("history", .{});
defer hist_dir.close();
try tmp.dir.createDirPath(io, "history");
var hist_dir = try tmp.dir.openDir(io, "history", .{});
defer hist_dir.close(io);
const d = Date.fromYmd(2026, 3, 13);
try writeFixtureSnapshot(hist_dir, testing.allocator, "2026-03-13-portfolio.srf", d, 1_000_000);
try writeFixtureSnapshot(io, hist_dir, testing.allocator, "2026-03-13-portfolio.srf", d, 1_000_000);
const pf = try makeTestPortfolioPath(&tmp, testing.allocator);
const pf = try makeTestPortfolioPath(io, &tmp, testing.allocator);
defer testing.allocator.free(pf);
var buf: [32_768]u8 = undefined;
var stream = std.Io.Writer.fixed(&buf);
try run(testing.allocator, &svc, pf, false, d, false, &stream);
try run(io, testing.allocator, &svc, pf, false, d, true, false, &stream);
const out = stream.buffered();
// Header should call out the as-of date explicitly.
@@ -885,26 +916,27 @@ test "run: as_of with matching snapshot produces body output" {
}
test "run: as_of auto-snap surfaces muted 'nearest' note" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
var svc = makeTestSvc();
defer svc.deinit();
try tmp.dir.makePath("history");
var hist_dir = try tmp.dir.openDir("history", .{});
defer hist_dir.close();
try tmp.dir.createDirPath(io, "history");
var hist_dir = try tmp.dir.openDir(io, "history", .{});
defer hist_dir.close(io);
const actual = Date.fromYmd(2026, 3, 12);
try writeFixtureSnapshot(hist_dir, testing.allocator, "2026-03-12-portfolio.srf", actual, 1_000_000);
try writeFixtureSnapshot(io, hist_dir, testing.allocator, "2026-03-12-portfolio.srf", actual, 1_000_000);
const pf = try makeTestPortfolioPath(&tmp, testing.allocator);
const pf = try makeTestPortfolioPath(io, &tmp, testing.allocator);
defer testing.allocator.free(pf);
var buf: [32_768]u8 = undefined;
var stream = std.Io.Writer.fixed(&buf);
const requested = Date.fromYmd(2026, 3, 13);
try run(testing.allocator, &svc, pf, false, requested, false, &stream);
try run(io, testing.allocator, &svc, pf, false, requested, true, false, &stream);
const out = stream.buffered();
try testing.expect(std.mem.indexOf(u8, out, "as of 2026-03-12") != null);
@@ -913,3 +945,50 @@ test "run: as_of auto-snap surfaces muted 'nearest' note" {
// 1 day earlier: singular "day", not "days"
try testing.expect(std.mem.indexOf(u8, out, "1 day earlier") != null);
}
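The as_of plumbing these tests exercise follows one rule: resolve "what date is it" exactly once at the CLI boundary, then hand every downstream function a plain `as_of: Date`. A minimal sketch of that split, using a simplified stand-in `Date` (the real type lives in models/date.zig and is richer than this):

```zig
const std = @import("std");

// Illustrative stand-in for the codebase's Date type.
const Date = struct {
    days: i32, // days since the Unix epoch

    fn fromEpoch(now_s: i64) Date {
        return .{ .days = @intCast(@divFloor(now_s, std.time.s_per_day)) };
    }
};

// Boundary: an explicit --as-of flag wins; otherwise derive the date
// from an injected now_s rather than reading the clock here.
fn resolveAsOf(flag: ?Date, now_s: i64) Date {
    return flag orelse Date.fromEpoch(now_s);
}

// Below the boundary everything is pure date math on `as_of`.
fn daysHeld(open: Date, as_of: Date) i32 {
    return as_of.days - open.days;
}
```

With the clock read hoisted out, tests can pin the date (as the fixtures above do with `Date.fromYmd(2026, 3, 13)`) instead of racing the wall clock.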
test "renderCompareRowPct: positive delta renders with + sign" {
var buf: [256]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
try renderCompareRowPct(&w, false, "Stocks", 0.50, 0.65);
const out = w.buffered();
try testing.expect(std.mem.indexOf(u8, out, "Stocks") != null);
try testing.expect(std.mem.indexOf(u8, out, "50.00%") != null);
try testing.expect(std.mem.indexOf(u8, out, "65.00%") != null);
try testing.expect(std.mem.indexOf(u8, out, "+15.00%") != null);
}
test "renderCompareRowPct: negative delta has no + sign" {
var buf: [256]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
try renderCompareRowPct(&w, false, "Bonds", 0.40, 0.30);
const out = w.buffered();
try testing.expect(std.mem.indexOf(u8, out, "40.00%") != null);
try testing.expect(std.mem.indexOf(u8, out, "30.00%") != null);
try testing.expect(std.mem.indexOf(u8, out, "-10.00%") != null);
}
test "renderCompareRowMoney: positive delta" {
var buf: [256]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
try renderCompareRowMoney(&w, false, "Net Worth", 100_000, 110_000);
const out = w.buffered();
try testing.expect(std.mem.indexOf(u8, out, "Net Worth") != null);
try testing.expect(std.mem.indexOf(u8, out, "$100,000") != null);
try testing.expect(std.mem.indexOf(u8, out, "$110,000") != null);
}
test "renderCompareRowMoney: zero delta" {
var buf: [256]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
try renderCompareRowMoney(&w, false, "Cash", 50_000, 50_000);
const out = w.buffered();
try testing.expect(std.mem.indexOf(u8, out, "Cash") != null);
try testing.expect(std.mem.indexOf(u8, out, "$50,000") != null);
}
test "renderCompareRowPct: no ANSI when color=false" {
var buf: [256]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
try renderCompareRowPct(&w, false, "X", 0.1, 0.2);
try testing.expect(std.mem.indexOf(u8, w.buffered(), "\x1b[") == null);
}

View file

@@ -14,15 +14,15 @@ pub const QuoteData = struct {
date: zfin.Date,
};
pub fn run(allocator: std.mem.Allocator, svc: *zfin.DataService, symbol: []const u8, color: bool, out: *std.Io.Writer) !void {
pub fn run(io: std.Io, allocator: std.mem.Allocator, svc: *zfin.DataService, symbol: []const u8, as_of: zfin.Date, color: bool, out: *std.Io.Writer) !void {
// Fetch candle data for chart and history
const candle_result = svc.getCandles(symbol) catch |err| switch (err) {
zfin.DataError.NoApiKey => {
try cli.stderrPrint("Error: No API key configured for candle data.\n");
try cli.stderrPrint(io, "Error: No API key configured for candle data.\n");
return;
},
else => {
try cli.stderrPrint("Error fetching candle data.\n");
try cli.stderrPrint(io, "Error fetching candle data.\n");
return;
},
};
@@ -39,14 +39,14 @@ pub fn run(allocator: std.mem.Allocator, svc: *zfin.DataService, symbol: []const
.low = q.low,
.volume = q.volume,
.prev_close = q.previous_close,
.date = if (candles.len > 0) candles[candles.len - 1].date else fmt.todayDate(),
.date = if (candles.len > 0) candles[candles.len - 1].date else as_of,
};
} else |_| {}
try display(allocator, candles, quote, symbol, color, out);
try display(allocator, candles, quote, symbol, as_of, color, out);
}
pub fn display(allocator: std.mem.Allocator, candles: []const zfin.Candle, quote: ?QuoteData, symbol: []const u8, color: bool, out: *std.Io.Writer) !void {
pub fn display(allocator: std.mem.Allocator, candles: []const zfin.Candle, quote: ?QuoteData, symbol: []const u8, as_of: zfin.Date, color: bool, out: *std.Io.Writer) !void {
const has_quote = quote != null;
// Header
@@ -68,7 +68,7 @@ pub fn display(allocator: std.mem.Allocator, candles: []const zfin.Candle, quote
const prev_close = if (quote) |q| q.prev_close else if (candles.len >= 2) candles[candles.len - 2].close else @as(f64, 0);
if (candles.len > 0 or has_quote) {
const latest_date = if (quote) |q| q.date else if (candles.len > 0) candles[candles.len - 1].date else fmt.todayDate();
const latest_date = if (quote) |q| q.date else if (candles.len > 0) candles[candles.len - 1].date else as_of;
const open_val = if (quote) |q| q.open else if (candles.len > 0) candles[candles.len - 1].open else @as(f64, 0);
const high_val = if (quote) |q| q.high else if (candles.len > 0) candles[candles.len - 1].high else @as(f64, 0);
const low_val = if (quote) |q| q.low else if (candles.len > 0) candles[candles.len - 1].low else @as(f64, 0);
@@ -127,13 +127,11 @@ pub fn display(allocator: std.mem.Allocator, candles: []const zfin.Candle, quote
test "display with candles only" {
var buf: [8192]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
defer _ = gpa.deinit();
const candles = [_]zfin.Candle{
.{ .date = .{ .days = 20000 }, .open = 150.0, .high = 155.0, .low = 149.0, .close = 153.0, .adj_close = 153.0, .volume = 50_000_000 },
.{ .date = .{ .days = 20001 }, .open = 153.0, .high = 158.0, .low = 152.0, .close = 156.0, .adj_close = 156.0, .volume = 45_000_000 },
};
try display(gpa.allocator(), &candles, null, "AAPL", false, &w);
try display(std.testing.allocator, &candles, null, "AAPL", zfin.Date.fromYmd(2026, 5, 8), false, &w);
const out = w.buffered();
try std.testing.expect(std.mem.indexOf(u8, out, "AAPL") != null);
try std.testing.expect(std.mem.indexOf(u8, out, "(close)") != null);
@@ -144,8 +142,6 @@ test "display with candles only" {
test "display with quote data" {
var buf: [8192]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
defer _ = gpa.deinit();
const candles = [_]zfin.Candle{};
const quote: QuoteData = .{
.price = 175.50,
@@ -156,7 +152,7 @@ test "display with quote data" {
.prev_close = 172.00,
.date = .{ .days = 20001 },
};
try display(gpa.allocator(), &candles, quote, "AAPL", false, &w);
try display(std.testing.allocator, &candles, quote, "AAPL", zfin.Date.fromYmd(2026, 5, 8), false, &w);
const out = w.buffered();
try std.testing.expect(std.mem.indexOf(u8, out, "AAPL") != null);
try std.testing.expect(std.mem.indexOf(u8, out, "Change") != null);
@@ -167,12 +163,10 @@ test "display with quote data" {
test "display no ANSI without color" {
var buf: [8192]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
defer _ = gpa.deinit();
const candles = [_]zfin.Candle{
.{ .date = .{ .days = 20000 }, .open = 100.0, .high = 105.0, .low = 99.0, .close = 103.0, .adj_close = 103.0, .volume = 1_000_000 },
};
try display(gpa.allocator(), &candles, null, "SPY", false, &w);
try display(std.testing.allocator, &candles, null, "SPY", zfin.Date.fromYmd(2026, 5, 8), false, &w);
const out = w.buffered();
try std.testing.expect(std.mem.indexOf(u8, out, "\x1b[") == null);
}
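The signature churn in this file is the point of the refactor: anything that touches the outside world now takes `io: std.Io` up front, so effectful calls are visible at the call site. A rough sketch of the convention; the `std.Io.Dir` call mirrors usages elsewhere in this commit and should be read as an assumption about the 0.16.0 surface, not documentation of it:

```zig
const std = @import("std");

// Pure helper: no io parameter, no side effects, trivially testable.
fn pctChange(prev: f64, cur: f64) f64 {
    return if (prev == 0) 0 else (cur - prev) / prev * 100.0;
}

// Effectful helper: the leading `io` announces the filesystem write.
// Shape mirrors the `dir.writeFile(io, .{ ... })` calls in this diff.
fn writeReport(io: std.Io, path: []const u8, body: []const u8) !void {
    try std.Io.Dir.cwd().writeFile(io, .{ .sub_path = path, .data = body });
}
```

The smell is the feature: grepping for `io` now finds every function that can touch the filesystem, network, or terminal.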

View file

@@ -33,6 +33,7 @@ const std = @import("std");
const srf = @import("srf");
const zfin = @import("../root.zig");
const cli = @import("common.zig");
const fmt = @import("../format.zig");
const atomic = @import("../atomic.zig");
const version = @import("../version.zig");
const portfolio_mod = @import("../models/portfolio.zig");
@@ -68,10 +69,12 @@ pub const SnapshotError = error{
/// 0 on success (including duplicate-skip)
/// non-zero on any error
pub fn run(
io: std.Io,
allocator: std.mem.Allocator,
svc: *zfin.DataService,
portfolio_path: []const u8,
args: []const []const u8,
now_s: i64,
color: bool,
out: *std.Io.Writer,
) !void {
@@ -90,23 +93,26 @@ pub fn run(
} else if (std.mem.eql(u8, a, "--out")) {
i += 1;
if (i >= args.len) {
try cli.stderrPrint("Error: --out requires a path argument\n");
try cli.stderrPrint(io, "Error: --out requires a path argument\n");
return error.UnexpectedArg;
}
out_override = args[i];
} else if (std.mem.eql(u8, a, "--as-of")) {
i += 1;
if (i >= args.len) {
try cli.stderrPrint("Error: --as-of requires a date (YYYY-MM-DD or shortcut like 1W/1M/1Q/1Y)\n");
try cli.stderrPrint(io, "Error: --as-of requires a date (YYYY-MM-DD or shortcut like 1W/1M/1Q/1Y)\n");
return error.UnexpectedArg;
}
as_of_override = cli.parseRequiredDateOrStderr(args[i], cli.fmt.todayDate(), "--as-of") catch |err| switch (err) {
// Reference date for resolving relative forms in `--as-of`
// (e.g. "1W" 7 days before this anchor).
const flag_anchor = Date.fromEpoch(now_s);
as_of_override = cli.parseRequiredDateOrStderr(io, args[i], flag_anchor, "--as-of") catch |err| switch (err) {
error.InvalidDate => return error.UnexpectedArg,
};
} else {
try cli.stderrPrint("Error: unknown argument to 'snapshot': ");
try cli.stderrPrint(a);
try cli.stderrPrint("\n");
try cli.stderrPrint(io, "Error: unknown argument to 'snapshot': ");
try cli.stderrPrint(io, a);
try cli.stderrPrint(io, "\n");
return error.UnexpectedArg;
}
}
@@ -119,17 +125,17 @@ pub fn run(
// date, git unavailable), we warn and fall back to the working copy,
// which at least approximates "positions the user currently holds"
// and is better than erroring out.
const pf_data = try loadPortfolioAtDate(allocator, portfolio_path, as_of_override);
const pf_data = try loadPortfolioAtDate(io, allocator, portfolio_path, as_of_override);
defer allocator.free(pf_data);
var portfolio = zfin.cache.deserializePortfolio(allocator, pf_data) catch {
try cli.stderrPrint("Error parsing portfolio file.\n");
try cli.stderrPrint(io, "Error parsing portfolio file.\n");
return error.WriteFailed;
};
defer portfolio.deinit();
if (portfolio.lots.len == 0) {
try cli.stderrPrint("Portfolio is empty; nothing to snapshot.\n");
try cli.stderrPrint(io, "Portfolio is empty; nothing to snapshot.\n");
return SnapshotError.PortfolioEmpty;
}
@@ -155,14 +161,14 @@ pub fn run(
const cand_str = candidate.format(&cand_buf);
const candidate_path = try deriveSnapshotPath(allocator, portfolio_path, cand_str);
defer allocator.free(candidate_path);
if (std.fs.cwd().access(candidate_path, .{})) |_| {
if (std.Io.Dir.cwd().access(io, candidate_path, .{})) |_| {
var msg_buf: [256]u8 = undefined;
const msg = std.fmt.bufPrint(
&msg_buf,
"snapshot for {s} already exists: {s} (cache fresh, skipped without refresh)\n",
.{ cand_str, candidate_path },
) catch "snapshot already exists\n";
try cli.stderrPrint(msg);
try cli.stderrPrint(io, msg);
if (!dry_run) return;
// --dry-run falls through: the user probably wants to see
// what would be written.
@@ -188,7 +194,7 @@ pub fn run(
// The duplicate-skip fast path above already handled the common
// "cache is fresh, snapshot exists" case without any of this work.
if (syms.len > 0 and as_of_override == null) {
var load_result = cli.loadPortfolioPrices(svc, syms, &.{}, false, color);
var load_result = cli.loadPortfolioPrices(io, svc, syms, &.{}, false, color);
load_result.deinit();
}
@@ -198,7 +204,7 @@ pub fn run(
// dollar impact is nil, but they'd pollute the mode calculation).
const qdates = try collectQuoteDates(allocator, svc, syms);
defer allocator.free(qdates.dates);
const as_of = as_of_override orelse (computeAsOfDate(qdates.dates) orelse Date.fromEpoch(std.time.timestamp()));
const as_of = as_of_override orelse (computeAsOfDate(qdates.dates) orelse Date.fromEpoch(now_s));
// Under --as-of, skip days with no market activity (weekends, US
// market holidays). Detection is cache-based: if NO non-MM symbol
@@ -217,7 +223,7 @@ pub fn run(
"skipping {s}: no market data (weekend or holiday)\n",
.{as_of.format(&date_buf)},
) catch "skipping non-trading day\n";
try cli.stderrPrint(msg);
try cli.stderrPrint(io, msg);
return;
}
@@ -268,16 +274,16 @@ pub fn run(
// Duplicate-skip check.
if (!force and !dry_run) {
if (std.fs.cwd().access(derived_path, .{})) |_| {
if (std.Io.Dir.cwd().access(io, derived_path, .{})) |_| {
var msg_buf: [256]u8 = undefined;
const msg = std.fmt.bufPrint(&msg_buf, "snapshot for {s} already exists: {s} (use --force to overwrite)\n", .{ as_of_str, derived_path }) catch "snapshot already exists\n";
try cli.stderrPrint(msg);
try cli.stderrPrint(io, msg);
return;
} else |_| {}
}
// Build and render the snapshot.
var snap = try captureSnapshot(allocator, &portfolio, portfolio_path, svc, prices, symbol_prices, syms, as_of, qdates);
var snap = try captureSnapshot(io, allocator, &portfolio, portfolio_path, svc, prices, symbol_prices, syms, as_of, qdates, now_s);
defer snap.deinit(allocator);
const rendered = try renderSnapshot(allocator, snap);
@@ -290,21 +296,21 @@ pub fn run(
// Ensure history/ directory exists.
if (std.fs.path.dirname(derived_path)) |dir| {
std.fs.cwd().makePath(dir) catch |err| switch (err) {
std.Io.Dir.cwd().createDirPath(io, dir) catch |err| switch (err) {
error.PathAlreadyExists => {},
else => {
try cli.stderrPrint("Error creating history directory: ");
try cli.stderrPrint(@errorName(err));
try cli.stderrPrint("\n");
try cli.stderrPrint(io, "Error creating history directory: ");
try cli.stderrPrint(io, @errorName(err));
try cli.stderrPrint(io, "\n");
return err;
},
};
}
atomic.writeFileAtomic(allocator, derived_path, rendered) catch |err| {
try cli.stderrPrint("Error writing snapshot: ");
try cli.stderrPrint(@errorName(err));
try cli.stderrPrint("\n");
atomic.writeFileAtomic(io, allocator, derived_path, rendered) catch |err| {
try cli.stderrPrint(io, "Error writing snapshot: ");
try cli.stderrPrint(io, @errorName(err));
try cli.stderrPrint(io, "\n");
return err;
};
@@ -347,22 +353,23 @@ pub fn deriveSnapshotPath(
///
/// Caller owns returned bytes.
fn loadPortfolioAtDate(
io: std.Io,
allocator: std.mem.Allocator,
portfolio_path: []const u8,
as_of: ?Date,
) ![]const u8 {
const target = as_of orelse {
// Normal mode: just read the file.
return std.fs.cwd().readFileAlloc(allocator, portfolio_path, 10 * 1024 * 1024) catch |err| {
try cli.stderrPrint("Error reading portfolio file: ");
try cli.stderrPrint(@errorName(err));
try cli.stderrPrint("\n");
return std.Io.Dir.cwd().readFileAlloc(io, portfolio_path, allocator, .limited(10 * 1024 * 1024)) catch |err| {
try cli.stderrPrint(io, "Error reading portfolio file: ");
try cli.stderrPrint(io, @errorName(err));
try cli.stderrPrint(io, "\n");
return err;
};
};
// Try git first.
if (loadPortfolioFromGit(allocator, portfolio_path, target)) |bytes| return bytes else |err| switch (err) {
if (loadPortfolioFromGit(io, allocator, portfolio_path, target)) |bytes| return bytes else |err| switch (err) {
error.NotInGitRepo, error.GitUnavailable, error.PathMissingInRev, error.UnknownRevision, error.NoCommitBeforeDate => {
// Fall through to working-copy fallback below.
var date_buf: [10]u8 = undefined;
@@ -372,15 +379,15 @@ fn loadPortfolioAtDate(
"warning: no git history for portfolio at {s}; using working copy as approximation\n",
.{target.format(&date_buf)},
) catch "warning: no git history for portfolio at requested date\n";
try cli.stderrPrint(msg);
try cli.stderrPrint(io, msg);
},
else => |e| return e,
}
return std.fs.cwd().readFileAlloc(allocator, portfolio_path, 10 * 1024 * 1024) catch |err| {
try cli.stderrPrint("Error reading portfolio file: ");
try cli.stderrPrint(@errorName(err));
try cli.stderrPrint("\n");
return std.Io.Dir.cwd().readFileAlloc(io, portfolio_path, allocator, .limited(10 * 1024 * 1024)) catch |err| {
try cli.stderrPrint(io, "Error reading portfolio file: ");
try cli.stderrPrint(io, @errorName(err));
try cli.stderrPrint(io, "\n");
return err;
};
}
@@ -390,16 +397,17 @@ fn loadPortfolioAtDate(
/// failure mode so `loadPortfolioAtDate` can decide whether to fall
/// back.
fn loadPortfolioFromGit(
io: std.Io,
allocator: std.mem.Allocator,
portfolio_path: []const u8,
target: Date,
) ![]const u8 {
const info = try git.findRepo(allocator, portfolio_path);
const info = try git.findRepo(io, allocator, portfolio_path);
defer allocator.free(info.root);
defer allocator.free(info.rel_path);
// List all commits that touched this path, newest-first.
const commits = try git.listCommitsTouching(allocator, info.root, info.rel_path, null);
const commits = try git.listCommitsTouching(io, allocator, info.root, info.rel_path, null);
defer git.freeCommitTouches(allocator, commits);
if (commits.len == 0) return error.PathMissingInRev;
@@ -417,7 +425,7 @@ fn loadPortfolioFromGit(
};
const chosen = sha orelse return error.NoCommitBeforeDate;
return try git.show(allocator, info.root, chosen, info.rel_path);
return try git.show(io, allocator, info.root, chosen, info.rel_path);
}
// Quote-date / as_of_date helpers
@@ -597,6 +605,7 @@ pub fn quoteDateRange(infos: []const QuoteInfo) ?struct { min: Date, max: Date }
/// call `buildSnapshot` directly with hand-built fixtures instead of
/// going through here.
fn captureSnapshot(
io: std.Io,
allocator: std.mem.Allocator,
portfolio: *zfin.Portfolio,
portfolio_path: []const u8,
@@ -606,6 +615,7 @@ fn captureSnapshot(
syms: []const []const u8,
as_of: Date,
qdates: QuoteDates,
now_s: i64,
) !Snapshot {
// Use `positionsAsOf(as_of)` rather than `positions()` so historical
// backfills correctly count lots that were held on `as_of`
@@ -616,7 +626,7 @@ fn captureSnapshot(
var manual_set = try zfin.valuation.buildFallbackPrices(allocator, portfolio.lots, positions, @constCast(&prices));
defer manual_set.deinit();
var summary = try zfin.valuation.portfolioSummary(allocator, portfolio.*, positions, prices, manual_set);
var summary = try zfin.valuation.portfolioSummary(as_of, allocator, portfolio.*, positions, prices, manual_set);
defer summary.deinit(allocator);
// Analysis is optional: metadata.srf may not exist during initial
@@ -624,6 +634,7 @@ fn captureSnapshot(
// null through to `buildSnapshot`, which emits empty
// tax_type/account sections.
var analysis_opt: ?zfin.analysis.AnalysisResult = runAnalysis(
io,
allocator,
portfolio,
portfolio_path,
@@ -644,6 +655,7 @@ fn captureSnapshot(
as_of,
qdates,
analysis_opt,
now_s,
);
}
@@ -675,6 +687,7 @@ fn buildSnapshot(
as_of: Date,
qdates: QuoteDates,
analysis_result: ?zfin.analysis.AnalysisResult,
now_s: i64,
) !Snapshot {
// `summary` and `manual_set` are caller-provided; see
// `captureSnapshot` for how they're assembled from
@@ -812,7 +825,7 @@ fn buildSnapshot(
.kind = "meta",
.snapshot_version = 1,
.as_of_date = as_of,
.captured_at = std.time.timestamp(),
.captured_at = now_s,
.zfin_version = version.version_string,
.quote_date_min = if (range) |r| r.min else null,
.quote_date_max = if (range) |r| r.max else null,
@@ -826,6 +839,7 @@
}
fn runAnalysis(
io: std.Io,
allocator: std.mem.Allocator,
portfolio: *zfin.Portfolio,
portfolio_path: []const u8,
@@ -837,7 +851,7 @@ fn runAnalysis(
const meta_path = try std.fmt.allocPrint(allocator, "{s}metadata.srf", .{portfolio_path[0..dir_end]});
defer allocator.free(meta_path);
const meta_data = std.fs.cwd().readFileAlloc(allocator, meta_path, 1024 * 1024) catch return error.NoMetadata;
const meta_data = std.Io.Dir.cwd().readFileAlloc(io, meta_path, allocator, .limited(1024 * 1024)) catch return error.NoMetadata;
defer allocator.free(meta_data);
var cm = zfin.classification.parseClassificationFile(allocator, meta_data) catch return error.BadMetadata;
@@ -1369,6 +1383,7 @@ test "buildSnapshot: price_ratio applied to live prices, skipped for manual" {
Date.fromYmd(2026, 4, 17),
qdates,
null, // no classification; tax_types/accounts empty
1_745_222_400,
);
defer snap.deinit(allocator);
@@ -1488,6 +1503,7 @@ test "buildSnapshot: stale carry-forward flagged on lot row" {
Date.fromYmd(2026, 4, 20),
qdates,
null,
1_745_222_400,
);
defer snap.deinit(allocator);

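`captured_at = now_s` and the pinned `1_745_222_400` in the fixtures above are two halves of one pattern: the wall clock is read once at the entry point and everything below takes `now_s: i64`. A small sketch of the payoff (the helper name is illustrative, not from the codebase):

```zig
const std = @import("std");

// Age math takes now_s as a parameter instead of calling
// std.time.timestamp(), so it is pure and deterministic under test.
fn ageDays(captured_at_s: i64, now_s: i64) i64 {
    return @divFloor(now_s - captured_at_s, std.time.s_per_day);
}

test "ageDays with a pinned now_s" {
    const captured: i64 = 1_745_222_400;
    const now = captured + 3 * std.time.s_per_day;
    try std.testing.expectEqual(@as(i64, 3), ageDays(captured, now));
}
```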
View file

@@ -3,20 +3,20 @@ const zfin = @import("../root.zig");
const cli = @import("common.zig");
const fmt = cli.fmt;
pub fn run(svc: *zfin.DataService, symbol: []const u8, color: bool, out: *std.Io.Writer) !void {
pub fn run(io: std.Io, svc: *zfin.DataService, symbol: []const u8, color: bool, out: *std.Io.Writer) !void {
const result = svc.getSplits(symbol) catch |err| switch (err) {
zfin.DataError.NoApiKey => {
try cli.stderrPrint("Error: POLYGON_API_KEY not set. Get a free key at https://polygon.io\n");
try cli.stderrPrint(io, "Error: POLYGON_API_KEY not set. Get a free key at https://polygon.io\n");
return;
},
else => {
try cli.stderrPrint("Error fetching split data.\n");
try cli.stderrPrint(io, "Error fetching split data.\n");
return;
},
};
defer result.deinit();
if (result.source == .cached) try cli.stderrPrint("(using cached split data)\n");
if (result.source == .cached) try cli.stderrPrint(io, "(using cached split data)\n");
try display(result.data, symbol, color, out);
}

View file

@@ -19,6 +19,7 @@ const Date = @import("../models/date.zig").Date;
/// `args` is the slice after `zfin version` (expects `--verbose`/`-v` or
/// nothing). Unknown args produce an error on stderr.
pub fn run(
io: std.Io,
config: zfin.Config,
args: []const []const u8,
out: *std.Io.Writer,
@@ -28,9 +29,9 @@ pub fn run(
if (std.mem.eql(u8, a, "--verbose") or std.mem.eql(u8, a, "-v")) {
verbose = true;
} else {
try cli.stderrPrint("Error: unknown argument to 'version': ");
try cli.stderrPrint(a);
try cli.stderrPrint("\n");
try cli.stderrPrint(io, "Error: unknown argument to 'version': ");
try cli.stderrPrint(io, a);
try cli.stderrPrint(io, "\n");
return error.UnexpectedArg;
}
}
@@ -98,7 +99,7 @@ test "run: no args prints single-line banner" {
var buf: [1024]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
const cfg = stubConfig(null, "/tmp/test-cache");
try run(cfg, &.{}, &w);
try run(std.testing.io, cfg, &.{}, &w);
const out = w.buffered();
// Banner shape: starts with "zfin ", contains " (built ", ends with newline.
@@ -121,7 +122,7 @@ test "run: --verbose includes all diagnostic fields" {
var w: std.Io.Writer = .fixed(&buf);
const cfg = stubConfig("/some/zfin/home", "/tmp/expected-cache-dir");
const args = [_][]const u8{"--verbose"};
try run(cfg, &args, &w);
try run(std.testing.io, cfg, &args, &w);
const out = w.buffered();
// Banner still first line
@@ -150,8 +151,8 @@ test "run: -v short form equivalent to --verbose" {
var w_short: std.Io.Writer = .fixed(&buf_short);
const cfg = stubConfig("/zhome", "/cache");
try run(cfg, &[_][]const u8{"--verbose"}, &w_long);
try run(cfg, &[_][]const u8{"-v"}, &w_short);
try run(std.testing.io, cfg, &[_][]const u8{"--verbose"}, &w_long);
try run(std.testing.io, cfg, &[_][]const u8{"-v"}, &w_short);
try std.testing.expectEqualStrings(w_long.buffered(), w_short.buffered());
}
@@ -162,7 +163,7 @@ test "run: unknown flag returns UnexpectedArg and writes nothing to out" {
const cfg = stubConfig(null, "/cache");
const args = [_][]const u8{"--bogus"};
try std.testing.expectError(error.UnexpectedArg, run(cfg, &args, &w));
try std.testing.expectError(error.UnexpectedArg, run(std.testing.io, cfg, &args, &w));
// The error path returns before any writing to `out`. Stderr output
// (the "unknown argument" line) goes through cli.stderrPrint directly

View file

@@ -67,11 +67,12 @@ pub const SnapshotSide = struct {
/// TUI only offers dates that are known to exist, so it should never
/// hit this path).
pub fn loadSnapshotSide(
io: std.Io,
allocator: std.mem.Allocator,
hist_dir: []const u8,
date: Date,
) !SnapshotSide {
var loaded = try history.loadSnapshotAt(allocator, hist_dir, date);
var loaded = try history.loadSnapshotAt(io, allocator, hist_dir, date);
errdefer loaded.deinit(allocator);
var map: view.HoldingMap = .init(allocator);
@@ -125,14 +126,14 @@ pub fn aggregateSnapshotStocks(
/// `priceSymbol()`). Caller must keep the portfolio alive as long as
/// the map is used.
pub fn aggregateLiveStocks(
as_of: zfin.Date,
portfolio: *const zfin.Portfolio,
prices: *const std.StringHashMap(f64),
out_map: *view.HoldingMap,
) !void {
const today = fmt.todayDate();
for (portfolio.lots) |lot| {
if (lot.security_type != .stock) continue;
if (!lot.lotIsOpenAsOf(today)) continue;
if (!lot.lotIsOpenAsOf(as_of)) continue;
const sym = lot.priceSymbol();
const raw_price = prices.get(sym) orelse continue;
const eff_price = lot.effectivePrice(raw_price, false);
@@ -227,6 +228,7 @@ test "aggregateSnapshotStocks: sums shares, filters non-stock, takes first price
}
test "loadSnapshotSide: happy path builds a SnapshotSide with aggregated holdings" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
@@ -241,12 +243,12 @@ test "loadSnapshotSide: happy path builds a SnapshotSide with aggregated holding
\\kind::lot,symbol::BAR,lot_symbol::BAR,account::Main,security_type::Stock,shares:num:50,open_price:num:180,cost_basis:num:9000,value:num:10000,price:num:200,quote_date::2024-03-15
\\
;
try tmp.dir.writeFile(.{ .sub_path = "2024-03-15-portfolio.srf", .data = snap_bytes });
try tmp.dir.writeFile(io, .{ .sub_path = "2024-03-15-portfolio.srf", .data = snap_bytes });
const hist_dir = try tmp.dir.realpathAlloc(testing.allocator, ".");
const hist_dir = try tmp.dir.realPathFileAlloc(io, ".", testing.allocator);
defer testing.allocator.free(hist_dir);
var side = try loadSnapshotSide(testing.allocator, hist_dir, Date.fromYmd(2024, 3, 15));
var side = try loadSnapshotSide(std.testing.io, testing.allocator, hist_dir, Date.fromYmd(2024, 3, 15));
defer side.deinit(testing.allocator);
try testing.expectEqual(@as(f64, 25000), side.liquid);
@@ -256,12 +258,163 @@ test "loadSnapshotSide: happy path builds a SnapshotSide with aggregated holding
}
test "loadSnapshotSide: missing file propagates FileNotFound" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
const hist_dir = try tmp.dir.realpathAlloc(testing.allocator, ".");
const hist_dir = try tmp.dir.realPathFileAlloc(io, ".", testing.allocator);
defer testing.allocator.free(hist_dir);
const result = loadSnapshotSide(testing.allocator, hist_dir, Date.fromYmd(2024, 3, 15));
const result = loadSnapshotSide(std.testing.io, testing.allocator, hist_dir, Date.fromYmd(2024, 3, 15));
try testing.expectError(error.FileNotFound, result);
}
test "aggregateLiveStocks: sums shares for same symbol across accounts" {
var map: view.HoldingMap = .init(testing.allocator);
defer map.deinit();
const today = Date.fromYmd(2026, 5, 8);
const lots = [_]zfin.Lot{
.{ .symbol = "AAPL", .shares = 100, .open_date = Date.fromYmd(2024, 1, 1), .open_price = 150, .account = "Roth" },
.{ .symbol = "AAPL", .shares = 50, .open_date = Date.fromYmd(2024, 6, 1), .open_price = 160, .account = "IRA" },
.{ .symbol = "MSFT", .shares = 25, .open_date = Date.fromYmd(2024, 1, 1), .open_price = 350, .account = "Roth" },
};
const portfolio: zfin.Portfolio = .{ .lots = @constCast(&lots), .allocator = testing.allocator };
var prices: std.StringHashMap(f64) = .init(testing.allocator);
defer prices.deinit();
try prices.put("AAPL", 200.0);
try prices.put("MSFT", 400.0);
try aggregateLiveStocks(today, &portfolio, &prices, &map);
try testing.expectEqual(@as(u32, 2), map.count());
const aapl = map.get("AAPL") orelse return error.TestUnexpectedResult;
try testing.expectApproxEqAbs(@as(f64, 150), aapl.shares, 0.01);
try testing.expectApproxEqAbs(@as(f64, 200.0), aapl.price, 0.01);
const msft = map.get("MSFT") orelse return error.TestUnexpectedResult;
try testing.expectApproxEqAbs(@as(f64, 25), msft.shares, 0.01);
}
test "aggregateLiveStocks: filters out non-stock lots" {
var map: view.HoldingMap = .init(testing.allocator);
defer map.deinit();
const today = Date.fromYmd(2026, 5, 8);
const lots = [_]zfin.Lot{
.{ .symbol = "AAPL", .shares = 100, .open_date = Date.fromYmd(2024, 1, 1), .open_price = 150, .security_type = .stock },
.{ .symbol = "CASH", .shares = 5000, .open_date = Date.fromYmd(2024, 1, 1), .open_price = 1.0, .security_type = .cash },
.{ .symbol = "VTI", .shares = 50, .open_date = Date.fromYmd(2024, 1, 1), .open_price = 200, .security_type = .cd, .maturity_date = Date.fromYmd(2027, 1, 1) },
};
const portfolio: zfin.Portfolio = .{ .lots = @constCast(&lots), .allocator = testing.allocator };
var prices: std.StringHashMap(f64) = .init(testing.allocator);
defer prices.deinit();
try prices.put("AAPL", 200.0);
try prices.put("CASH", 1.0);
try prices.put("VTI", 250.0);
try aggregateLiveStocks(today, &portfolio, &prices, &map);
// Only stock lots make it in.
try testing.expectEqual(@as(u32, 1), map.count());
try testing.expect(map.get("AAPL") != null);
try testing.expect(map.get("CASH") == null);
try testing.expect(map.get("VTI") == null);
}
test "aggregateLiveStocks: excludes lots not yet open as of today" {
var map: view.HoldingMap = .init(testing.allocator);
defer map.deinit();
const today = Date.fromYmd(2024, 6, 1);
const lots = [_]zfin.Lot{
// Already open
.{ .symbol = "AAPL", .shares = 100, .open_date = Date.fromYmd(2024, 1, 1), .open_price = 150 },
// Not yet bought (open_date is in the future)
.{ .symbol = "MSFT", .shares = 50, .open_date = Date.fromYmd(2025, 1, 1), .open_price = 300 },
// Sold before today
.{ .symbol = "GOOG", .shares = 25, .open_date = Date.fromYmd(2023, 1, 1), .open_price = 100, .close_date = Date.fromYmd(2024, 3, 1), .close_price = 150 },
};
const portfolio: zfin.Portfolio = .{ .lots = @constCast(&lots), .allocator = testing.allocator };
var prices: std.StringHashMap(f64) = .init(testing.allocator);
defer prices.deinit();
try prices.put("AAPL", 200.0);
try prices.put("MSFT", 400.0);
try prices.put("GOOG", 175.0);
try aggregateLiveStocks(today, &portfolio, &prices, &map);
try testing.expectEqual(@as(u32, 1), map.count());
try testing.expect(map.get("AAPL") != null);
}
test "aggregateLiveStocks: skips lots with no price in map" {
var map: view.HoldingMap = .init(testing.allocator);
defer map.deinit();
const today = Date.fromYmd(2026, 5, 8);
const lots = [_]zfin.Lot{
.{ .symbol = "AAPL", .shares = 100, .open_date = Date.fromYmd(2024, 1, 1), .open_price = 150 },
.{ .symbol = "OBSCURE", .shares = 10, .open_date = Date.fromYmd(2024, 1, 1), .open_price = 50 },
};
const portfolio: zfin.Portfolio = .{ .lots = @constCast(&lots), .allocator = testing.allocator };
var prices: std.StringHashMap(f64) = .init(testing.allocator);
defer prices.deinit();
// Only AAPL has a price; OBSCURE does not.
try prices.put("AAPL", 200.0);
try aggregateLiveStocks(today, &portfolio, &prices, &map);
try testing.expectEqual(@as(u32, 1), map.count());
try testing.expect(map.get("AAPL") != null);
try testing.expect(map.get("OBSCURE") == null);
}
test "aggregateLiveStocks: applies price_ratio via effectivePrice" {
var map: view.HoldingMap = .init(testing.allocator);
defer map.deinit();
const today = Date.fromYmd(2026, 5, 8);
// CUSIP-style lot with price_ratio: raw price * ratio = effective.
const lots = [_]zfin.Lot{
.{
.symbol = "02315N600",
.shares = 100,
.open_date = Date.fromYmd(2024, 1, 1),
.open_price = 140,
.ticker = "VTTHX",
.price_ratio = 5.0,
},
};
const portfolio: zfin.Portfolio = .{ .lots = @constCast(&lots), .allocator = testing.allocator };
var prices: std.StringHashMap(f64) = .init(testing.allocator);
defer prices.deinit();
try prices.put("VTTHX", 30.0); // raw price
try aggregateLiveStocks(today, &portfolio, &prices, &map);
// priceSymbol() returns "VTTHX" (the ticker), not the CUSIP.
const h = map.get("VTTHX") orelse return error.TestUnexpectedResult;
try testing.expectApproxEqAbs(@as(f64, 100), h.shares, 0.01);
// effective price = raw * price_ratio = 30 * 5 = 150
try testing.expectApproxEqAbs(@as(f64, 150.0), h.price, 0.01);
}
test "aggregateLiveStocks: empty portfolio yields empty map" {
var map: view.HoldingMap = .init(testing.allocator);
defer map.deinit();
const today = Date.fromYmd(2026, 5, 8);
const lots = [_]zfin.Lot{};
const portfolio: zfin.Portfolio = .{ .lots = @constCast(&lots), .allocator = testing.allocator };
var prices: std.StringHashMap(f64) = .init(testing.allocator);
defer prices.deinit();
try aggregateLiveStocks(today, &portfolio, &prices, &map);
try testing.expectEqual(@as(u32, 0), map.count());
}
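The tests above pin down aggregateLiveStocks' filter chain. A minimal Python mirror of that logic (hypothetical dict-based lots and field names, close-date boundary assumed inclusive) reproduces the same rules: stock-only, open as of the reference date, priced, with price_ratio applied:

```python
from datetime import date

def aggregate_live_stocks(as_of, lots, prices):
    # Hypothetical mirror of the Zig filter chain, for illustration only.
    holdings = {}
    for lot in lots:
        if lot.get("security_type", "stock") != "stock":
            continue  # cash / CD lots never enter the stock map
        if lot["open_date"] > as_of:
            continue  # not yet bought as of the reference date
        if lot.get("close_date") is not None and lot["close_date"] <= as_of:
            continue  # already sold
        symbol = lot.get("ticker") or lot["symbol"]  # priceSymbol(): ticker wins over CUSIP
        if symbol not in prices:
            continue  # no quote -> skip entirely
        price = prices[symbol] * lot.get("price_ratio", 1.0)  # effectivePrice()
        h = holdings.setdefault(symbol, {"shares": 0.0, "price": price})
        h["shares"] += lot["shares"]
    return holdings

lots = [
    {"symbol": "AAPL", "shares": 100, "open_date": date(2024, 1, 1)},
    {"symbol": "MSFT", "shares": 50, "open_date": date(2025, 1, 1)},  # future buy
    {"symbol": "CASH", "shares": 5000, "open_date": date(2024, 1, 1), "security_type": "cash"},
]
result = aggregate_live_stocks(date(2024, 6, 1), lots, {"AAPL": 200.0, "MSFT": 400.0, "CASH": 1.0})
assert sorted(result) == ["AAPL"]
```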



@ -72,23 +72,23 @@ pub const entries = [_]StaleEntry{
};
/// Write a warning line for each entry in `entries` that is overdue
/// for refresh as of `as_of`. Writes nothing when every entry is
/// fresh. Entries are processed in order; multiple warnings are
/// separated by a blank line.
pub fn check(
writer: *std.Io.Writer,
as_of: Date,
list: []const StaleEntry,
) !void {
var wrote_any = false;
for (list) |entry| {
const this_years_due = Date.fromYmd(
as_of.year(),
entry.due_month,
entry.due_day,
);
// Not yet nag season.
if (as_of.lessThan(this_years_due)) continue;
// Already refreshed this cycle.
if (!entry.last_updated.lessThan(this_years_due)) continue;
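The check() predicate is pure date math: an entry is overdue once `as_of` reaches this year's due date and `last_updated` predates it. Sketched in Python for illustration:

```python
from datetime import date

def is_overdue(as_of, last_updated, due_month, due_day):
    # Warn only once nag season has started (as_of >= this year's due
    # date) and the entry hasn't been refreshed since that due date.
    this_years_due = date(as_of.year, due_month, due_day)
    if as_of < this_years_due:
        return False  # not yet nag season
    return last_updated < this_years_due  # stale unless refreshed this cycle

assert not is_overdue(date(2026, 2, 1), date(2025, 6, 1), 3, 15)   # before due date
assert is_overdue(date(2026, 4, 1), date(2025, 6, 1), 3, 15)       # overdue
assert not is_overdue(date(2026, 4, 1), date(2026, 3, 20), 3, 15)  # refreshed this cycle
```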


@ -237,11 +237,20 @@ pub fn fmtIntCommas(buf: []u8, value: u64) []const u8 {
return buf[0..len];
}
/// Format an earlier timestamp as relative time measured against a
/// reference point ("just now", "5m ago", "2h ago", "3d ago").
///
/// Pure: takes two unix-epoch-seconds values `before_s` (the earlier
/// event being aged) and `after_s` (the reference "now"). Caller
/// captures `after_s` via `std.Io.Timestamp.now(io, .real).toSeconds()`
/// once per frame/command and passes it in.
///
/// Returns `""` when `before_s == 0` (caller-convention for "unset").
/// Returns `"just now"` when `before_s > after_s` (clock skew or unset)
/// or when the delta is under a minute.
pub fn fmtTimeAgo(buf: []u8, before_s: i64, after_s: i64) []const u8 {
if (before_s == 0) return "";
const delta = after_s - before_s;
if (delta < 0) return "just now";
if (delta < 60) return "just now";
if (delta < std.time.s_per_hour) {
@ -270,9 +279,13 @@ pub fn fmtLargeNum(val: f64) [15]u8 {
// Date / financial helpers
/// Get today's date from the system clock.
///
/// Takes `io` because reading wall-clock time is a side-effecting
/// operation in Zig 0.16+. For pure date math in tests, construct
/// dates directly via `Date.fromYmd` instead.
pub fn todayDate(io: std.Io) Date {
const ts = std.Io.Timestamp.now(io, .real).toSeconds();
const days: i32 = @intCast(@divFloor(ts, std.time.s_per_day));
return .{ .days = days };
}
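The conversion is plain floor division of epoch seconds by the seconds-per-day constant. A Python sketch (Python's // also floors) shows why a flooring division matters for pre-1970 timestamps:

```python
from datetime import date, timedelta

S_PER_DAY = 86_400  # mirrors std.time.s_per_day

def epoch_days(ts_seconds):
    # Python's // floors toward negative infinity, like Zig's @divFloor,
    # so pre-1970 timestamps still land on the correct (earlier) day.
    return ts_seconds // S_PER_DAY

assert epoch_days(0) == 0    # 1970-01-01
assert epoch_days(-1) == -1  # 1969-12-31, not day 0
# Round-trip against datetime: 1_700_000_000s is 2023-11-14 UTC.
assert date(1970, 1, 1) + timedelta(days=epoch_days(1_700_000_000)) == date(2023, 11, 14)
```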
@ -284,10 +297,10 @@ pub fn dayPlural(n: i32) []const u8 {
return if (n == 1) "" else "s";
}
/// Return "LT" if held > 1 year from `open_date` to `as_of`, "ST" otherwise.
/// Caller passes today (or a backfill date) as `as_of`.
pub fn capitalGainsIndicator(as_of: Date, open_date: Date) []const u8 {
return if (as_of.days - open_date.days > 365) "LT" else "ST";
}
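The LT/ST cutoff is a strict 365-day comparison, so the 365th day itself still counts as short-term. A Python sketch of the same comparison:

```python
from datetime import date

def capital_gains_indicator(as_of, open_date):
    # Strict >: exactly 365 days held is still short-term.
    return "LT" if (as_of - open_date).days > 365 else "ST"

assert capital_gains_indicator(date(2026, 1, 1), date(2025, 1, 1)) == "ST"  # 365 days
assert capital_gains_indicator(date(2026, 1, 2), date(2025, 1, 1)) == "LT"  # 366 days
```

Note the day-count cutoff means a holding period spanning a leap day reaches 366 days on its one-year anniversary and already reads LT.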
/// Return a slice view of candles on or after the given date (no allocation).
@ -377,9 +390,11 @@ pub fn fmtContractLine(buf: []u8, prefix: []const u8, c: OptionContract) []const
// Portfolio helpers
/// Sort lots: open lots first (date descending), closed lots last (date descending).
/// Pass `as_of` as the open/closed reference point; avoids needing an
/// Io in the sort callback.
pub fn lotSortFn(as_of: Date, a: Lot, b: Lot) bool {
const a_open = a.isOpen(as_of);
const b_open = b.isOpen(as_of);
if (a_open and !b_open) return true; // open before closed
if (!a_open and b_open) return false;
return a.open_date.days > b.open_date.days; // newest first
@ -438,11 +453,11 @@ pub const DripAggregation = struct {
/// Aggregate DRIP lots into short-term and long-term buckets.
/// Classifies using `capitalGainsIndicator` (LT if held > 1 year).
pub fn aggregateDripLots(as_of: Date, lots: []const Lot) DripAggregation {
var result: DripAggregation = .{};
for (lots) |lot| {
if (!lot.drip) continue;
const is_lt = std.mem.eql(u8, capitalGainsIndicator(as_of, lot.open_date), "LT");
const bucket: *DripSummary = if (is_lt) &result.lt else &result.st;
bucket.lot_count += 1;
bucket.shares += lot.shares;
@ -913,11 +928,14 @@ pub fn writeBrailleAnsi(
// ANSI color helpers (for CLI)
/// Determine whether to use ANSI color output.
/// Uses std.Io.Terminal.Mode.detect which handles TTY detection, NO_COLOR,
/// CLICOLOR_FORCE, and Windows console API cross-platform.
pub fn shouldUseColor(io: std.Io, environ_map: *const std.process.Environ.Map, no_color_flag: bool) bool {
if (no_color_flag) return false;
const NO_COLOR = if (environ_map.get("NO_COLOR")) |v| v.len > 0 else false;
const CLICOLOR_FORCE = if (environ_map.get("CLICOLOR_FORCE")) |v| v.len > 0 else false;
const mode = std.Io.Terminal.Mode.detect(io, std.Io.File.stdout(), NO_COLOR, CLICOLOR_FORCE) catch return false;
return mode != .no_color;
}
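For illustration, the detection precedence can be sketched in Python. The ordering of NO_COLOR vs CLICOLOR_FORCE here is an assumption (the real std.Io.Terminal.Mode.detect also handles the Windows console API), and all names are hypothetical:

```python
import os
import sys

def should_use_color(no_color_flag, environ=None, stream=None):
    # Precedence sketch: explicit flag wins, then NO_COLOR disables,
    # then CLICOLOR_FORCE forces color, then plain isatty().
    environ = os.environ if environ is None else environ
    stream = sys.stdout if stream is None else stream
    if no_color_flag:
        return False
    if environ.get("NO_COLOR"):
        return False
    if environ.get("CLICOLOR_FORCE"):
        return True
    return stream.isatty()

class NotATty:
    # Stand-in for a piped stdout.
    def isatty(self):
        return False

assert should_use_color(True, {"CLICOLOR_FORCE": "1"}, NotATty()) is False  # flag wins
assert should_use_color(False, {"CLICOLOR_FORCE": "1"}, NotATty()) is True  # force colors a pipe
assert should_use_color(False, {}, NotATty()) is False                      # default: isatty()
```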
/// Write an ANSI 24-bit foreground color escape.
@ -1115,11 +1133,11 @@ test "lotSortFn" {
.close_price = 110,
};
// Open before closed
try std.testing.expect(lotSortFn(Date.fromYmd(2026, 5, 8), open_new, closed));
try std.testing.expect(!lotSortFn(Date.fromYmd(2026, 5, 8), closed, open_new));
// Among open lots: newest first
try std.testing.expect(lotSortFn(Date.fromYmd(2026, 5, 8), open_new, open_old));
try std.testing.expect(!lotSortFn(Date.fromYmd(2026, 5, 8), open_old, open_new));
}
test "lotMaturitySortFn" {
@ -1162,7 +1180,9 @@ test "aggregateDripLots" {
.{ .symbol = "VTI", .shares = 0.2, .open_date = Date.fromYmd(2023, 1, 1), .open_price = 200, .drip = true },
.{ .symbol = "VTI", .shares = 10, .open_date = Date.fromYmd(2024, 1, 1), .open_price = 210, .drip = false },
};
// as_of pinned: 2023 lot is >1 year old (LT), 2025 lots are <1 year (ST).
const as_of = Date.fromYmd(2026, 1, 1);
const agg = aggregateDripLots(as_of, &lots);
// The 2023 lot is >1 year old -> LT, the 2025 lots are ST
try std.testing.expect(!agg.lt.isEmpty());
try std.testing.expectEqual(@as(usize, 1), agg.lt.lot_count);
@ -1177,7 +1197,7 @@ test "aggregateDripLots empty" {
const lots = [_]Lot{
.{ .symbol = "VTI", .shares = 10, .open_date = Date.fromYmd(2024, 1, 1), .open_price = 210, .drip = false },
};
const agg = aggregateDripLots(Date.fromYmd(2026, 1, 1), &lots);
try std.testing.expect(agg.st.isEmpty());
try std.testing.expect(agg.lt.isEmpty());
}
@ -1382,3 +1402,42 @@ test "fmtPriceChange" {
const neg = fmtPriceChange(&buf, -2.00, -1.5);
try std.testing.expect(std.mem.startsWith(u8, neg, "-$2.00"));
}
test "fmtTimeAgo: zero before_s returns empty string" {
var buf: [24]u8 = undefined;
try std.testing.expectEqualStrings("", fmtTimeAgo(&buf, 0, 1_700_000_000));
}
test "fmtTimeAgo: before > after renders 'just now'" {
var buf: [24]u8 = undefined;
try std.testing.expectEqualStrings("just now", fmtTimeAgo(&buf, 1_700_000_050, 1_700_000_000));
}
test "fmtTimeAgo: sub-minute delta renders 'just now'" {
var buf: [24]u8 = undefined;
try std.testing.expectEqualStrings("just now", fmtTimeAgo(&buf, 1_700_000_000, 1_700_000_059));
}
test "fmtTimeAgo: minutes" {
var buf: [24]u8 = undefined;
// 5m = 300s
try std.testing.expectEqualStrings("5m ago", fmtTimeAgo(&buf, 1_700_000_000, 1_700_000_000 + 300));
// 59m = 3540s (still minutes)
try std.testing.expectEqualStrings("59m ago", fmtTimeAgo(&buf, 1_700_000_000, 1_700_000_000 + 3540));
}
test "fmtTimeAgo: hours" {
var buf: [24]u8 = undefined;
// 1h exactly
try std.testing.expectEqualStrings("1h ago", fmtTimeAgo(&buf, 1_700_000_000, 1_700_000_000 + 3600));
// 5h
try std.testing.expectEqualStrings("5h ago", fmtTimeAgo(&buf, 1_700_000_000, 1_700_000_000 + 5 * 3600));
}
test "fmtTimeAgo: days" {
var buf: [24]u8 = undefined;
// 1d = 86_400s
try std.testing.expectEqualStrings("1d ago", fmtTimeAgo(&buf, 1_700_000_000, 1_700_000_000 + 86_400));
// 7d
try std.testing.expectEqualStrings("7d ago", fmtTimeAgo(&buf, 1_700_000_000, 1_700_000_000 + 7 * 86_400));
}
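The bucket boundaries those tests exercise can be cross-checked with a small Python sketch of the same pure function (negative and sub-minute deltas both collapse to "just now"):

```python
S_PER_HOUR = 3600
S_PER_DAY = 86_400

def fmt_time_ago(before_s, after_s):
    if before_s == 0:
        return ""  # caller convention for "unset"
    delta = after_s - before_s
    if delta < 60:
        return "just now"  # also absorbs clock skew (delta < 0)
    if delta < S_PER_HOUR:
        return f"{delta // 60}m ago"
    if delta < S_PER_DAY:
        return f"{delta // S_PER_HOUR}h ago"
    return f"{delta // S_PER_DAY}d ago"

assert fmt_time_ago(0, 1_700_000_000) == ""
assert fmt_time_ago(1_700_000_050, 1_700_000_000) == "just now"
assert fmt_time_ago(1_700_000_000, 1_700_000_000 + 3540) == "59m ago"
assert fmt_time_ago(1_700_000_000, 1_700_000_000 + 7 * 86_400) == "7d ago"
```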


@ -117,19 +117,18 @@ pub const CommitRange = struct {
///
/// Allocator is used for the returned `root` and `rel_path` strings
/// (caller-owned).
pub fn findRepo(io: std.Io, allocator: std.mem.Allocator, path: []const u8) Error!RepoInfo {
// Resolve the file's directory. realpath requires the file to exist.
const abs_path = std.Io.Dir.cwd().realPathFileAlloc(io, path, allocator) catch {
return error.NotInGitRepo;
};
defer allocator.free(abs_path);
const dir = std.fs.path.dirname(abs_path) orelse "/";
// `git -C <dir> rev-parse --show-toplevel` prints the repo root.
const result = std.process.run(allocator, io, .{
.argv = &.{ "git", "-C", dir, "rev-parse", "--show-toplevel" },
.stdout_limit = .limited(64 * 1024),
}) catch {
return error.GitUnavailable;
};
@ -137,7 +136,7 @@ pub fn findRepo(allocator: std.mem.Allocator, path: []const u8) Error!RepoInfo {
defer allocator.free(result.stderr);
switch (result.term) {
.exited => |code| if (code != 0) return error.NotInGitRepo,
else => return error.NotInGitRepo,
}
@ -150,7 +149,7 @@ pub fn findRepo(allocator: std.mem.Allocator, path: []const u8) Error!RepoInfo {
// just the basename (extremely unusual: repo root disagrees with
// path).
const rel_raw = if (std.mem.startsWith(u8, abs_path, root) and abs_path.len > root.len)
std.mem.trimStart(u8, abs_path[root.len..], "/")
else
std.fs.path.basename(abs_path);
const rel = try allocator.dupe(u8, rel_raw);
@ -161,20 +160,20 @@ pub fn findRepo(allocator: std.mem.Allocator, path: []const u8) Error!RepoInfo {
/// Report the tracked/untracked/modified status of `rel_path` relative to
/// the repo at `root`.
pub fn pathStatus(
io: std.Io,
allocator: std.mem.Allocator,
root: []const u8,
rel_path: []const u8,
) Error!PathStatus {
const result = std.process.run(allocator, io, .{
.argv = &.{ "git", "-C", root, "status", "--porcelain", "--", rel_path },
.stdout_limit = .limited(64 * 1024),
}) catch return error.GitUnavailable;
defer allocator.free(result.stdout);
defer allocator.free(result.stderr);
switch (result.term) {
.exited => |code| if (code != 0) return error.GitStatusFailed,
else => return error.GitStatusFailed,
}
@ -193,6 +192,7 @@ pub fn pathStatus(
/// message: `UnknownRevision` (e.g. HEAD~1 on a fresh repo),
/// `PathMissingInRev` (file didn't exist at that commit).
pub fn show(
io: std.Io,
allocator: std.mem.Allocator,
root: []const u8,
rev: []const u8,
@ -201,16 +201,15 @@ pub fn show(
const spec = try std.fmt.allocPrint(allocator, "{s}:{s}", .{ rev, rel_path });
defer allocator.free(spec);
const result = std.process.run(allocator, io, .{
.argv = &.{ "git", "-C", root, "show", spec },
.stdout_limit = .limited(32 * 1024 * 1024),
}) catch return error.GitUnavailable;
errdefer allocator.free(result.stdout);
defer allocator.free(result.stderr);
switch (result.term) {
.exited => |code| {
if (code != 0) {
allocator.free(result.stdout);
// Distinguish "no such revision" from "path missing".
@ -248,6 +247,7 @@ pub fn show(
/// Returned slice (and each `commit` string within) is caller-owned. Free
/// with `freeCommitTouches`.
pub fn listCommitsTouching(
io: std.Io,
allocator: std.mem.Allocator,
root: []const u8,
rel_path: []const u8,
@ -274,16 +274,15 @@ pub fn listCommitsTouching(
}
try argv.appendSlice(allocator, &.{ "--", rel_path });
const result = std.process.run(allocator, io, .{
.argv = argv.items,
.stdout_limit = .limited(16 * 1024 * 1024),
}) catch return error.GitUnavailable;
defer allocator.free(result.stdout);
defer allocator.free(result.stderr);
switch (result.term) {
.exited => |code| if (code != 0) return error.GitLogFailed,
else => return error.GitLogFailed,
}
@ -320,20 +319,20 @@ pub fn freeCommitTouches(allocator: std.mem.Allocator, items: []const CommitTouc
///
/// Equivalent to `git log -1 --format=%ct -- <rel_path>`.
pub fn lastCommitTimestampForPath(
io: std.Io,
allocator: std.mem.Allocator,
root: []const u8,
rel_path: []const u8,
) Error!?i64 {
const result = std.process.run(allocator, io, .{
.argv = &.{ "git", "-C", root, "log", "-1", "--format=%ct", "--", rel_path },
.stdout_limit = .limited(64 * 1024),
}) catch return error.GitUnavailable;
defer allocator.free(result.stdout);
defer allocator.free(result.stderr);
switch (result.term) {
.exited => |code| if (code != 0) return error.GitLogFailed,
else => return error.GitLogFailed,
}
@ -353,6 +352,7 @@ pub fn lastCommitTimestampForPath(
/// resolve a date to the last commit that stamped a given snapshot of
/// the portfolio file.
pub fn commitAtOrBeforeDate(
io: std.Io,
allocator: std.mem.Allocator,
root: []const u8,
rel_path: []const u8,
@ -370,20 +370,19 @@ pub fn commitAtOrBeforeDate(
const until_arg = try std.fmt.allocPrint(allocator, "--until={s} 23:59:59", .{date_iso});
defer allocator.free(until_arg);
const result = std.process.run(allocator, io, .{
.argv = &.{
"git", "-C", root,
"log", "-1", "--format=%H",
until_arg, "--", rel_path,
},
.stdout_limit = .limited(64 * 1024),
}) catch return error.GitUnavailable;
defer allocator.free(result.stdout);
defer allocator.free(result.stderr);
switch (result.term) {
.exited => |code| if (code != 0) return error.GitLogFailed,
else => return error.GitLogFailed,
}
@ -406,24 +405,24 @@ pub fn commitAtOrBeforeDate(
/// to emit a snap-note when a date-form spec resolves to a commit
/// that's far from the user's requested date.
pub fn commitTimestamp(
io: std.Io,
allocator: std.mem.Allocator,
root: []const u8,
ref: []const u8,
) Error!i64 {
const result = std.process.run(allocator, io, .{
.argv = &.{
"git", "-C", root,
"log", "-1", "--format=%ct",
ref,
},
.stdout_limit = .limited(4 * 1024),
}) catch return error.GitUnavailable;
defer allocator.free(result.stdout);
defer allocator.free(result.stderr);
switch (result.term) {
.exited => |code| if (code != 0) return error.GitLogFailed,
else => return error.GitLogFailed,
}
@ -481,6 +480,7 @@ pub fn commitTimestamp(
/// HEAD..working-copy (dirty). Back-compat with pre-flag
/// `zfin contributions` invocations.
pub fn resolveCommitRangeSpec(
io: std.Io,
arena: std.mem.Allocator,
repo: RepoInfo,
before: ?CommitSpec,
@ -494,7 +494,7 @@ pub fn resolveCommitRangeSpec(
// Resolve each endpoint independently.
const before_rev: []const u8 = if (before) |b|
try resolveSpec(io, arena, repo, b)
else if (dirty)
"HEAD"
else
@ -503,7 +503,7 @@ pub fn resolveCommitRangeSpec(
const after_rev: ?[]const u8 = if (after) |a|
(switch (a) {
.working_copy => null,
else => try resolveSpec(io, arena, repo, a),
})
else if (dirty)
null
@ -516,13 +516,13 @@ pub fn resolveCommitRangeSpec(
/// Resolve one non-working `CommitSpec` to a string git can consume.
/// Caller handles the `.working_copy` case separately (it's not a
/// git ref).
fn resolveSpec(io: std.Io, arena: std.mem.Allocator, repo: RepoInfo, spec: CommitSpec) Error![]const u8 {
return switch (spec) {
.git_ref => |r| r,
.date_at_or_before => |d| blk: {
var buf: [10]u8 = undefined;
const date_str = d.format(&buf);
const sha = (try commitAtOrBeforeDate(io, arena, repo.root, repo.rel_path, date_str)) orelse
return error.NoCommitAtOrBefore;
break :blk sha;
},
@ -538,6 +538,7 @@ fn resolveSpec(arena: std.mem.Allocator, repo: RepoInfo, spec: CommitSpec) Error
/// `until` without `since` is rejected via assertion; the window is
/// ambiguous without a starting point.
pub fn resolveCommitRange(
io: std.Io,
arena: std.mem.Allocator,
repo: RepoInfo,
since: ?Date,
@ -547,7 +548,7 @@ pub fn resolveCommitRange(
std.debug.assert(!(since == null and until != null));
const before: ?CommitSpec = if (since) |d| .{ .date_at_or_before = d } else null;
const after: ?CommitSpec = if (until) |d| .{ .date_at_or_before = d } else null;
return resolveCommitRangeSpec(io, arena, repo, before, after, dirty);
}
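Stripped of the git subprocess calls and date resolution, the endpoint defaulting reduces to a small pure function. A Python sketch (hypothetical, simplified to string specs) of the clean/dirty defaults:

```python
def resolve_commit_range(before, after, dirty):
    # Explicit endpoints win. Otherwise: dirty trees compare
    # HEAD..working-copy (after_rev None = working copy, not a git ref),
    # clean trees compare HEAD~1..HEAD.
    before_rev = before if before is not None else ("HEAD" if dirty else "HEAD~1")
    after_rev = after if after is not None else (None if dirty else "HEAD")
    return before_rev, after_rev

assert resolve_commit_range(None, None, dirty=False) == ("HEAD~1", "HEAD")   # legacy clean
assert resolve_commit_range(None, None, dirty=True) == ("HEAD", None)        # legacy dirty
assert resolve_commit_range("abc123", None, dirty=False) == ("abc123", "HEAD")  # --since SHA..HEAD
```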
// Tests
@ -563,7 +564,7 @@ test "findRepo locates the ambient zfin checkout" {
// environment is responsible for providing git).
const allocator = std.testing.allocator;
// Pick any file that exists in the repo; build.zig is stable.
const info = findRepo(std.testing.io, allocator, "build.zig") catch return;
defer allocator.free(info.root);
defer allocator.free(info.rel_path);
try std.testing.expect(info.root.len > 0);
@ -572,10 +573,10 @@ test "findRepo locates the ambient zfin checkout" {
test "listCommitsTouching returns at least one commit for build.zig" {
const allocator = std.testing.allocator;
const info = findRepo(std.testing.io, allocator, "build.zig") catch return;
defer allocator.free(info.root);
defer allocator.free(info.rel_path);
const commits = listCommitsTouching(std.testing.io, allocator, info.root, info.rel_path, null) catch return;
defer freeCommitTouches(allocator, commits);
try std.testing.expect(commits.len >= 1);
// Timestamps are plausible (after 2020).
@ -587,25 +588,25 @@ test "listCommitsTouching with non-null since_iso does not segfault" {
// string literal when since_iso was non-null, segfaulting on the
// debug allocator's memset-to-undefined.
const allocator = std.testing.allocator;
const info = findRepo(std.testing.io, allocator, "build.zig") catch return;
defer allocator.free(info.root);
defer allocator.free(info.rel_path);
// The test is primarily about not segfaulting; we don't assert on
// commits.len since git's --since parsing may decline values that
// are too far back (e.g. "100 years ago" can hit pre-epoch dates).
const commits = listCommitsTouching(std.testing.io, allocator, info.root, info.rel_path, "30 years ago") catch return;
defer freeCommitTouches(allocator, commits);
}
test "commitAtOrBeforeDate returns a SHA for a past date" {
const allocator = std.testing.allocator;
const info = findRepo(std.testing.io, allocator, "build.zig") catch return;
defer allocator.free(info.root);
defer allocator.free(info.rel_path);
// For any date well after the repo's creation, commitAtOrBeforeDate
// should find the most recent commit touching build.zig.
const sha_opt = commitAtOrBeforeDate(std.testing.io, allocator, info.root, info.rel_path, "2099-01-01") catch return;
try std.testing.expect(sha_opt != null);
const sha = sha_opt.?;
defer allocator.free(sha);
@ -617,12 +618,12 @@ test "commitAtOrBeforeDate returns a SHA for a past date" {
test "commitAtOrBeforeDate returns null for date before repo existed" {
const allocator = std.testing.allocator;
const info = findRepo(std.testing.io, allocator, "build.zig") catch return;
defer allocator.free(info.root);
defer allocator.free(info.rel_path);
// Pre-git: before any sensible project history.
const sha_opt = commitAtOrBeforeDate(std.testing.io, allocator, info.root, info.rel_path, "1970-01-02") catch return;
try std.testing.expect(sha_opt == null);
}
@ -643,13 +644,13 @@ test "commitAtOrBeforeDate: --until=DATE covers end of day, not current time-of-
// agreeing on `--since 1W` totals (see src/commands/contributions.zig
// tests).
const allocator = std.testing.allocator;
const info = findRepo(std.testing.io, allocator, "build.zig") catch return;
defer allocator.free(info.root);
defer allocator.free(info.rel_path);
// Future-dated cutoff should always return the tip of history
// regardless of current wall-clock time.
const sha_opt = commitAtOrBeforeDate(std.testing.io, allocator, info.root, info.rel_path, "2099-01-01") catch return;
try std.testing.expect(sha_opt != null);
if (sha_opt) |s| allocator.free(s);
}
@ -659,7 +660,7 @@ test "resolveCommitRange: legacy clean → HEAD~1..HEAD" {
defer arena_state.deinit();
const repo: RepoInfo = .{ .root = "/tmp", .rel_path = "portfolio.srf" };
const range = try resolveCommitRange(std.testing.io, arena_state.allocator(), repo, null, null, false);
try std.testing.expectEqualStrings("HEAD~1", range.before_rev);
try std.testing.expectEqualStrings("HEAD", range.after_rev.?);
}
@ -669,14 +670,14 @@ test "resolveCommitRange: legacy dirty → HEAD..working-copy" {
defer arena_state.deinit();
const repo: RepoInfo = .{ .root = "/tmp", .rel_path = "portfolio.srf" };
const range = try resolveCommitRange(std.testing.io, arena_state.allocator(), repo, null, null, true);
try std.testing.expectEqualStrings("HEAD", range.before_rev);
try std.testing.expect(range.after_rev == null);
}
test "resolveCommitRange: --since resolves to SHA..HEAD for clean tree" {
const allocator = std.testing.allocator;
const info = findRepo(std.testing.io, allocator, "build.zig") catch return;
defer allocator.free(info.root);
defer allocator.free(info.rel_path);
@ -685,6 +686,7 @@ test "resolveCommitRange: --since resolves to SHA..HEAD for clean tree" {
// Any date well after project start resolves to latest commit.
const range = resolveCommitRange(
std.testing.io,
arena_state.allocator(),
info,
Date.fromYmd(2099, 1, 1),
@ -697,7 +699,7 @@ test "resolveCommitRange: --since resolves to SHA..HEAD for clean tree" {
test "resolveCommitRange: --since with no earlier commit → NoCommitAtOrBefore" {
const allocator = std.testing.allocator;
const info = findRepo(std.testing.io, allocator, "build.zig") catch return;
defer allocator.free(info.root);
defer allocator.free(info.rel_path);
@ -706,6 +708,7 @@ test "resolveCommitRange: --since with no earlier commit → NoCommitAtOrBefore"
// Before any commit exists in this repo.
const result = resolveCommitRange(
std.testing.io,
arena_state.allocator(),
info,
Date.fromYmd(1970, 1, 2),


@ -166,10 +166,11 @@ pub const LoadedHistory = struct {
/// `analytics.timeline.buildSeries` (which sorts) rather than relying
/// on the loader's order.
pub fn loadHistoryDir(
io: std.Io,
allocator: std.mem.Allocator,
history_dir: []const u8,
) !LoadedHistory {
var dir = std.Io.Dir.cwd().openDir(io, history_dir, .{ .iterate = true }) catch |err| switch (err) {
error.FileNotFound => {
// Missing history dir isn't fatal; it just means no
// snapshots captured yet.
@ -177,7 +178,7 @@ pub fn loadHistoryDir(
},
else => return err,
};
defer dir.close(io);
var snapshots: std.ArrayList(snapshot.Snapshot) = .empty;
var buffers: std.ArrayList([]u8) = .empty;
@ -189,14 +190,14 @@ pub fn loadHistoryDir(
}
var it = dir.iterate();
while (try it.next(io)) |entry| {
if (entry.kind != .file) continue;
if (!std.mem.endsWith(u8, entry.name, snapshot_suffix)) continue;
const full_path = try std.fs.path.join(allocator, &.{ history_dir, entry.name });
defer allocator.free(full_path);
const bytes = std.Io.Dir.cwd().readFileAlloc(io, full_path, allocator, .limited(16 * 1024 * 1024)) catch |err| {
std.log.warn("history: failed to read {s}: {s}", .{ full_path, @errorName(err) });
continue;
};
@ -271,13 +272,14 @@ pub const LoadedTimeline = struct {
/// message. Parse failures on individual files are logged to stderr by
/// `loadHistoryDir` and the offending file is skipped.
pub fn loadTimeline(
io: std.Io,
allocator: std.mem.Allocator,
portfolio_path: []const u8,
) !LoadedTimeline {
const history_dir = try deriveHistoryDir(allocator, portfolio_path);
errdefer allocator.free(history_dir);
var loaded = try loadHistoryDir(io, allocator, history_dir);
errdefer loaded.deinit();
const series = try timeline.buildSeries(allocator, loaded.snapshots);
@ -312,18 +314,19 @@ pub const LoadedSnapshot = struct {
/// suggestion via `findNearestSnapshot`, TUI just won't offer missing
/// dates as selectable rows).
pub fn loadSnapshotAt(
io: std.Io,
allocator: std.mem.Allocator,
hist_dir: []const u8,
target: Date,
) !LoadedSnapshot {
var date_buf: [10]u8 = undefined;
const date_str = target.format(&date_buf);
const filename = try std.fmt.allocPrint(allocator, "{s}{s}", .{ date_str, snapshot_suffix });
defer allocator.free(filename);
const full_path = try std.fs.path.join(allocator, &.{ hist_dir, filename });
defer allocator.free(full_path);
const bytes = try std.Io.Dir.cwd().readFileAlloc(io, full_path, allocator, .limited(16 * 1024 * 1024));
errdefer allocator.free(bytes);
const snap = try parseSnapshotBytes(allocator, bytes);
@ -346,20 +349,21 @@ pub const Nearest = struct {
/// print a "no snapshot for X; nearest is Y" hint compose this with
/// their own output pass.
pub fn findNearestSnapshot(
io: std.Io,
history_dir: []const u8,
target: Date,
) !Nearest {
var dir = std.Io.Dir.cwd().openDir(io, history_dir, .{ .iterate = true }) catch |err| switch (err) {
error.FileNotFound => return .{ .earlier = null, .later = null },
else => return err,
};
defer dir.close(io);
var earlier: ?Date = null;
var later: ?Date = null;
var it = dir.iterate();
while (try it.next(io)) |entry| {
if (entry.kind != .file) continue;
if (!std.mem.endsWith(u8, entry.name, snapshot_suffix)) continue;
const expected_len = 10 + snapshot_suffix.len;
@ -390,7 +394,7 @@ pub const ResolvedSnapshot = struct {
pub const ResolveSnapshotError = error{
/// No snapshot file exists at or before the requested date.
NoSnapshotAtOrBefore,
} || std.mem.Allocator.Error || std.Io.Dir.AccessError || std.Io.File.OpenError;
/// Resolve a requested snapshot date against `hist_dir`:
/// - If `hist_dir/<requested>-portfolio.srf` exists, return it as
@@ -409,6 +413,7 @@ pub const ResolveSnapshotError = error{
/// full path). Pass a short-lived arena; the returned struct has no
/// borrowed references.
pub fn resolveSnapshotDate(
io: std.Io,
arena: std.mem.Allocator,
hist_dir: []const u8,
requested: Date,
@@ -418,9 +423,9 @@ pub fn resolveSnapshotDate(
const filename = try std.fmt.allocPrint(arena, "{s}{s}", .{ date_str, snapshot_suffix });
const full_path = try std.fs.path.join(arena, &.{ hist_dir, filename });
std.fs.cwd().access(full_path, .{}) catch |err| switch (err) {
std.Io.Dir.cwd().access(io, full_path, .{}) catch |err| switch (err) {
error.FileNotFound => {
const nearest = findNearestSnapshot(hist_dir, requested) catch |e| return e;
const nearest = findNearestSnapshot(io, hist_dir, requested) catch |e| return e;
if (nearest.earlier) |earlier| {
return .{ .requested = requested, .actual = earlier, .exact = false };
}
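
The two halves above compose at the call site. A minimal sketch of the resolve-then-load flow under the io-threaded signatures in this diff — `io`, `arena`, `allocator`, `hist_dir`, and `requested` come from the caller, and `reportNearest` is a hypothetical caller-side helper, not part of this change:

```zig
// Sketch only: composes resolveSnapshotDate + loadSnapshotAt as refactored
// above. Snapping to an earlier date is reported by the caller, since
// findNearestSnapshot deliberately prints nothing itself.
const resolved = try resolveSnapshotDate(io, arena, hist_dir, requested);
if (!resolved.exact) {
    // Messaging is the caller's job; compose it with your own output pass.
    try reportNearest(io, resolved.requested, resolved.actual);
}
var loaded = try loadSnapshotAt(io, allocator, hist_dir, resolved.actual);
defer loaded.deinit(allocator);
```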
@@ -762,12 +767,13 @@ test "parseSnapshotBytes: totally malformed input returns error" {
test "loadHistoryDir: missing directory returns empty result" {
// No dir created; should silently yield an empty list rather than
// raising FileNotFound to the caller.
var result = try loadHistoryDir(testing.allocator, "/nonexistent/path/for/testing");
var result = try loadHistoryDir(std.testing.io, testing.allocator, "/nonexistent/path/for/testing");
defer result.deinit();
try testing.expectEqual(@as(usize, 0), result.snapshots.len);
}
test "loadHistoryDir: loads snapshots and skips non-matching files" {
const io = std.testing.io;
var tmp_dir = testing.tmpDir(.{});
defer tmp_dir.cleanup();
@@ -787,26 +793,15 @@ test "loadHistoryDir: loads snapshots and skips non-matching files" {
\\kind::total,scope::net_worth,value:num:1100
\\
;
{
var f = try tmp_dir.dir.createFile("2026-04-17-portfolio.srf", .{});
try f.writeAll(snap_bytes);
f.close();
}
{
var f = try tmp_dir.dir.createFile("2026-04-18-portfolio.srf", .{});
try f.writeAll(snap2_bytes);
f.close();
}
{
var f = try tmp_dir.dir.createFile("readme.txt", .{});
try f.writeAll("not a snapshot");
f.close();
}
try tmp_dir.dir.writeFile(io, .{ .sub_path = "2026-04-17-portfolio.srf", .data = snap_bytes });
try tmp_dir.dir.writeFile(io, .{ .sub_path = "2026-04-18-portfolio.srf", .data = snap2_bytes });
try tmp_dir.dir.writeFile(io, .{ .sub_path = "readme.txt", .data = "not a snapshot" });
var path_buf: [std.fs.max_path_bytes]u8 = undefined;
const dir_path = try tmp_dir.dir.realpath(".", &path_buf);
const dir_path_len = try tmp_dir.dir.realPathFile(io, ".", &path_buf);
const dir_path = path_buf[0..dir_path_len];
var result = try loadHistoryDir(testing.allocator, dir_path);
var result = try loadHistoryDir(std.testing.io, testing.allocator, dir_path);
defer result.deinit();
try testing.expectEqual(@as(usize, 2), result.snapshots.len);
@@ -825,6 +820,7 @@ test "loadHistoryDir: loads snapshots and skips non-matching files" {
}
test "loadHistoryDir: corrupt files are skipped, others still load" {
const io = std.testing.io;
var tmp_dir = testing.tmpDir(.{});
defer tmp_dir.cleanup();
@@ -833,21 +829,14 @@ test "loadHistoryDir: corrupt files are skipped, others still load" {
\\kind::meta,snapshot_version:num:1,as_of_date::2026-04-17,captured_at:num:0,zfin_version::x,stale_count:num:0
\\
;
{
var f = try tmp_dir.dir.createFile("2026-04-17-portfolio.srf", .{});
try f.writeAll(good_bytes);
f.close();
}
{
var f = try tmp_dir.dir.createFile("2026-04-18-portfolio.srf", .{});
try f.writeAll("totally-not-srf\n");
f.close();
}
try tmp_dir.dir.writeFile(io, .{ .sub_path = "2026-04-17-portfolio.srf", .data = good_bytes });
try tmp_dir.dir.writeFile(io, .{ .sub_path = "2026-04-18-portfolio.srf", .data = "totally-not-srf\n" });
var path_buf: [std.fs.max_path_bytes]u8 = undefined;
const dir_path = try tmp_dir.dir.realpath(".", &path_buf);
const dir_path_len = try tmp_dir.dir.realPathFile(io, ".", &path_buf);
const dir_path = path_buf[0..dir_path_len];
var result = try loadHistoryDir(testing.allocator, dir_path);
var result = try loadHistoryDir(std.testing.io, testing.allocator, dir_path);
defer result.deinit();
// Only the good one lands.
@@ -857,19 +846,21 @@ test "loadHistoryDir: corrupt files are skipped, others still load" {
// findNearestSnapshot / loadSnapshotAt tests
test "findNearestSnapshot: empty dir returns both null" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
const path = try tmp.dir.realpathAlloc(testing.allocator, ".");
const path = try tmp.dir.realPathFileAlloc(io, ".", testing.allocator);
defer testing.allocator.free(path);
const result = try findNearestSnapshot(path, Date.fromYmd(2024, 3, 15));
const result = try findNearestSnapshot(std.testing.io, path, Date.fromYmd(2024, 3, 15));
try testing.expectEqual(@as(?Date, null), result.earlier);
try testing.expectEqual(@as(?Date, null), result.later);
}
test "findNearestSnapshot: non-existent dir returns both null" {
const result = try findNearestSnapshot(
std.testing.io,
"/tmp/zfin-history-nearest-never-exists-91823",
Date.fromYmd(2024, 3, 15),
);
@@ -878,22 +869,23 @@ test "findNearestSnapshot: non-existent dir returns both null" {
}
test "findNearestSnapshot: earlier and later around gap" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
try tmp.dir.writeFile(.{ .sub_path = "2024-03-10-portfolio.srf", .data = "" });
try tmp.dir.writeFile(.{ .sub_path = "2024-03-12-portfolio.srf", .data = "" });
try tmp.dir.writeFile(.{ .sub_path = "2024-03-15-portfolio.srf", .data = "" });
try tmp.dir.writeFile(.{ .sub_path = "2024-03-20-portfolio.srf", .data = "" });
try tmp.dir.writeFile(io, .{ .sub_path = "2024-03-10-portfolio.srf", .data = "" });
try tmp.dir.writeFile(io, .{ .sub_path = "2024-03-12-portfolio.srf", .data = "" });
try tmp.dir.writeFile(io, .{ .sub_path = "2024-03-15-portfolio.srf", .data = "" });
try tmp.dir.writeFile(io, .{ .sub_path = "2024-03-20-portfolio.srf", .data = "" });
// Noise files that should be ignored.
try tmp.dir.writeFile(.{ .sub_path = "random.txt", .data = "" });
try tmp.dir.writeFile(.{ .sub_path = "rollup.srf", .data = "" });
try tmp.dir.writeFile(.{ .sub_path = "bogus-date-portfolio.srf", .data = "" });
try tmp.dir.writeFile(io, .{ .sub_path = "random.txt", .data = "" });
try tmp.dir.writeFile(io, .{ .sub_path = "rollup.srf", .data = "" });
try tmp.dir.writeFile(io, .{ .sub_path = "bogus-date-portfolio.srf", .data = "" });
const path = try tmp.dir.realpathAlloc(testing.allocator, ".");
const path = try tmp.dir.realPathFileAlloc(io, ".", testing.allocator);
defer testing.allocator.free(path);
const result = try findNearestSnapshot(path, Date.fromYmd(2024, 3, 14));
const result = try findNearestSnapshot(std.testing.io, path, Date.fromYmd(2024, 3, 14));
try testing.expect(result.earlier != null);
try testing.expect(result.later != null);
try testing.expectEqual(@as(i32, Date.fromYmd(2024, 3, 12).days), result.earlier.?.days);
@@ -901,65 +893,70 @@ test "findNearestSnapshot: earlier and later around gap" {
}
test "findNearestSnapshot: before earliest — only later set" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
try tmp.dir.writeFile(.{ .sub_path = "2024-03-10-portfolio.srf", .data = "" });
try tmp.dir.writeFile(.{ .sub_path = "2024-03-12-portfolio.srf", .data = "" });
try tmp.dir.writeFile(io, .{ .sub_path = "2024-03-10-portfolio.srf", .data = "" });
try tmp.dir.writeFile(io, .{ .sub_path = "2024-03-12-portfolio.srf", .data = "" });
const path = try tmp.dir.realpathAlloc(testing.allocator, ".");
const path = try tmp.dir.realPathFileAlloc(io, ".", testing.allocator);
defer testing.allocator.free(path);
const result = try findNearestSnapshot(path, Date.fromYmd(2024, 1, 1));
const result = try findNearestSnapshot(std.testing.io, path, Date.fromYmd(2024, 1, 1));
try testing.expectEqual(@as(?Date, null), result.earlier);
try testing.expect(result.later != null);
try testing.expectEqual(@as(i32, Date.fromYmd(2024, 3, 10).days), result.later.?.days);
}
test "findNearestSnapshot: after latest — only earlier set" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
try tmp.dir.writeFile(.{ .sub_path = "2024-03-10-portfolio.srf", .data = "" });
try tmp.dir.writeFile(.{ .sub_path = "2024-03-12-portfolio.srf", .data = "" });
try tmp.dir.writeFile(io, .{ .sub_path = "2024-03-10-portfolio.srf", .data = "" });
try tmp.dir.writeFile(io, .{ .sub_path = "2024-03-12-portfolio.srf", .data = "" });
const path = try tmp.dir.realpathAlloc(testing.allocator, ".");
const path = try tmp.dir.realPathFileAlloc(io, ".", testing.allocator);
defer testing.allocator.free(path);
const result = try findNearestSnapshot(path, Date.fromYmd(2025, 1, 1));
const result = try findNearestSnapshot(std.testing.io, path, Date.fromYmd(2025, 1, 1));
try testing.expect(result.earlier != null);
try testing.expectEqual(@as(?Date, null), result.later);
try testing.expectEqual(@as(i32, Date.fromYmd(2024, 3, 12).days), result.earlier.?.days);
}
test "findNearestSnapshot: target hits a file exactly — returns neighbors, not self" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
try tmp.dir.writeFile(.{ .sub_path = "2024-03-10-portfolio.srf", .data = "" });
try tmp.dir.writeFile(.{ .sub_path = "2024-03-12-portfolio.srf", .data = "" });
try tmp.dir.writeFile(.{ .sub_path = "2024-03-15-portfolio.srf", .data = "" });
try tmp.dir.writeFile(io, .{ .sub_path = "2024-03-10-portfolio.srf", .data = "" });
try tmp.dir.writeFile(io, .{ .sub_path = "2024-03-12-portfolio.srf", .data = "" });
try tmp.dir.writeFile(io, .{ .sub_path = "2024-03-15-portfolio.srf", .data = "" });
const path = try tmp.dir.realpathAlloc(testing.allocator, ".");
const path = try tmp.dir.realPathFileAlloc(io, ".", testing.allocator);
defer testing.allocator.free(path);
const result = try findNearestSnapshot(path, Date.fromYmd(2024, 3, 12));
const result = try findNearestSnapshot(std.testing.io, path, Date.fromYmd(2024, 3, 12));
try testing.expectEqual(@as(i32, Date.fromYmd(2024, 3, 10).days), result.earlier.?.days);
try testing.expectEqual(@as(i32, Date.fromYmd(2024, 3, 15).days), result.later.?.days);
}
test "loadSnapshotAt: missing file returns FileNotFound" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
const path = try tmp.dir.realpathAlloc(testing.allocator, ".");
const path = try tmp.dir.realPathFileAlloc(io, ".", testing.allocator);
defer testing.allocator.free(path);
const result = loadSnapshotAt(testing.allocator, path, Date.fromYmd(2024, 3, 15));
const result = loadSnapshotAt(io, testing.allocator, path, Date.fromYmd(2024, 3, 15));
try testing.expectError(error.FileNotFound, result);
}
test "loadSnapshotAt: happy path loads and parses" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
@@ -973,12 +970,12 @@ test "loadSnapshotAt: happy path loads and parses" {
\\kind::total,scope::illiquid,value:num:0
\\
;
try tmp.dir.writeFile(.{ .sub_path = "2024-03-15-portfolio.srf", .data = snap_bytes });
try tmp.dir.writeFile(io, .{ .sub_path = "2024-03-15-portfolio.srf", .data = snap_bytes });
const path = try tmp.dir.realpathAlloc(testing.allocator, ".");
const path = try tmp.dir.realPathFileAlloc(io, ".", testing.allocator);
defer testing.allocator.free(path);
var loaded = try loadSnapshotAt(testing.allocator, path, Date.fromYmd(2024, 3, 15));
var loaded = try loadSnapshotAt(io, testing.allocator, path, Date.fromYmd(2024, 3, 15));
defer loaded.deinit(testing.allocator);
try testing.expectEqual(@as(i32, Date.fromYmd(2024, 3, 15).days), loaded.snap.meta.as_of_date.days);
@@ -1302,66 +1299,70 @@ test "sliceCandlesAsOf: as_of after all candles returns everything" {
// resolveSnapshotDate tests
test "resolveSnapshotDate: exact match returns exact=true" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
try tmp.dir.writeFile(.{ .sub_path = "2024-03-15-portfolio.srf", .data = "" });
try tmp.dir.writeFile(io, .{ .sub_path = "2024-03-15-portfolio.srf", .data = "" });
const hist_dir = try tmp.dir.realpathAlloc(testing.allocator, ".");
const hist_dir = try tmp.dir.realPathFileAlloc(io, ".", testing.allocator);
defer testing.allocator.free(hist_dir);
var arena = std.heap.ArenaAllocator.init(testing.allocator);
defer arena.deinit();
const resolved = try resolveSnapshotDate(arena.allocator(), hist_dir, Date.fromYmd(2024, 3, 15));
const resolved = try resolveSnapshotDate(io, arena.allocator(), hist_dir, Date.fromYmd(2024, 3, 15));
try testing.expect(resolved.exact);
try testing.expectEqual(Date.fromYmd(2024, 3, 15).days, resolved.actual.days);
try testing.expectEqual(Date.fromYmd(2024, 3, 15).days, resolved.requested.days);
}
test "resolveSnapshotDate: no exact match snaps to nearest earlier" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
try tmp.dir.writeFile(.{ .sub_path = "2024-03-10-portfolio.srf", .data = "" });
try tmp.dir.writeFile(.{ .sub_path = "2024-03-20-portfolio.srf", .data = "" });
try tmp.dir.writeFile(io, .{ .sub_path = "2024-03-10-portfolio.srf", .data = "" });
try tmp.dir.writeFile(io, .{ .sub_path = "2024-03-20-portfolio.srf", .data = "" });
const hist_dir = try tmp.dir.realpathAlloc(testing.allocator, ".");
const hist_dir = try tmp.dir.realPathFileAlloc(io, ".", testing.allocator);
defer testing.allocator.free(hist_dir);
var arena = std.heap.ArenaAllocator.init(testing.allocator);
defer arena.deinit();
const resolved = try resolveSnapshotDate(arena.allocator(), hist_dir, Date.fromYmd(2024, 3, 15));
const resolved = try resolveSnapshotDate(io, arena.allocator(), hist_dir, Date.fromYmd(2024, 3, 15));
try testing.expect(!resolved.exact);
try testing.expectEqual(Date.fromYmd(2024, 3, 10).days, resolved.actual.days);
try testing.expectEqual(Date.fromYmd(2024, 3, 15).days, resolved.requested.days);
}
test "resolveSnapshotDate: no earlier snapshot returns NoSnapshotAtOrBefore" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
// Only a later snapshot exists; that can't satisfy a request for an earlier date.
try tmp.dir.writeFile(.{ .sub_path = "2024-04-01-portfolio.srf", .data = "" });
try tmp.dir.writeFile(io, .{ .sub_path = "2024-04-01-portfolio.srf", .data = "" });
const hist_dir = try tmp.dir.realpathAlloc(testing.allocator, ".");
const hist_dir = try tmp.dir.realPathFileAlloc(io, ".", testing.allocator);
defer testing.allocator.free(hist_dir);
var arena = std.heap.ArenaAllocator.init(testing.allocator);
defer arena.deinit();
const result = resolveSnapshotDate(arena.allocator(), hist_dir, Date.fromYmd(2024, 3, 15));
const result = resolveSnapshotDate(io, arena.allocator(), hist_dir, Date.fromYmd(2024, 3, 15));
try testing.expectError(error.NoSnapshotAtOrBefore, result);
}
test "resolveSnapshotDate: empty history dir returns NoSnapshotAtOrBefore" {
const io = std.testing.io;
var tmp = std.testing.tmpDir(.{});
defer tmp.cleanup();
const hist_dir = try tmp.dir.realpathAlloc(testing.allocator, ".");
const hist_dir = try tmp.dir.realPathFileAlloc(io, ".", testing.allocator);
defer testing.allocator.free(hist_dir);
var arena = std.heap.ArenaAllocator.init(testing.allocator);
defer arena.deinit();
const result = resolveSnapshotDate(arena.allocator(), hist_dir, Date.fromYmd(2024, 3, 15));
const result = resolveSnapshotDate(io, arena.allocator(), hist_dir, Date.fromYmd(2024, 3, 15));
try testing.expectError(error.NoSnapshotAtOrBefore, result);
}


@@ -192,6 +192,7 @@ fn parseGlobals(args: []const []const u8) GlobalParseError!Globals {
/// resolve through cwd ZFIN_HOME. If null, use the given default filename
/// and run through resolveUserFile.
fn resolveUserPath(
io: std.Io,
allocator: std.mem.Allocator,
config: zfin.Config,
explicit: ?[]const u8,
@@ -199,19 +200,19 @@
) struct { path: []const u8, resolved: ?zfin.Config.ResolvedPath } {
if (explicit) |p| {
// Try resolveUserFile so bare names like "foo.srf" fall back to ZFIN_HOME.
if (config.resolveUserFile(allocator, p)) |r| {
if (config.resolveUserFile(io, allocator, p)) |r| {
return .{ .path = r.path, .resolved = r };
}
return .{ .path = p, .resolved = null };
}
if (config.resolveUserFile(allocator, default_name)) |r| {
if (config.resolveUserFile(io, allocator, default_name)) |r| {
return .{ .path = r.path, .resolved = r };
}
return .{ .path = default_name, .resolved = null };
}
pub fn main() !u8 {
return runCli() catch |err| switch (err) {
pub fn main(init: std.process.Init) !u8 {
return runCli(init) catch |err| switch (err) {
// Downstream pipe closed (e.g., `zfin earnings AAPL | head`). Zig's
// file writer surfaces EPIPE as WriteFailed. Treat as a clean exit:
// the consumer got what it needed and closed the pipe; further
@@ -222,24 +223,24 @@ pub fn main() !u8 {
};
}
fn runCli() !u8 {
// Long-lived allocator for things that span the whole process. Only
// actually used for the early argsAlloc and the TUI path CLI
// commands run under a per-invocation arena (see below).
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
defer _ = gpa.deinit();
const gpa_alloc = gpa.allocator();
fn runCli(init: std.process.Init) !u8 {
// Juicy Main provides two allocators: `init.gpa` (debug-mode leak-checked
// heap) and `init.arena` (process-lifetime arena). We use gpa for the
// argv copy and long-lived TUI state; per-command work runs under a
// fresh ArenaAllocator below.
const gpa_alloc = init.gpa;
const io = init.io;
const args = try std.process.argsAlloc(gpa_alloc);
defer std.process.argsFree(gpa_alloc, args);
const args = try init.minimal.args.toSlice(gpa_alloc);
defer gpa_alloc.free(args);
// Single buffered writer for all stdout output
var stdout_buf: [4096]u8 = undefined;
var stdout_writer = std.fs.File.stdout().writer(&stdout_buf);
var stdout_writer = std.Io.File.stdout().writer(io, &stdout_buf);
const out: *std.Io.Writer = &stdout_writer.interface;
if (args.len < 2) {
try cli.stderrPrint(usage);
try cli.stderrPrint(io, usage);
return 1;
}
@@ -256,23 +257,36 @@ fn runCli() !u8 {
// Parse global flags.
const globals = parseGlobals(args) catch |err| {
switch (err) {
error.MissingValue => try cli.stderrPrint("Error: global flag is missing its value\n"),
error.MissingValue => try cli.stderrPrint(io, "Error: global flag is missing its value\n"),
error.UnknownGlobalFlag => {
try cli.stderrPrint("Error: unknown global flag: ");
try cli.stderrPrint(io, "Error: unknown global flag: ");
if (globalOffender(args)) |bad| {
try cli.stderrPrint(bad);
try cli.stderrPrint(io, bad);
}
try cli.stderrPrint("\nRun 'zfin help' for usage.\n");
try cli.stderrPrint(io, "\nRun 'zfin help' for usage.\n");
},
}
return 1;
};
if (globals.cursor >= args.len) {
try cli.stderrPrint("Error: missing command.\nRun 'zfin help' for usage.\n");
try cli.stderrPrint(io, "Error: missing command.\nRun 'zfin help' for usage.\n");
return 1;
}
// Single wall-clock capture for the rest of this invocation. `now_s`
// is threaded into commands that record "when did this happen"
// (snapshot metadata, audit staleness, rollup header timestamps).
// `today` derives from the same read, so every dated computation in
// this process sees a consistent date even if the wall clock ticks
// over mid-run.
//
// wall-clock required: the one legitimate Timestamp.now() call in
// main dispatch; everything downstream takes now_s / today.
const Date = @import("models/date.zig").Date;
const now_s = std.Io.Timestamp.now(io, .real).toSeconds();
const today = Date.fromEpoch(now_s);
// Nag on stderr when hand-maintained data sources are overdue for
// refresh (T-bill rates, Shiller ie_data.csv). See
// src/data/staleness.zig for the registry and rules. Runs here
@@ -280,26 +294,24 @@ fn runCli() !u8 {
// lands above command output on every CLI and TUI invocation.
{
const staleness = @import("data/staleness.zig");
const Date = @import("models/date.zig").Date;
var stale_buf: [2048]u8 = undefined;
var stale_writer = std.fs.File.stderr().writer(&stale_buf);
const today = Date.fromEpoch(std.time.timestamp());
var stale_writer = std.Io.File.stderr().writer(io, &stale_buf);
staleness.check(&stale_writer.interface, today, &staleness.entries) catch {};
stale_writer.interface.flush() catch {};
}
const color = @import("format.zig").shouldUseColor(globals.no_color);
const color = @import("format.zig").shouldUseColor(io, init.environ_map, globals.no_color);
const command = args[globals.cursor];
const cmd_args = args[globals.cursor + 1 ..];
var cmd_args: []const []const u8 = @ptrCast(args[globals.cursor + 1 ..]);
// Interactive TUI: long-lived, per-frame allocations benefit from a
// real (non-arena) allocator. Runs against `gpa` directly.
if (std.mem.eql(u8, command, "interactive") or std.mem.eql(u8, command, "i")) {
var tui_config = zfin.Config.fromEnv(gpa_alloc);
var tui_config = zfin.Config.fromEnv(io, gpa_alloc, init.environ_map);
defer tui_config.deinit();
try out.flush();
try tui.run(gpa_alloc, tui_config, globals.portfolio_path, globals.watchlist_path, cmd_args);
try tui.run(io, gpa_alloc, tui_config, globals.portfolio_path, globals.watchlist_path, cmd_args, today);
return 0;
}
@@ -319,12 +331,12 @@ fn runCli() !u8 {
defer arena.deinit();
const allocator = arena.allocator();
var config = zfin.Config.fromEnv(allocator);
var config = zfin.Config.fromEnv(io, allocator, init.environ_map);
defer config.deinit();
// Version: doesn't need DataService; uses build_info + Config paths.
if (std.mem.eql(u8, command, "version")) {
commands.version.run(config, cmd_args, out) catch |err| switch (err) {
commands.version.run(io, config, cmd_args, out) catch |err| switch (err) {
error.UnexpectedArg => return 1,
else => return err,
};
@@ -332,7 +344,7 @@ fn runCli() !u8 {
return 0;
}
var svc = zfin.DataService.init(allocator, config);
var svc = zfin.DataService.init(io, allocator, config);
defer svc.deinit();
// Normalize symbol argument (cmd_args[0]) to uppercase for commands
@@ -353,24 +365,36 @@ fn runCli() !u8 {
// the arg is a flag (starts with '-'). This lets commands like
// `history` have both symbol mode (`zfin history VTI`) and
// flag-driven mode (`zfin history --since 2026-01-01`).
//
// Args returned by `init.minimal.args.toSlice` are `[]const [:0]const u8`;
// we can't mutate the slice in place. Build an owned mutable copy when
// the upper-cased form of the symbol differs from the raw arg.
var cmd_args_owned: ?[][]const u8 = null;
defer if (cmd_args_owned) |c| allocator.free(c);
if (symbol_cmd and cmd_args.len >= 1 and
(cmd_args[0].len == 0 or cmd_args[0][0] != '-'))
{
for (cmd_args[0]) |*c| c.* = std.ascii.toUpper(c.*);
const upper = try allocator.dupe(u8, cmd_args[0]);
for (upper) |*c| c.* = std.ascii.toUpper(c.*);
const owned = try allocator.alloc([]const u8, cmd_args.len);
owned[0] = upper;
for (cmd_args[1..], 1..) |a, i| owned[i] = a;
cmd_args_owned = owned;
cmd_args = owned;
}
if (std.mem.eql(u8, command, "perf")) {
if (cmd_args.len < 1) {
try cli.stderrPrint("Error: 'perf' requires a symbol argument\n");
try cli.stderrPrint(io, "Error: 'perf' requires a symbol argument\n");
return 1;
}
try commands.perf.run(allocator, &svc, cmd_args[0], color, out);
try commands.perf.run(io, allocator, &svc, cmd_args[0], today, color, out);
} else if (std.mem.eql(u8, command, "quote")) {
if (cmd_args.len < 1) {
try cli.stderrPrint("Error: 'quote' requires a symbol argument\n");
try cli.stderrPrint(io, "Error: 'quote' requires a symbol argument\n");
return 1;
}
try commands.quote.run(allocator, &svc, cmd_args[0], color, out);
try commands.quote.run(io, allocator, &svc, cmd_args[0], today, color, out);
} else if (std.mem.eql(u8, command, "history")) {
// Two modes in one command:
// zfin history <SYMBOL> candle history for a symbol (legacy)
@@ -382,33 +406,33 @@ fn runCli() !u8 {
// inside the command.
const is_symbol_mode = cmd_args.len > 0 and cmd_args[0].len > 0 and cmd_args[0][0] != '-';
if (is_symbol_mode) {
commands.history.run(allocator, &svc, "", cmd_args, color, out) catch |err| switch (err) {
commands.history.run(io, allocator, &svc, "", cmd_args, today, color, out) catch |err| switch (err) {
error.UnexpectedArg, error.MissingFlagValue, error.InvalidFlagValue, error.UnknownMetric => return 1,
else => return err,
};
} else {
const pf = resolveUserPath(allocator, config, globals.portfolio_path, zfin.Config.default_portfolio_filename);
const pf = resolveUserPath(io, allocator, config, globals.portfolio_path, zfin.Config.default_portfolio_filename);
defer if (pf.resolved) |r| r.deinit(allocator);
commands.history.run(allocator, &svc, pf.path, cmd_args, color, out) catch |err| switch (err) {
commands.history.run(io, allocator, &svc, pf.path, cmd_args, today, color, out) catch |err| switch (err) {
error.UnexpectedArg, error.MissingFlagValue, error.InvalidFlagValue, error.UnknownMetric => return 1,
else => return err,
};
}
} else if (std.mem.eql(u8, command, "divs")) {
if (cmd_args.len < 1) {
try cli.stderrPrint("Error: 'divs' requires a symbol argument\n");
try cli.stderrPrint(io, "Error: 'divs' requires a symbol argument\n");
return 1;
}
try commands.divs.run(&svc, cmd_args[0], color, out);
try commands.divs.run(io, &svc, cmd_args[0], today, color, out);
} else if (std.mem.eql(u8, command, "splits")) {
if (cmd_args.len < 1) {
try cli.stderrPrint("Error: 'splits' requires a symbol argument\n");
try cli.stderrPrint(io, "Error: 'splits' requires a symbol argument\n");
return 1;
}
try commands.splits.run(&svc, cmd_args[0], color, out);
try commands.splits.run(io, &svc, cmd_args[0], color, out);
} else if (std.mem.eql(u8, command, "options")) {
if (cmd_args.len < 1) {
try cli.stderrPrint("Error: 'options' requires a symbol argument\n");
try cli.stderrPrint(io, "Error: 'options' requires a symbol argument\n");
return 1;
}
// Parse --ntm flag.
@@ -420,19 +444,19 @@ fn runCli() !u8 {
ntm = std.fmt.parseInt(usize, cmd_args[ai], 10) catch 8;
}
}
try commands.options.run(&svc, cmd_args[0], ntm, color, out);
try commands.options.run(io, &svc, cmd_args[0], ntm, color, out);
} else if (std.mem.eql(u8, command, "earnings")) {
if (cmd_args.len < 1) {
try cli.stderrPrint("Error: 'earnings' requires a symbol argument\n");
try cli.stderrPrint(io, "Error: 'earnings' requires a symbol argument\n");
return 1;
}
try commands.earnings.run(&svc, cmd_args[0], color, out);
try commands.earnings.run(io, &svc, cmd_args[0], color, out);
} else if (std.mem.eql(u8, command, "etf")) {
if (cmd_args.len < 1) {
try cli.stderrPrint("Error: 'etf' requires a symbol argument\n");
try cli.stderrPrint(io, "Error: 'etf' requires a symbol argument\n");
return 1;
}
try commands.etf.run(&svc, cmd_args[0], color, out);
try commands.etf.run(io, &svc, cmd_args[0], color, out);
} else if (std.mem.eql(u8, command, "portfolio")) {
// Parse --refresh flag; reject any other token (including old
// positional FILE, which is now a global -p).
@@ -441,46 +465,46 @@ fn runCli() !u8 {
if (std.mem.eql(u8, a, "--refresh")) {
force_refresh = true;
} else {
try reportUnexpectedArg("portfolio", a);
try reportUnexpectedArg(io, "portfolio", a);
return 1;
}
}
const pf = resolveUserPath(allocator, config, globals.portfolio_path, zfin.Config.default_portfolio_filename);
const pf = resolveUserPath(io, allocator, config, globals.portfolio_path, zfin.Config.default_portfolio_filename);
defer if (pf.resolved) |r| r.deinit(allocator);
const wl = resolveUserPath(allocator, config, globals.watchlist_path, zfin.Config.default_watchlist_filename);
const wl = resolveUserPath(io, allocator, config, globals.watchlist_path, zfin.Config.default_watchlist_filename);
defer if (wl.resolved) |r| r.deinit(allocator);
const wl_path: ?[]const u8 = if (globals.watchlist_path != null or wl.resolved != null) wl.path else null;
try commands.portfolio.run(allocator, &svc, pf.path, wl_path, force_refresh, color, out);
try commands.portfolio.run(io, allocator, &svc, pf.path, wl_path, force_refresh, today, color, out);
} else if (std.mem.eql(u8, command, "lookup")) {
if (cmd_args.len < 1) {
try cli.stderrPrint("Error: 'lookup' requires a CUSIP argument\n");
try cli.stderrPrint(io, "Error: 'lookup' requires a CUSIP argument\n");
return 1;
}
try commands.lookup.run(allocator, &svc, cmd_args[0], color, out);
try commands.lookup.run(io, allocator, &svc, cmd_args[0], color, out);
} else if (std.mem.eql(u8, command, "cache")) {
if (cmd_args.len < 1) {
try cli.stderrPrint("Error: 'cache' requires a subcommand (stats, clear)\n");
try cli.stderrPrint(io, "Error: 'cache' requires a subcommand (stats, clear)\n");
return 1;
}
try commands.cache.run(allocator, config, cmd_args[0], out);
try commands.cache.run(io, allocator, config, cmd_args[0], out);
} else if (std.mem.eql(u8, command, "enrich")) {
if (cmd_args.len < 1) {
try cli.stderrPrint("Error: 'enrich' requires a portfolio file path or symbol\n");
try cli.stderrPrint(io, "Error: 'enrich' requires a portfolio file path or symbol\n");
return 1;
}
try commands.enrich.run(allocator, &svc, cmd_args[0], out);
try commands.enrich.run(io, allocator, &svc, cmd_args[0], today, out);
} else if (std.mem.eql(u8, command, "audit")) {
const pf = resolveUserPath(allocator, config, globals.portfolio_path, zfin.Config.default_portfolio_filename);
const pf = resolveUserPath(io, allocator, config, globals.portfolio_path, zfin.Config.default_portfolio_filename);
defer if (pf.resolved) |r| r.deinit(allocator);
try commands.audit.run(allocator, &svc, pf.path, cmd_args, color, out);
try commands.audit.run(io, allocator, &svc, pf.path, cmd_args, today, now_s, color, out);
} else if (std.mem.eql(u8, command, "analysis")) {
for (cmd_args) |a| {
try reportUnexpectedArg("analysis", a);
try reportUnexpectedArg(io, "analysis", a);
return 1;
}
const pf = resolveUserPath(allocator, config, globals.portfolio_path, zfin.Config.default_portfolio_filename);
const pf = resolveUserPath(io, allocator, config, globals.portfolio_path, zfin.Config.default_portfolio_filename);
defer if (pf.resolved) |r| r.deinit(allocator);
try commands.analysis.run(allocator, &svc, pf.path, color, out);
try commands.analysis.run(io, allocator, &svc, pf.path, today, color, out);
} else if (std.mem.eql(u8, command, "projections")) {
var events_enabled = true;
var as_of: ?zfin.Date = null;
@@ -492,23 +516,22 @@ fn runCli() !u8 {
events_enabled = false;
} else if (std.mem.eql(u8, a, "--as-of") or std.mem.eql(u8, a, "--vs")) {
if (i + 1 >= cmd_args.len) {
try cli.stderrPrint("Error: ");
try cli.stderrPrint(a);
try cli.stderrPrint(" requires a value (YYYY-MM-DD, N[WMQY], or 'live').\n");
try cli.stderrPrint(io, "Error: ");
try cli.stderrPrint(io, a);
try cli.stderrPrint(io, " requires a value (YYYY-MM-DD, N[WMQY], or 'live').\n");
return 1;
}
const value = cmd_args[i + 1];
const today = cli.fmt.todayDate();
const parsed = cli.parseAsOfDate(value, today) catch |err| {
var buf: [256]u8 = undefined;
const msg = cli.fmtAsOfParseError(&buf, value, err);
try cli.stderrPrint(msg);
try cli.stderrPrint("\n");
try cli.stderrPrint(io, msg);
try cli.stderrPrint(io, "\n");
return 1;
};
if (parsed) |d| {
if (d.days > today.days) {
try cli.stderrPrint("Error: date is in the future.\n");
try cli.stderrPrint(io, "Error: date is in the future.\n");
return 1;
}
if (std.mem.eql(u8, a, "--as-of")) {
@@ -521,14 +544,14 @@ fn runCli() !u8 {
// as not passing the flag at all.
i += 1; // consume the value
} else {
try reportUnexpectedArg("projections", a);
try reportUnexpectedArg(io, "projections", a);
return 1;
}
}
if (as_of != null and vs_date == null) {
// Single-date mode: view that snapshot only.
}
const pf = resolveUserPath(allocator, config, globals.portfolio_path, zfin.Config.default_portfolio_filename);
const pf = resolveUserPath(io, allocator, config, globals.portfolio_path, zfin.Config.default_portfolio_filename);
defer if (pf.resolved) |r| r.deinit(allocator);
if (vs_date) |d| {
// Compare mode. `as_of` (if set) designates the "now"
@@ -536,9 +559,9 @@ fn runCli() !u8 {
// live against a historical date; `--vs X --as-of Y`
// compares two historical dates with Y being the later
// one.
try commands.projections.runCompare(allocator, &svc, pf.path, events_enabled, d, as_of, color, out);
try commands.projections.runCompare(io, allocator, &svc, pf.path, events_enabled, d, as_of orelse today, as_of != null, color, out);
} else {
try commands.projections.run(allocator, &svc, pf.path, events_enabled, as_of, color, out);
try commands.projections.run(io, allocator, &svc, pf.path, events_enabled, as_of orelse today, as_of != null, color, out);
}
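The `--as-of` resolution rule enforced above (explicit date or relative offset, "live" meaning wall-clock, future dates rejected) can be sketched outside the diff. This is a hedged Python approximation of parseAsOfDate's contract, not a transcription: the function name and the 30/91/365-day lengths for M/Q/Y are assumptions.

```python
from datetime import date, timedelta

# Hypothetical sketch: None means "live" (same as omitting the flag);
# offsets like "2W" are relative to today; month/quarter/year lengths
# (30/91/365 days) are assumptions, not zfin's actual calendar math.
def parse_as_of(value: str, today: date):
    if value in ("", "live"):
        return None
    if len(value) >= 2 and value[:-1].isdigit() and value[-1] in "WMQY":
        days = {"W": 7, "M": 30, "Q": 91, "Y": 365}[value[-1]]
        return today - timedelta(days=int(value[:-1]) * days)
    d = date.fromisoformat(value)  # raises ValueError on malformed input
    if d > today:
        raise ValueError("date is in the future")
    return d
```

Returning None for "live" mirrors the comment above: `--as-of live` behaves exactly like not passing the flag at all.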
} else if (std.mem.eql(u8, command, "contributions")) {
var since: ?zfin.Date = null;
@ -550,27 +573,26 @@ fn runCli() !u8 {
const a = cmd_args[i];
if (std.mem.eql(u8, a, "--since") or std.mem.eql(u8, a, "--until")) {
if (i + 1 >= cmd_args.len) {
try cli.stderrPrint("Error: ");
try cli.stderrPrint(a);
try cli.stderrPrint(" requires a value (YYYY-MM-DD or N[WMQY]).\n");
try cli.stderrPrint(io, "Error: ");
try cli.stderrPrint(io, a);
try cli.stderrPrint(io, " requires a value (YYYY-MM-DD or N[WMQY]).\n");
return 1;
}
const value = cmd_args[i + 1];
const today = cli.fmt.todayDate();
const parsed = cli.parseAsOfDate(value, today) catch |err| {
var buf: [256]u8 = undefined;
const msg = cli.fmtAsOfParseError(&buf, value, err);
try cli.stderrPrint(msg);
try cli.stderrPrint("\n");
try cli.stderrPrint(io, msg);
try cli.stderrPrint(io, "\n");
return 1;
};
// `parsed == null` means the user typed "live" or an
// empty string, which is meaningless for --since/--until; those
// flags require concrete dates.
const resolved = parsed orelse {
try cli.stderrPrint("Error: ");
try cli.stderrPrint(a);
try cli.stderrPrint(" does not accept 'live'. Use an explicit date or relative offset.\n");
try cli.stderrPrint(io, "Error: ");
try cli.stderrPrint(io, a);
try cli.stderrPrint(io, " does not accept 'live'. Use an explicit date or relative offset.\n");
return 1;
};
if (std.mem.eql(u8, a, "--since")) {
@ -581,23 +603,22 @@ fn runCli() !u8 {
i += 1; // consume the value
} else if (std.mem.eql(u8, a, "--commit-before") or std.mem.eql(u8, a, "--commit-after")) {
if (i + 1 >= cmd_args.len) {
try cli.stderrPrint("Error: ");
try cli.stderrPrint(a);
try cli.stderrPrint(" requires a value (working, YYYY-MM-DD, 1W/1M/1Q/1Y, HEAD, HEAD~N, or SHA).\n");
try cli.stderrPrint(io, "Error: ");
try cli.stderrPrint(io, a);
try cli.stderrPrint(io, " requires a value (working, YYYY-MM-DD, 1W/1M/1Q/1Y, HEAD, HEAD~N, or SHA).\n");
return 1;
}
const value = cmd_args[i + 1];
const today = cli.fmt.todayDate();
const spec = cli.parseCommitSpec(value, today) catch |err| {
var buf: [256]u8 = undefined;
const msg = cli.fmtCommitSpecError(&buf, value, err);
try cli.stderrPrint(msg);
try cli.stderrPrint("\n");
try cli.stderrPrint(io, msg);
try cli.stderrPrint(io, "\n");
return 1;
};
if (std.mem.eql(u8, a, "--commit-before")) {
if (spec == .working_copy) {
try cli.stderrPrint("Error: --commit-before cannot be `working` — diffing the working copy against itself is meaningless.\n");
try cli.stderrPrint(io, "Error: --commit-before cannot be `working` — diffing the working copy against itself is meaningless.\n");
return 1;
}
before_spec = spec;
@ -606,7 +627,7 @@ fn runCli() !u8 {
}
i += 1; // consume the value
} else {
try reportUnexpectedArg("contributions", a);
try reportUnexpectedArg(io, "contributions", a);
return 1;
}
}
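The commit-spec grammar listed in the error message above (working, YYYY-MM-DD, 1W/1M/1Q/1Y, HEAD, HEAD~N, or SHA) classifies roughly as follows. A hedged Python sketch; the tag names and relative-offset day counts are illustrative, not zfin's actual enum.

```python
from datetime import date, timedelta

# Illustrative classifier for the commit-spec grammar; tags and the
# 30/91/365-day offsets are assumptions.
def parse_commit_spec(value: str, today: date):
    if value == "working":
        return ("working_copy", None)
    if value == "HEAD" or value.startswith("HEAD~"):
        return ("rev", value)
    try:
        return ("date", date.fromisoformat(value))
    except ValueError:
        pass
    if len(value) >= 2 and value[:-1].isdigit() and value[-1] in "WMQY":
        days = {"W": 7, "M": 30, "Q": 91, "Y": 365}[value[-1]]
        return ("date", today - timedelta(days=int(value[:-1]) * days))
    return ("sha", value)  # anything else is treated as a commit hash
```

The `working_copy` tag is what --commit-before rejects above: diffing the working copy against itself is meaningless.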
@ -614,15 +635,15 @@ fn runCli() !u8 {
// same axis, same for --until and --commit-after. Taking both
// would be ambiguous about which wins.
if (since != null and before_spec != null) {
try cli.stderrPrint("Error: --since and --commit-before both specify the before side. Pick one.\n");
try cli.stderrPrint(io, "Error: --since and --commit-before both specify the before side. Pick one.\n");
return 1;
}
if (until != null and after_spec != null) {
try cli.stderrPrint("Error: --until and --commit-after both specify the after side. Pick one.\n");
try cli.stderrPrint(io, "Error: --until and --commit-after both specify the after side. Pick one.\n");
return 1;
}
if (since != null and until != null and since.?.days > until.?.days) {
try cli.stderrPrint("Error: --since must be on or before --until.\n");
try cli.stderrPrint(io, "Error: --since must be on or before --until.\n");
return 1;
}
// Resolve to CommitSpec for the command. Date flags become
@ -640,20 +661,20 @@ fn runCli() !u8 {
else
null;
const pf = resolveUserPath(allocator, config, globals.portfolio_path, zfin.Config.default_portfolio_filename);
const pf = resolveUserPath(io, allocator, config, globals.portfolio_path, zfin.Config.default_portfolio_filename);
defer if (pf.resolved) |r| r.deinit(allocator);
try commands.contributions.run(allocator, &svc, pf.path, before_final, after_final, color, out);
try commands.contributions.run(io, allocator, &svc, pf.path, before_final, after_final, today, color, out);
} else if (std.mem.eql(u8, command, "snapshot")) {
const pf = resolveUserPath(allocator, config, globals.portfolio_path, zfin.Config.default_portfolio_filename);
const pf = resolveUserPath(io, allocator, config, globals.portfolio_path, zfin.Config.default_portfolio_filename);
defer if (pf.resolved) |r| r.deinit(allocator);
commands.snapshot.run(allocator, &svc, pf.path, cmd_args, color, out) catch |err| switch (err) {
commands.snapshot.run(io, allocator, &svc, pf.path, cmd_args, now_s, color, out) catch |err| switch (err) {
error.UnexpectedArg, error.PortfolioEmpty, error.WriteFailed => return 1,
else => return err,
};
} else if (std.mem.eql(u8, command, "compare")) {
const pf = resolveUserPath(allocator, config, globals.portfolio_path, zfin.Config.default_portfolio_filename);
const pf = resolveUserPath(io, allocator, config, globals.portfolio_path, zfin.Config.default_portfolio_filename);
defer if (pf.resolved) |r| r.deinit(allocator);
commands.compare.run(allocator, &svc, pf.path, cmd_args, color, out) catch |err| switch (err) {
commands.compare.run(io, allocator, &svc, pf.path, cmd_args, today, color, out) catch |err| switch (err) {
// All user-level validation errors return 1 silently; the
// command already printed a message to stderr.
error.UnexpectedArg,
@ -666,7 +687,7 @@ fn runCli() !u8 {
else => return err,
};
} else {
try cli.stderrPrint("Unknown command. Run 'zfin help' for usage.\n");
try cli.stderrPrint(io, "Unknown command. Run 'zfin help' for usage.\n");
return 1;
}
@ -679,21 +700,21 @@ fn runCli() !u8 {
/// the global-flag migration. Called when a command finds an arg it doesn't
/// understand (typically a stale positional file path or a misplaced global
/// flag like `--no-color` after the subcommand).
fn reportUnexpectedArg(command: []const u8, arg: []const u8) !void {
try cli.stderrPrint("Error: unexpected argument to '");
try cli.stderrPrint(command);
try cli.stderrPrint("': ");
try cli.stderrPrint(arg);
try cli.stderrPrint("\n");
fn reportUnexpectedArg(io: std.Io, command: []const u8, arg: []const u8) !void {
try cli.stderrPrint(io, "Error: unexpected argument to '");
try cli.stderrPrint(io, command);
try cli.stderrPrint(io, "': ");
try cli.stderrPrint(io, arg);
try cli.stderrPrint(io, "\n");
if (std.mem.eql(u8, arg, "--no-color") or
std.mem.eql(u8, arg, "-p") or std.mem.eql(u8, arg, "--portfolio") or
std.mem.eql(u8, arg, "-w") or std.mem.eql(u8, arg, "--watchlist"))
{
try cli.stderrPrint("Hint: global flags must appear before the subcommand.\n");
try cli.stderrPrint(io, "Hint: global flags must appear before the subcommand.\n");
} else {
try cli.stderrPrint("Hint: the portfolio file is now a global option; use `zfin -p <FILE> ");
try cli.stderrPrint(command);
try cli.stderrPrint("`.\n");
try cli.stderrPrint(io, "Hint: the portfolio file is now a global option; use `zfin -p <FILE> ");
try cli.stderrPrint(io, command);
try cli.stderrPrint(io, "`.\n");
}
}
@ -795,16 +816,16 @@ test "parseGlobals: subcommand-local flag NOT consumed as global" {
}
// Single test binary: all source is in one module (file imports, no module
// boundaries), so refAllDeclsRecursive discovers every test in the tree.
// boundaries). `std.testing.refAllDecls(@This())` walks main.zig's top-level
// decls; explicit `_ = @import(...)` lines below cover files reachable only
// indirectly (e.g. via non-pub re-exports, or through types extracted by
// signature without the file's struct itself ever being walked).
//
// IMPORTANT: refAllDeclsRecursive only walks files reachable from main.zig's
// public decl graph. Files that are only reached indirectly (e.g. a module
// re-exported from root.zig as `pub const foo = @import("foo.zig")` where
// main.zig imports root.zig via a *non-pub* `const zfin = @import("root.zig")`)
// are compiled (because their types are referenced) but their `test` blocks
// are NOT collected. Add explicit `_ = @import("path/to/file.zig");` lines
// in the test block below for any such orphaned test files.
// See AGENTS.md "Adding tests" for details.
// To find missing imports: comment out a candidate, run `zig build test
// --summary all`, and watch the test count. If it drops, the import was
// load-bearing. Per AGENTS.md, this list is the minimum set; do not add
// imports speculatively.
test {
std.testing.refAllDeclsRecursive(@This());
std.testing.refAllDecls(@This());
}


@ -210,8 +210,8 @@ pub const Lot = struct {
return self.ticker orelse self.symbol;
}
pub fn isOpen(self: Lot) bool {
return self.lotIsOpenAsOf(Date.fromEpoch(std.time.timestamp()));
pub fn isOpen(self: Lot, as_of: Date) bool {
return self.lotIsOpenAsOf(as_of);
}
/// Was the lot held at end-of-day on `as_of`?
@ -411,8 +411,8 @@ pub const Portfolio = struct {
/// Uses wall-clock today for the open/closed determination. For
/// historical snapshot backfill where "today" is not the right
/// reference, use `positionsAsOf(allocator, as_of)`.
pub fn positions(self: Portfolio, allocator: std.mem.Allocator) ![]Position {
return self.positionsAsOf(allocator, Date.fromEpoch(std.time.timestamp()));
pub fn positions(self: Portfolio, as_of: Date, allocator: std.mem.Allocator) ![]Position {
return self.positionsAsOf(allocator, as_of);
}
/// Like `positions` but evaluates lot open/closed against `as_of`
@ -491,7 +491,7 @@ pub const Portfolio = struct {
/// Aggregate stock/ETF lots into positions for a single account.
/// Same logic as positions() but filtered to lots matching `account_name`.
/// Only includes positions with at least one open lot (closed-only symbols are excluded).
pub fn positionsForAccount(self: Portfolio, allocator: std.mem.Allocator, account_name: []const u8) ![]Position {
pub fn positionsForAccount(self: Portfolio, as_of: Date, allocator: std.mem.Allocator, account_name: []const u8) ![]Position {
var result = std.ArrayList(Position).empty;
errdefer result.deinit(allocator);
@ -529,7 +529,7 @@ pub const Portfolio = struct {
}
const pos = found.?;
if (lot.isOpen()) {
if (lot.isOpen(as_of)) {
pos.shares += lot.shares;
pos.total_cost += lot.costBasis();
pos.open_lots += 1;
@ -567,10 +567,10 @@ pub const Portfolio = struct {
/// Total value of non-stock holdings (cash, CDs, options) for a single account.
/// Only includes open lots (respects close_date and maturity_date).
pub fn nonStockValueForAccount(self: Portfolio, account_name: []const u8) f64 {
pub fn nonStockValueForAccount(self: Portfolio, as_of: Date, account_name: []const u8) f64 {
var total: f64 = 0;
for (self.lots) |lot| {
if (!lot.isOpen()) continue;
if (!lot.isOpen(as_of)) continue;
const lot_acct = lot.account orelse continue;
if (!std.mem.eql(u8, lot_acct, account_name)) continue;
switch (lot.security_type) {
@ -585,10 +585,10 @@ pub const Portfolio = struct {
/// Total value of an account: stocks (priced from the given map, falling back to avg_cost)
/// plus cash, CDs, and options. Only includes open lots.
pub fn totalForAccount(self: Portfolio, allocator: std.mem.Allocator, account_name: []const u8, prices: std.StringHashMap(f64)) f64 {
pub fn totalForAccount(self: Portfolio, as_of: Date, allocator: std.mem.Allocator, account_name: []const u8, prices: std.StringHashMap(f64)) f64 {
var total: f64 = 0;
const acct_positions = self.positionsForAccount(allocator, account_name) catch return self.nonStockValueForAccount(account_name);
const acct_positions = self.positionsForAccount(as_of, allocator, account_name) catch return self.nonStockValueForAccount(as_of, account_name);
defer allocator.free(acct_positions);
for (acct_positions) |pos| {
@ -596,15 +596,15 @@ pub const Portfolio = struct {
total += pos.shares * price * pos.price_ratio;
}
total += self.nonStockValueForAccount(account_name);
total += self.nonStockValueForAccount(as_of, account_name);
return total;
}
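The account-total arithmetic above is small enough to restate: stocks priced from the map with an avg-cost fallback, plus the non-stock value. A Python sketch; the (symbol, shares, avg_cost, price_ratio) tuple shape is an assumption standing in for Position.

```python
# Sketch of totalForAccount's arithmetic, decoupled from as_of filtering.
def total_for_account(positions, prices, non_stock_value):
    total = 0.0
    for symbol, shares, avg_cost, ratio in positions:
        price = prices.get(symbol, avg_cost)  # fall back to avg cost
        total += shares * price * ratio
    return total + non_stock_value
```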
/// Total cost basis of all open stock lots.
pub fn totalCostBasis(self: Portfolio) f64 {
pub fn totalCostBasis(self: Portfolio, as_of: Date) f64 {
var total: f64 = 0;
for (self.lots) |lot| {
if (lot.isOpen() and lot.security_type == .stock) total += lot.costBasis();
if (lot.isOpen(as_of) and lot.security_type == .stock) total += lot.costBasis();
}
return total;
}
@ -621,8 +621,8 @@ pub const Portfolio = struct {
}
/// Total cash across all accounts (open lots only).
pub fn totalCash(self: Portfolio) f64 {
return self.totalCashAsOf(Date.fromEpoch(std.time.timestamp()));
pub fn totalCash(self: Portfolio, as_of: Date) f64 {
return self.totalCashAsOf(as_of);
}
/// `totalCash` evaluated against an arbitrary date; used by
@ -638,8 +638,8 @@ pub const Portfolio = struct {
}
/// Total illiquid asset value across all accounts (open lots only).
pub fn totalIlliquid(self: Portfolio) f64 {
return self.totalIlliquidAsOf(Date.fromEpoch(std.time.timestamp()));
pub fn totalIlliquid(self: Portfolio, as_of: Date) f64 {
return self.totalIlliquidAsOf(as_of);
}
/// `totalIlliquid` evaluated against an arbitrary date.
@ -655,8 +655,8 @@ pub const Portfolio = struct {
/// Total CD face value across all accounts (open lots only;
/// matured CDs are excluded).
pub fn totalCdFaceValue(self: Portfolio) f64 {
return self.totalCdFaceValueAsOf(Date.fromEpoch(std.time.timestamp()));
pub fn totalCdFaceValue(self: Portfolio, as_of: Date) f64 {
return self.totalCdFaceValueAsOf(as_of);
}
/// `totalCdFaceValue` evaluated against an arbitrary date.
@ -672,8 +672,8 @@ pub const Portfolio = struct {
/// Total option cost basis (|shares| * open_price * multiplier);
/// open lots only. Closed/matured options are excluded.
pub fn totalOptionCost(self: Portfolio) f64 {
return self.totalOptionCostAsOf(Date.fromEpoch(std.time.timestamp()));
pub fn totalOptionCost(self: Portfolio, as_of: Date) f64 {
return self.totalOptionCostAsOf(as_of);
}
/// `totalOptionCost` evaluated against an arbitrary date.
@ -731,7 +731,7 @@ test "lot basics" {
.open_date = Date.fromYmd(2024, 1, 15),
.open_price = 150.0,
};
try std.testing.expect(lot.isOpen());
try std.testing.expect(lot.isOpen(Date.fromYmd(2026, 5, 8)));
try std.testing.expectApproxEqAbs(@as(f64, 1500.0), lot.costBasis(), 0.01);
try std.testing.expectApproxEqAbs(@as(f64, 2000.0), lot.marketValue(200.0, true), 0.01);
try std.testing.expectApproxEqAbs(@as(f64, 500.0), lot.unrealizedGainLoss(200.0), 0.01);
@ -747,7 +747,7 @@ test "closed lot" {
.close_date = Date.fromYmd(2024, 6, 15),
.close_price = 200.0,
};
try std.testing.expect(!lot.isOpen());
try std.testing.expect(!lot.isOpen(Date.fromYmd(2026, 5, 8)));
try std.testing.expectApproxEqAbs(@as(f64, 500.0), lot.realizedGainLoss().?, 0.01);
try std.testing.expectApproxEqAbs(@as(f64, 0.3333), lot.returnPct(0), 0.001);
}
@ -765,7 +765,7 @@ test "portfolio positions" {
var portfolio = Portfolio{ .lots = &lots, .allocator = allocator };
// Don't call deinit since these are stack-allocated test strings
const pos = try portfolio.positions(allocator);
const pos = try portfolio.positions(Date.fromYmd(2026, 5, 8), allocator);
defer allocator.free(pos);
try std.testing.expectEqual(@as(usize, 2), pos.len);
@ -831,17 +831,17 @@ test "Portfolio totals" {
const portfolio = Portfolio{ .lots = &lots, .allocator = std.testing.allocator };
// totalCostBasis: only open stock lots -> 10 * 150 = 1500
try std.testing.expectApproxEqAbs(@as(f64, 1500.0), portfolio.totalCostBasis(), 0.01);
try std.testing.expectApproxEqAbs(@as(f64, 1500.0), portfolio.totalCostBasis(Date.fromYmd(2026, 5, 8)), 0.01);
// totalRealizedGainLoss: closed stock lots -> 5 * (160-140) = 100
try std.testing.expectApproxEqAbs(@as(f64, 100.0), portfolio.totalRealizedGainLoss(), 0.01);
// totalCash
try std.testing.expectApproxEqAbs(@as(f64, 50000.0), portfolio.totalCash(), 0.01);
try std.testing.expectApproxEqAbs(@as(f64, 50000.0), portfolio.totalCash(Date.fromYmd(2026, 5, 8)), 0.01);
// totalIlliquid
try std.testing.expectApproxEqAbs(@as(f64, 500000.0), portfolio.totalIlliquid(), 0.01);
try std.testing.expectApproxEqAbs(@as(f64, 500000.0), portfolio.totalIlliquid(Date.fromYmd(2026, 5, 8)), 0.01);
// totalCdFaceValue
try std.testing.expectApproxEqAbs(@as(f64, 10000.0), portfolio.totalCdFaceValue(), 0.01);
try std.testing.expectApproxEqAbs(@as(f64, 10000.0), portfolio.totalCdFaceValue(Date.fromYmd(2026, 5, 8)), 0.01);
// totalOptionCost: |2| * 5.50 * 100 = 1100
try std.testing.expectApproxEqAbs(@as(f64, 1100.0), portfolio.totalOptionCost(), 0.01);
try std.testing.expectApproxEqAbs(@as(f64, 1100.0), portfolio.totalOptionCost(Date.fromYmd(2026, 5, 8)), 0.01);
// hasType
try std.testing.expect(portfolio.hasType(.stock));
try std.testing.expect(portfolio.hasType(.cash));
@ -885,7 +885,7 @@ test "Portfolio.totalOptionCost: excludes closed options" {
// Only CALL_OPEN contributes: |-5| * 2.00 * 100 = 1000.
// Pre-fix would have been 1000 + |-3| * 4.00 * 100 = 2200.
try std.testing.expectApproxEqAbs(@as(f64, 1000.0), portfolio.totalOptionCost(), 0.01);
try std.testing.expectApproxEqAbs(@as(f64, 1000.0), portfolio.totalOptionCost(Date.fromYmd(2026, 5, 8)), 0.01);
}
test "Portfolio.totalOptionCost: excludes matured options" {
@ -909,7 +909,7 @@ test "Portfolio.totalOptionCost: excludes matured options" {
};
const portfolio = Portfolio{ .lots = &lots, .allocator = std.testing.allocator };
try std.testing.expectApproxEqAbs(@as(f64, 1000.0), portfolio.totalOptionCost(), 0.01);
try std.testing.expectApproxEqAbs(@as(f64, 1000.0), portfolio.totalOptionCost(Date.fromYmd(2026, 5, 8)), 0.01);
}
test "Portfolio.totalCdFaceValue: excludes matured CDs" {
@ -934,7 +934,7 @@ test "Portfolio.totalCdFaceValue: excludes matured CDs" {
const portfolio = Portfolio{ .lots = &lots, .allocator = std.testing.allocator };
// Pre-fix would have been 50000 + 75000 = 125000.
try std.testing.expectApproxEqAbs(@as(f64, 50000.0), portfolio.totalCdFaceValue(), 0.01);
try std.testing.expectApproxEqAbs(@as(f64, 50000.0), portfolio.totalCdFaceValue(Date.fromYmd(2026, 5, 8)), 0.01);
}
test "Portfolio.totalCash: excludes closed cash lots" {
@ -957,7 +957,7 @@ test "Portfolio.totalCash: excludes closed cash lots" {
};
const portfolio = Portfolio{ .lots = &lots, .allocator = std.testing.allocator };
try std.testing.expectApproxEqAbs(@as(f64, 10000.0), portfolio.totalCash(), 0.01);
try std.testing.expectApproxEqAbs(@as(f64, 10000.0), portfolio.totalCash(Date.fromYmd(2026, 5, 8)), 0.01);
}
test "Portfolio.totalIlliquidAsOf: respects as_of for backfill" {
@ -1056,7 +1056,7 @@ test "positions propagates price_ratio from lot" {
};
var portfolio = Portfolio{ .lots = &lots, .allocator = allocator };
const pos = try portfolio.positions(allocator);
const pos = try portfolio.positions(Date.fromYmd(2026, 5, 8), allocator);
defer allocator.free(pos);
try std.testing.expectEqual(@as(usize, 2), pos.len);
@ -1083,7 +1083,7 @@ test "positions separates lots with different price_ratio" {
};
var portfolio = Portfolio{ .lots = &lots, .allocator = allocator };
const pos = try portfolio.positions(allocator);
const pos = try portfolio.positions(Date.fromYmd(2026, 5, 8), allocator);
defer allocator.free(pos);
// Should produce 2 separate positions, not 1 merged position
@ -1122,7 +1122,7 @@ test "positionsForAccount excludes closed-only symbols" {
var portfolio = Portfolio{ .lots = &lots, .allocator = allocator };
// Account A: should only see AAPL (XLV is fully closed there)
const pos_a = try portfolio.positionsForAccount(allocator, "Acct A");
const pos_a = try portfolio.positionsForAccount(Date.fromYmd(2026, 5, 8), allocator, "Acct A");
defer allocator.free(pos_a);
try std.testing.expectEqual(@as(usize, 1), pos_a.len);
@ -1130,7 +1130,7 @@ test "positionsForAccount excludes closed-only symbols" {
try std.testing.expectApproxEqAbs(@as(f64, 10.0), pos_a[0].shares, 0.01);
// Account B: should see XLV with 50 shares
const pos_b = try portfolio.positionsForAccount(allocator, "Acct B");
const pos_b = try portfolio.positionsForAccount(Date.fromYmd(2026, 5, 8), allocator, "Acct B");
defer allocator.free(pos_b);
try std.testing.expectEqual(@as(usize, 1), pos_b.len);
@ -1150,7 +1150,7 @@ test "isOpen respects maturity_date" {
.security_type = .option,
.maturity_date = past,
};
try std.testing.expect(!expired_option.isOpen());
try std.testing.expect(!expired_option.isOpen(Date.fromYmd(2026, 5, 8)));
const active_option = Lot{
.symbol = "AAPL 12/31/2099 150 C",
@ -1160,7 +1160,7 @@ test "isOpen respects maturity_date" {
.security_type = .option,
.maturity_date = future,
};
try std.testing.expect(active_option.isOpen());
try std.testing.expect(active_option.isOpen(Date.fromYmd(2026, 5, 8)));
const closed_option = Lot{
.symbol = "AAPL 12/31/2099 150 C",
@ -1171,7 +1171,7 @@ test "isOpen respects maturity_date" {
.maturity_date = future,
.close_date = Date.fromYmd(2024, 6, 1),
};
try std.testing.expect(!closed_option.isOpen());
try std.testing.expect(!closed_option.isOpen(Date.fromYmd(2026, 5, 8)));
const stock = Lot{
.symbol = "AAPL",
@ -1179,7 +1179,7 @@ test "isOpen respects maturity_date" {
.open_date = Date.fromYmd(2023, 1, 1),
.open_price = 150.0,
};
try std.testing.expect(stock.isOpen());
try std.testing.expect(stock.isOpen(Date.fromYmd(2026, 5, 8)));
}
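The open/closed rule these tests pin down after the isOpen(as_of) refactor can be sketched as below. The boundary comparisons are assumptions chosen to be consistent with the tests above, not a transcription of lotIsOpenAsOf.

```python
from datetime import date

# Assumed shape of the check: a lot is open on `as_of` unless it has
# been closed or its maturity date has passed. Exact end-of-day
# boundary handling is an assumption.
def lot_is_open_as_of(as_of, close_date=None, maturity_date=None):
    if close_date is not None and close_date <= as_of:
        return False
    if maturity_date is not None and maturity_date < as_of:
        return False
    return True
```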
// lotIsOpenAsOf
@ -1289,7 +1289,7 @@ test "lotIsOpenAsOf: isOpen() stays compatible via today" {
.open_date = Date.fromYmd(2024, 1, 15),
.open_price = 150.0,
};
try std.testing.expectEqual(stock.isOpen(), stock.lotIsOpenAsOf(Date.fromEpoch(std.time.timestamp())));
try std.testing.expectEqual(stock.isOpen(Date.fromYmd(2026, 5, 8)), stock.lotIsOpenAsOf(Date.fromYmd(2026, 5, 8)));
const closed = Lot{
.symbol = "AAPL",
@ -1299,7 +1299,7 @@ test "lotIsOpenAsOf: isOpen() stays compatible via today" {
.close_date = Date.fromYmd(2024, 6, 15),
.close_price = 200.0,
};
try std.testing.expectEqual(closed.isOpen(), closed.lotIsOpenAsOf(Date.fromEpoch(std.time.timestamp())));
try std.testing.expectEqual(closed.isOpen(Date.fromYmd(2026, 5, 8)), closed.lotIsOpenAsOf(Date.fromYmd(2026, 5, 8)));
}
test "nonStockValueForAccount" {
@ -1320,10 +1320,10 @@ test "nonStockValueForAccount" {
// cash(5000) + cd(50000) + open option(2*3.50*100=700) = 55700
// expired option excluded
const ns = portfolio.nonStockValueForAccount("IRA");
const ns = portfolio.nonStockValueForAccount(Date.fromYmd(2026, 5, 8), "IRA");
try std.testing.expectApproxEqAbs(@as(f64, 55700.0), ns, 0.01);
const ns_other = portfolio.nonStockValueForAccount("Other");
const ns_other = portfolio.nonStockValueForAccount(Date.fromYmd(2026, 5, 8), "Other");
try std.testing.expectApproxEqAbs(@as(f64, 1000.0), ns_other, 0.01);
}
@ -1349,7 +1349,7 @@ test "totalForAccount" {
// stocks: AAPL(100*170=17000) + MSFT(50*300=15000) = 32000
// non-stock: cash(2000) + cd(10000) + option(1*5*100=500) = 12500
// total = 44500
const total = portfolio.totalForAccount(allocator, "IRA", prices);
const total = portfolio.totalForAccount(Date.fromYmd(2026, 5, 8), allocator, "IRA", prices);
try std.testing.expectApproxEqAbs(@as(f64, 44500.0), total, 0.01);
}


@ -4,42 +4,50 @@
//! token bucket algorithm. Tokens refill continuously; each request
//! consumes one token. When the bucket is empty, callers can either
//! poll with `tryAcquire` or block with `acquire`.
//!
//! wall-clock required: a rate limiter is, by definition, a clock
//! consumer. Every refill computation needs the actual elapsed time
//! since the last refill. Threading a caller-captured timestamp would
//! collapse every `acquire` in the same frame to the same "now," which
//! would under-refill the bucket across a series of rate-limited calls.
const std = @import("std");
io: std.Io,
/// Maximum tokens (requests) in the bucket
max_tokens: u32,
/// Current available tokens
tokens: f64,
/// Tokens added per nanosecond
refill_rate_per_ns: f64,
/// Last time tokens were refilled
/// Last time tokens were refilled (nanoseconds from clock.real)
last_refill: i128,
const RateLimiter = @This();
/// Create a rate limiter.
/// `max_per_window` is the max requests allowed in `window_ns` nanoseconds.
pub fn init(max_per_window: u32, window_ns: u64) RateLimiter {
pub fn init(io: std.Io, max_per_window: u32, window_ns: u64) RateLimiter {
return .{
.io = io,
.max_tokens = max_per_window,
.tokens = @floatFromInt(max_per_window),
.refill_rate_per_ns = @as(f64, @floatFromInt(max_per_window)) / @as(f64, @floatFromInt(window_ns)),
.last_refill = std.time.nanoTimestamp(),
.last_refill = @intCast(std.Io.Timestamp.now(io, .real).nanoseconds),
};
}
/// Convenience: N requests per minute.
/// Starts with 1 token (no burst) to stay within provider sliding-window limits.
pub fn perMinute(n: u32) RateLimiter {
var rl = init(n, std.time.ns_per_min);
pub fn perMinute(io: std.Io, n: u32) RateLimiter {
var rl = init(io, n, std.time.ns_per_min);
rl.tokens = 1.0;
return rl;
}
/// Convenience: N requests per day
pub fn perDay(n: u32) RateLimiter {
return init(n, std.time.ns_per_day);
pub fn perDay(io: std.Io, n: u32) RateLimiter {
return init(io, n, std.time.ns_per_day);
}
/// Try to acquire a token. Returns true if granted, false if rate-limited.
@ -58,7 +66,7 @@ pub fn acquire(self: *RateLimiter) void {
while (!self.tryAcquire()) {
// Sleep for the time needed to generate 1 token
const wait_ns: u64 = @intFromFloat(1.0 / self.refill_rate_per_ns);
std.Thread.sleep(wait_ns);
std.Io.sleep(self.io, .{ .nanoseconds = @intCast(wait_ns) }, .awake) catch {};
}
}
@ -66,7 +74,7 @@ pub fn acquire(self: *RateLimiter) void {
/// Use after receiving a server-side 429 to wait before retrying.
pub fn backoff(self: *RateLimiter) void {
const wait_ns: u64 = @max(self.estimateWaitNs(), 2 * std.time.ns_per_s);
std.Thread.sleep(wait_ns);
std.Io.sleep(self.io, .{ .nanoseconds = @intCast(wait_ns) }, .awake) catch {};
}
/// Returns estimated wait time in nanoseconds until a token is available.
@ -79,7 +87,7 @@ pub fn estimateWaitNs(self: *RateLimiter) u64 {
}
fn refill(self: *RateLimiter) void {
const now = std.time.nanoTimestamp();
const now: i128 = @intCast(std.Io.Timestamp.now(self.io, .real).nanoseconds);
const elapsed = now - self.last_refill;
if (elapsed <= 0) return;
@ -89,7 +97,7 @@ fn refill(self: *RateLimiter) void {
}
test "rate limiter basic" {
var rl = RateLimiter.perMinute(60);
var rl = RateLimiter.perMinute(std.testing.io, 60);
// perMinute starts with 1 token (no burst)
try std.testing.expect(rl.tryAcquire());
// Second call should be rate-limited immediately
@ -97,7 +105,7 @@ test "rate limiter basic" {
}
test "rate limiter perDay keeps full burst" {
var rl = RateLimiter.perDay(25);
var rl = RateLimiter.perDay(std.testing.io, 25);
// perDay starts with full bucket
for (0..25) |_| {
try std.testing.expect(rl.tryAcquire());
@ -106,7 +114,7 @@ test "rate limiter perDay keeps full burst" {
}
test "rate limiter exhaustion" {
var rl = RateLimiter.init(2, std.time.ns_per_s);
var rl = RateLimiter.init(std.testing.io, 2, std.time.ns_per_s);
try std.testing.expect(rl.tryAcquire());
try std.testing.expect(rl.tryAcquire());
// Bucket should be empty now
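The continuous-refill arithmetic these tests exercise can be isolated from the io plumbing. A Python sketch of the token bucket with the clock read injected (the opposite of the real limiter, which the module comment argues must read wall-clock itself); names are hypothetical.

```python
class TokenBucket:
    """Sketch of the limiter's refill math; `now_ns` is a parameter here
    purely so the arithmetic is testable without a clock."""

    def __init__(self, max_per_window, window_ns, now_ns):
        self.max_tokens = max_per_window
        self.tokens = float(max_per_window)
        self.refill_rate_per_ns = max_per_window / window_ns
        self.last_refill = now_ns

    def try_acquire(self, now_ns):
        # Refill continuously based on elapsed time, capped at capacity.
        elapsed = now_ns - self.last_refill
        if elapsed > 0:
            self.tokens = min(float(self.max_tokens),
                              self.tokens + elapsed * self.refill_rate_per_ns)
            self.last_refill = now_ns
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```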


@ -98,15 +98,17 @@ fn parseSha256Etag(etag: []const u8) ?[]const u8 {
/// Thin HTTP client wrapper with retry and error classification.
pub const Client = struct {
io: std.Io,
allocator: std.mem.Allocator,
http_client: std.http.Client,
max_retries: u8 = 3,
base_backoff_ms: u64 = 500,
pub fn init(allocator: std.mem.Allocator) Client {
pub fn init(io: std.Io, allocator: std.mem.Allocator) Client {
return .{
.io = io,
.allocator = allocator,
.http_client = std.http.Client{ .allocator = allocator },
.http_client = std.http.Client{ .allocator = allocator, .io = io },
};
}
@ -144,7 +146,7 @@ pub const Client = struct {
fn backoffSleep(self: *Client, attempt: u8) void {
const backoff = self.base_backoff_ms * std.math.shl(u64, 1, attempt);
std.Thread.sleep(backoff * std.time.ns_per_ms);
std.Io.sleep(self.io, std.Io.Duration.fromMilliseconds(@intCast(backoff)), .awake) catch {};
}
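backoffSleep's schedule is plain exponential doubling; `std.math.shl(u64, 1, attempt)` is `1 << attempt`. As a one-line sketch:

```python
# base_backoff_ms * 2^attempt, mirroring the Zig expression above
# (default base 500 ms, so attempts 0..3 wait 500/1000/2000/4000 ms).
def backoff_ms(base_ms, attempt):
    return base_ms * (1 << attempt)
```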
fn doRequest(self: *Client, method: std.http.Method, url: []const u8, body: ?[]const u8, extra_headers: []const std.http.Header) HttpError!Response {


@ -181,11 +181,11 @@ pub const AlphaVantage = struct {
rate_limiter: RateLimiter,
allocator: std.mem.Allocator,
pub fn init(allocator: std.mem.Allocator, api_key: []const u8) AlphaVantage {
pub fn init(io: std.Io, allocator: std.mem.Allocator, api_key: []const u8) AlphaVantage {
return .{
.api_key = api_key,
.client = http.Client.init(allocator),
.rate_limiter = RateLimiter.perDay(25),
.client = http.Client.init(io, allocator),
.rate_limiter = RateLimiter.perDay(io, 25),
.allocator = allocator,
};
}


@ -22,10 +22,10 @@ pub const Cboe = struct {
rate_limiter: RateLimiter,
allocator: std.mem.Allocator,
pub fn init(allocator: std.mem.Allocator) Cboe {
pub fn init(io: std.Io, allocator: std.mem.Allocator) Cboe {
return .{
.client = http.Client.init(allocator),
.rate_limiter = RateLimiter.perMinute(30),
.client = http.Client.init(io, allocator),
.rate_limiter = RateLimiter.perMinute(io, 30),
.allocator = allocator,
};
}


@ -44,11 +44,11 @@ pub const Fmp = struct {
rate_limiter: RateLimiter,
allocator: std.mem.Allocator,
pub fn init(allocator: std.mem.Allocator, api_key: []const u8) Fmp {
pub fn init(io: std.Io, allocator: std.mem.Allocator, api_key: []const u8) Fmp {
return .{
.api_key = api_key,
.client = http.Client.init(allocator),
.rate_limiter = RateLimiter.perDay(250),
.client = http.Client.init(io, allocator),
.rate_limiter = RateLimiter.perDay(io, 250),
.allocator = allocator,
};
}


@ -24,11 +24,12 @@ pub const FigiResult = struct {
/// Look up a single CUSIP via OpenFIGI. Caller must free returned strings.
/// Returns null ticker if not found.
pub fn lookupCusip(
io: std.Io,
allocator: std.mem.Allocator,
cusip: []const u8,
api_key: ?[]const u8,
) !FigiResult {
const results = try lookupCusips(allocator, &.{cusip}, api_key);
const results = try lookupCusips(io, allocator, &.{cusip}, api_key);
defer {
for (results) |r| {
if (r.ticker) |t| allocator.free(t);
@ -52,6 +53,7 @@ pub fn lookupCusip(
/// Look up multiple CUSIPs in a single batch request. Caller owns all returned slices.
/// Results array is parallel to the input cusips array (same length, same order).
pub fn lookupCusips(
io: std.Io,
allocator: std.mem.Allocator,
cusips: []const []const u8,
api_key: ?[]const u8,
@ -81,7 +83,7 @@ pub fn lookupCusips(
n_headers += 1;
}
var client = http.Client.init(allocator);
var client = http.Client.init(io, allocator);
defer client.deinit();
var response = try client.post(api_url, body, headers_buf[0..n_headers]);


@ -23,11 +23,11 @@ pub const Polygon = struct {
rate_limiter: RateLimiter,
allocator: std.mem.Allocator,
pub fn init(allocator: std.mem.Allocator, api_key: []const u8) Polygon {
pub fn init(io: std.Io, allocator: std.mem.Allocator, api_key: []const u8) Polygon {
return .{
.api_key = api_key,
.client = http.Client.init(allocator),
.rate_limiter = RateLimiter.perMinute(5),
.client = http.Client.init(io, allocator),
.rate_limiter = RateLimiter.perMinute(io, 5),
.allocator = allocator,
};
}


@ -21,9 +21,9 @@ pub const Tiingo = struct {
allocator: std.mem.Allocator,
api_key: []const u8,
pub fn init(allocator: std.mem.Allocator, api_key: []const u8) Tiingo {
pub fn init(io: std.Io, allocator: std.mem.Allocator, api_key: []const u8) Tiingo {
return .{
.client = http.Client.init(allocator),
.client = http.Client.init(io, allocator),
.allocator = allocator,
.api_key = api_key,
};


@ -24,13 +24,13 @@ pub const TwelveData = struct {
rate_limiter: RateLimiter,
allocator: std.mem.Allocator,
pub fn init(allocator: std.mem.Allocator, api_key: []const u8) TwelveData {
pub fn init(io: std.Io, allocator: std.mem.Allocator, api_key: []const u8) TwelveData {
return .{
.api_key = api_key,
.client = http.Client.init(allocator),
.client = http.Client.init(io, allocator),
// Provider is 8/min, but we seem to be bumping against it, so we
// will be a bit more conservative here. Slow and steady
.rate_limiter = RateLimiter.perMinute(7),
.rate_limiter = RateLimiter.perMinute(io, 7),
.allocator = allocator,
};
}


@ -20,9 +20,9 @@ pub const Yahoo = struct {
client: http.Client,
allocator: std.mem.Allocator,
pub fn init(allocator: std.mem.Allocator) Yahoo {
pub fn init(io: std.Io, allocator: std.mem.Allocator) Yahoo {
return .{
.client = http.Client.init(allocator),
.client = http.Client.init(io, allocator),
.allocator = allocator,
};
}


@ -38,6 +38,17 @@ const performance = @import("analytics/performance.zig");
const http = @import("net/http.zig");
const atomic = @import("atomic.zig");
// Wall-clock policy
//
// `FetchResult.timestamp` records when a given fetch or cached-read
// completed. Each `std.Io.Timestamp.now(self.io, .real)` call in
// this file stamps one specific fetch -- a single command invocation
// produces many fetches, each with its own real-time stamp. Threading
// `now_s` in from the caller would collapse all per-fetch timestamps to
// the command-entry time, which is not what callers want when they
// display "fetched 3s ago" for some symbols and "cached 2d ago" for
// others in the same command.
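The policy above can be condensed into a sketch (illustrative only: `FetchStamp` and `fetchOne` are hypothetical names, and the `std.Io.Timestamp` call mirrors the ones in this file):

```zig
const std = @import("std");

const FetchStamp = struct { completed_s: i64 };

// Hypothetical helper: each fetch reads the wall clock itself at
// completion, so two fetches in one command get two distinct stamps.
// Threading a single `now_s` in from the caller would collapse them.
fn fetchOne(io: std.Io, symbol: []const u8) FetchStamp {
    _ = symbol; // network work elided
    // wall-clock required: stamps this specific fetch, not command entry
    return .{ .completed_s = std.Io.Timestamp.now(io, .real).toSeconds() };
}
```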
pub const DataError = error{
NoApiKey,
FetchFailed,
@ -147,15 +158,20 @@ pub const DataService = struct {
///
/// The wrapper serializes every allocation with a mutex. Cost is
/// one lock acquire/release per alloc -- negligible next to the I/O
/// these allocations feed (HTTP requests, cache writes). The
/// alternative (threading per-worker arenas through every
/// transitive callsite) was rejected as error-prone.
/// Thread-safe allocator used for all DataService-internal allocations.
///
/// DO NOT add an "unwrap" method or store the child allocator
/// directly. The point is that internal callers don't need to
/// know whether they're running under threads -- every path goes
/// through the lock by construction.
thread_safe: std.heap.ThreadSafeAllocator,
/// In Zig 0.16, the Juicy-Main-provided `init.gpa` (DebugAllocator)
/// is thread-safe by default when not single-threaded, and
/// `ArenaAllocator` is thread-safe and lock-free. Callers should
/// pass whichever thread-safe allocator is appropriate -- we no
/// longer wrap it ourselves.
///
/// DO NOT add an "unwrap" method or pass a non-thread-safe
/// allocator. The point is that internal callers don't need to
/// know whether they're running under threads -- the allocator
/// itself guarantees safety.
allocator: std.mem.Allocator,
io: std.Io,
config: Config,
// Lazily initialized providers (null until first use)
@ -167,9 +183,10 @@ pub const DataService = struct {
yh: ?Yahoo = null,
tg: ?Tiingo = null,
pub fn init(base_allocator: std.mem.Allocator, config: Config) DataService {
pub fn init(io: std.Io, allocator: std.mem.Allocator, config: Config) DataService {
const self = DataService{
.thread_safe = .{ .child_allocator = base_allocator },
.allocator = allocator,
.io = io,
.config = config,
};
// Missing-key warnings are noise under `zig build test` where
@ -179,17 +196,6 @@ pub const DataService = struct {
return self;
}
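A caller-side sketch of the new contract, assuming `std.heap.ThreadSafeAllocator` keeps its `child_allocator` field; `run`, `gpa`, `io`, and `config` are hypothetical stand-ins for what Juicy Main's `init` provides:

```zig
const std = @import("std");

// Hypothetical call site: the wrapping DataService used to do
// internally now happens at the boundary. Skip the wrapper entirely
// if the allocator is already thread-safe.
fn run(io: std.Io, gpa: std.mem.Allocator, config: Config) void {
    // The wrapper must outlive the service, so it lives in this frame.
    var tsa: std.heap.ThreadSafeAllocator = .{ .child_allocator = gpa };
    var svc = DataService.init(io, tsa.allocator(), config);
    _ = &svc;
}
```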
/// Return the thread-safe allocator. Always go through this, never
/// access the child allocator directly -- see the doc-comment on
/// `thread_safe` for why.
///
/// Safe to call from any method that holds `*DataService`. The
/// returned `std.mem.Allocator` embeds `&self.thread_safe`, which
/// is stable for as long as `self` is.
pub fn allocator(self: *DataService) std.mem.Allocator {
return self.thread_safe.allocator();
}
/// Log warnings for missing API keys so users know which features are unavailable.
fn logMissingKeys(self: DataService) void {
// Primary candle provider
@ -235,7 +241,7 @@ pub const DataService = struct {
if (@field(self, field_name)) |*p| return p;
if (T == Cboe or T == Yahoo) {
// CBOE and Yahoo have no API key
@field(self, field_name) = T.init(self.allocator());
@field(self, field_name) = T.init(self.io, self.allocator);
} else {
// All we're doing here is lower casing the type name, then
// appending _key to it, so AlphaVantage -> alphavantage_key
@ -252,7 +258,7 @@ pub const DataService = struct {
break :blk buf[0 .. short.len + 4];
};
const key = @field(self.config, config_key) orelse return DataError.NoApiKey;
@field(self, field_name) = T.init(self.allocator(), key);
@field(self, field_name) = T.init(self.io, self.allocator, key);
}
return &@field(self, field_name).?;
}
@ -267,7 +273,7 @@ pub const DataService = struct {
// Cache helper
fn store(self: *DataService) cache.Store {
return cache.Store.init(self.allocator(), self.config.cache_dir);
return cache.Store.init(self.io, self.allocator, self.config.cache_dir);
}
/// Generic fetch-or-cache for simple data types (dividends, splits, options).
@ -285,14 +291,14 @@ pub const DataService = struct {
if (s.read(T, symbol, postProcess, .fresh_only)) |cached| {
log.debug("{s}: {s} fresh in local cache", .{ symbol, @tagName(data_type) });
return .{ .data = cached.data, .source = .cached, .timestamp = cached.timestamp, .allocator = self.allocator() };
return .{ .data = cached.data, .source = .cached, .timestamp = cached.timestamp, .allocator = self.allocator };
}
// Try server sync before hitting providers
if (self.syncFromServer(symbol, data_type)) {
if (s.read(T, symbol, postProcess, .fresh_only)) |cached| {
log.debug("{s}: {s} synced from server and fresh", .{ symbol, @tagName(data_type) });
return .{ .data = cached.data, .source = .cached, .timestamp = cached.timestamp, .allocator = self.allocator() };
return .{ .data = cached.data, .source = .cached, .timestamp = cached.timestamp, .allocator = self.allocator };
}
log.debug("{s}: {s} synced from server but stale, falling through to provider", .{ symbol, @tagName(data_type) });
}
@ -306,14 +312,14 @@ pub const DataService = struct {
return DataError.FetchFailed;
};
s.write(T, symbol, retried, data_type.ttl());
return .{ .data = retried, .source = .fetched, .timestamp = std.time.timestamp(), .allocator = self.allocator() };
return .{ .data = retried, .source = .fetched, .timestamp = std.Io.Timestamp.now(self.io, .real).toSeconds(), .allocator = self.allocator };
}
s.writeNegative(symbol, data_type);
return DataError.FetchFailed;
};
s.write(T, symbol, fetched, data_type.ttl());
return .{ .data = fetched, .source = .fetched, .timestamp = std.time.timestamp(), .allocator = self.allocator() };
return .{ .data = fetched, .source = .fetched, .timestamp = std.Io.Timestamp.now(self.io, .real).toSeconds(), .allocator = self.allocator };
}
/// Dispatch a fetch to the correct provider based on model type.
@ -321,15 +327,15 @@ pub const DataService = struct {
return switch (T) {
Dividend => {
var pg = try self.getProvider(Polygon);
return pg.fetchDividends(self.allocator(), symbol, null, null);
return pg.fetchDividends(self.allocator, symbol, null, null);
},
Split => {
var pg = try self.getProvider(Polygon);
return pg.fetchSplits(self.allocator(), symbol);
return pg.fetchSplits(self.allocator, symbol);
},
OptionsChain => {
var cboe = try self.getProvider(Cboe);
return cboe.fetchOptionsChain(self.allocator(), symbol);
return cboe.fetchOptionsChain(self.allocator, symbol);
},
else => @compileError("unsupported type for fetchFromProvider"),
};
@ -366,7 +372,7 @@ pub const DataService = struct {
// If preferred is Yahoo (degraded symbol), try Yahoo first
if (preferred == .yahoo) {
if (self.getProvider(Yahoo)) |yh| {
if (yh.fetchCandles(self.allocator(), symbol, from, to)) |candles| {
if (yh.fetchCandles(self.allocator, symbol, from, to)) |candles| {
log.debug("{s}: candles from Yahoo (preferred)", .{symbol});
return .{ .candles = candles, .provider = .yahoo };
} else |err| {
@ -377,7 +383,7 @@ pub const DataService = struct {
// Primary: Tiingo
if (self.getProvider(Tiingo)) |tg| {
if (tg.fetchCandles(self.allocator(), symbol, from, to)) |candles| {
if (tg.fetchCandles(self.allocator, symbol, from, to)) |candles| {
log.debug("{s}: candles from Tiingo", .{symbol});
return .{ .candles = candles, .provider = .tiingo };
} else |err| {
@ -392,7 +398,7 @@ pub const DataService = struct {
// Rate limited: back off and retry -- this is expected, not a failure
log.info("{s}: Tiingo rate limited, backing off", .{symbol});
self.rateLimitBackoff();
if (tg.fetchCandles(self.allocator(), symbol, from, to)) |candles| {
if (tg.fetchCandles(self.allocator, symbol, from, to)) |candles| {
log.debug("{s}: candles from Tiingo (after rate limit backoff)", .{symbol});
return .{ .candles = candles, .provider = .tiingo };
} else |retry_err| {
@ -400,7 +406,7 @@ pub const DataService = struct {
if (retry_err == error.RateLimited) {
// Still rate limited after backoff -- one more try
self.rateLimitBackoff();
if (tg.fetchCandles(self.allocator(), symbol, from, to)) |candles| {
if (tg.fetchCandles(self.allocator, symbol, from, to)) |candles| {
log.debug("{s}: candles from Tiingo (after second backoff)", .{symbol});
return .{ .candles = candles, .provider = .tiingo };
} else |_| {}
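The ladder above (try, back off, retry, back off once more, final try) has a generic shape. A hedged sketch, not the project's API -- `fetchWithBackoff` is hypothetical, and the `std.Io.sleep` call mirrors the ones elsewhere in this diff:

```zig
const std = @import("std");

// Illustrative: one attempt plus up to two rate-limit backoffs, each
// preceded by a pacing sleep. Any non-rate-limit error aborts at once.
fn fetchWithBackoff(
    io: std.Io,
    attempt: *const fn () anyerror![]const u8,
) anyerror![]const u8 {
    var backoffs: u8 = 0;
    while (true) {
        return attempt() catch |err| {
            if (err != error.RateLimited or backoffs == 2) return err;
            backoffs += 1;
            // wall-clock required: pacing against the provider's limiter
            std.Io.sleep(io, std.Io.Duration.fromSeconds(10), .awake) catch {};
            continue;
        };
    }
}
```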
@ -425,7 +431,7 @@ pub const DataService = struct {
// Fallback: Yahoo (symbol not on Tiingo)
if (preferred != .yahoo) {
if (self.getProvider(Yahoo)) |yh| {
if (yh.fetchCandles(self.allocator(), symbol, from, to)) |candles| {
if (yh.fetchCandles(self.allocator, symbol, from, to)) |candles| {
log.info("{s}: candles from Yahoo (Tiingo fallback)", .{symbol});
return .{ .candles = candles, .provider = .yahoo };
} else |err| {
@ -454,7 +460,7 @@ pub const DataService = struct {
/// the entire history.
pub fn getCandles(self: *DataService, symbol: []const u8) DataError!FetchResult(Candle) {
var s = self.store();
const today = fmt.todayDate();
const today = fmt.todayDate(self.io);
// Check candle metadata for freshness (tiny file, no candle deserialization)
const meta_result = s.readCandleMeta(symbol);
@ -469,14 +475,14 @@ pub const DataService = struct {
// Fresh -- deserialize candles and return
log.debug("{s}: candles fresh in local cache", .{symbol});
if (s.read(Candle, symbol, null, .any)) |r|
return .{ .data = r.data, .source = .cached, .timestamp = mr.created, .allocator = self.allocator() };
return .{ .data = r.data, .source = .cached, .timestamp = mr.created, .allocator = self.allocator };
} else {
// Stale -- try server sync before incremental fetch
if (self.syncCandlesFromServer(symbol)) {
if (s.isCandleMetaFresh(symbol)) {
log.debug("{s}: candles synced from server and fresh", .{symbol});
if (s.read(Candle, symbol, null, .any)) |r|
return .{ .data = r.data, .source = .cached, .timestamp = std.time.timestamp(), .allocator = self.allocator() };
return .{ .data = r.data, .source = .cached, .timestamp = std.Io.Timestamp.now(self.io, .real).toSeconds(), .allocator = self.allocator };
}
log.debug("{s}: candles synced from server but stale, falling through to incremental fetch", .{symbol});
}
@ -488,7 +494,7 @@ pub const DataService = struct {
if (!fetch_from.lessThan(today)) {
s.updateCandleMeta(symbol, m.last_close, m.last_date, m.provider, m.fail_count);
if (s.read(Candle, symbol, null, .any)) |r|
return .{ .data = r.data, .source = .cached, .timestamp = std.time.timestamp(), .allocator = self.allocator() };
return .{ .data = r.data, .source = .cached, .timestamp = std.Io.Timestamp.now(self.io, .real).toSeconds(), .allocator = self.allocator };
} else {
// Incremental fetch from day after last cached candle
const result = self.fetchCandlesFromProviders(symbol, fetch_from, today, m.provider) catch |err| {
@ -502,31 +508,31 @@ pub const DataService = struct {
if (new_fail_count >= 3) {
log.warn("{s}: degraded after {d} consecutive failures, returning stale data", .{ symbol, new_fail_count });
if (s.read(Candle, symbol, null, .any)) |r|
return .{ .data = r.data, .source = .cached, .timestamp = mr.created, .allocator = self.allocator() };
return .{ .data = r.data, .source = .cached, .timestamp = mr.created, .allocator = self.allocator };
}
return DataError.TransientError;
}
// Non-transient failure -- return stale data if available
if (s.read(Candle, symbol, null, .any)) |r|
return .{ .data = r.data, .source = .cached, .timestamp = mr.created, .allocator = self.allocator() };
return .{ .data = r.data, .source = .cached, .timestamp = mr.created, .allocator = self.allocator };
return DataError.FetchFailed;
};
const new_candles = result.candles;
if (new_candles.len == 0) {
// No new candles (weekend/holiday) -- refresh TTL, reset fail_count
self.allocator().free(new_candles);
self.allocator.free(new_candles);
s.updateCandleMeta(symbol, m.last_close, m.last_date, result.provider, 0);
if (s.read(Candle, symbol, null, .any)) |r|
return .{ .data = r.data, .source = .cached, .timestamp = std.time.timestamp(), .allocator = self.allocator() };
return .{ .data = r.data, .source = .cached, .timestamp = std.Io.Timestamp.now(self.io, .real).toSeconds(), .allocator = self.allocator };
} else {
// Append new candles to existing file + update meta, reset fail_count
s.appendCandles(symbol, new_candles, result.provider, 0);
if (s.read(Candle, symbol, null, .any)) |r| {
self.allocator().free(new_candles);
return .{ .data = r.data, .source = .fetched, .timestamp = std.time.timestamp(), .allocator = self.allocator() };
self.allocator.free(new_candles);
return .{ .data = r.data, .source = .fetched, .timestamp = std.Io.Timestamp.now(self.io, .real).toSeconds(), .allocator = self.allocator };
}
return .{ .data = new_candles, .source = .fetched, .timestamp = std.time.timestamp(), .allocator = self.allocator() };
return .{ .data = new_candles, .source = .fetched, .timestamp = std.Io.Timestamp.now(self.io, .real).toSeconds(), .allocator = self.allocator };
}
}
}
@ -537,7 +543,7 @@ pub const DataService = struct {
if (s.isCandleMetaFresh(symbol)) {
log.debug("{s}: candles synced from server and fresh (no prior cache)", .{symbol});
if (s.read(Candle, symbol, null, .any)) |r|
return .{ .data = r.data, .source = .cached, .timestamp = std.time.timestamp(), .allocator = self.allocator() };
return .{ .data = r.data, .source = .cached, .timestamp = std.Io.Timestamp.now(self.io, .real).toSeconds(), .allocator = self.allocator };
}
log.debug("{s}: candles synced from server but stale, falling through to full fetch", .{symbol});
}
@ -563,7 +569,7 @@ pub const DataService = struct {
s.cacheCandles(symbol, result.candles, result.provider, 0); // reset fail_count on success
}
return .{ .data = result.candles, .source = .fetched, .timestamp = std.time.timestamp(), .allocator = self.allocator() };
return .{ .data = result.candles, .source = .fetched, .timestamp = std.Io.Timestamp.now(self.io, .real).toSeconds(), .allocator = self.allocator };
}
/// Fetch dividend history for a symbol.
@ -588,11 +594,11 @@ pub const DataService = struct {
pub fn getEarnings(self: *DataService, symbol: []const u8) DataError!FetchResult(EarningsEvent) {
// Mutual funds (5-letter tickers ending in X) don't have quarterly earnings.
if (isMutualFund(symbol)) {
return .{ .data = &.{}, .source = .cached, .timestamp = std.time.timestamp(), .allocator = self.allocator() };
return .{ .data = &.{}, .source = .cached, .timestamp = std.Io.Timestamp.now(self.io, .real).toSeconds(), .allocator = self.allocator };
}
var s = self.store();
const today = fmt.todayDate();
const today = fmt.todayDate(self.io);
if (s.read(EarningsEvent, symbol, earningsPostProcess, .fresh_only)) |cached| {
// Check if any past/today earnings event is still missing actual results.
@ -603,17 +609,17 @@ pub const DataService = struct {
if (!needs_refresh) {
log.debug("{s}: earnings fresh in local cache", .{symbol});
return .{ .data = cached.data, .source = .cached, .timestamp = cached.timestamp, .allocator = self.allocator() };
return .{ .data = cached.data, .source = .cached, .timestamp = cached.timestamp, .allocator = self.allocator };
}
// Stale: free cached events and re-fetch below
self.allocator().free(cached.data);
self.allocator.free(cached.data);
}
// Try server sync before hitting FMP
if (self.syncFromServer(symbol, .earnings)) {
if (s.read(EarningsEvent, symbol, earningsPostProcess, .fresh_only)) |cached| {
log.debug("{s}: earnings synced from server and fresh", .{symbol});
return .{ .data = cached.data, .source = .cached, .timestamp = cached.timestamp, .allocator = self.allocator() };
return .{ .data = cached.data, .source = .cached, .timestamp = cached.timestamp, .allocator = self.allocator };
}
log.debug("{s}: earnings synced from server but stale, falling through to provider", .{symbol});
}
@ -621,10 +627,10 @@ pub const DataService = struct {
log.debug("{s}: fetching earnings from provider", .{symbol});
var fmp = try self.getProvider(Fmp);
const fetched = fmp.fetchEarnings(self.allocator(), symbol) catch |err| blk: {
const fetched = fmp.fetchEarnings(self.allocator, symbol) catch |err| blk: {
if (err == error.RateLimited) {
self.rateLimitBackoff();
break :blk fmp.fetchEarnings(self.allocator(), symbol) catch {
break :blk fmp.fetchEarnings(self.allocator, symbol) catch {
return DataError.FetchFailed;
};
}
@ -634,7 +640,7 @@ pub const DataService = struct {
s.write(EarningsEvent, symbol, fetched, cache.Ttl.earnings);
return .{ .data = fetched, .source = .fetched, .timestamp = std.time.timestamp(), .allocator = self.allocator() };
return .{ .data = fetched, .source = .fetched, .timestamp = std.Io.Timestamp.now(self.io, .real).toSeconds(), .allocator = self.allocator };
}
/// Fetch ETF profile for a symbol.
@ -643,13 +649,13 @@ pub const DataService = struct {
var s = self.store();
if (s.read(EtfProfile, symbol, null, .fresh_only)) |cached|
return .{ .data = cached.data, .source = .cached, .timestamp = cached.timestamp, .allocator = self.allocator() };
return .{ .data = cached.data, .source = .cached, .timestamp = cached.timestamp, .allocator = self.allocator };
var av = try self.getProvider(AlphaVantage);
const fetched = av.fetchEtfProfile(self.allocator(), symbol) catch |err| blk: {
const fetched = av.fetchEtfProfile(self.allocator, symbol) catch |err| blk: {
if (err == error.RateLimited) {
self.rateLimitBackoff();
break :blk av.fetchEtfProfile(self.allocator(), symbol) catch {
break :blk av.fetchEtfProfile(self.allocator, symbol) catch {
return DataError.FetchFailed;
};
}
@ -659,7 +665,7 @@ pub const DataService = struct {
s.write(EtfProfile, symbol, fetched, cache.Ttl.etf_profile);
return .{ .data = fetched, .source = .fetched, .timestamp = std.time.timestamp(), .allocator = self.allocator() };
return .{ .data = fetched, .source = .fetched, .timestamp = std.Io.Timestamp.now(self.io, .real).toSeconds(), .allocator = self.allocator };
}
/// Fetch a real-time quote for a symbol.
@ -668,7 +674,7 @@ pub const DataService = struct {
pub fn getQuote(self: *DataService, symbol: []const u8) DataError!Quote {
// Primary: Yahoo Finance (free, real-time)
if (self.getProvider(Yahoo)) |yh| {
if (yh.fetchQuote(self.allocator(), symbol)) |quote| {
if (yh.fetchQuote(self.allocator, symbol)) |quote| {
log.debug("{s}: quote from Yahoo", .{symbol});
return quote;
} else |_| {}
@ -677,7 +683,7 @@ pub const DataService = struct {
// Fallback: TwelveData (requires API key, may be 15-min delayed)
var td = try self.getProvider(TwelveData);
log.debug("{s}: quote fallback to TwelveData", .{symbol});
return td.fetchQuote(self.allocator(), symbol) catch
return td.fetchQuote(self.allocator, symbol) catch
return DataError.FetchFailed;
}
@ -685,7 +691,7 @@ pub const DataService = struct {
/// No cache -- always fetches fresh. Caller must free the returned string fields.
pub fn getCompanyOverview(self: *DataService, symbol: []const u8) DataError!CompanyOverview {
var av = try self.getProvider(AlphaVantage);
return av.fetchCompanyOverview(self.allocator(), symbol) catch
return av.fetchCompanyOverview(self.allocator, symbol) catch
return DataError.FetchFailed;
}
@ -707,7 +713,7 @@ pub const DataService = struct {
const c = candle_result.data;
if (c.len == 0) return DataError.FetchFailed;
const today = fmt.todayDate();
const today = fmt.todayDate(self.io);
// As-of-date (end = last candle)
const asof_price = performance.trailingReturns(c);
@ -787,7 +793,7 @@ pub const DataService = struct {
var s = self.store();
if (s.isNegative(symbol, .candles_daily)) return null;
const result = s.read(Candle, symbol, null, .any) orelse return null;
return .{ .data = result.data, .source = .cached, .timestamp = result.timestamp, .allocator = self.allocator() };
return .{ .data = result.data, .source = .cached, .timestamp = result.timestamp, .allocator = self.allocator };
}
/// Read dividends from cache only (no network fetch).
@ -897,7 +903,7 @@ pub const DataService = struct {
// 2. Try API fetch
if (self.getCandles(sym)) |candle_result| {
defer self.allocator().free(candle_result.data);
defer self.allocator.free(candle_result.data);
if (candle_result.data.len > 0) {
const last = candle_result.data[candle_result.data.len - 1];
prices.put(sym, last.close) catch {};
@ -1018,7 +1024,7 @@ pub const DataService = struct {
symbol_progress: ?ProgressCallback,
) LoadAllResult {
var result = LoadAllResult{
.prices = std.StringHashMap(f64).init(self.allocator()),
.prices = std.StringHashMap(f64).init(self.allocator),
.cached_count = 0,
.server_synced_count = 0,
.provider_fetched_count = 0,
@ -1035,13 +1041,13 @@ pub const DataService = struct {
if (total_count == 0) return result;
// Build combined symbol list
var all_symbols = std.ArrayList([]const u8).initCapacity(self.allocator(), total_count) catch return result;
defer all_symbols.deinit(self.allocator());
var all_symbols = std.ArrayList([]const u8).initCapacity(self.allocator, total_count) catch return result;
defer all_symbols.deinit(self.allocator);
if (portfolio_syms) |ps| {
for (ps) |sym| all_symbols.append(self.allocator(), sym) catch {};
for (ps) |sym| all_symbols.append(self.allocator, sym) catch {};
}
for (watch_syms) |sym| all_symbols.append(self.allocator(), sym) catch {};
for (watch_syms) |sym| all_symbols.append(self.allocator, sym) catch {};
// Invalidate cache if force refresh
if (config.force_refresh) {
@ -1052,7 +1058,7 @@ pub const DataService = struct {
// Phase 1: Check local cache (fast path)
var needs_fetch: std.ArrayList([]const u8) = .empty;
defer needs_fetch.deinit(self.allocator());
defer needs_fetch.deinit(self.allocator);
if (aggregate_progress) |p| p.emit(0, total_count, .cache_check);
@ -1064,7 +1070,7 @@ pub const DataService = struct {
}
result.cached_count += 1;
} else {
needs_fetch.append(self.allocator(), sym) catch {};
needs_fetch.append(self.allocator, sym) catch {};
}
}
@ -1077,7 +1083,7 @@ pub const DataService = struct {
// Phase 2: Server sync (parallel if server configured)
var server_failures: std.ArrayList([]const u8) = .empty;
defer server_failures.deinit(self.allocator());
defer server_failures.deinit(self.allocator);
if (self.config.server_url != null) {
self.parallelServerSync(
@ -1091,7 +1097,7 @@ pub const DataService = struct {
} else {
// No server -- all need provider fetch
for (needs_fetch.items) |sym| {
server_failures.append(self.allocator(), sym) catch {};
server_failures.append(self.allocator, sym) catch {};
}
}
@ -1133,12 +1139,12 @@ pub const DataService = struct {
// Shared state for worker threads
var completed = AtomicCounter{};
var next_index = AtomicCounter{};
const sync_results = self.allocator().alloc(ServerSyncResult, symbols.len) catch {
const sync_results = self.allocator.alloc(ServerSyncResult, symbols.len) catch {
// Allocation failed -- fall back to marking all as failures
for (symbols) |sym| failures.append(self.allocator(), sym) catch {};
for (symbols) |sym| failures.append(self.allocator, sym) catch {};
return;
};
defer self.allocator().free(sync_results);
defer self.allocator.free(sync_results);
// Initialize results
for (sync_results, 0..) |*sr, i| {
@ -1146,11 +1152,11 @@ pub const DataService = struct {
}
// Spawn worker threads
var threads = self.allocator().alloc(std.Thread, thread_count) catch {
for (symbols) |sym| failures.append(self.allocator(), sym) catch {};
var threads = self.allocator.alloc(std.Thread, thread_count) catch {
for (symbols) |sym| failures.append(self.allocator, sym) catch {};
return;
};
defer self.allocator().free(threads);
defer self.allocator.free(threads);
const WorkerContext = struct {
svc: *DataService,
@ -1192,7 +1198,7 @@ pub const DataService = struct {
// Progress reporting while waiting
if (aggregate_progress) |p| {
while (completed.load() < symbols.len) {
std.Thread.sleep(50 * std.time.ns_per_ms);
std.Io.sleep(self.io, std.Io.Duration.fromMilliseconds(50), .awake) catch {};
p.emit(result.cached_count + completed.load(), total_count, .server_sync);
}
}
@ -1212,10 +1218,10 @@ pub const DataService = struct {
result.server_synced_count += 1;
} else {
// Sync said success but can't read cache -- treat as failure
failures.append(self.allocator(), sr.symbol) catch {};
failures.append(self.allocator, sr.symbol) catch {};
}
} else {
failures.append(self.allocator(), sr.symbol) catch {};
failures.append(self.allocator, sr.symbol) catch {};
}
}
}
@ -1238,7 +1244,7 @@ pub const DataService = struct {
// Try provider fetch
if (self.getCandles(sym)) |candle_result| {
defer self.allocator().free(candle_result.data);
defer self.allocator.free(candle_result.data);
if (candle_result.data.len > 0) {
const last = candle_result.data[candle_result.data.len - 1];
result.prices.put(sym, last.close) catch {};
@ -1280,7 +1286,7 @@ pub const DataService = struct {
/// Results array is parallel to the input cusips array (same length, same order).
/// Caller owns the returned slice and all strings within each CusipResult.
pub fn lookupCusips(self: *DataService, cusips: []const []const u8) DataError![]CusipResult {
return OpenFigi.lookupCusips(self.allocator(), cusips, self.config.openfigi_key) catch
return OpenFigi.lookupCusips(self.io, self.allocator, cusips, self.config.openfigi_key) catch
return DataError.FetchFailed;
}
@ -1292,10 +1298,10 @@ pub const DataService = struct {
if (self.getCachedCusipTicker(cusip)) |t| return t;
// Try OpenFIGI
const result = OpenFigi.lookupCusip(self.allocator(), cusip, self.config.openfigi_key) catch return null;
const result = OpenFigi.lookupCusip(self.io, self.allocator, cusip, self.config.openfigi_key) catch return null;
defer {
if (result.name) |n| self.allocator().free(n);
if (result.security_type) |s| self.allocator().free(s);
if (result.name) |n| self.allocator.free(n);
if (result.security_type) |s| self.allocator.free(s);
}
if (result.ticker) |ticker| {
@ -1316,20 +1322,20 @@ pub const DataService = struct {
/// Read a cached CUSIP->ticker mapping. Returns null if not cached.
/// Caller owns the returned string.
fn getCachedCusipTicker(self: *DataService, cusip: []const u8) ?[]const u8 {
const path = std.fs.path.join(self.allocator(), &.{ self.config.cache_dir, "cusip_tickers.srf" }) catch return null;
defer self.allocator().free(path);
const path = std.fs.path.join(self.allocator, &.{ self.config.cache_dir, "cusip_tickers.srf" }) catch return null;
defer self.allocator.free(path);
const data = std.fs.cwd().readFileAlloc(self.allocator(), path, 64 * 1024) catch return null;
defer self.allocator().free(data);
const data = std.Io.Dir.cwd().readFileAlloc(self.io, path, self.allocator, .limited(64 * 1024)) catch return null;
defer self.allocator.free(data);
var reader = std.Io.Reader.fixed(data);
var it = srf.iterator(&reader, self.allocator(), .{ .alloc_strings = false }) catch return null;
var it = srf.iterator(&reader, self.allocator, .{ .alloc_strings = false }) catch return null;
defer it.deinit();
while (it.next() catch return null) |fields| {
const entry = fields.to(CusipEntry) catch continue;
if (std.mem.eql(u8, entry.cusip, cusip) and entry.ticker.len > 0) {
return self.allocator().dupe(u8, entry.ticker) catch null;
return self.allocator.dupe(u8, entry.ticker) catch null;
}
}
return null;
@ -1342,39 +1348,39 @@ pub const DataService = struct {
/// valid header plus partial trailing record. See `cache/store.zig
/// appendRaw` for the same pattern and rationale.
pub fn cacheCusipTicker(self: *DataService, cusip: []const u8, ticker: []const u8) void {
const path = std.fs.path.join(self.allocator(), &.{ self.config.cache_dir, "cusip_tickers.srf" }) catch return;
defer self.allocator().free(path);
const path = std.fs.path.join(self.allocator, &.{ self.config.cache_dir, "cusip_tickers.srf" }) catch return;
defer self.allocator.free(path);
// Ensure cache dir exists
if (std.fs.path.dirnamePosix(path)) |dir| {
std.fs.cwd().makePath(dir) catch {};
std.Io.Dir.cwd().createDirPath(self.io, dir) catch {};
}
// Read existing cache if present.
const existing = std.fs.cwd().readFileAlloc(self.allocator(), path, 4 * 1024 * 1024) catch |err| switch (err) {
const existing = std.Io.Dir.cwd().readFileAlloc(self.io, path, self.allocator, .limited(4 * 1024 * 1024)) catch |err| switch (err) {
error.FileNotFound => @as([]u8, &.{}),
else => return,
};
const owns_existing = existing.len > 0;
defer if (owns_existing) self.allocator().free(existing);
defer if (owns_existing) self.allocator.free(existing);
// Serialize the new entry (with `#!srfv1` directives only if the
// cache file doesn't exist yet).
const emit_directives = !owns_existing;
const entry = [_]CusipEntry{.{ .cusip = cusip, .ticker = ticker }};
var aw: std.Io.Writer.Allocating = .init(self.allocator());
var aw: std.Io.Writer.Allocating = .init(self.allocator);
defer aw.deinit();
aw.writer.print("{f}", .{srf.fmtFrom(CusipEntry, self.allocator(), &entry, .{ .emit_directives = emit_directives })}) catch return;
aw.writer.print("{f}", .{srf.fmtFrom(CusipEntry, self.allocator, &entry, .{ .emit_directives = emit_directives })}) catch return;
const encoded = aw.writer.buffered();
if (encoded.len == 0) return;
// Concat existing + new, then atomic-write.
const combined = self.allocator().alloc(u8, existing.len + encoded.len) catch return;
defer self.allocator().free(combined);
const combined = self.allocator.alloc(u8, existing.len + encoded.len) catch return;
defer self.allocator.free(combined);
@memcpy(combined[0..existing.len], existing);
@memcpy(combined[existing.len..], encoded);
atomic.writeFileAtomic(self.allocator(), path, combined) catch {};
atomic.writeFileAtomic(self.io, self.allocator, path, combined) catch {};
}
// Utility
@ -1385,7 +1391,7 @@ pub const DataService = struct {
if (self.td) |*td| {
td.rate_limiter.backoff();
} else {
std.Thread.sleep(10 * std.time.ns_per_s);
std.Io.sleep(self.io, std.Io.Duration.fromSeconds(10), .awake) catch {};
}
}
@ -1419,8 +1425,8 @@ pub const DataService = struct {
.meta => return false,
};
const full_url = std.fmt.allocPrint(self.allocator(), "{s}/{s}{s}", .{ server_url, symbol, endpoint }) catch return false;
defer self.allocator().free(full_url);
const full_url = std.fmt.allocPrint(self.allocator, "{s}/{s}{s}", .{ server_url, symbol, endpoint }) catch return false;
defer self.allocator.free(full_url);
const max_attempts: u8 = 2;
const retry_delay_ms: u64 = 250;
@ -1432,7 +1438,7 @@ pub const DataService = struct {
"{s}: retrying {s} server sync (attempt {d}/{d}) after {d}ms delay",
.{ symbol, @tagName(data_type), attempt + 1, max_attempts, retry_delay_ms },
);
std.Thread.sleep(retry_delay_ms * std.time.ns_per_ms);
std.Io.sleep(self.io, std.Io.Duration.fromMilliseconds(retry_delay_ms), .awake) catch {};
}
switch (self.tryOneSync(symbol, data_type, full_url)) {
.ok => return true,
@ -1450,7 +1456,7 @@ pub const DataService = struct {
fn tryOneSync(self: *DataService, symbol: []const u8, data_type: cache.DataType, full_url: []const u8) SyncAttempt {
log.debug("{s}: syncing {s} from server", .{ symbol, @tagName(data_type) });
var client = http.Client.init(self.allocator());
var client = http.Client.init(self.io, self.allocator);
defer client.deinit();
var response = client.get(full_url) catch |err| {
@ -1472,7 +1478,8 @@ pub const DataService = struct {
switch (response.verifyIntegrity()) {
.mismatch => |m| {
cache.Store.archiveTornBody(
self.allocator(),
self.io,
self.allocator,
self.config.cache_dir,
symbol,
data_type,
@ -1514,7 +1521,8 @@ pub const DataService = struct {
// sidecar on disk is the durable signal.
if (!cache.Store.looksCompleteSrf(response.body)) {
cache.Store.archiveTornBody(
self.allocator(),
self.io,
self.allocator,
self.config.cache_dir,
symbol,
data_type,
@ -1569,13 +1577,13 @@ pub const DataService = struct {
/// Caller owns the returned AccountMap and must call deinit().
pub fn loadAccountMap(self: *DataService, portfolio_path: []const u8) ?analysis.AccountMap {
const dir_end = if (std.mem.lastIndexOfScalar(u8, portfolio_path, std.fs.path.sep)) |idx| idx + 1 else 0;
const acct_path = std.fmt.allocPrint(self.allocator(), "{s}accounts.srf", .{portfolio_path[0..dir_end]}) catch return null;
defer self.allocator().free(acct_path);
const acct_path = std.fmt.allocPrint(self.allocator, "{s}accounts.srf", .{portfolio_path[0..dir_end]}) catch return null;
defer self.allocator.free(acct_path);
const data = std.fs.cwd().readFileAlloc(self.allocator(), acct_path, 1024 * 1024) catch return null;
defer self.allocator().free(data);
const data = std.Io.Dir.cwd().readFileAlloc(self.io, acct_path, self.allocator, .limited(1024 * 1024)) catch return null;
defer self.allocator.free(data);
return analysis.parseAccountsFile(self.allocator(), data) catch null;
return analysis.parseAccountsFile(self.allocator, data) catch null;
}
/// Load and parse `transaction_log.srf` from the same directory as
@ -1588,13 +1596,13 @@ pub const DataService = struct {
/// `deinit()`.
pub fn loadTransferLog(self: *DataService, portfolio_path: []const u8) ?transaction_log.TransactionLog {
const dir_end = if (std.mem.lastIndexOfScalar(u8, portfolio_path, std.fs.path.sep)) |idx| idx + 1 else 0;
const path = std.fmt.allocPrint(self.allocator(), "{s}transaction_log.srf", .{portfolio_path[0..dir_end]}) catch return null;
defer self.allocator().free(path);
const path = std.fmt.allocPrint(self.allocator, "{s}transaction_log.srf", .{portfolio_path[0..dir_end]}) catch return null;
defer self.allocator.free(path);
const data = std.fs.cwd().readFileAlloc(self.allocator(), path, 1024 * 1024) catch return null;
defer self.allocator().free(data);
const data = std.Io.Dir.cwd().readFileAlloc(self.io, path, self.allocator, .limited(1024 * 1024)) catch return null;
defer self.allocator.free(data);
return transaction_log.parseTransactionLogFile(self.allocator(), data) catch null;
return transaction_log.parseTransactionLogFile(self.allocator, data) catch null;
}
};
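loadAccountMap and loadTransferLog migrate the same read shape. Standalone, the new argument order looks like this (helper name illustrative; the call itself matches the diff):

```zig
const std = @import("std");

// Zig 0.16 read shape: io first, allocator after the path, and the old
// bare byte cap becomes an explicit .limited(...) size limit. Returning
// null on any failure mirrors the optional-returning loaders above.
fn readSiblingSrf(io: std.Io, allocator: std.mem.Allocator, path: []const u8) ?[]u8 {
    return std.Io.Dir.cwd().readFileAlloc(io, path, allocator, .limited(1024 * 1024)) catch null;
}
```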
@ -1623,7 +1631,7 @@ test "DataService init/deinit lifecycle" {
const config = Config{
.cache_dir = "/tmp/zfin-test-cache",
};
var svc = DataService.init(allocator, config);
var svc = DataService.init(std.testing.io, allocator, config);
defer svc.deinit();
// Should be able to access config
@ -1641,7 +1649,7 @@ test "DataService store helper creates valid store" {
const config = Config{
.cache_dir = "/tmp/zfin-test-cache",
};
var svc = DataService.init(allocator, config);
var svc = DataService.init(std.testing.io, allocator, config);
defer svc.deinit();
const s = svc.store();
@ -1654,7 +1662,7 @@ test "DataService getProvider returns NoApiKey without key" {
.cache_dir = "/tmp/zfin-test-cache",
// No API keys set
};
var svc = DataService.init(allocator, config);
var svc = DataService.init(std.testing.io, allocator, config);
defer svc.deinit();
// TwelveData requires API key
@ -1676,7 +1684,7 @@ test "DataService getProvider initializes provider with key" {
.cache_dir = "/tmp/zfin-test-cache",
.tiingo_key = "test-tiingo-key",
};
var svc = DataService.init(allocator, config);
var svc = DataService.init(std.testing.io, allocator, config);
defer svc.deinit();
// First call initializes
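Every DataService test above gains the same first argument. Condensed, the new lifecycle shape is (Config fields abbreviated; assumes the init signature shown in this diff):

```zig
test "DataService init takes io first" {
    // std.testing.io supplies an Io for tests, so a service that owns
    // network and filesystem side effects can still be constructed
    // without any real-world setup in the test body.
    var svc = DataService.init(std.testing.io, std.testing.allocator, .{
        .cache_dir = "/tmp/zfin-test-cache",
    });
    defer svc.deinit();
}
```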



@ -307,6 +307,12 @@ pub const ChartState = struct {
/// data loading are delegated to the `tui/*_tab.zig` modules.
pub const App = struct {
allocator: std.mem.Allocator,
io: std.Io,
/// Captured at App init and refreshed at tab change. Using a cached
/// date (rather than calling the clock on every render) keeps render
/// deterministic within a single frame and avoids threading `io`
/// through pure date-consuming helpers like `positions()`.
today: zfin.Date,
config: zfin.Config,
svc: *zfin.DataService,
keymap: keybinds.KeyMap,
@ -796,8 +802,7 @@ pub const App = struct {
.ignored => {},
.committed => {
const input = self.input_buf[0..self.input_len];
const today = fmt.todayDate();
const parsed = cli.parseAsOfDate(input, today) catch |err| {
const parsed = cli.parseAsOfDate(input, self.today) catch |err| {
var buf: [256]u8 = undefined;
const msg = cli.fmtAsOfParseError(&buf, input, err);
self.setStatus(msg);
@ -808,7 +813,7 @@ pub const App = struct {
if (parsed) |d| {
// Guard against future dates.
if (d.days > today.days) {
if (d.days > self.today.days) {
self.setStatus("As-of date is in the future");
self.mode = .normal;
self.input_len = 0;
@ -1053,7 +1058,7 @@ pub const App = struct {
if (name) |n| {
self.account_filter = self.allocator.dupe(u8, n) catch null;
if (self.portfolio) |pf| {
self.filtered_positions = pf.positionsForAccount(self.allocator, n) catch null;
self.filtered_positions = pf.positionsForAccount(self.today, self.allocator, n) catch null;
}
} else {
self.account_filter = null;
@ -1375,7 +1380,10 @@ pub const App = struct {
/// Returns true if this wheel event should be suppressed (too close to the last one).
fn shouldDebounceWheel(self: *App) bool {
const now = std.time.nanoTimestamp();
// wall-clock required: input-event debounce needs the actual
// monotonic moment this wheel event arrived, not a frame-captured
// approximation. `.awake` (monotonic) resists system clock jumps.
const now: i128 = @intCast(std.Io.Timestamp.now(self.io, .awake).nanoseconds);
if (now - self.last_wheel_ns < 1 * std.time.ns_per_ms) return true;
self.last_wheel_ns = now;
return false;
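The refactor uses two clock IDs with a clean division of labor: `.awake` (monotonic) for measuring intervals, `.real` for timestamping metadata. Side by side (sketch; the Timestamp calls are the ones appearing in this diff):

```zig
const std = @import("std");

// Interval measurement: monotonic clock, immune to NTP steps and
// manual clock changes. This is the wheel-debounce case above.
fn elapsedNs(io: std.Io, start_ns: i128) i128 {
    const now: i128 = @intCast(std.Io.Timestamp.now(io, .awake).nanoseconds);
    return now - start_ns;
}

// Point in time: real (wall) clock, meaningful across runs. This is
// the quote_timestamp / "refreshed Xs ago" case elsewhere in the file.
fn capturedAtSeconds(io: std.Io) i64 {
    return std.Io.Timestamp.now(io, .real).toSeconds();
}
```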
@ -1708,7 +1716,10 @@ pub const App = struct {
if (self.symbol.len > 0) {
if (self.svc.getQuote(self.symbol)) |q| {
self.quote = q;
self.quote_timestamp = std.time.timestamp();
// wall-clock required: records the exact moment
// this quote was served so the "refreshed Xs ago"
// display is honest about freshness.
self.quote_timestamp = std.Io.Timestamp.now(self.io, .real).toSeconds();
} else |_| {}
}
},
@ -2370,11 +2381,13 @@ comptime {
/// Entry point for the interactive TUI.
/// `args` contains only command-local tokens (everything after `interactive`).
pub fn run(
io: std.Io,
allocator: std.mem.Allocator,
config: zfin.Config,
global_portfolio_path: ?[]const u8,
global_watchlist_path: ?[]const u8,
args: []const []const u8,
today: zfin.Date,
) !void {
var portfolio_path: ?[]const u8 = global_portfolio_path;
const watchlist_path: ?[]const u8 = global_watchlist_path;
@ -2386,10 +2399,10 @@ pub fn run(
var i: usize = 0;
while (i < args.len) : (i += 1) {
if (std.mem.eql(u8, args[i], "--default-keys")) {
try keybinds.printDefaults();
try keybinds.printDefaults(io);
return;
} else if (std.mem.eql(u8, args[i], "--default-theme")) {
try theme.printDefaults();
try theme.printDefaults(io);
return;
} else if (std.mem.eql(u8, args[i], "--symbol") or std.mem.eql(u8, args[i], "-s")) {
if (i + 1 < args.len) {
@ -2418,40 +2431,42 @@ pub fn run(
var resolved_pf: ?zfin.Config.ResolvedPath = null;
defer if (resolved_pf) |r| r.deinit(allocator);
if (portfolio_path == null and !has_explicit_symbol) {
if (config.resolveUserFile(allocator, zfin.Config.default_portfolio_filename)) |r| {
if (config.resolveUserFile(io, allocator, zfin.Config.default_portfolio_filename)) |r| {
resolved_pf = r;
portfolio_path = r.path;
}
}
var keymap = blk: {
const home = std.process.getEnvVarOwned(allocator, "HOME") catch break :blk keybinds.defaults();
defer allocator.free(home);
const home_opt = if (config.environ_map) |em| em.get("HOME") else null;
const home = home_opt orelse break :blk keybinds.defaults();
const keys_path = std.fs.path.join(allocator, &.{ home, ".config", "zfin", "keys.srf" }) catch
break :blk keybinds.defaults();
defer allocator.free(keys_path);
break :blk keybinds.loadFromFile(allocator, keys_path) orelse keybinds.defaults();
break :blk keybinds.loadFromFile(io, allocator, keys_path) orelse keybinds.defaults();
};
defer keymap.deinit();
const loaded_theme = blk: {
const home = std.process.getEnvVarOwned(allocator, "HOME") catch break :blk theme.default_theme;
defer allocator.free(home);
const home_opt = if (config.environ_map) |em| em.get("HOME") else null;
const home = home_opt orelse break :blk theme.default_theme;
const theme_path = std.fs.path.join(allocator, &.{ home, ".config", "zfin", "theme.srf" }) catch
break :blk theme.default_theme;
defer allocator.free(theme_path);
break :blk theme.loadFromFile(allocator, theme_path) orelse theme.default_theme;
break :blk theme.loadFromFile(io, allocator, theme_path) orelse theme.default_theme;
};
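Both HOME lookups above now read from the environ map captured at process init instead of calling std.process.getEnvVarOwned: no allocation, no free, no io. As a sketch (assuming `environ_map` is the optional map this codebase stores on Config):

```zig
// Env lookup without touching the process environment at call time:
// the map was captured once in main (std.process.Init) and is
// consulted as plain data here. Returns a borrowed slice; no free.
fn homeDir(config: zfin.Config) ?[]const u8 {
    const em = config.environ_map orelse return null;
    return em.get("HOME");
}
```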
var svc = try allocator.create(zfin.DataService);
defer allocator.destroy(svc);
svc.* = zfin.DataService.init(allocator, config);
svc.* = zfin.DataService.init(io, allocator, config);
defer svc.deinit();
var app_inst = try allocator.create(App);
defer allocator.destroy(app_inst);
app_inst.* = .{
.allocator = allocator,
.io = io,
.today = today,
.config = config,
.svc = svc,
.keymap = keymap,
@ -2463,7 +2478,7 @@ pub fn run(
};
if (portfolio_path) |path| {
const file_data = std.fs.cwd().readFileAlloc(allocator, path, 10 * 1024 * 1024) catch null;
const file_data = std.Io.Dir.cwd().readFileAlloc(io, path, allocator, .limited(10 * 1024 * 1024)) catch null;
if (file_data) |d| {
defer allocator.free(d);
if (zfin.cache.deserializePortfolio(allocator, d)) |pf| {
@ -2476,14 +2491,14 @@ pub fn run(
defer if (resolved_wl) |r| r.deinit(allocator);
if (!skip_watchlist) {
const wl_path = watchlist_path orelse blk: {
if (config.resolveUserFile(allocator, "watchlist.srf")) |r| {
if (config.resolveUserFile(io, allocator, "watchlist.srf")) |r| {
resolved_wl = r;
break :blk @as(?[]const u8, r.path);
}
break :blk null;
};
if (wl_path) |path| {
app_inst.watchlist = loadWatchlist(allocator, path);
app_inst.watchlist = loadWatchlist(io, allocator, path);
app_inst.watchlist_path = path;
}
}
@ -2538,6 +2553,7 @@ pub fn run(
if (total_count > 0) {
// Use consolidated parallel loader
const load_result = cli.loadPortfolioPrices(
io,
svc,
syms,
watch_syms.items,
@ -2562,7 +2578,11 @@ pub fn run(
defer app_inst.deinitData();
{
var vx_app = try vaxis.vxfw.App.init(allocator);
// vaxis 0.16 requires a pre-allocated app buffer, an Io, and
// an env map. The buffer must outlive vx_app.
var vx_app_buf: [4096]u8 = undefined;
const environ_map = config.environ_map orelse return error.MissingEnvironMap;
var vx_app = try vaxis.vxfw.App.init(io, allocator, @constCast(environ_map), &vx_app_buf);
defer vx_app.deinit();
app_inst.vx_app = &vx_app;
defer app_inst.vx_app = null;


@ -26,7 +26,7 @@ pub fn loadData(app: *App) void {
const meta_path = std.fmt.allocPrint(app.allocator, "{s}metadata.srf", .{ppath[0..dir_end]}) catch return;
defer app.allocator.free(meta_path);
const file_data = std.fs.cwd().readFileAlloc(app.allocator, meta_path, 1024 * 1024) catch {
const file_data = std.Io.Dir.cwd().readFileAlloc(app.io, meta_path, app.allocator, .limited(1024 * 1024)) catch {
app.setStatus("No metadata.srf found. Run: zfin enrich <portfolio.srf> > metadata.srf");
return;
};
@ -61,7 +61,7 @@ fn loadDataFinish(app: *App, pf: zfin.Portfolio, summary: zfin.valuation.Portfol
pf,
summary.total_value,
app.account_map,
null, // null => use wall-clock today (interactive, not backfill)
app.today, // live mode in TUI resolves to app.today
) catch {
app.setStatus("Error computing analysis");
return;
@ -84,8 +84,8 @@ pub fn buildStyledLines(app: *App, arena: std.mem.Allocator) ![]const StyledLine
summary.allocations,
cm_entries,
summary.total_value,
pf.totalCash(),
pf.totalCdFaceValue(),
pf.totalCash(app.today),
pf.totalCdFaceValue(app.today),
);
stock_pct = split.stock_pct;
bond_pct = split.bond_pct;


@ -179,6 +179,7 @@ pub fn computeIndicators(
/// The returned rgb_data is allocated with `alloc` and must be freed by caller.
/// If `cached` is provided, uses pre-computed indicators instead of recomputing.
pub fn renderChart(
io: std.Io,
alloc: std.mem.Allocator,
candles: []const zfin.Candle,
timeframe: Timeframe,
@ -232,7 +233,7 @@ pub fn renderChart(
defer sfc.deinit(alloc);
// Create drawing context
var ctx = Context.init(alloc, &sfc);
var ctx = Context.init(io, alloc, &sfc);
defer ctx.deinit();
// Disable anti-aliasing and use direct pixel writes (.source operator)


@ -56,10 +56,18 @@ pub fn loadData(app: *App) void {
// Rendering
pub fn buildStyledLines(app: *App, arena: std.mem.Allocator) ![]const StyledLine {
return renderEarningsLines(arena, app.theme, app.symbol, app.earnings_disabled, app.earnings_data, app.earnings_timestamp, app.earnings_error);
// wall-clock required: per-frame "now" for the earnings
// "data Xs ago" readout. Captured here so the pure renderer below
// stays free of io.
const now_s = std.Io.Timestamp.now(app.io, .real).toSeconds();
return renderEarningsLines(arena, app.theme, app.symbol, app.earnings_disabled, app.earnings_data, app.earnings_timestamp, app.earnings_error, now_s);
}
/// Render earnings tab content. Pure function, no App dependency.
///
/// `now_s` is the unix-epoch-seconds reference point for the
/// "data Xs ago" age readout. Caller captures it once per frame via
/// `std.Io.Timestamp.now(io, .real).toSeconds()` and passes it in.
pub fn renderEarningsLines(
arena: std.mem.Allocator,
th: theme.Theme,
@ -68,6 +76,7 @@ pub fn renderEarningsLines(
earnings_data: ?[]const zfin.EarningsEvent,
earnings_timestamp: i64,
earnings_error: ?[]const u8,
now_s: i64,
) ![]const StyledLine {
var lines: std.ArrayList(StyledLine) = .empty;
@ -83,7 +92,7 @@ pub fn renderEarningsLines(
}
var earn_ago_buf: [16]u8 = undefined;
const earn_ago = fmt.fmtTimeAgo(&earn_ago_buf, earnings_timestamp);
const earn_ago = fmt.fmtTimeAgo(&earn_ago_buf, earnings_timestamp, now_s);
if (earn_ago.len > 0) {
try lines.append(arena, .{ .text = try std.fmt.allocPrint(arena, " Earnings: {s} (data {s})", .{ symbol, earn_ago }), .style = th.headerStyle() });
} else {
@ -141,7 +150,7 @@ test "renderEarningsLines with earnings data" {
.estimate = 1.50,
.actual = 1.65,
}};
const lines = try renderEarningsLines(arena, th, "AAPL", false, &events, 0, null);
const lines = try renderEarningsLines(arena, th, "AAPL", false, &events, 0, null, 1_700_000_000);
// blank + header + blank + col_header + data_row + blank + count = 7
try testing.expectEqual(@as(usize, 7), lines.len);
try testing.expect(std.mem.indexOf(u8, lines[1].text, "AAPL") != null);
@ -156,7 +165,7 @@ test "renderEarningsLines no symbol" {
const arena = arena_state.allocator();
const th = theme.default_theme;
const lines = try renderEarningsLines(arena, th, "", false, null, 0, null);
const lines = try renderEarningsLines(arena, th, "", false, null, 0, null, 1_700_000_000);
try testing.expectEqual(@as(usize, 2), lines.len);
try testing.expect(std.mem.indexOf(u8, lines[1].text, "No symbol") != null);
}
@ -167,7 +176,7 @@ test "renderEarningsLines disabled" {
const arena = arena_state.allocator();
const th = theme.default_theme;
const lines = try renderEarningsLines(arena, th, "VTI", true, null, 0, null);
const lines = try renderEarningsLines(arena, th, "VTI", true, null, 0, null, 1_700_000_000);
try testing.expectEqual(@as(usize, 2), lines.len);
try testing.expect(std.mem.indexOf(u8, lines[1].text, "ETF/index") != null);
}
@ -178,7 +187,7 @@ test "renderEarningsLines no data" {
const arena = arena_state.allocator();
const th = theme.default_theme;
const lines = try renderEarningsLines(arena, th, "AAPL", false, null, 0, null);
const lines = try renderEarningsLines(arena, th, "AAPL", false, null, 0, null, 1_700_000_000);
try testing.expectEqual(@as(usize, 4), lines.len);
try testing.expect(std.mem.indexOf(u8, lines[3].text, "No data") != null);
}
@ -189,7 +198,7 @@ test "renderEarningsLines with error message" {
const arena = arena_state.allocator();
const th = theme.default_theme;
const lines = try renderEarningsLines(arena, th, "AAPL", false, null, 0, "No API key. Set FMP_API_KEY");
const lines = try renderEarningsLines(arena, th, "AAPL", false, null, 0, "No API key. Set FMP_API_KEY", 1_700_000_000);
try testing.expectEqual(@as(usize, 4), lines.len);
try testing.expect(std.mem.indexOf(u8, lines[3].text, "FMP_API_KEY") != null);
}
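The pinned `1_700_000_000` epoch in these tests is the payoff of the `now_s` parameter: with both the data timestamp and the reference "now" fixed, age strings can never flake on test timing. The same property holds one level down (sketch; assumes the three-argument fmtTimeAgo signature used above, and that a 90-second age formats to a non-empty string):

```zig
test "fmtTimeAgo is deterministic with injected now_s" {
    // An event 90 seconds before the pinned "now" renders the same
    // age string on every run, regardless of the real clock.
    var buf: [16]u8 = undefined;
    const ago = fmt.fmtTimeAgo(&buf, 1_700_000_000 - 90, 1_700_000_000);
    try std.testing.expect(ago.len > 0);
}
```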


@ -59,7 +59,7 @@ pub fn loadData(app: *App) void {
return;
};
app.history_timeline = history.loadTimeline(app.allocator, portfolio_path) catch {
app.history_timeline = history.loadTimeline(app.io, app.allocator, portfolio_path) catch {
app.setStatus("Failed to read history/ directory");
return;
};
@ -271,7 +271,7 @@ fn buildCompareFromSelections(app: *App, sel_a: usize, sel_b: usize) !void {
then_map_ptr = &resources.then_live_map.?;
then_liquid = liveLiquid(app);
} else {
const side = try compare_core.loadSnapshotSide(app.allocator, hist_dir, older.date);
const side = try compare_core.loadSnapshotSide(app.io, app.allocator, hist_dir, older.date);
resources.then_snap = side;
then_map_ptr = &resources.then_snap.?.map;
then_liquid = side.liquid;
@ -289,7 +289,7 @@ fn buildCompareFromSelections(app: *App, sel_a: usize, sel_b: usize) !void {
now_map_ptr = &resources.now_live_map.?;
now_liquid = liveLiquid(app);
} else {
const side = try compare_core.loadSnapshotSide(app.allocator, hist_dir, newer.date);
const side = try compare_core.loadSnapshotSide(app.io, app.allocator, hist_dir, newer.date);
resources.now_snap = side;
now_map_ptr = &resources.now_snap.?.map;
now_liquid = side.liquid;
@ -470,7 +470,7 @@ fn buildLiveRow(app: *const App, deltas: []const timeline.RowDelta) ?TableRow {
const summary = app.portfolio_summary orelse return null;
const liquid = summary.total_value;
const illiquid = app.portfolio.?.totalIlliquid();
const illiquid = app.portfolio.?.totalIlliquid(app.today);
const net_worth = liquid + illiquid;
// Deltas vs. the most recent snapshot.
@ -485,7 +485,7 @@ fn buildLiveRow(app: *const App, deltas: []const timeline.RowDelta) ?TableRow {
}
return .{
.date = fmt.todayDate(),
.date = app.today,
.is_live = true,
.liquid = liquid,
.illiquid = illiquid,


@ -305,9 +305,9 @@ fn parseKeyCombo(key_str: []const u8) ?KeyCombo {
}
/// Print default keybindings in SRF format to stdout.
pub fn printDefaults() !void {
pub fn printDefaults(io: std.Io) !void {
var buf: [4096]u8 = undefined;
var writer = std.fs.File.stdout().writer(&buf);
var writer = std.Io.File.stdout().writer(io, &buf);
const out = &writer.interface;
try out.writeAll("#!srfv1\n");
@ -344,8 +344,8 @@ fn parseAction(name: []const u8) ?Action {
/// Load keybindings from an SRF file. Returns null if the file doesn't exist
/// or can't be parsed. On success, the caller owns the returned KeyMap and
/// must call deinit().
pub fn loadFromFile(allocator: std.mem.Allocator, path: []const u8) ?KeyMap {
const data = std.fs.cwd().readFileAlloc(allocator, path, 64 * 1024) catch return null;
pub fn loadFromFile(io: std.Io, allocator: std.mem.Allocator, path: []const u8) ?KeyMap {
const data = std.Io.Dir.cwd().readFileAlloc(io, path, allocator, .limited(64 * 1024)) catch return null;
defer allocator.free(data);
return loadFromData(allocator, data);
}
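printDefaults and loadFromFile show the write and read sides of the io-first file API. The write side in isolation (sketch; the stdout writer shape matches the diff, while the trailing flush is an assumption of the usual buffered-writer idiom, not shown in this hunk):

```zig
const std = @import("std");

// Zig 0.16 stdout: the writer borrows the io handle and a
// caller-provided buffer; output goes through the generic writer
// interface and is flushed before returning.
fn printHeader(io: std.Io) !void {
    var buf: [4096]u8 = undefined;
    var writer = std.Io.File.stdout().writer(io, &buf);
    const out = &writer.interface;
    try out.writeAll("#!srfv1\n");
    try out.flush();
}
```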


@ -55,7 +55,11 @@ pub fn buildStyledLines(app: *App, arena: std.mem.Allocator) ![]const StyledLine
}
var opt_ago_buf: [16]u8 = undefined;
const opt_ago = fmt.fmtTimeAgo(&opt_ago_buf, app.options_timestamp);
// wall-clock required: per-frame "now" for the "refreshed Xs ago"
// readout. Captured here rather than on `app` so it refreshes every
// time this tab renders.
const now_s = std.Io.Timestamp.now(app.io, .real).toSeconds();
const opt_ago = fmt.fmtTimeAgo(&opt_ago_buf, app.options_timestamp, now_s);
if (opt_ago.len > 0) {
try lines.append(arena, .{ .text = try std.fmt.allocPrint(arena, " Options: {s} (data {s}, 15 min delay)", .{ app.symbol, opt_ago }), .style = th.headerStyle() });
} else {


@ -123,7 +123,7 @@ pub fn buildStyledLines(app: *App, arena: std.mem.Allocator) ![]const StyledLine
try appendStyledReturnsTable(arena, &lines, app.trailing_price.?, if (has_total) app.trailing_total else null, th);
{
const today = fmt.todayDate();
const today = app.today;
const month_end = today.lastDayOfPriorMonth();
var db: [10]u8 = undefined;
try lines.append(arena, .{ .text = "", .style = th.contentStyle() });
@ -153,7 +153,10 @@ pub fn buildStyledLines(app: *App, arena: std.mem.Allocator) ![]const StyledLine
}), .style = th.contentStyle() });
} else {
try lines.append(arena, .{ .text = try std.fmt.allocPrint(arena, " {s:<20} {s:>14} {s:>14} {s:>14}", .{
risk_labels[i], "", "", "",
risk_labels[i],
"",
"",
"",
}), .style = th.mutedStyle() });
}
}


@ -57,7 +57,7 @@ pub fn loadPortfolioData(app: *App) void {
const pf = app.portfolio orelse return;
const positions = pf.positions(app.allocator) catch {
const positions = pf.positions(app.today, app.allocator) catch {
app.setStatus("Error computing positions");
return;
};
@ -169,7 +169,7 @@ pub fn loadPortfolioData(app: *App) void {
app.candle_last_date = latest_date;
// Build portfolio summary, candle map, and historical snapshots
var pf_data = cli.buildPortfolioData(app.allocator, pf, positions, syms, &prices, app.svc) catch |err| switch (err) {
var pf_data = cli.buildPortfolioData(app.allocator, pf, positions, syms, &prices, app.svc, app.today) catch |err| switch (err) {
error.NoAllocations => {
app.setStatus("No cached prices. Run: zfin perf <SYMBOL> first");
return;
@ -320,7 +320,7 @@ pub fn rebuildPortfolioRows(app: *App) void {
matching.append(app.allocator, lot) catch continue;
}
}
std.mem.sort(zfin.Lot, matching.items, {}, fmt.lotSortFn);
std.mem.sort(zfin.Lot, matching.items, app.today, fmt.lotSortFn);
// Check if any lots are DRIP
var has_drip = false;
@ -355,7 +355,7 @@ pub fn rebuildPortfolioRows(app: *App) void {
}
// Build ST and LT DRIP summaries
const drip = fmt.aggregateDripLots(matching.items);
const drip = fmt.aggregateDripLots(app.today, matching.items);
if (!drip.st.isEmpty()) {
app.portfolio_rows.append(app.allocator, .{
@ -432,7 +432,7 @@ pub fn rebuildPortfolioRows(app: *App) void {
// Options section (sorted by expiration date, then symbol; filtered by account)
if (app.portfolio) |pf| {
app.prepared_options = views.Options.init(app.allocator, pf.lots, app.account_filter) catch null;
app.prepared_options = views.Options.init(app.today, app.allocator, pf.lots, app.account_filter) catch null;
if (app.prepared_options) |opts| {
if (opts.items.len > 0) {
app.portfolio_rows.append(app.allocator, .{
@ -454,7 +454,7 @@ pub fn rebuildPortfolioRows(app: *App) void {
}
// CDs section (sorted by maturity date, earliest first; filtered by account)
app.prepared_cds = views.CDs.init(app.allocator, pf.lots, app.account_filter) catch null;
app.prepared_cds = views.CDs.init(app.today, app.allocator, pf.lots, app.account_filter) catch null;
if (app.prepared_cds) |cds| {
if (cds.items.len > 0) {
app.portfolio_rows.append(app.allocator, .{
@ -649,7 +649,7 @@ fn recomputeFilteredPositions(app: *App) void {
app.filtered_positions = null;
const filter = app.account_filter orelse return;
const pf = app.portfolio orelse return;
app.filtered_positions = pf.positionsForAccount(app.allocator, filter) catch null;
app.filtered_positions = pf.positionsForAccount(app.today, app.allocator, filter) catch null;
}
/// Check if a lot matches the active account filter.
@ -753,7 +753,7 @@ fn computeFilteredTotals(app: *const App) FilteredTotals {
}
}
if (app.portfolio) |pf| {
const ns = pf.nonStockValueForAccount(af);
const ns = pf.nonStockValueForAccount(app.today, af);
value += ns;
cost += ns;
}
@ -830,8 +830,8 @@ pub fn drawContent(app: *App, arena: std.mem.Allocator, buf: []vaxis.Cell, width
// Net Worth line (only if portfolio has illiquid assets)
if (app.portfolio) |pf| {
if (pf.hasType(.illiquid)) {
const illiquid_total = pf.totalIlliquid();
const net_worth = zfin.valuation.netWorth(pf, s);
const illiquid_total = pf.totalIlliquid(app.today);
const net_worth = zfin.valuation.netWorth(app.today, pf, s);
var nw_buf: [24]u8 = undefined;
var il_buf: [24]u8 = undefined;
const nw_text = try std.fmt.allocPrint(arena, " Net Worth: {s} (Liquid: {s} Illiquid: {s})", .{
@ -952,7 +952,7 @@ pub fn drawContent(app: *App, arena: std.mem.Allocator, buf: []vaxis.Cell, width
if (lot.security_type == .stock and std.mem.eql(u8, lot.priceSymbol(), a.symbol)) {
if (matchesAccountFilter(app, lot.account)) {
const ds = lot.open_date.format(&pos_date_buf);
const indicator = fmt.capitalGainsIndicator(lot.open_date);
const indicator = fmt.capitalGainsIndicator(app.today, lot.open_date);
date_col = std.fmt.allocPrint(arena, "{s} {s}", .{ ds, indicator }) catch ds;
acct_col = lot.account orelse "";
break;
@ -1017,8 +1017,8 @@ pub fn drawContent(app: *App, arena: std.mem.Allocator, buf: []vaxis.Cell, width
var price_str2: [24]u8 = undefined;
const lot_price_str = fmt.fmtMoneyAbs(&price_str2, lot.open_price);
const status_str: []const u8 = if (lot.isOpen()) "open" else "closed";
const indicator = fmt.capitalGainsIndicator(lot.open_date);
const status_str: []const u8 = if (lot.isOpen(app.today)) "open" else "closed";
const indicator = fmt.capitalGainsIndicator(app.today, lot.open_date);
const lot_date_col = try std.fmt.allocPrint(arena, "{s} {s}", .{ date_str, indicator });
const acct_col: []const u8 = lot.account orelse "";
const text = try std.fmt.allocPrint(arena, " " ++ fmt.sym_col_spec ++ " {d:>8.1} {s:>10} {s:>10} {s:>16} {s:>14} {s:>8} {s:>13} {s}", .{
@ -1084,7 +1084,7 @@ pub fn drawContent(app: *App, arena: std.mem.Allocator, buf: []vaxis.Cell, width
},
.cash_total => {
if (app.portfolio) |pf| {
const total_cash = pf.totalCash();
const total_cash = pf.totalCash(app.today);
var cash_buf: [24]u8 = undefined;
const arrow3: []const u8 = if (app.cash_expanded) "v " else "> ";
const text = try std.fmt.allocPrint(arena, " {s}Total Cash {s:>14}", .{
@ -1106,7 +1106,7 @@ pub fn drawContent(app: *App, arena: std.mem.Allocator, buf: []vaxis.Cell, width
},
.illiquid_total => {
if (app.portfolio) |pf| {
const total_illiquid = pf.totalIlliquid();
const total_illiquid = pf.totalIlliquid(app.today);
var illiquid_buf: [24]u8 = undefined;
const arrow4: []const u8 = if (app.illiquid_expanded) "v " else "> ";
const text = try std.fmt.allocPrint(arena, " {s}Total Illiquid {s:>14}", .{
@ -1199,7 +1199,7 @@ pub fn reloadPortfolioFile(app: *App) void {
if (app.portfolio) |*pf| pf.deinit();
app.portfolio = null;
if (app.portfolio_path) |path| {
const file_data = std.fs.cwd().readFileAlloc(app.allocator, path, 10 * 1024 * 1024) catch {
const file_data = std.Io.Dir.cwd().readFileAlloc(app.io, path, app.allocator, .limited(10 * 1024 * 1024)) catch {
app.setStatus("Error reading portfolio file");
return;
};
@ -1219,7 +1219,7 @@ pub fn reloadPortfolioFile(app: *App) void {
tui.freeWatchlist(app.allocator, app.watchlist);
app.watchlist = null;
if (app.watchlist_path) |path| {
app.watchlist = tui.loadWatchlist(app.allocator, path);
app.watchlist = tui.loadWatchlist(app.io, app.allocator, path);
}
// Recompute summary using cached prices (no network)
@ -1232,7 +1232,7 @@ pub fn reloadPortfolioFile(app: *App) void {
app.portfolio_rows.clearRetainingCapacity();
const pf = app.portfolio orelse return;
const positions = pf.positions(app.allocator) catch {
const positions = pf.positions(app.today, app.allocator) catch {
app.setStatus("Error computing positions");
return;
};
@ -1266,7 +1266,7 @@ pub fn reloadPortfolioFile(app: *App) void {
app.candle_last_date = latest_date;
// Build portfolio summary, candle map, and historical snapshots from cache
var pf_data = cli.buildPortfolioData(app.allocator, pf, positions, syms, &prices, app.svc) catch |err| switch (err) {
var pf_data = cli.buildPortfolioData(app.allocator, pf, positions, syms, &prices, app.svc, app.today) catch |err| switch (err) {
error.NoAllocations => {
app.setStatus("No cached prices available");
return;


@ -42,6 +42,7 @@ pub const ProjectionChartResult = struct {
/// `bands` is the array of YearPercentiles (year 0 through horizon).
/// The returned rgb_data is allocated with `alloc` and must be freed by caller.
pub fn renderProjectionChart(
io: std.Io,
alloc: std.mem.Allocator,
bands: []const projections.YearPercentiles,
width_px: u32,
@ -55,7 +56,7 @@ pub fn renderProjectionChart(
var sfc = try Surface.init(.image_surface_rgb, alloc, w, h);
defer sfc.deinit(alloc);
var ctx = Context.init(alloc, &sfc);
var ctx = Context.init(io, alloc, &sfc);
defer ctx.deinit();
ctx.setAntiAliasingMode(.none);
@ -320,7 +321,7 @@ test "renderProjectionChart produces valid output" {
};
const th = @import("theme.zig").default_theme;
const result = try renderProjectionChart(alloc, &bands, 200, 100, th);
const result = try renderProjectionChart(std.testing.io, alloc, &bands, 200, 100, th);
defer alloc.free(result.rgb_data);
try std.testing.expectEqual(@as(u16, 200), result.width);
@ -336,6 +337,6 @@ test "renderProjectionChart insufficient data" {
};
const th = @import("theme.zig").default_theme;
const result = renderProjectionChart(alloc, &bands, 200, 100, th);
const result = renderProjectionChart(std.testing.io, alloc, &bands, 200, 100, th);
try std.testing.expectError(error.InsufficientData, result);
}


@ -82,7 +82,7 @@ pub fn loadData(app: *App) void {
};
defer app.allocator.free(hist_dir);
var loaded = history.loadSnapshotAt(app.allocator, hist_dir, actual_date) catch {
var loaded = history.loadSnapshotAt(app.io, app.allocator, hist_dir, actual_date) catch {
app.setStatus("Failed to load snapshot — showing live");
app.projections_as_of = null;
app.projections_as_of_requested = null;
@ -91,6 +91,7 @@ pub fn loadData(app: *App) void {
defer loaded.deinit(app.allocator);
const ctx = view.loadProjectionContextAsOf(
app.io,
app.allocator,
portfolio_dir,
&loaded.snap,
@ -118,14 +119,16 @@ pub fn loadData(app: *App) void {
const portfolio = app.portfolio orelse return;
const ctx = view.loadProjectionContext(
app.io,
app.allocator,
portfolio_dir,
summary.allocations,
summary.total_value,
portfolio.totalCash(),
portfolio.totalCdFaceValue(),
portfolio.totalCash(app.today),
portfolio.totalCdFaceValue(app.today),
app.svc,
app.projections_events_enabled,
app.today,
) catch {
app.setStatus("Failed to compute projections");
return;
@ -152,7 +155,7 @@ fn resolveSnapshotDate(app: *App, portfolio_path: []const u8, requested: zfin.Da
return null;
};
const resolved = history.resolveSnapshotDate(arena, hist_dir, requested) catch |err| switch (err) {
const resolved = history.resolveSnapshotDate(app.io, arena, hist_dir, requested) catch |err| switch (err) {
error.NoSnapshotAtOrBefore => {
var date_buf: [10]u8 = undefined;
var status_buf: [128]u8 = undefined;
@ -297,6 +300,7 @@ fn drawWithKittyChart(app: *App, ctx: vaxis.vxfw.DrawContext, buf: []vaxis.Cell,
if (app.vx_app) |va| {
const chart_result = projection_chart.renderProjectionChart(
app.io,
app.allocator,
bands,
capped_w,
@ -591,13 +595,13 @@ fn buildFooterSection(app: *App, arena: std.mem.Allocator, lines: *std.ArrayList
}
// Life events summary
try appendEventSummary(lines, arena, th, pctx);
try appendEventSummary(lines, app.today, arena, th, pctx);
}
fn appendEventSummary(lines: *std.ArrayListUnmanaged(StyledLine), arena: std.mem.Allocator, th: theme.Theme, pctx: view.ProjectionContext) !void {
fn appendEventSummary(lines: *std.ArrayListUnmanaged(StyledLine), as_of: zfin.Date, arena: std.mem.Allocator, th: theme.Theme, pctx: view.ProjectionContext) !void {
const events = pctx.config.getEvents();
if (events.len == 0) return;
const ages = pctx.config.currentAges();
const ages = pctx.config.currentAges(as_of);
try lines.append(arena, .{ .text = "", .style = th.contentStyle() });
try lines.append(arena, .{ .text = " Life Events", .style = th.headerStyle() });
for (events) |*ev| {
@@ -901,7 +905,7 @@ pub fn buildStyledLines(app: *App, arena: std.mem.Allocator) ![]const StyledLine
}
// Life events summary (at the bottom)
try appendEventSummary(&lines, arena, th, ctx);
try appendEventSummary(&lines, app.today, arena, th, ctx);
return lines.toOwnedSlice(arena);
}
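The hunks above follow one mechanical recipe from the commit message: anything that touches the outside world gains a leading `io: std.Io` parameter, and pure date math gains an explicit `as_of`/`today` argument. A minimal sketch of the io half, using a hypothetical `loadConfig` helper (the `readFileAlloc` call shape is copied from later hunks in this same commit):

```zig
const std = @import("std");

// Before: the cwd read was a hidden side effect, invisible at call sites:
//   fn loadConfig(alloc: std.mem.Allocator, path: []const u8) ?[]u8
// After: the leading `io` announces the filesystem access up front.
fn loadConfig(io: std.Io, alloc: std.mem.Allocator, path: []const u8) ?[]u8 {
    return std.Io.Dir.cwd().readFileAlloc(io, path, alloc, .limited(64 * 1024)) catch null;
}
```

Call sites become `loadConfig(app.io, app.allocator, path)`; per the commit message, the smell is the feature.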

View file

@@ -189,6 +189,7 @@ fn drawWithKittyChart(app: *App, ctx: vaxis.vxfw.DrawContext, buf: []vaxis.Cell,
const cached_ptr: ?*const chart.CachedIndicators = if (app.chart.cached_indicators) |*ci| ci else null;
const chart_result = chart.renderChart(
app.io,
app.allocator,
c,
app.chart.timeframe,
@@ -366,7 +367,10 @@ fn buildStyledLines(app: *App, arena: std.mem.Allocator) ![]const StyledLine {
var ago_buf: [16]u8 = undefined;
if (app.quote != null and app.quote_timestamp > 0) {
const ago_str = fmt.fmtTimeAgo(&ago_buf, app.quote_timestamp);
// wall-clock required: per-frame "now" for the "refreshed Xs ago"
// readout on the live quote header.
const now_s = std.Io.Timestamp.now(app.io, .real).toSeconds();
const ago_str = fmt.fmtTimeAgo(&ago_buf, app.quote_timestamp, now_s);
try lines.append(arena, .{ .text = try std.fmt.allocPrint(arena, " {s} (live, ~15 min delay, refreshed {s})", .{ app.symbol, ago_str }), .style = th.headerStyle() });
} else if (app.candle_last_date) |d| {
var cdate_buf: [10]u8 = undefined;
@@ -498,7 +502,7 @@ fn buildDetailColumns(
try col2.add(arena, try std.fmt.allocPrint(arena, " Expense: {d:.2}%", .{er * 100.0}), th.contentStyle());
}
if (profile.net_assets) |na| {
try col2.add(arena, try std.fmt.allocPrint(arena, " Assets: ${s}", .{std.mem.trimRight(u8, &fmt.fmtLargeNum(na), &.{' '})}), th.contentStyle());
try col2.add(arena, try std.fmt.allocPrint(arena, " Assets: ${s}", .{std.mem.trimEnd(u8, &fmt.fmtLargeNum(na), &.{' '})}), th.contentStyle());
}
if (profile.dividend_yield) |dy| {
try col2.add(arena, try std.fmt.allocPrint(arena, " Yield: {d:.2}%", .{dy * 100.0}), th.contentStyle());

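The `fmtTimeAgo` hunk above is the commit's `now_s: i64` pattern in miniature: the frame loop performs the one justified wall-clock read, and the formatter becomes a pure function of two timestamps. A hedged sketch; this `fmtTimeAgo` body is illustrative, and only its three-argument shape is taken from the call site above:

```zig
// Pure: the age readout depends only on its inputs, so it can be
// unit-tested with fixed values and no Io handle.
fn fmtTimeAgo(buf: []u8, then_s: i64, now_s: i64) []const u8 {
    const delta = @max(now_s - then_s, 0);
    if (delta < 60) return std.fmt.bufPrint(buf, "{d}s ago", .{delta}) catch "?";
    return std.fmt.bufPrint(buf, "{d}m ago", .{@divTrunc(delta, 60)}) catch "?";
}
```

With this shape, `fmtTimeAgo(&buf, 0, 90)` resolves to "1m ago" deterministically; only the caller's per-frame `std.Io.Timestamp.now(app.io, .real)` read needs the `// wall-clock required:` justification.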
View file

@@ -228,9 +228,9 @@ fn parseHex(s: []const u8) ?Color {
return .{ r, g, b };
}
pub fn printDefaults() !void {
pub fn printDefaults(io: std.Io) !void {
var buf: [4096]u8 = undefined;
var writer = std.fs.File.stdout().writer(&buf);
var writer = std.Io.File.stdout().writer(io, &buf);
const out = &writer.interface;
try out.writeAll("#!srfv1\n");
@@ -250,8 +250,8 @@ pub fn printDefaults() !void {
try out.flush();
}
pub fn loadFromFile(allocator: std.mem.Allocator, path: []const u8) ?Theme {
const data = std.fs.cwd().readFileAlloc(allocator, path, 64 * 1024) catch return null;
pub fn loadFromFile(io: std.Io, allocator: std.mem.Allocator, path: []const u8) ?Theme {
const data = std.Io.Dir.cwd().readFileAlloc(io, path, allocator, .limited(64 * 1024)) catch return null;
defer allocator.free(data);
return loadFromData(data);
}
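Both theme entry points above are instances of the commit's std.fs → std.Io mapping: `File.stdout().writer` now takes the io handle, and `readFileAlloc` gains io plus an explicit `.limited(..)` size cap. A minimal writer sketch in the same shape as `printDefaults` (the function is hypothetical; the call shapes are copied from the hunk):

```zig
fn printBanner(io: std.Io) !void {
    var buf: [256]u8 = undefined;
    // 0.15-era form was: std.fs.File.stdout().writer(&buf)
    var writer = std.Io.File.stdout().writer(io, &buf);
    const out = &writer.interface;
    try out.writeAll("#!srfv1\n");
    try out.flush(); // buffered writer: nothing reaches the fd until flush
}
```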

View file

@ -146,7 +146,7 @@ pub const CompareView = struct {
/// `gainer_count + loser_count + flat_count == held_count`.
flat_count: usize = 0,
/// Optional contributions-vs-gains breakdown of `liquid.delta`.
/// Populated by the CLI from `computeAttribution` when a git repo
/// Populated by the CLI from `computeAttributionSpec` when a git repo
/// is available; always null in unit-tested / TUI flows.
attribution: ?Attribution = null,

View file

@@ -57,8 +57,7 @@ pub const Options = struct {
allocator: std.mem.Allocator,
/// Build sorted, filtered, display-ready option rows from raw lots.
pub fn init(allocator: std.mem.Allocator, lots: []const Lot, account_filter: ?[]const u8) !Options {
const today = fmt.todayDate();
pub fn init(as_of: Date, allocator: std.mem.Allocator, lots: []const Lot, account_filter: ?[]const u8) !Options {
var list: std.ArrayList(Option) = .empty;
errdefer {
for (list.items) |opt| allocator.free(opt.columns[0].text);
@@ -81,7 +80,7 @@
const qty = lot.shares;
const cost_per = lot.open_price;
const premium = @abs(qty) * cost_per * lot.multiplier;
const is_expired = if (lot.maturity_date) |md| md.lessThan(today) else false;
const is_expired = if (lot.maturity_date) |md| md.lessThan(as_of) else false;
const received = qty < 0;
const row_style: fmt.StyleIntent = if (is_expired) .muted else .normal;
@@ -165,8 +164,7 @@ pub const CDs = struct {
allocator: std.mem.Allocator,
/// Build sorted, filtered, display-ready CD rows from raw lots.
pub fn init(allocator: std.mem.Allocator, lots: []const Lot, account_filter: ?[]const u8) !CDs {
const today = fmt.todayDate();
pub fn init(as_of: Date, allocator: std.mem.Allocator, lots: []const Lot, account_filter: ?[]const u8) !CDs {
var list: std.ArrayList(CD) = .empty;
errdefer {
for (list.items) |cd| allocator.free(cd.text);
@@ -186,7 +184,7 @@
std.mem.sort(Lot, tmp.items, {}, fmt.lotMaturitySortFn);
for (tmp.items) |lot| {
const is_expired = if (lot.maturity_date) |md| md.lessThan(today) else false;
const is_expired = if (lot.maturity_date) |md| md.lessThan(as_of) else false;
const row_style: fmt.StyleIntent = if (is_expired) .muted else .normal;
var face_buf: [24]u8 = undefined;

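Dropping the internal `fmt.todayDate()` call is what makes `Options.init` and `CDs.init` pure: the same lots plus the same date always yield the same rows. A sketch of the boundary that now owns the date (the `opts.as_of` field is illustrative; the wall-clock fallback shape is taken from elsewhere in this diff):

```zig
// Boundary code decides once what "the date" means.
// wall-clock required: live mode has no --as-of flag to anchor to.
const as_of: Date = opts.as_of orelse
    Date.fromEpoch(std.Io.Timestamp.now(io, .real).toSeconds());

// Pure from here down: expired-row styling (md.lessThan(as_of)) is
// now reproducible in tests with a pinned date.
var options = try Options.init(as_of, allocator, lots, account_filter);
```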
View file

@@ -218,6 +218,7 @@ pub fn buildProjectionContext(
/// The caller provides the portfolio summary (allocations, total value, cash/CD)
/// and a DataService for candle access. All intermediate allocations use `alloc`.
pub fn loadProjectionContext(
io: std.Io,
alloc: std.mem.Allocator,
portfolio_dir: []const u8,
allocations: []const valuation.Allocation,
@@ -226,8 +227,10 @@ pub fn loadProjectionContext(
cd_value: f64,
svc: *zfin.DataService,
events_enabled: bool,
as_of: Date,
) !ProjectionContext {
return buildContextFromParts(
io,
alloc,
portfolio_dir,
allocations,
@@ -236,7 +239,7 @@ pub fn loadProjectionContext(
cd_value,
svc,
events_enabled,
null,
as_of,
);
}
@@ -267,7 +270,7 @@ pub fn loadProjectionContext(
/// - Benchmark candles truncated to <= as_of_date
/// - Per-symbol trailing returns truncated to <= as_of_date
/// - Life events resolved against ages-as-of-as_of via
/// `UserConfig.currentAgesAsOf`
/// `UserConfig.currentAges`
///
/// Known as-of limitations (documented):
/// - `metadata.srf` classifications are current, not historical.
@@ -281,6 +284,7 @@ pub fn loadProjectionContext(
/// (allocation symbol strings borrow from the snapshot's backing
/// buffer; see `history.aggregateSnapshotAllocations`).
pub fn loadProjectionContextAsOf(
io: std.Io,
alloc: std.mem.Allocator,
portfolio_dir: []const u8,
snap: *const snapshot_model.Snapshot,
@@ -292,6 +296,7 @@ pub fn loadProjectionContextAsOf(
defer snap_allocs.deinit(alloc);
return buildContextFromParts(
io,
alloc,
portfolio_dir,
snap_allocs.allocations,
@@ -315,6 +320,7 @@ pub fn loadProjectionContextAsOf(
/// to `<= d`; events resolved against ages-as-of-d
/// (`resolveEventsWithAges(currentAgesAsOf(d))`).
fn buildContextFromParts(
io: std.Io,
alloc: std.mem.Allocator,
portfolio_dir: []const u8,
allocations: []const valuation.Allocation,
@@ -323,27 +329,28 @@ fn buildContextFromParts(
cd_value: f64,
svc: *zfin.DataService,
events_enabled: bool,
as_of: ?Date,
as_of: Date,
) !ProjectionContext {
// Load projections.srf
const proj_path = try std.fmt.allocPrint(alloc, "{s}projections.srf", .{portfolio_dir});
defer alloc.free(proj_path);
const proj_data = std.fs.cwd().readFileAlloc(alloc, proj_path, 64 * 1024) catch null;
const proj_data = std.Io.Dir.cwd().readFileAlloc(io, proj_path, alloc, .limited(64 * 1024)) catch null;
defer if (proj_data) |d| alloc.free(d);
var config = projections.parseProjectionsConfig(proj_data);
if (!events_enabled) config.event_count = 0;
// Resolve age-based horizons (if any) against the projection's as-of
// date. For live mode (`as_of == null`), use today. This turns
// Resolve age-based horizons (if any) against `as_of`. The caller
// chooses whether `as_of` is today (live mode) or a historical
// backfill date. This turns
// `horizon_age:num:N` records into concrete year counts appended to
// `config.horizons`; see `UserConfig.resolveHorizonAges`.
const horizon_anchor = as_of orelse Date.fromEpoch(std.time.timestamp());
const horizon_anchor = as_of;
try config.resolveHorizonAges(horizon_anchor);
// Load metadata for classification
const meta_path = try std.fmt.allocPrint(alloc, "{s}metadata.srf", .{portfolio_dir});
defer alloc.free(meta_path);
const meta_data = std.fs.cwd().readFileAlloc(alloc, meta_path, 1024 * 1024) catch null;
const meta_data = std.Io.Dir.cwd().readFileAlloc(io, meta_path, alloc, .limited(1024 * 1024)) catch null;
defer if (meta_data) |d| alloc.free(d);
var cm_opt: ?zfin.classification.ClassificationMap = if (meta_data) |d|
zfin.classification.parseClassificationFile(alloc, d) catch null
@@ -410,17 +417,11 @@ fn buildContextFromParts(
agg_week,
);
// Event resolution differs by mode:
// - Live: current ages (resolveEvents uses config.currentAges()).
// - As-of: ages-as-of the requested date, so an event at age 67
// that's 17 years from today but 28 years from 2016 resolves
// correctly against the historical reference frame.
const resolved_events = if (as_of) |d| blk: {
const ages = config.currentAgesAsOf(d);
const resolved = config.resolveEventsWithAges(&ages);
break :blk resolved[0..config.event_count];
} else blk: {
const resolved = config.resolveEvents();
// Resolve events against ages-as-of the reference date. The
// caller chooses whether `as_of` is today (live mode) or a
// historical backfill date; the math is the same either way.
const resolved_events = blk: {
const resolved = config.resolveEvents(as_of);
break :blk resolved[0..config.event_count];
};
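The collapsed branch above is the payoff of making `as_of` non-optional: live and historical modes differed only in where the date came from, never in the math. The choice now lives at the caller, sketched here with illustrative names (`requested_as_of` stands in for the parsed `--as-of` flag):

```zig
// wall-clock required: live mode anchors projections to today.
const as_of: Date = requested_as_of orelse
    Date.fromEpoch(std.Io.Timestamp.now(io, .real).toSeconds());

// One code path: horizons, event ages, and candle truncation all key
// off the same as_of, whether it is today or a 2016 backfill date.
const ctx = try buildContextFromParts(io, alloc, portfolio_dir, allocations,
    total_value, cash_value, cd_value, svc, events_enabled, as_of);
```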