# wttr.in Target Architecture (Zig)
## Overview
Single Zig binary (`wttr`) with simplified architecture:
- One HTTP server (karlseguin/http.zig)
- One caching layer (LRU + file-backed)
  - Note: separate caches exist for IP→location, geocoding, and weather data, but they all share this single layer, which avoids stacking multiple file-backed caches
- Pluggable weather provider interface
- Rate limiting as injectable middleware
- English-only (i18n can be added later)
- No cron-based prefetching (if you want that, just hit the endpoint on a schedule)
## System Diagram
```
Client Request
      ↓
HTTP Server (http.zig)
      ↓
Rate Limiter (middleware)
      ↓
Router
      ↓
Request Handler
      ↓
Location Resolver
      ↓
Cache check
      ↓ (miss)
Weather Provider (interface)
  ├─ MetNo (default)
  └─ Mock (tests)
      ↓
Cache Store
      ↓
Renderer
  ├─ ANSI
  ├─ JSON
  ├─ Line
  ├─ HTML
  └─ (PNG later)
      ↓
Response
```
## Module Structure
```
wttr/
├── build.zig
├── build.zig.zon
└── src/
    ├── main.zig           # Entry point, server setup
    ├── config.zig         # Configuration
    ├── http/
    │   ├── server.zig     # HTTP server wrapper
    │   ├── router.zig     # Route matching
    │   ├── handler.zig    # Request handlers
    │   └── middleware.zig # Rate limiter (Token Bucket)
    ├── cache/
    │   ├── cache.zig      # Cache interface
    │   ├── lru.zig        # In-memory LRU
    │   └── file.zig       # File-backed storage
    ├── location/
    │   ├── resolver.zig   # Main location resolution
    │   ├── geoip.zig      # GeoIP wrapper (libmaxminddb)
    │   ├── normalize.zig  # Location normalization
    │   └── geolocator.zig # HTTP client for geolocator service
    ├── weather/
    │   ├── provider.zig   # Weather provider interface
    │   ├── metno.zig      # met.no implementation
    │   ├── mock.zig       # Mock for tests
    │   └── types.zig      # Weather data structures
    ├── render/
    │   ├── ansi.zig       # ANSI terminal output
    │   ├── json.zig       # JSON output
    │   └── line.zig       # One-line format
    └── util/
        ├── ip.zig         # IP address utilities
        └── hash.zig       # Hashing utilities
```
## Core Components
### 1. HTTP Server (main.zig, http/)
**Responsibilities:**
- Listen on configured port
- Parse HTTP requests
- Route to handlers
- Apply middleware (rate limiting)
- Return responses
**Dependencies:**
- karlseguin/http.zig
**Configuration:**
```zig
const Config = struct {
    listen_host: []const u8 = "0.0.0.0",
    listen_port: u16 = 8002,
    cache_size: usize = 10_000,
    cache_dir: []const u8 = "/tmp/wttr-cache",
    geolite_path: []const u8 = "./GeoLite2-City.mmdb",
    geolocator_url: []const u8 = "http://localhost:8004",
};
```
**Routes:**
```zig
GET /               → weather for IP location
GET /{location}     → weather for location
GET /:help          → help page
GET /files/{path}   → static files
```
### 2. Rate Limiter (http/middleware.zig)
**Algorithm:** Token Bucket
**Responsibilities:**
- Track token buckets per IP
- Refill tokens at configured rate
- Consume tokens on request
- Return 429 when bucket empty
- Injectable as middleware
**Interface:**
```zig
pub const RateLimiter = struct {
    buckets: HashMap([]const u8, TokenBucket),
    config: RateLimitConfig,
    allocator: Allocator,

    pub fn init(allocator: Allocator, config: RateLimitConfig) !RateLimiter;
    pub fn check(self: *RateLimiter, ip: []const u8) !void;
    pub fn middleware(self: *RateLimiter) Middleware;
    pub fn deinit(self: *RateLimiter) void;
};

pub const RateLimitConfig = struct {
    capacity: u32 = 300,           // Max tokens in bucket
    refill_rate: u32 = 5,          // Tokens per second
    refill_interval_ms: u64 = 200, // Apply refills at most every 200ms
};

const TokenBucket = struct {
    tokens: f64,
    capacity: u32,
    last_refill: i64, // Unix timestamp (ms)

    fn refill(self: *TokenBucket, now: i64, rate: u32, interval_ms: u64) void {
        const elapsed = now - self.last_refill;
        // Skip refills finer-grained than the configured interval.
        if (elapsed < @as(i64, @intCast(interval_ms))) return;
        // refill_rate is tokens per second, so scale elapsed milliseconds to seconds.
        const elapsed_s = @as(f64, @floatFromInt(elapsed)) / 1000.0;
        const new_tokens = elapsed_s * @as(f64, @floatFromInt(rate));
        self.tokens = @min(
            self.tokens + new_tokens,
            @as(f64, @floatFromInt(self.capacity)),
        );
        self.last_refill = now;
    }

    fn consume(self: *TokenBucket, count: f64) bool {
        if (self.tokens >= count) {
            self.tokens -= count;
            return true;
        }
        return false;
    }
};
```
**Implementation:**
- HashMap of IP → TokenBucket
- Each request consumes 1 token
- Tokens refill at configured rate (default: 5/second)
- Bucket capacity: 300 tokens (allows bursts)
- Periodic cleanup of old buckets (not accessed in 1 hour)
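The bucket cleanup mentioned above is not specified further; a minimal sketch of what it could look like, assuming `buckets` behaves like a `std.StringHashMap` (iterator with `key_ptr`/`value_ptr`, `remove`). Key-memory ownership is elided here, and how often the pass runs is left to the caller:

```zig
// Illustrative cleanup pass, intended to live next to RateLimiter in
// http/middleware.zig: drop buckets untouched for an hour. Stale keys are
// collected first, because removing entries while iterating a std.HashMap
// invalidates the iterator.
fn cleanup(self: *RateLimiter, now_ms: i64) !void {
    const one_hour_ms: i64 = 60 * 60 * 1000;

    var stale = std.ArrayList([]const u8).init(self.allocator);
    defer stale.deinit();

    var it = self.buckets.iterator();
    while (it.next()) |entry| {
        if (now_ms - entry.value_ptr.last_refill > one_hour_ms) {
            try stale.append(entry.key_ptr.*);
        }
    }
    for (stale.items) |key| {
        _ = self.buckets.remove(key);
    }
}
```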
**Example Configuration:**
```zig
// Allow 300 requests burst, then 5 req/sec sustained
const config = RateLimitConfig{
    .capacity = 300,
    .refill_rate = 5,
    .refill_interval_ms = 200,
};
```
### 3. Cache (cache/)
**Single-layer cache with two storage backends:**
**Interface:**
```zig
pub const Cache = struct {
    lru: LRU,
    file_store: FileStore,

    pub fn init(allocator: Allocator, config: CacheConfig) !Cache;
    pub fn get(self: *Cache, key: []const u8) ?[]const u8;
    pub fn put(self: *Cache, key: []const u8, value: []const u8, ttl: u64) !void;
};

pub const CacheConfig = struct {
    max_entries: usize = 10_000,
    file_threshold: usize = 1024, // Store to file if > 1KB
    cache_dir: []const u8,
};
```
**Cache Key:**
```
{user_agent}:{path}:{query}:{client_ip}
```
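A minimal sketch of building that key; it is just string concatenation in the documented order, and the helper name is illustrative (the handler below calls it `generateCacheKey`):

```zig
const std = @import("std");

/// Caller owns the returned slice.
pub fn buildCacheKey(
    allocator: std.mem.Allocator,
    user_agent: []const u8,
    path: []const u8,
    query: []const u8,
    client_ip: []const u8,
) ![]const u8 {
    return std.fmt.allocPrint(allocator, "{s}:{s}:{s}:{s}", .{
        user_agent, path, query, client_ip,
    });
}
```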
**Storage Strategy:**
- Small responses (<1KB): In-memory LRU only
- Large responses (≥1KB): LRU stores file path, data on disk
- TTL: 1000-2000s (randomized to prevent thundering herd)
**LRU Implementation:**
```zig
pub const LRU = struct {
    map: HashMap([]const u8, Entry),
    list: DoublyLinkedList([]const u8),
    max_entries: usize,

    const Entry = struct {
        value: []const u8,
        expires: i64,
        node: *Node,
    };

    pub fn get(self: *LRU, key: []const u8) ?[]const u8;
    pub fn put(self: *LRU, key: []const u8, value: []const u8, expires: i64) !void;
};
```
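Putting the two backends together, `Cache.put` might route entries as sketched below. The assumptions here are that the config is kept on the struct and that `FileStore.write` returns the on-disk path; neither is pinned down above.

```zig
pub fn put(self: *Cache, key: []const u8, value: []const u8, ttl: u64) !void {
    const expires = std.time.milliTimestamp() + @as(i64, @intCast(ttl * 1000));
    if (value.len < self.config.file_threshold) {
        // Small responses live entirely in the in-memory LRU.
        try self.lru.put(key, value, expires);
    } else {
        // Large responses go to disk; the LRU only remembers the file path.
        const path = try self.file_store.write(key, value);
        try self.lru.put(key, path, expires);
    }
}
```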
### 4. Location Resolver (location/)
**Responsibilities:**
- Parse location from URL
- Normalize location names
- Resolve IP to location (GeoIP)
- Resolve names to coordinates (geolocator)
- Handle special prefixes (~, @)
**Interface:**
```zig
pub const LocationResolver = struct {
    geoip: GeoIP,
    geolocator_url: []const u8,
    allocator: Allocator,

    pub fn init(allocator: Allocator, config: LocationConfig) !LocationResolver;
    pub fn resolve(self: *LocationResolver, location: ?[]const u8, client_ip: []const u8) !Location;
};

pub const Location = struct {
    name: []const u8,          // "London" or "51.5074,-0.1278"
    display_name: ?[]const u8, // Override for display
    country: ?[]const u8,      // "GB"
    source_ip: []const u8,     // Client IP
};
```
**Resolution Logic:**
```zig
fn resolve(location: ?[]const u8, client_ip: []const u8) !Location {
    // 1. Empty/null → use client IP
    const loc = location orelse return resolveIP(client_ip);
    if (loc.len == 0) return resolveIP(client_ip);

    // 2. IP address → resolve to location
    if (isIP(loc)) return resolveIP(loc);

    // 3. @domain → resolve domain to IP, then location
    if (loc[0] == '@') return resolveDomain(loc[1..]);

    // 4. ~search → use geolocator
    if (loc[0] == '~') return geolocate(loc[1..]);

    // 5. Name → normalize and use
    return Location{
        .name = normalize(loc),
        .display_name = null,
        .country = null,
        .source_ip = client_ip,
    };
}
```
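The `normalize` step is not pinned down above; one plausible implementation (lowercasing, trimming, and replacing spaces so names are URL- and cache-friendly) could look like this. The exact rules are an assumption of the sketch:

```zig
// location/normalize.zig — illustrative only.
const std = @import("std");

/// Caller owns the returned slice.
pub fn normalize(allocator: std.mem.Allocator, raw: []const u8) ![]const u8 {
    const trimmed = std.mem.trim(u8, raw, " \t");
    const out = try allocator.alloc(u8, trimmed.len);
    for (trimmed, 0..) |ch, i| {
        // Lowercase and replace spaces with '+' so "New York" and
        // "new york" map to the same cache key and query string.
        out[i] = if (ch == ' ') '+' else std.ascii.toLower(ch);
    }
    return out;
}
```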
**GeoIP:**
- Wraps libmaxminddb C library via @cImport
- Built from source using Zig build system (not system library)
- Exposes clean Zig interface
- Hides C implementation details
**Build Integration:**
The libmaxminddb library will be built as part of the Zig build process:
- Source code vendored in `vendor/libmaxminddb/`
- Compiled using `b.addStaticLibrary()` or `b.addObject()`
- Linked into the main binary
- No system library dependency required
```zig
// location/geoip.zig
const c = @cImport({
    @cInclude("maxminddb.h");
});

pub const GeoIP = struct {
    mmdb: c.MMDB_s,
    allocator: Allocator,

    pub fn init(allocator: Allocator, db_path: []const u8) !GeoIP {
        var self = GeoIP{
            .mmdb = undefined,
            .allocator = allocator,
        };
        const path_z = try allocator.dupeZ(u8, db_path);
        defer allocator.free(path_z);
        const status = c.MMDB_open(path_z.ptr, c.MMDB_MODE_MMAP, &self.mmdb);
        if (status != c.MMDB_SUCCESS) {
            return error.GeoIPOpenFailed;
        }
        return self;
    }

    pub fn lookup(self: *GeoIP, ip: []const u8) !?GeoIPResult {
        const ip_z = try self.allocator.dupeZ(u8, ip);
        defer self.allocator.free(ip_z);

        var gai_error: c_int = 0;
        var mmdb_error: c_int = 0;
        const result = c.MMDB_lookup_string(
            &self.mmdb,
            ip_z.ptr,
            &gai_error,
            &mmdb_error,
        );
        if (gai_error != 0 or mmdb_error != c.MMDB_SUCCESS) {
            return null;
        }
        if (!result.found_entry) {
            return null;
        }

        // Extract city and country
        var entry_data: c.MMDB_entry_data_s = undefined;
        const city = blk: {
            const status = c.MMDB_get_value(
                &result.entry,
                &entry_data,
                "city", "names", "en", null,
            );
            if (status == c.MMDB_SUCCESS and entry_data.has_data) {
                const str = entry_data.utf8_string[0..entry_data.data_size];
                break :blk try self.allocator.dupe(u8, str);
            }
            break :blk null;
        };
        const country = blk: {
            const status = c.MMDB_get_value(
                &result.entry,
                &entry_data,
                "country", "names", "en", null,
            );
            if (status == c.MMDB_SUCCESS and entry_data.has_data) {
                const str = entry_data.utf8_string[0..entry_data.data_size];
                break :blk try self.allocator.dupe(u8, str);
            }
            break :blk null;
        };

        return GeoIPResult{
            .city = city,
            .country = country,
        };
    }

    pub fn deinit(self: *GeoIP) void {
        c.MMDB_close(&self.mmdb);
    }
};

pub const GeoIPResult = struct {
    city: ?[]const u8,
    country: ?[]const u8,

    pub fn deinit(self: GeoIPResult, allocator: Allocator) void {
        if (self.city) |city| allocator.free(city);
        if (self.country) |country| allocator.free(country);
    }
};
```
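A short usage sketch: per the `dupe` calls above, the lookup result owns its strings and must be released by the caller. The import path is an assumption of this example:

```zig
const std = @import("std");
const GeoIP = @import("geoip.zig").GeoIP;

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    const allocator = gpa.allocator();

    var geoip = try GeoIP.init(allocator, "./GeoLite2-City.mmdb");
    defer geoip.deinit();

    if (try geoip.lookup("8.8.8.8")) |result| {
        defer result.deinit(allocator);
        std.debug.print("{?s}, {?s}\n", .{ result.city, result.country });
    }
}
```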
### 5. Weather Provider (weather/)
**Interface (pluggable):**
```zig
pub const WeatherProvider = struct {
    ptr: *anyopaque,
    vtable: *const VTable,

    pub const VTable = struct {
        fetch: *const fn (ptr: *anyopaque, location: []const u8) anyerror!WeatherData,
        deinit: *const fn (ptr: *anyopaque) void,
    };

    pub fn fetch(self: WeatherProvider, location: []const u8) !WeatherData {
        return self.vtable.fetch(self.ptr, location);
    }
};

pub const WeatherData = struct {
    location: []const u8,
    current: CurrentCondition,
    forecast: []ForecastDay,

    pub const CurrentCondition = struct {
        temp_c: f32,
        temp_f: f32,
        condition: []const u8,
        weather_code: u16,
        humidity: u8,
        wind_kph: f32,
        wind_dir: []const u8,
        pressure_mb: f32,
        precip_mm: f32,
    };

    pub const ForecastDay = struct {
        date: []const u8,
        max_temp_c: f32,
        min_temp_c: f32,
        condition: []const u8,
        weather_code: u16,
        hourly: []HourlyForecast,
    };

    pub const HourlyForecast = struct {
        time: []const u8,
        temp_c: f32,
        condition: []const u8,
        weather_code: u16,
        wind_kph: f32,
        precip_mm: f32,
    };
};
```
**MetNo Implementation:**
```zig
pub const MetNo = struct {
    allocator: Allocator,
    http_client: HttpClient,

    pub fn init(allocator: Allocator) !MetNo;
    pub fn provider(self: *MetNo) WeatherProvider;
    pub fn fetch(self: *MetNo, location: []const u8) !WeatherData;
};
```
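The `provider()` adapter is expected to wire `MetNo` into the `WeatherProvider` vtable via the usual `*anyopaque` pattern; a sketch (assuming `MetNo` also gains a `deinit` method, which the outline above doesn't list):

```zig
pub fn provider(self: *MetNo) WeatherProvider {
    return .{
        .ptr = self,
        .vtable = &.{
            .fetch = fetchOpaque,
            .deinit = deinitOpaque,
        },
    };
}

// Thin wrappers that recover the concrete type from the opaque pointer.
fn fetchOpaque(ptr: *anyopaque, location: []const u8) anyerror!WeatherData {
    const self: *MetNo = @ptrCast(@alignCast(ptr));
    return self.fetch(location);
}

fn deinitOpaque(ptr: *anyopaque) void {
    const self: *MetNo = @ptrCast(@alignCast(ptr));
    self.deinit();
}
```

The mock provider below would wire itself up the same way with its own wrappers.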
**Mock Implementation (for tests):**
```zig
pub const MockWeather = struct {
    allocator: Allocator,
    responses: HashMap([]const u8, WeatherData),

    pub fn init(allocator: Allocator) !MockWeather;
    pub fn provider(self: *MockWeather) WeatherProvider;
    pub fn addResponse(self: *MockWeather, location: []const u8, data: WeatherData) !void;
    pub fn fetch(self: *MockWeather, location: []const u8) !WeatherData;
};
```
### Provider vs Renderer Responsibilities
**Weather Provider responsibilities:**
- Fetch raw weather data from external APIs
- Parse API responses into structured types
- **Perform timezone conversions once at ingestion time**
- Group forecast data by local date (not UTC date)
- Store both UTC time and local time in forecast data
- Return data in a timezone-agnostic format ready for rendering
**Renderer responsibilities:**
- Format weather data for display (ANSI, plain text, JSON, etc.)
- Select appropriate hourly forecasts for display (morning/noon/evening/night)
- Apply unit conversions (metric/imperial) based on user preferences
- Handle partial days with missing data (render empty slots)
- Format dates and times for human readability
- **Should NOT perform timezone calculations** - use pre-calculated local times from provider
**Key principle:** Timezone conversions are expensive and error-prone. They should happen once at the provider level, not repeatedly at the renderer level. This separation ensures consistent behavior across all output formats and simplifies the rendering logic.
**Implementation details:**
- Core data structures use `zeit.Time` and `zeit.Date` types instead of strings for type safety
- `HourlyForecast` contains both `time: zeit.Time` (UTC) and `local_time: zeit.Time` (pre-calculated)
- `ForecastDay.date` is `zeit.Date` (not string), eliminating parsing/formatting overhead
- The `MetNo` provider uses a pre-computed timezone offset lookup table (360 entries covering global coordinates); a sketch of the underlying approximation follows this list
- Date formatting uses `zeit.Time.gofmt()` with Go-style format strings (e.g., "Mon _2 Jan")
- Timezone offset table provides ±1-2.5 hour accuracy at extreme latitudes, sufficient for forecast grouping
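The offset table itself is not reproduced here, but the underlying approximation is solar time (one hour per 15° of longitude), which is consistent with the stated accuracy bound. A sketch of the idea:

```zig
/// Rough UTC offset in whole hours derived from longitude alone
/// (solar time: 15 degrees per hour). Political time zones can differ
/// from this by an hour or more, which is acceptable for grouping
/// forecast entries by local date.
fn approxOffsetHours(longitude: f64) i8 {
    return @intFromFloat(@round(longitude / 15.0));
}
```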
**Benefits of zeit integration:**
- Type safety prevents format string errors and invalid date/time operations
- Explicit timezone handling - no implicit UTC assumptions
- Eliminates redundant string parsing and formatting
- Enables proper date arithmetic (e.g., `instant.add(duration)`, `instant.subtract(duration)`)
- Consistent date/time representation across all modules
### 6. Renderers (render/)
**ANSI Renderer:**
```zig
pub const AnsiRenderer = struct {
    pub fn render(allocator: Allocator, data: WeatherData, options: RenderOptions) ![]const u8;
};

pub const RenderOptions = struct {
    narrow: bool = false,
    days: u8 = 3,
    use_imperial: bool = false,
    no_caption: bool = false,
};
```
**Output Format:**
```
Weather for: London, GB

     \   /     Clear
      .-.      10 11 °C
   ― (   ) ―   ↑ 11 km/h
      `-'      10 km
     /   \     0.0 mm
```
**JSON Renderer:**
```zig
pub const JsonRenderer = struct {
    pub fn render(allocator: Allocator, data: WeatherData) ![]const u8;
};
```
**One-Line Renderer:**
```zig
pub const LineRenderer = struct {
    pub fn render(allocator: Allocator, data: WeatherData, format: []const u8) ![]const u8;
};

// Formats:
//   1: "London: ⛅️ +7°C"
//   2: "London: ⛅️ +7°C 🌬↗11km/h"
//   3: "London: ⛅️ +7°C 🌬↗11km/h 💧65%"
//   Custom: "%l: %c %t" → "London: ⛅️ +7°C"
```
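A sketch of how the custom placeholders might be expanded; the placeholder set comes from the formats above, while the helper itself and its formatting details are illustrative:

```zig
fn renderCustom(allocator: std.mem.Allocator, data: WeatherData, format: []const u8) ![]const u8 {
    var out = std.ArrayList(u8).init(allocator);
    errdefer out.deinit();

    var i: usize = 0;
    while (i < format.len) : (i += 1) {
        if (format[i] == '%' and i + 1 < format.len) {
            switch (format[i + 1]) {
                'l' => try out.appendSlice(data.location),                         // location name
                'c' => try out.appendSlice(data.current.condition),                // condition text
                't' => try out.writer().print("{d:.0}°C", .{data.current.temp_c}), // temperature
                else => try out.append(format[i + 1]),                             // unknown: emit literally
            }
            i += 1; // skip the placeholder character
        } else {
            try out.append(format[i]);
        }
    }
    return out.toOwnedSlice();
}
```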
### 7. Request Handler (http/handler.zig)
**Main handler flow:**
```zig
pub const HandleWeatherOptions = struct {
    cache: *Cache,
    resolver: *LocationResolver,
    provider: WeatherProvider,
};

pub fn handleWeather(
    req: *http.Request,
    res: *http.Response,
    options: HandleWeatherOptions,
) !void {
    // 1. Parse request
    const location = parseLocation(req.url.path);
    const render_opts = parseOptions(req.url.query);
    const client_ip = getClientIP(req);

    // 2. Generate cache key
    const cache_key = try generateCacheKey(req, client_ip);

    // 3. Check cache
    if (options.cache.get(cache_key)) |cached| {
        return res.write(cached);
    }

    // 4. Resolve location
    const resolved = try options.resolver.resolve(location, client_ip);

    // 5. Fetch weather
    const weather = try options.provider.fetch(resolved.name);

    // 6. Render
    const output = try renderWeather(weather, render_opts);

    // 7. Cache
    const ttl = 1000 + std.crypto.random.intRangeAtMost(u64, 0, 1000);
    try options.cache.put(cache_key, output, ttl);

    // 8. Return
    return res.write(output);
}
```
## Configuration
**Environment Variables:**
```bash
WTTR_LISTEN_HOST=0.0.0.0
WTTR_LISTEN_PORT=8002
WTTR_CACHE_SIZE=10000
WTTR_CACHE_DIR=/tmp/wttr-cache
WTTR_GEOLITE_PATH=./GeoLite2-City.mmdb
WTTR_GEOLOCATOR_URL=http://localhost:8004
```
**Config File (optional):**
```zig
// config.zig
pub const Config = struct {
    listen_host: []const u8,
    listen_port: u16,
    cache_size: usize,
    cache_dir: []const u8,
    geolite_path: []const u8,
    geolocator_url: []const u8,

    pub fn load(allocator: Allocator) !Config {
        return Config{
            .listen_host = std.os.getenv("WTTR_LISTEN_HOST") orelse "0.0.0.0",
            .listen_port = parsePort(std.os.getenv("WTTR_LISTEN_PORT")) orelse 8002,
            .cache_size = parseSize(std.os.getenv("WTTR_CACHE_SIZE")) orelse 10_000,
            .cache_dir = std.os.getenv("WTTR_CACHE_DIR") orelse "/tmp/wttr-cache",
            .geolite_path = std.os.getenv("WTTR_GEOLITE_PATH") orelse "./GeoLite2-City.mmdb",
            .geolocator_url = std.os.getenv("WTTR_GEOLOCATOR_URL") orelse "http://localhost:8004",
        };
    }
};
```
## Dependencies
**External (Zig packages):**
- karlseguin/http.zig - HTTP server
- rockorager/zeit - Time utilities
**Vendored (built from source):**
- libmaxminddb - GeoIP lookups (C library, built with Zig build system)
**Standard Library:**
- std.http.Client - HTTP client for weather APIs
- std.json - JSON parsing/serialization
- std.HashMap - Cache and rate limiter storage
- std.crypto.random - Random TTL generation
## Build Configuration
**build.zig:**
```zig
const std = @import("std");
pub fn build(b: *std.Build) void {
const target = b.standardTargetOptions(.{});
const optimize = b.standardOptimizeOption(.{});
// Build libmaxminddb from source
const maxminddb = b.addStaticLibrary(.{
.name = "maxminddb",
.target = target,
.optimize = optimize,
});
maxminddb.linkLibC();
maxminddb.addIncludePath(.{ .path = "vendor/libmaxminddb/include" });
maxminddb.addCSourceFiles(.{
.files = &.{
"vendor/libmaxminddb/src/maxminddb.c",
"vendor/libmaxminddb/src/data-pool.c",
},
.flags = &.{"-std=c99"},
});
const exe = b.addExecutable(.{
.name = "wttr",
.root_source_file = .{ .path = "src/main.zig" },
.target = target,
.optimize = optimize,
});
// Add http.zig dependency
const http = b.dependency("http", .{
.target = target,
.optimize = optimize,
});
exe.root_module.addImport("http", http.module("http"));
// Add zeit dependency
const zeit = b.dependency("zeit", .{
.target = target,
.optimize = optimize,
});
exe.root_module.addImport("zeit", zeit.module("zeit"));
// Link libmaxminddb (built from source)
exe.linkLibrary(maxminddb);
exe.addIncludePath(.{ .path = "vendor/libmaxminddb/include" });
exe.linkLibC();
b.installArtifact(exe);
// Tests
const tests = b.addTest(.{
.root_source_file = .{ .path = "src/main.zig" },
.target = target,
.optimize = optimize,
});
tests.root_module.addImport("http", http.module("http"));
tests.root_module.addImport("zeit", zeit.module("zeit"));
tests.linkLibrary(maxminddb);
tests.addIncludePath(.{ .path = "vendor/libmaxminddb/include" });
tests.linkLibC();
const run_tests = b.addRunArtifact(tests);
const test_step = b.step("test", "Run tests");
test_step.dependOn(&run_tests.step);
}
```
**Project Structure:**
```
wttr/
├── build.zig
├── build.zig.zon
├── vendor/
│   └── libmaxminddb/      # Vendored libmaxminddb source
│       ├── include/
│       │   └── maxminddb.h
│       └── src/
│           ├── maxminddb.c
│           └── data-pool.c
└── src/
    └── ...
```
**build.zig.zon:**
```zig
.{
    .name = "wttr",
    .version = "0.1.0",
    .dependencies = .{
        .http = .{
            .url = "https://github.com/karlseguin/http.zig/archive/refs/tags/v0.1.0.tar.gz",
            .hash = "...",
        },
        .zeit = .{
            .url = "https://github.com/rockorager/zeit/archive/refs/tags/v0.1.0.tar.gz",
            .hash = "...",
        },
    },
    .paths = .{
        "build.zig",
        "build.zig.zon",
        "src",
    },
}
```
## Testing Strategy
**Unit Tests:**
```zig
// cache/lru.zig
test "LRU basic operations" {
var lru = try LRU.init(testing.allocator, 3);
defer lru.deinit();
try lru.put("key1", "value1", 9999999999);
try testing.expectEqualStrings("value1", lru.get("key1").?);
}
test "LRU eviction" {
var lru = try LRU.init(testing.allocator, 2);
defer lru.deinit();
try lru.put("key1", "value1", 9999999999);
try lru.put("key2", "value2", 9999999999);
try lru.put("key3", "value3", 9999999999);
try testing.expect(lru.get("key1") == null); // Evicted
try testing.expectEqualStrings("value2", lru.get("key2").?);
}
// weather/mock.zig
test "mock weather provider" {
var mock = try MockWeather.init(testing.allocator);
defer mock.deinit();
const data = WeatherData{ /* ... */ };
try mock.addResponse("London", data);
const provider = mock.provider();
const result = try provider.fetch("London");
try testing.expectEqual(data.current.temp_c, result.current.temp_c);
}
// http/middleware.zig
test "rate limiter enforces limits" {
const zeit = @import("zeit");
var limiter = try RateLimiter.init(testing.allocator, .{
.capacity = 10,
.refill_rate = 1,
.refill_interval_ms = 1000,
});
defer limiter.deinit();
// Consume all tokens
var i: usize = 0;
while (i < 10) : (i += 1) {
try limiter.check("1.2.3.4");
}
// Next request should fail
try testing.expectError(error.RateLimitExceeded, limiter.check("1.2.3.4"));
// Wait for refill (1 second = 1 token)
std.time.sleep(1_100_000_000); // 1.1 seconds
// Should succeed now
try limiter.check("1.2.3.4");
}
test "rate limiter allows bursts" {
var limiter = try RateLimiter.init(testing.allocator, .{
.capacity = 100,
.refill_rate = 5,
.refill_interval_ms = 200,
});
defer limiter.deinit();
// Should allow 100 requests immediately (burst)
var i: usize = 0;
while (i < 100) : (i += 1) {
try limiter.check("1.2.3.4");
}
// 101st should fail
try testing.expectError(error.RateLimitExceeded, limiter.check("1.2.3.4"));
}
```
**Integration Tests:**
```zig
test "end-to-end weather request" {
var mock = try MockWeather.init(testing.allocator);
defer mock.deinit();
// Setup mock data
try mock.addResponse("London", test_weather_data);
// Create server components
var cache = try Cache.init(testing.allocator, .{});
defer cache.deinit();
var resolver = try LocationResolver.init(testing.allocator, .{});
defer resolver.deinit();
// Simulate request
const output = try handleWeatherRequest(
"London",
&cache,
&resolver,
mock.provider(),
);
defer testing.allocator.free(output);
try testing.expect(std.mem.indexOf(u8, output, "London") != null);
try testing.expect(std.mem.indexOf(u8, output, "°C") != null);
}
```
## Deployment
**Single Binary:**
```bash
# Build
zig build -Doptimize=ReleaseFast
# Run
./zig-out/bin/wttr
# With config
WTTR_LISTEN_PORT=8080 ./zig-out/bin/wttr
```
**Docker:**
See the Dockerfile in `./docker`. It is a `FROM scratch` image built from a statically linked musl binary.
## Performance Targets
**Latency:**
- Cache hit: <1ms
- Cache miss: <100ms (depends on weather API)
- P95: <150ms
- P99: <300ms
**Throughput:**
- >10,000 req/s (cached)
- >1,000 req/s (uncached)
**Memory:**
- Base: <50MB (currently 9MB)
- With 10,000 cache entries: <200MB
- Binary size: <5MB (currently under 2MB with ReleaseSmall)
## Future Enhancements (Not in Initial Version)
1. **i18n Support** - Add translation system
2. **PNG Rendering** - Add image output
3. **v2 Format** - Data-rich experimental format
4. **Prometheus Format** - Metrics output
5. **Moon Phases** - Special moon queries
6. **Multiple Weather Providers** - WWO, OpenWeather, etc.
7. **Metrics/Observability** - Prometheus metrics, structured logging
8. **Admin API** - Cache stats, health checks
## Success Criteria
**Functional:**
- [x] All core endpoints work (/, /{location}, /:help)
- [x] ANSI output matches current system
- [x] JSON output matches current system
- [x] One-line formats work
- [x] Location resolution works (IP, name, coordinates)
- [x] Rate limiting works
- [x] Caching works
**Performance:**
- [x] Latency ≤ current system
- [x] Throughput ≥ current system
- [x] Memory ≤ current system
- [x] Binary size <10MB (under ReleaseSafe)
**Quality:**
- [ ] >80% test coverage
- [ ] No memory leaks (valgrind clean)
- [ ] No crashes under load
- [x] Clean error handling
**Operational:**
- [x] Single binary deployment
- [x] Simple configuration
- [x] Good logging
- [x] Easy to debug