
DDB Local

This project presents itself as Amazon DynamoDB, but uses SQLite for data storage. It only supports a handful of operations, and even then not with full fidelity:

  • CreateTable
  • BatchGetItem
  • BatchWriteItem

UpdateItem, PutItem, and GetItem should be trivial to implement. The project name mostly mirrors DynamoDB Local, but this implementation doesn't carry the overhead of a full Java VM, etc. On small data sets, the executable will use less than 10 MB of resident memory.
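
For context, DynamoDB's JSON protocol names the requested operation in the X-Amz-Target header (e.g. DynamoDB_20120810.CreateTable). The sketch below shows one hypothetical way to map that header onto the supported set; it is illustrative only and not this project's actual dispatch code:

```zig
const std = @import("std");

// Hypothetical operation set, matching the supported calls listed above.
const Operation = enum {
    CreateTable,
    BatchGetItem,
    BatchWriteItem,
};

// Strip the protocol prefix from the X-Amz-Target value and map the
// remainder onto a supported operation; anything else is unsupported.
fn parseTarget(target: []const u8) ?Operation {
    const prefix = "DynamoDB_20120810.";
    if (!std.mem.startsWith(u8, target, prefix)) return null;
    return std.meta.stringToEnum(Operation, target[prefix.len..]);
}

test "only the supported operations are recognized" {
    try std.testing.expectEqual(Operation.CreateTable, parseTarget("DynamoDB_20120810.CreateTable").?);
    try std.testing.expect(parseTarget("DynamoDB_20120810.DeleteTable") == null);
}
```

Anything that parses to null here would be rejected as an unsupported operation.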

Security

This uses typical IAM authentication, but authorization is not implemented yet. This presents a chicken-and-egg problem: we need a data store for access keys/secret keys, which would be great to have in...DDB.

Therefore, DDB is designed to adhere to the following algorithm:

  1. Check whether this is a test account (used for zig build test). This uses hard-coded credentials.
  2. Check whether the account information is in access_keys.csv. This file is loaded at startup and contains the root credentials and keys necessary for bootstrap. Future plans are to enable encryption of this file and decryption using an HSM, as it is critical to everything.
  3. Call various services (primarily STS and IAM) if credentials were not found in step 1 or 2 (a sketch of this lookup order follows the list).
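
The sketch below captures just the lookup order described above. All of the type, function, and credential names are hypothetical stand-ins rather than the project's actual API, and steps 2 and 3 are stubbed out:

```zig
const std = @import("std");

// Hypothetical credential record; field names are illustrative only.
const Credentials = struct {
    account_id: []const u8,
    access_key: []const u8,
    secret_key: []const u8,
};

// 1. Hard-coded test account, used only by `zig build test`.
fn testAccount(access_key: []const u8) ?Credentials {
    if (std.mem.eql(u8, access_key, "TESTACCESSKEY")) {
        return Credentials{
            .account_id = "000000000000",
            .access_key = access_key,
            .secret_key = "test-secret",
        };
    }
    return null;
}

// 2. Bootstrap/root credentials loaded from access_keys.csv at startup
//    (stubbed here; the real file would be parsed once and kept in memory).
fn lookupAccessKeysCsv(access_key: []const u8) ?Credentials {
    _ = access_key;
    return null;
}

// 3. Fall back to external services (primarily STS and IAM); also stubbed.
fn lookupViaStsAndIam(access_key: []const u8) !Credentials {
    _ = access_key;
    return error.UnknownAccessKey;
}

// The lookup order from the list above.
fn resolveCredentials(access_key: []const u8) !Credentials {
    if (testAccount(access_key)) |creds| return creds;
    if (lookupAccessKeysCsv(access_key)) |creds| return creds;
    return lookupViaStsAndIam(access_key);
}

test "unknown keys fall through to the STS/IAM step" {
    try std.testing.expectError(error.UnknownAccessKey, resolveCredentials("AKIANOTREAL"));
}
```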

As such, we effectively need a control plane instance on DDB, with appropriate access keys/secret keys stored somewhere other than DDB. Therefore, the following environment variables are planned (a sketch of reading them follows the list):

  • IAM_ACCESS_KEY
  • IAM_SECRET_KEY
  • IAM_SECRET_FILE: file that will contain the above values, allowing for credential rotation
  • STS_SERVICE_ENDPOINT (TBD; may not be named this)
  • IAM_SERVICE_ENDPOINT (TBD; may not be named this)
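
A minimal sketch of picking these up at startup, assuming the names above stick (the ControlPlaneConfig struct and loader are hypothetical, and unset variables simply resolve to null):

```zig
const std = @import("std");

// Hypothetical container for the planned control-plane settings.
const ControlPlaneConfig = struct {
    iam_access_key: ?[]const u8,
    iam_secret_key: ?[]const u8,
    iam_secret_file: ?[]const u8,
    sts_service_endpoint: ?[]const u8,
    iam_service_endpoint: ?[]const u8,
};

// Read each planned variable; any failure (typically "not set") resolves to null.
fn loadControlPlaneConfig(allocator: std.mem.Allocator) ControlPlaneConfig {
    return .{
        .iam_access_key = std.process.getEnvVarOwned(allocator, "IAM_ACCESS_KEY") catch null,
        .iam_secret_key = std.process.getEnvVarOwned(allocator, "IAM_SECRET_KEY") catch null,
        .iam_secret_file = std.process.getEnvVarOwned(allocator, "IAM_SECRET_FILE") catch null,
        .sts_service_endpoint = std.process.getEnvVarOwned(allocator, "STS_SERVICE_ENDPOINT") catch null,
        .iam_service_endpoint = std.process.getEnvVarOwned(allocator, "IAM_SERVICE_ENDPOINT") catch null,
    };
}

test "loading config tolerates unset variables" {
    var arena = std.heap.ArenaAllocator.init(std.testing.allocator);
    defer arena.deinit();
    const config = loadControlPlaneConfig(arena.allocator());
    _ = config;
}
```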

Secret file: the thought here is that we can open/read the file only if authentication succeeds but the access key does not match the ADMIN_ACCESS_KEY. This is a bit of a timing oracle, but it is not clear we care that much.
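
Read literally, that gating might look like the sketch below; the function and its parameters are hypothetical, and the admin key would presumably come from the environment variables above:

```zig
const std = @import("std");

// Hypothetical sketch of the gating described above: the secret file is only
// opened for a successfully authenticated, non-admin access key.
fn maybeReadSecretFile(
    allocator: std.mem.Allocator,
    authenticated: bool,
    access_key: []const u8,
    admin_access_key: []const u8,
    secret_file_path: []const u8,
) !?[]u8 {
    if (!authenticated) return null;
    if (std.mem.eql(u8, access_key, admin_access_key)) return null;
    const file = try std.fs.cwd().openFile(secret_file_path, .{});
    defer file.close();
    // 64 KiB is an arbitrary cap for this sketch.
    return try file.readToEndAlloc(allocator, 64 * 1024);
}
```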

Note that IAM does not have public APIs to perform authentication of access keys, nor, it seems, to perform authorization checks.

STS is used to translate access keys -> account IDs (e.g. via GetAccessKeyInfo, which returns the account ID that owns a given access key ID).

Our plan is to use the aws-zig library for authentication and IAM for authorization, but we'll do that as a later work item.

At a high level, DDB is bootstrapped with an IAM account ID/access key. Those credentials can then add new records, which we'll call "root user" records, to the IAM table, each with their own account ID/access keys.

Those "root users" can then do whatever they want in their own tables, but cannot touch tables belonging to any other account, including the IAM account. The IAM account can likewise only touch tables in its own account.
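
In other words, every table is tagged with its owning account, and the only authorization rule sketched so far is strict per-account isolation. A hypothetical check (names are illustrative, not the project's actual code):

```zig
const std = @import("std");

// A caller, including the IAM/bootstrap account, may only operate on tables
// owned by its own account.
fn mayAccessTable(caller_account_id: []const u8, table_owner_account_id: []const u8) bool {
    return std.mem.eql(u8, caller_account_id, table_owner_account_id);
}

test "callers are confined to their own account's tables" {
    try std.testing.expect(mayAccessTable("111111111111", "111111111111"));
    try std.testing.expect(!mayAccessTable("111111111111", "222222222222"));
}
```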