AWS SDK for Zig

Emil Lerch

Ok, so it's not actually an SDK (yet). Right now the SDK should support any "query-based" operation, and probably EC2 as well, though that isn't tested yet. That works out to roughly 18 supported services. If you use an unsupported service, you'll get a compile error.

This is my first serious zig effort, so please issue a PR if the code isn't "ziggy" or if there's a better way.

This is designed to be built statically using the aws_c_* libraries, so we inherit a lot of the goodness of the work going on there. The current executable size is 9.7M, about half of which is the SSL library. Running strip on the executable after compilation (zig's own stripping only goes so far) reduces this to 4.3M. These numbers are for x86_linux, which is all that's tested at the moment.

Building

I am assuming here that if you're playing with zig, you pretty much know what you're doing, so I will stay brief.

First, the dependencies are required. Use the Dockerfile to build these; a plain docker build will do, but be prepared for it to run a while. OpenSSL in particular takes a long time to compile, though without any particular knowledge I'm also hoping/expecting AWS to factor that library out sometime in the future.
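A minimal sketch of the dependency build, assuming you run it from the repo root (the image tag here is a hypothetical name; the repo's Dockerfile defines the actual build steps):

```shell
# Hypothetical image tag -- pick whatever name you like.
IMAGE_TAG="aws-zig-deps"
if command -v docker >/dev/null 2>&1; then
  # Expect a long build; OpenSSL is the slow part.
  docker build -t "$IMAGE_TAG" .
else
  echo "docker not found; run this on a machine with docker installed"
fi
```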

Once that's done, you'll have an alpine image with all dependencies ready to go and zig master installed. There are some build-related things still broken in 0.8.0; hopefully 0.8.1 will address those so we can be on a standard release.

  • zig build should work. It will build the code generation project, run the code generation, then build the main project with the generated code.
  • Install make and use the included Makefile. This path should be fine with the zig 0.8.0 release, but code generation has not been added to the Makefile yet (ever?), so you'll be on your own for that step.
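A sketch of choosing between the two build paths above (the echo stands in for actually running the chosen command):

```shell
# Prefer zig build (it runs codegen); fall back to make otherwise.
if command -v zig >/dev/null 2>&1; then
  build_cmd="zig build"   # builds the codegen project, runs it, then builds the SDK
else
  build_cmd="make"        # fine on zig 0.8.0, but you must handle codegen yourself
fi
echo "would run: $build_cmd"
```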

Running

This library uses the aws c libraries for its work, so it operates like most other 'AWS things'. Note that I tested by setting the appropriate environment variables, so config files haven't gotten a run-through. main.zig gives you a program that calls sts GetCallerIdentity. For local testing or alternative endpoints there's no real standard, so there is code that looks for an AWS_ENDPOINT_URL environment variable, which will supersede all other configuration.
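A sketch of the environment-variable setup described above. The credential values are obvious placeholders, the endpoint URL is a hypothetical local mock, and the binary path is a guess, not the repo's actual output path:

```shell
# Standard AWS credential environment variables (this is how the SDK was tested;
# config files are not yet exercised). Placeholder values only.
export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_REGION="us-west-2"

# AWS_ENDPOINT_URL supersedes all other endpoint configuration -- handy for
# pointing at a local mock service. Hypothetical URL shown here.
export AWS_ENDPOINT_URL="http://localhost:8080"

# ./demo   # hypothetical binary name; main.zig calls sts GetCallerIdentity
```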

Dependencies

Full dependency tree: aws-c-auth

  • s2n
    • aws-lc
  • aws-c-common
  • aws-c-compression
    • aws-c-common
  • aws-c-http
    • s2n
    • aws-c-common
    • aws-c-io
      • aws-c-common
      • s2n
        • aws-lc
      • aws-c-cal
        • aws-c-common
        • aws-lc
    • aws-c-compression
      • aws-c-common
  • aws-c-cal
    • aws-c-common
    • aws-lc

Build order based on above:

  1. aws-c-common
  2. aws-lc
  3. s2n
  4. aws-c-cal
  5. aws-c-compression
  6. aws-c-io
  7. aws-c-http
  8. aws-c-auth

The Dockerfile in this repo manages this build order for you.
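The ordering above can be expressed as a simple loop; the cmake line is a hypothetical invocation (commented out), since the real build steps live in the Dockerfile:

```shell
# The eight libraries in dependency order, taken from the list above.
build_order="aws-c-common aws-lc s2n aws-c-cal aws-c-compression aws-c-io aws-c-http aws-c-auth"
count=0
for lib in $build_order; do
  count=$((count + 1))
  echo "step $count: build $lib"
  # cmake -S "$lib" -B "$lib/build" && cmake --build "$lib/build"   # hypothetical layout
done
```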

TODO List:

  • Implement jitter/exponential backoff. This appears to be configuration of aws_c_io and should therefore be trivial
  • Implement timeouts and other TODOs in the code
  • Implement error handling for 4xx, 5xx and other unexpected return values
  • ✓ Implement generic response body -> Response type handling (right now, this is hard-coded)
  • ✓ Implement codegen for services with xml structures (using Smithy models)
  • ✓ Implement codegen for others (using Smithy models)
  • Switch to aws-c-cal upstream once PR for full static musl build support is merged (see Dockerfile)
  • Move to compiler on tagged release (hopefully 0.8.1) (new 2021-05-29. I will proceed in this order unless I get other requests)
  • ✓ Implement AWS query protocol. This is the protocol in use by sts.getcalleridentity. Total service count 18
  • Implement AWS Json 1.0 protocol. Includes dynamodb. Total service count 18
  • Implement AWS Json 1.1 protocol. Includes ecs. Total service count 105
  • Implement AWS restXml protocol. Includes S3. Total service count 4.
  • ✓ Implement AWS EC2 query protocol. Includes EC2. Total service count 1.
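The first TODO item, jitter/exponential backoff, can be sketched independently of aws_c_io. This is a generic full-jitter backoff in portable sh, not the SDK's (not-yet-written) implementation; the PID-based jitter is a stand-in for a real random source:

```shell
# Full-jitter exponential backoff sketch.
base_ms=100       # initial delay ceiling
cap_ms=20000      # never wait longer than this
attempt=0
while [ "$attempt" -lt 5 ]; do
  ceiling=$(( base_ms * (1 << attempt) ))   # 100, 200, 400, 800, 1600 ms
  [ "$ceiling" -gt "$cap_ms" ] && ceiling=$cap_ms
  # Poor man's jitter derived from the PID; a real implementation uses a proper RNG.
  sleep_ms=$(( ($$ + attempt * 7919) % (ceiling + 1) ))
  echo "attempt $attempt: would sleep ${sleep_ms}ms (ceiling ${ceiling}ms)"
  attempt=$(( attempt + 1 ))
done
```

The "full jitter" variant (sleep a uniform amount in [0, ceiling]) spreads retries out better than sleeping the full ceiling each time.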

Compiler wishlist/watchlist:

This is no longer as important. The primary issue was with return values, but because of the way AWS responses are structured, we are able to statically declare a type and thus allow our response types to be generated.