Compare commits

...

24 Commits

Author SHA1 Message Date
Emil Lerch b0cfb5a1c0
update aws-zig to zig 0.12.0 (example will still fail)
AWS-Zig Build / build-zig-0.11.0-amd64-host (push) Failing after 39s
2024-04-22 08:19:24 -07:00
Emil Lerch b2c8fc5f3c
update README for riscv64 situation
aws-zig mach nominated build / build-zig-nominated-mach-latest (push) Successful in 6m56s
aws-zig nightly build / build-zig-nightly (push) Failing after 1m46s
2024-04-02 18:41:26 -07:00
Emil Lerch 62973bf4bf
update example
aws-zig mach nominated build / build-zig-nominated-mach-latest (push) Successful in 3m46s
aws-zig nightly build / build-zig-nightly (push) Failing after 1m11s
2024-04-02 18:30:27 -07:00
Emil Lerch 2f5f9edb36
markdown adjustment on README 2024-04-02 18:20:37 -07:00
Emil Lerch b1b2a6cc7a
add example build at the end so we have models to use for the sample
aws-zig mach nominated build / build-zig-nominated-mach-latest (push) Failing after 2m38s
aws-zig nightly build / build-zig-nightly (push) Failing after 54s
2024-04-02 18:00:08 -07:00
Emil Lerch e74a0e9456
get build LLVM-approved (riscv64-linux disabled for now)
aws-zig mach nominated build / build-zig-nominated-mach-latest (push) Failing after 3m8s
aws-zig nightly build / build-zig-nightly (push) Failing after 1m0s
2024-04-02 17:49:45 -07:00
Emil Lerch 723e0b0989
add commented out "normal test" pattern 2024-04-02 17:47:50 -07:00
Emil Lerch a3117eea54
update name of build job to what it actually does
aws-zig mach nominated build / build-zig-nominated-mach-latest (push) Failing after 21m29s
aws-zig nightly build / build-zig-nightly (push) Failing after 1m9s
2024-04-02 13:00:13 -07:00
Emil Lerch afdc75c1c6
stop the zig build DOS attack on the CI infrastructure
aws-zig nightly build / build-zig-nightly (push) Waiting to run
aws-zig mach nominated build / build-zig-nightly (push) Has been cancelled
2024-04-02 12:54:01 -07:00
Emil Lerch e6634d3c0f
shell32 has been removed, use USERPROFILE env var instead
aws-zig mach nominated build / build-zig-nightly (push) Failing after 1h21m14s
aws-zig nightly build / build-zig-nightly (push) Failing after 10m24s
2024-04-02 11:05:17 -07:00
Emil Lerch 57e994f80f
upgrade to working smithy 2024-04-02 11:04:40 -07:00
Emil Lerch d46cb9c28b
update smithy
aws-zig mach nominated build / build-zig-nightly (push) Failing after 1m11s
aws-zig nightly build / build-zig-nightly (push) Failing after 1m1s
2024-04-02 10:26:33 -07:00
Emil Lerch 7dcf3d3a2e
upgrade to nominated zig 2024.3.0-mach (0.12.0-dev.3180+83e578a18)
aws-zig mach nominated build / build-zig-nightly (push) Failing after 28m36s
aws-zig nightly build / build-zig-nightly (push) Failing after 57s
There were significant changes to the way HTTP operates since 0.11,
affecting client operations but, more substantially, the server
implementation, which affected the test harness.

std.http.Headers was removed, including the getFirstValue function, which
needed to be replicated. On the plus side, a std.http.Header struct was
added, identical to our own structure, so I have removed our own header
type in favor of the stdlib's.
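
A minimal sketch of the replicated lookup, assuming headers arrive as a slice
of the new std.http.Header name/value pairs (illustrative, not the exact code
in this repo):

```zig
const std = @import("std");

/// Case-insensitive header lookup, replicating the removed
/// std.http.Headers.getFirstValue over a plain header slice.
fn getFirstValue(headers: []const std.http.Header, name: []const u8) ?[]const u8 {
    for (headers) |h| {
        if (std.ascii.eqlIgnoreCase(h.name, name)) return h.value;
    }
    return null;
}
```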

On the HTTP client side, I have switched to the fetch API. Proxy
support is built in, but we are using (mostly) our own implementation
for now, with the remaining conversion left as a TODO item. Raw URIs are
now supported, so the workaround for issue 17015 has been removed. Large
payloads should also be fixed, but this has not been tested.
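
For reference, a minimal sketch of a 0.12-era fetch call (field names reflect
my reading of the 0.12 std.http.Client API, not code from this repo):

```zig
const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    var client = std.http.Client{ .allocator = gpa.allocator() };
    defer client.deinit();

    // fetch() drives the whole request/response cycle, including any
    // proxy configured on the client.
    var body = std.ArrayList(u8).init(gpa.allocator());
    defer body.deinit();
    const result = try client.fetch(.{
        .location = .{ .url = "http://example.com/" },
        .response_storage = .{ .dynamic = &body },
    });
    std.debug.print("{d}: {d} bytes\n", .{ @intFromEnum(result.status), body.items.len });
}
```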

The standard library now adds the content-length header
(unconditionally), which is a decision of dubious nature. I have removed
the addition of content-length, which also means it is not present
during signing. This should be allowed.

The dependency loop on fieldTransformer was fixed. This should have been
a problem on zig 0.11, but was not. It affected the API for the json
parsing, but we were not using that. At the call site, these no longer
needed to be specified as references.
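
Concretely (mirroring the src/aws.zig change further down), the transformer
now takes only the allocator and field name, and the call site passes the
function value directly:

```zig
// `case` is this repo's casing helper, as seen in the src/aws.zig diff below.
fn queryFieldTransformer(allocator: std.mem.Allocator, field_name: []const u8) anyerror![]const u8 {
    return try case.snakeToPascal(allocator, field_name);
}

// call site: .field_name_transformer = queryFieldTransformer,  (no & needed)
```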

With the http server no longer doing all the allocations it once was,
the test harness now has a lot more allocations to perform. To simplify
the bookkeeping, these were moved to an arena allocator. The client,
which is really what is under test, continues to use the allocator
passed in.
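
The shape of that change (see threadMain in the src/aws.zig diff below): the
harness allocates from an arena torn down once at the end, while the client
keeps the caller's allocator.

```zig
// Sketch: one arena for all server-side bookkeeping in the test harness.
var arena = std.heap.ArenaAllocator.init(std.testing.allocator);
defer arena.deinit(); // frees every harness allocation in one shot
const harness_allocator = arena.allocator();
// Request bodies/targets/headers are duped with harness_allocator and never
// freed individually; the client under test still uses std.testing.allocator.
```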
2024-04-02 09:27:42 -07:00
Emil Lerch d442671275
upgrade `zig build gen` build command to zig 0.12
This removes the copied Package.zig as we can now use the actual package
manager and look at our available dependencies in build.zig. This may
break again later as I believe lazy dependencies are planned, which will
have us search for a solution to tell the build system we're expecting
this dependency.
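
Roughly the mechanism (it shows up concretely in the build.zig diff below):
scan the dependencies declared in build.zig.zon via `b.available_deps` and
recover the hash for the `models` dependency, which names its directory under
the global package cache:

```zig
// Sketch, inside build.zig's build(b: *std.Build); `models_subdir` is the
// constant already defined at the top of build.zig.
const hash = for (b.available_deps) |dep| {
    if (std.mem.eql(u8, dep[0], "models")) break dep[1];
} else return error.DependencyNamedModelsNotFoundInBuildZigZon;
const models_path = try std.fs.path.join(b.allocator, &.{
    b.graph.global_cache_root.path.?, "p", hash, models_subdir,
});
// models_path then feeds the codegen step's --models argument.
```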
2024-03-30 15:26:24 -07:00
Emil Lerch 213627c305
update README to latest zig nominated 2024-03-30 08:25:50 -07:00
Emil Lerch 47ab9f6064
update service list, add S3 back 2024-03-30 08:25:21 -07:00
Emil Lerch 866a89d8ae
add sudo to actions 2024-03-30 08:17:59 -07:00
Emil Lerch b8df9e3610
attempt to address github action complaints 2024-03-30 08:17:59 -07:00
Emil Lerch d1d0b294d7
separate github workflows 2024-03-30 08:17:58 -07:00
Emil Lerch 8a80cbda4a
Action incompatibility across gitea/github makes this a bit too tough rn 2024-03-30 08:17:57 -07:00
Emil Lerch 444173afd2
rename jobs 2024-03-30 08:17:56 -07:00
Emil Lerch b6cdb6f7a7
move workflow actions to .github for use on both platforms 2024-03-30 08:17:56 -07:00
Emil Lerch f7106d0904
add nightly, with versioning 2024-03-30 08:17:55 -07:00
Emil Lerch 3f5e49662f
start a 0.12 branch 2024-03-30 08:17:40 -07:00
25 changed files with 792 additions and 1229 deletions


@ -0,0 +1,83 @@
name: aws-zig mach nominated build
run-name: ${{ github.actor }} building AWS Zig SDK
on:
  schedule:
    - cron: '0 12 * * *' # noon UTC, 4AM Pacific
  push:
    branches:
      - 'zig-develop*'
env:
  ACTIONS_RUNTIME_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  ACTIONS_RUNTIME_URL: ${{ env.GITHUB_SERVER_URL }}/api/actions_pipeline/
jobs:
  build-zig-nominated-mach-latest:
    runs-on: ubuntu-latest
    # Need to use the default container with node and all that, so we can
    # use JS-based actions like actions/checkout@v3...
    # container:
    #   image: alpine:3.15.0
    env:
      ZIG_VERSION: mach-latest
      ARCH: x86_64
    steps:
      - name: Check out repository code
        uses: actions/checkout@v3
      # ARCH is fine, but we can't substitute directly because zig
      # uses x86_64 instead of amd64. They also use aarch64 instead of arm64.
      #
      # However, arm64/linux isn't quite fully tier 1 yet, so this is more of a
      # TODO: https://github.com/ziglang/zig/issues/2443
      - name: Install zig
        run: |
          apt-get update && apt-get install -y jq
          file="$(curl -Osw '%{filename_effective}' "$(curl -s https://machengine.org/zig/index.json |jq -r '."'${ZIG_VERSION}'"."x86_64-linux".tarball')")"
          tar x -C /usr/local -f "${file}"
          ln -s /usr/local/"${file%%.tar.xz}"/zig /usr/local/bin/zig
          zig version
      - name: Run tests
        run: zig build test --verbose
      # Zig package manager expects everything to be inside a directory in the archive,
      # which it then strips out on download. So we need to shove everything inside a directory
      # the way GitHub/Gitea does for repo archives
      #
      # Also, the zig tar process doesn't handle gnu format for long names, nor does it seem to
      # handle posix long name semantics cleanly either. ustar works. This
      # should be using git archive, but we need our generated code to be part of it
      - name: Package source code with generated models
        run: |
          tar -czf ${{ runner.temp }}/${{ github.sha }}-with-models.tar.gz \
            --format ustar \
            --exclude 'zig-*' \
            --transform 's,^,${{ github.sha }}/,' *
      # - name: Sign
      #   id: sign
      #   uses: https://git.lerch.org/lobo/action-hsm-sign@v1
      #   with:
      #     pin: ${{ secrets.HSM_USER_PIN }}
      #     files: ???
      #     public_key: 'https://emil.lerch.org/serverpublic.pem'
      # - run: |
      #     echo "Source 0 should be ./bar: ${{ steps.sign.outputs.SOURCE_0 }}"
      # - run: |
      #     echo "Signature 0 should be ./bar.sig: ${{ steps.sign.outputs.SIG_0 }}"
      # - run: echo "URL of bar (0) is ${{ steps.sign.outputs.URL_0 }}"
      # - run: |
      #     echo "Source 1 should be ./foo: ${{ steps.sign.outputs.SOURCE_1 }}"
      # - run: |
      #     echo "Signature 1 should be ./foo.sig: ${{ steps.sign.outputs.SIG_1 }}"
      # - run: echo "URL of foo (1) is ${{ steps.sign.outputs.URL_1 }}"
      - name: Publish source code with generated models
        run: |
          curl --user ${{ github.actor }}:${{ secrets.PACKAGE_PUSH }} \
            --upload-file ${{ runner.temp }}/${{ github.sha }}-with-models.tar.gz \
            https://git.lerch.org/api/packages/lobo/generic/aws-sdk-with-models/${{ github.sha }}/${{ github.sha }}-with-models.tar.gz
      - name: Build example
        run: ( cd example && zig build ) # Make sure example builds
      - name: Notify
        uses: https://git.lerch.org/lobo/action-notify-ntfy@v2
        if: always()
        with:
          host: ${{ secrets.NTFY_HOST }}
          topic: ${{ secrets.NTFY_TOPIC }}
          user: ${{ secrets.NTFY_USER }}
          password: ${{ secrets.NTFY_PASSWORD }}


@ -0,0 +1,81 @@
name: aws-zig nightly build
run-name: ${{ github.actor }} building AWS Zig SDK
on:
  push:
    branches:
      - 'zig-develop*'
env:
  ACTIONS_RUNTIME_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  ACTIONS_RUNTIME_URL: ${{ env.GITHUB_SERVER_URL }}/api/actions_pipeline/
jobs:
  build-zig-nightly:
    runs-on: ubuntu-latest
    # Need to use the default container with node and all that, so we can
    # use JS-based actions like actions/checkout@v3...
    # container:
    #   image: alpine:3.15.0
    env:
      ZIG_VERSION: master
      ARCH: x86_64
    steps:
      - name: Check out repository code
        uses: actions/checkout@v3
      # ARCH is fine, but we can't substitute directly because zig
      # uses x86_64 instead of amd64. They also use aarch64 instead of arm64.
      #
      # However, arm64/linux isn't quite fully tier 1 yet, so this is more of a
      # TODO: https://github.com/ziglang/zig/issues/2443
      - name: Install zig
        run: |
          apt-get update && apt-get install -y jq
          file="$(curl -Osw '%{filename_effective}' "$(curl -s https://ziglang.org/download/index.json |jq -r '."'${ZIG_VERSION}'"."x86_64-linux".tarball')")"
          tar x -C /usr/local -f "${file}"
          ln -s /usr/local/"${file%%.tar.xz}"/zig /usr/local/bin/zig
          zig version
      - name: Run tests
        run: zig build test --verbose
      # Zig package manager expects everything to be inside a directory in the archive,
      # which it then strips out on download. So we need to shove everything inside a directory
      # the way GitHub/Gitea does for repo archives
      #
      # Also, the zig tar process doesn't handle gnu format for long names, nor does it seem to
      # handle posix long name semantics cleanly either. ustar works. This
      # should be using git archive, but we need our generated code to be part of it
      - name: Package source code with generated models
        run: |
          tar -czf ${{ runner.temp }}/${{ github.sha }}-with-models.tar.gz \
            --format ustar \
            --exclude 'zig-*' \
            --transform 's,^,${{ github.sha }}/,' *
      # - name: Sign
      #   id: sign
      #   uses: https://git.lerch.org/lobo/action-hsm-sign@v1
      #   with:
      #     pin: ${{ secrets.HSM_USER_PIN }}
      #     files: ???
      #     public_key: 'https://emil.lerch.org/serverpublic.pem'
      # - run: |
      #     echo "Source 0 should be ./bar: ${{ steps.sign.outputs.SOURCE_0 }}"
      # - run: |
      #     echo "Signature 0 should be ./bar.sig: ${{ steps.sign.outputs.SIG_0 }}"
      # - run: echo "URL of bar (0) is ${{ steps.sign.outputs.URL_0 }}"
      # - run: |
      #     echo "Source 1 should be ./foo: ${{ steps.sign.outputs.SOURCE_1 }}"
      # - run: |
      #     echo "Signature 1 should be ./foo.sig: ${{ steps.sign.outputs.SIG_1 }}"
      # - run: echo "URL of foo (1) is ${{ steps.sign.outputs.URL_1 }}"
      - name: Publish source code with generated models
        run: |
          curl --user ${{ github.actor }}:${{ secrets.PACKAGE_PUSH }} \
            --upload-file ${{ runner.temp }}/${{ github.sha }}-with-models.tar.gz \
            https://git.lerch.org/api/packages/lobo/generic/aws-sdk-with-models/${{ github.sha }}/${{ github.sha }}-with-models.tar.gz
      - name: Build example
        run: ( cd example && zig build ) # Make sure example builds
      - name: Notify
        uses: https://git.lerch.org/lobo/action-notify-ntfy@v2
        if: always()
        with:
          host: ${{ secrets.NTFY_HOST }}
          topic: ${{ secrets.NTFY_TOPIC }}
          user: ${{ secrets.NTFY_USER }}
          password: ${{ secrets.NTFY_PASSWORD }}

.github/workflows/build.yaml (vendored, new file, 31 lines)

@ -0,0 +1,31 @@
name: AWS-Zig Build
run-name: ${{ github.actor }} building AWS Zig SDK
on:
  push:
    branches:
      - '*'
      - '!zig-develop*'
jobs:
  build-zig-0.11.0-amd64-host:
    runs-on: ubuntu-latest
    env:
      ZIG_VERSION: 0.11.0
      ARCH: x86_64
    if: ${{ github.env.GITEA_ACTIONS != 'true' }}
    steps:
      - name: Check out repository code
        uses: actions/checkout@v3
      # ARCH is fine, but we can't substitute directly because zig
      # uses x86_64 instead of amd64. They also use aarch64 instead of arm64.
      #
      # However, arm64/linux isn't quite fully tier 1 yet, so this is more of a
      # TODO: https://github.com/ziglang/zig/issues/2443
      - name: Install zig
        run: |
          wget -q https://ziglang.org/download/${ZIG_VERSION}/zig-linux-${ARCH}-${ZIG_VERSION}.tar.xz
          sudo tar x -C /usr/local -f zig-linux-${ARCH}-${ZIG_VERSION}.tar.xz
          sudo ln -s /usr/local/zig-linux-${ARCH}-${ZIG_VERSION}/zig /usr/local/bin/zig
      - name: Run tests
        run: zig build test --verbose
      - name: Build example
        run: ( cd example && zig build ) # Make sure example builds

.github/workflows/zig-mach.yaml (vendored, new file, 38 lines)

@ -0,0 +1,38 @@
name: aws-zig mach nominated build
run-name: ${{ github.actor }} building AWS Zig SDK
on:
  schedule:
    - cron: '0 12 * * *' # noon UTC, 4AM Pacific
  push:
    branches:
      - 'zig-develop*'
jobs:
  build-zig-nightly:
    runs-on: ubuntu-latest
    # Need to use the default container with node and all that, so we can
    # use JS-based actions like actions/checkout@v3...
    # container:
    #   image: alpine:3.15.0
    env:
      ZIG_VERSION: mach-latest
      ARCH: x86_64
    if: ${{ github.env.GITEA_ACTIONS != 'true' }}
    steps:
      - name: Check out repository code
        uses: actions/checkout@v3
      # ARCH is fine, but we can't substitute directly because zig
      # uses x86_64 instead of amd64. They also use aarch64 instead of arm64.
      #
      # However, arm64/linux isn't quite fully tier 1 yet, so this is more of a
      # TODO: https://github.com/ziglang/zig/issues/2443
      - name: Install zig
        run: |
          apt-get update && apt-get install -y jq
          file="$(curl -Osw '%{filename_effective}' "$(curl -s https://machengine.org/zig/index.json |jq -r '."'${ZIG_VERSION}'"."x86_64-linux".tarball')")"
          sudo tar x -C /usr/local -f "${file}"
          sudo ln -s /usr/local/"${file%%.tar.xz}"/zig /usr/local/bin/zig
          zig version
      - name: Run tests
        run: zig build test --verbose
      - name: Build example
        run: ( cd example && zig build ) # Make sure example builds

.github/workflows/zig-nightly.yaml (vendored, new file, 36 lines)

@ -0,0 +1,36 @@
name: aws-zig nightly build
run-name: ${{ github.actor }} building AWS Zig SDK
on:
  push:
    branches:
      - 'zig-develop*'
jobs:
  build-zig-nightly:
    runs-on: ubuntu-latest
    # Need to use the default container with node and all that, so we can
    # use JS-based actions like actions/checkout@v3...
    # container:
    #   image: alpine:3.15.0
    env:
      ZIG_VERSION: master
      ARCH: x86_64
    if: ${{ github.env.GITEA_ACTIONS != 'true' }}
    steps:
      - name: Check out repository code
        uses: actions/checkout@v3
      # ARCH is fine, but we can't substitute directly because zig
      # uses x86_64 instead of amd64. They also use aarch64 instead of arm64.
      #
      # However, arm64/linux isn't quite fully tier 1 yet, so this is more of a
      # TODO: https://github.com/ziglang/zig/issues/2443
      - name: Install zig
        run: |
          apt-get update && apt-get install -y jq
          file="$(curl -Osw '%{filename_effective}' "$(curl -s https://ziglang.org/download/index.json |jq -r '."'${ZIG_VERSION}'"."x86_64-linux".tarball')")"
          sudo tar x -C /usr/local -f "${file}"
          sudo ln -s /usr/local/"${file%%.tar.xz}"/zig /usr/local/bin/zig
          zig version
      - name: Run tests
        run: zig build test --verbose
      - name: Build example
        run: ( cd example && zig build ) # Make sure example builds


@ -1,521 +0,0 @@
const builtin = @import("builtin");
const std = @import("std");
const testing = std.testing;
const Hasher = @import("codegen/src/Hasher.zig");
/// This is 128 bits - Even with 2^54 cache entries, the probability of a collision would be under 10^-6
const bin_digest_len = 16;
const hex_digest_len = bin_digest_len * 2;
const Package = @This();
root_src_directory: std.Build.Cache.Directory,
/// Whether to free `root_src_directory` on `destroy`.
root_src_directory_owned: bool = false,
allocator: std.mem.Allocator,
pub const Dependency = struct {
url: []const u8,
hash: ?[]const u8,
};
pub fn deinit(self: *Package) void {
if (self.root_src_directory_owned)
self.root_src_directory.closeAndFree(self.allocator);
}
pub fn fetchOneAndUnpack(
allocator: std.mem.Allocator,
cache_directory: []const u8, // directory to store things
dep: Dependency, // thing to download
) !*Package {
var http_client: std.http.Client = .{ .allocator = allocator };
defer http_client.deinit();
const global_cache_directory: std.Build.Cache.Directory = .{
.handle = try std.fs.cwd().makeOpenPath(cache_directory, .{}),
.path = cache_directory,
};
var thread_pool: std.Thread.Pool = undefined;
try thread_pool.init(.{ .allocator = allocator });
defer thread_pool.deinit();
var progress: std.Progress = .{ .dont_print_on_dumb = true };
const root_prog_node = progress.start("Fetch Packages", 0);
defer root_prog_node.end();
return try fetchAndUnpack(
&thread_pool,
&http_client,
global_cache_directory,
dep,
dep.url,
root_prog_node,
);
}
pub fn fetchAndUnpack(
thread_pool: *std.Thread.Pool, // thread pool for hashing things in parallel
http_client: *std.http.Client, // client to download stuff
global_cache_directory: std.Build.Cache.Directory, // directory to store things
dep: Dependency, // thing to download
fqn: []const u8, // used as name for thing downloaded
root_prog_node: *std.Progress.Node, // used for outputting to terminal
) !*Package {
const gpa = http_client.allocator;
const s = std.fs.path.sep_str;
// Check if the expected_hash is already present in the global package
// cache, and thereby avoid both fetching and unpacking.
if (dep.hash) |h| cached: {
const hex_digest = h[0..Hasher.hex_multihash_len];
const pkg_dir_sub_path = "p" ++ s ++ hex_digest;
const build_root = try global_cache_directory.join(gpa, &.{pkg_dir_sub_path});
errdefer gpa.free(build_root);
var pkg_dir = global_cache_directory.handle.openDir(pkg_dir_sub_path, .{}) catch |err| switch (err) {
error.FileNotFound => break :cached,
else => |e| return e,
};
errdefer pkg_dir.close();
root_prog_node.completeOne();
const ptr = try gpa.create(Package);
errdefer gpa.destroy(ptr);
ptr.* = .{
.root_src_directory = .{
.path = build_root, // TODO: This leaks memory somehow (should be cleaned in deinit())
.handle = pkg_dir,
},
.root_src_directory_owned = true,
.allocator = gpa,
};
return ptr;
}
var pkg_prog_node = root_prog_node.start(fqn, 0);
defer pkg_prog_node.end();
pkg_prog_node.activate();
pkg_prog_node.context.refresh();
const uri = try std.Uri.parse(dep.url);
const rand_int = std.crypto.random.int(u64);
const tmp_dir_sub_path = "tmp" ++ s ++ Hasher.hex64(rand_int);
const actual_hash = a: {
var tmp_directory: std.Build.Cache.Directory = d: {
const path = try global_cache_directory.join(gpa, &.{tmp_dir_sub_path});
errdefer gpa.free(path);
const iterable_dir = try global_cache_directory.handle.makeOpenPathIterable(tmp_dir_sub_path, .{});
errdefer iterable_dir.close();
break :d .{
.path = path,
.handle = iterable_dir.dir,
};
};
defer tmp_directory.closeAndFree(gpa);
var h = std.http.Headers{ .allocator = gpa };
defer h.deinit();
var req = try http_client.request(.GET, uri, h, .{});
defer req.deinit();
try req.start();
try req.wait();
if (req.response.status != .ok) {
std.log.err("Expected response status '200 OK' got '{} {s}'", .{
@intFromEnum(req.response.status),
req.response.status.phrase() orelse "",
});
return error.UnexpectedResponseStatus;
}
const content_type = req.response.headers.getFirstValue("Content-Type") orelse
return error.MissingContentTypeHeader;
var prog_reader: ProgressReader(std.http.Client.Request.Reader) = .{
.child_reader = req.reader(),
.prog_node = &pkg_prog_node,
.unit = if (req.response.content_length) |content_length| unit: {
const kib = content_length / 1024;
const mib = kib / 1024;
if (mib > 0) {
pkg_prog_node.setEstimatedTotalItems(@intCast(mib));
pkg_prog_node.setUnit("MiB");
break :unit .mib;
} else {
pkg_prog_node.setEstimatedTotalItems(@intCast(@max(1, kib)));
pkg_prog_node.setUnit("KiB");
break :unit .kib;
}
} else .any,
};
pkg_prog_node.context.refresh();
if (std.ascii.eqlIgnoreCase(content_type, "application/gzip") or
std.ascii.eqlIgnoreCase(content_type, "application/x-gzip") or
std.ascii.eqlIgnoreCase(content_type, "application/tar+gzip"))
{
// I observed the gzip stream to read 1 byte at a time, so I am using a
// buffered reader on the front of it.
try unpackTarball(gpa, prog_reader.reader(), tmp_directory.handle, std.compress.gzip);
} else if (std.ascii.eqlIgnoreCase(content_type, "application/x-xz")) {
// I have not checked what buffer sizes the xz decompression implementation uses
// by default, so the same logic applies for buffering the reader as for gzip.
try unpackTarball(gpa, prog_reader.reader(), tmp_directory.handle, std.compress.xz);
} else if (std.ascii.eqlIgnoreCase(content_type, "application/octet-stream")) {
// support gitlab tarball urls such as https://gitlab.com/<namespace>/<project>/-/archive/<sha>/<project>-<sha>.tar.gz
// whose content-disposition header is: 'attachment; filename="<project>-<sha>.tar.gz"'
const content_disposition = req.response.headers.getFirstValue("Content-Disposition") orelse
return error.@"Missing 'Content-Disposition' header for Content-Type=application/octet-stream";
if (isTarAttachment(content_disposition)) {
try unpackTarball(gpa, prog_reader.reader(), tmp_directory.handle, std.compress.gzip);
} else {
std.log.err("Unsupported 'Content-Disposition' header value: '{s}' for Content-Type=application/octet-stream", .{content_disposition});
return error.UnsupportedContentDispositionHeader;
}
} else {
std.log.err("Unsupported 'Content-Type' header value: '{s}'", .{content_type});
return error.UnsupportedContentTypeHeader;
}
// Download completed - stop showing downloaded amount as progress
pkg_prog_node.setEstimatedTotalItems(0);
pkg_prog_node.setCompletedItems(0);
pkg_prog_node.context.refresh();
// TODO: delete files not included in the package prior to computing the package hash.
// for example, if the ini file has directives to include/not include certain files,
// apply those rules directly to the filesystem right here. This ensures that files
// not protected by the hash are not present on the file system.
// TODO: raise an error for files that have illegal paths on some operating systems.
// For example, on Linux a path with a backslash should raise an error here.
// Of course, if the ignore rules above omit the file from the package, then everything
// is fine and no error should be raised.
break :a try Hasher.computeDirectoryHash(thread_pool, .{ .dir = tmp_directory.handle }, &.{});
};
const pkg_dir_sub_path = "p" ++ s ++ Hasher.hexDigest(actual_hash);
try renameTmpIntoCache(global_cache_directory.handle, tmp_dir_sub_path, pkg_dir_sub_path);
const actual_hex = Hasher.hexDigest(actual_hash);
if (dep.hash) |h| {
if (!std.mem.eql(u8, h, &actual_hex)) {
std.log.err("hash mismatch: expected: {s}, found: {s}", .{
h, actual_hex,
});
return error.HashMismatch;
}
} else {
std.log.err("No hash supplied. Expecting hash \"{s}\"", .{actual_hex});
return error.NoHashSupplied;
}
const build_root = try global_cache_directory.join(gpa, &.{pkg_dir_sub_path});
defer gpa.free(build_root);
const mod = try createWithDir(gpa, global_cache_directory, pkg_dir_sub_path);
return mod;
}
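// Wraps a reader, counting bytes as they stream through and reporting
// progress (KiB or MiB) on the attached std.Progress node.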
fn ProgressReader(comptime ReaderType: type) type {
return struct {
child_reader: ReaderType,
bytes_read: u64 = 0,
prog_node: *std.Progress.Node,
unit: enum {
kib,
mib,
any,
},
pub const Error = ReaderType.Error;
pub const Reader = std.io.Reader(*@This(), Error, read);
pub fn read(self: *@This(), buf: []u8) Error!usize {
const amt = try self.child_reader.read(buf);
self.bytes_read += amt;
const kib = self.bytes_read / 1024;
const mib = kib / 1024;
switch (self.unit) {
.kib => self.prog_node.setCompletedItems(@intCast(kib)),
.mib => self.prog_node.setCompletedItems(@intCast(mib)),
.any => {
if (mib > 0) {
self.prog_node.setUnit("MiB");
self.prog_node.setCompletedItems(@intCast(mib));
} else {
self.prog_node.setUnit("KiB");
self.prog_node.setCompletedItems(@intCast(kib));
}
},
}
self.prog_node.context.maybeRefresh();
return amt;
}
pub fn reader(self: *@This()) Reader {
return .{ .context = self };
}
};
}
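// Returns true when a Content-Disposition header denotes a gzipped tarball
// attachment, e.g. attachment; filename="project-sha.tar.gz" (case
// differences and the filename*= extended form are tolerated).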
fn isTarAttachment(content_disposition: []const u8) bool {
const disposition_type_end = std.ascii.indexOfIgnoreCase(content_disposition, "attachment;") orelse return false;
var value_start = std.ascii.indexOfIgnoreCasePos(content_disposition, disposition_type_end + 1, "filename") orelse return false;
value_start += "filename".len;
if (content_disposition[value_start] == '*') {
value_start += 1;
}
if (content_disposition[value_start] != '=') return false;
value_start += 1;
var value_end = std.mem.indexOfPos(u8, content_disposition, value_start, ";") orelse content_disposition.len;
if (content_disposition[value_end - 1] == '\"') {
value_end -= 1;
}
return std.ascii.endsWithIgnoreCase(content_disposition[value_start..value_end], ".tar.gz");
}
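// Moves the unpacked temp directory to its content-addressed location in the
// cache, creating the parent directory on first use and tolerating a
// destination that already exists from a prior or concurrent download.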
fn renameTmpIntoCache(
cache_dir: std.fs.Dir,
tmp_dir_sub_path: []const u8,
dest_dir_sub_path: []const u8,
) !void {
std.debug.assert(dest_dir_sub_path[1] == std.fs.path.sep);
var handled_missing_dir = false;
while (true) {
cache_dir.rename(tmp_dir_sub_path, dest_dir_sub_path) catch |err| switch (err) {
error.FileNotFound => {
if (handled_missing_dir) return err;
cache_dir.makeDir(dest_dir_sub_path[0..1]) catch |mkd_err| switch (mkd_err) {
error.PathAlreadyExists => handled_missing_dir = true,
else => |e| return e,
};
continue;
},
error.PathAlreadyExists, error.AccessDenied => {
// Package has been already downloaded and may already be in use on the system.
cache_dir.deleteTree(tmp_dir_sub_path) catch |del_err| {
std.log.warn("unable to delete temp directory: {s}", .{@errorName(del_err)});
};
},
else => |e| return e,
};
break;
}
}
fn createWithDir(
gpa: std.mem.Allocator,
directory: std.Build.Cache.Directory,
/// Relative to `directory`. If null, means `directory` is the root src dir
/// and is owned externally.
root_src_dir_path: ?[]const u8,
) !*Package {
const ptr = try gpa.create(Package);
errdefer gpa.destroy(ptr);
if (root_src_dir_path) |p| {
const owned_dir_path = try directory.join(gpa, &[1][]const u8{p});
errdefer gpa.free(owned_dir_path);
ptr.* = .{
.root_src_directory = .{
.path = owned_dir_path,
.handle = try directory.handle.openDir(p, .{}),
},
.root_src_directory_owned = true,
.allocator = gpa,
};
} else {
ptr.* = .{
.root_src_directory = directory,
.root_src_directory_owned = false,
.allocator = gpa,
};
}
return ptr;
}
// Create/Write a file, close it, then grab its stat.mtime timestamp.
fn testGetCurrentFileTimestamp(dir: std.fs.Dir) !i128 {
const test_out_file = "test-filetimestamp.tmp";
var file = try dir.createFile(test_out_file, .{
.read = true,
.truncate = true,
});
defer {
file.close();
dir.deleteFile(test_out_file) catch {};
}
return (try file.stat()).mtime;
}
// These functions come from src/Package.zig, src/Manifest.zig in the compiler,
// not the standard library
fn unpackTarball(
gpa: std.mem.Allocator,
req_reader: anytype,
out_dir: std.fs.Dir,
comptime compression: type,
) !void {
var br = std.io.bufferedReaderSize(std.crypto.tls.max_ciphertext_record_len, req_reader);
var decompress = try compression.decompress(gpa, br.reader());
defer decompress.deinit();
try std.tar.pipeToFileSystem(out_dir, decompress.reader(), .{
.strip_components = 1,
// TODO: we would like to set this to executable_bit_only, but two
// things need to happen before that:
// 1. the tar implementation needs to support it
// 2. the hashing algorithm here needs to support detecting the is_executable
// bit on Windows from the ACLs (see the isExecutable function).
.mode_mode = .ignore,
});
}
test {
std.testing.refAllDecls(@This());
}
test "cache a file and recall it" {
if (builtin.os.tag == .wasi) {
// https://github.com/ziglang/zig/issues/5437
return error.SkipZigTest;
}
var tmp = testing.tmpDir(.{});
defer tmp.cleanup();
const temp_file = "test.txt";
const temp_file2 = "test2.txt";
const temp_manifest_dir = "temp_manifest_dir";
try tmp.dir.writeFile(temp_file, "Hello, world!\n");
try tmp.dir.writeFile(temp_file2, "yo mamma\n");
// Wait for file timestamps to tick
const initial_time = try testGetCurrentFileTimestamp(tmp.dir);
while ((try testGetCurrentFileTimestamp(tmp.dir)) == initial_time) {
std.time.sleep(1);
}
var digest1: [hex_digest_len]u8 = undefined;
var digest2: [hex_digest_len]u8 = undefined;
{
var cache = std.build.Cache{
.gpa = testing.allocator,
.manifest_dir = try tmp.dir.makeOpenPath(temp_manifest_dir, .{}),
};
cache.addPrefix(.{ .path = null, .handle = tmp.dir });
defer cache.manifest_dir.close();
{
var ch = cache.obtain();
defer ch.deinit();
ch.hash.add(true);
ch.hash.add(@as(u16, 1234));
ch.hash.addBytes("1234");
_ = try ch.addFile(temp_file, null);
// There should be nothing in the cache
try testing.expectEqual(false, try ch.hit());
digest1 = ch.final();
try ch.writeManifest();
}
{
var ch = cache.obtain();
defer ch.deinit();
ch.hash.add(true);
ch.hash.add(@as(u16, 1234));
ch.hash.addBytes("1234");
_ = try ch.addFile(temp_file, null);
// Cache hit! We just "built" the same file
try testing.expect(try ch.hit());
digest2 = ch.final();
try testing.expectEqual(false, ch.have_exclusive_lock);
}
try testing.expectEqual(digest1, digest2);
}
}
test "fetch and unpack" {
const alloc = std.testing.allocator;
var http_client: std.http.Client = .{ .allocator = alloc };
defer http_client.deinit();
const global_cache_directory: std.Build.Cache.Directory = .{
.handle = try std.fs.cwd().makeOpenPath("test-pkg", .{}),
.path = "test-pkg",
};
var thread_pool: std.Thread.Pool = undefined;
try thread_pool.init(.{ .allocator = alloc });
defer thread_pool.deinit();
var progress: std.Progress = .{ .dont_print_on_dumb = true };
const root_prog_node = progress.start("Fetch Packages", 0);
defer root_prog_node.end();
const pkg = try fetchAndUnpack(
&thread_pool,
&http_client,
global_cache_directory,
.{
.url = "https://github.com/aws/aws-sdk-go-v2/archive/7502ff360b1c3b79cbe117437327f6ff5fb89f65.tar.gz",
.hash = "1220a414719bff14c9362fb1c695e3346fa12ec2e728bae5757a57aae7738916ffd2",
},
"https://github.com/aws/aws-sdk-go-v2/archive/7502ff360b1c3b79cbe117437327f6ff5fb89f65.tar.gz",
root_prog_node,
);
defer alloc.destroy(pkg);
defer pkg.deinit();
}
test "fetch one and unpack" {
const pkg = try fetchOneAndUnpack(
std.testing.allocator,
"test-pkg",
.{
.url = "https://github.com/aws/aws-sdk-go-v2/archive/7502ff360b1c3b79cbe117437327f6ff5fb89f65.tar.gz",
.hash = "1220a414719bff14c9362fb1c695e3346fa12ec2e728bae5757a57aae7738916ffd2",
},
);
defer std.testing.allocator.destroy(pkg);
defer pkg.deinit();
try std.testing.expectEqualStrings(
"test-pkg/p/1220a414719bff14c9362fb1c695e3346fa12ec2e728bae5757a57aae7738916ffd2",
pkg.root_src_directory.path.?,
);
}
test "isTarAttachment" {
try std.testing.expect(isTarAttachment("attaChment; FILENAME=\"stuff.tar.gz\"; size=42"));
try std.testing.expect(isTarAttachment("attachment; filename*=\"stuff.tar.gz\""));
try std.testing.expect(isTarAttachment("ATTACHMENT; filename=\"stuff.tar.gz\""));
try std.testing.expect(isTarAttachment("attachment; FileName=\"stuff.tar.gz\""));
try std.testing.expect(isTarAttachment("attachment; FileName*=UTF-8\'\'xyz%2Fstuff.tar.gz"));
try std.testing.expect(!isTarAttachment("attachment FileName=\"stuff.tar.gz\""));
try std.testing.expect(!isTarAttachment("attachment; FileName=\"stuff.tar\""));
try std.testing.expect(!isTarAttachment("attachment; FileName\"stuff.gz\""));
try std.testing.expect(!isTarAttachment("attachment; size=42"));
try std.testing.expect(!isTarAttachment("inline; size=42"));
try std.testing.expect(!isTarAttachment("FileName=\"stuff.tar.gz\"; attachment;"));
try std.testing.expect(!isTarAttachment("FileName=\"stuff.tar.gz\";"));
}


@ -1,7 +1,13 @@
AWS SDK for Zig
===============
[![Build Status](https://actions-status.lerch.org/lobo/aws-sdk-for-zig/build)](https://git.lerch.org/lobo/aws-sdk-for-zig/actions?workflow=build.yaml&state=closed)
[Last Mach Nominated Zig Version](https://machengine.org/about/nominated-zig/):
[![Build Status: Zig 0.12.0-dev.3180+83e578a18](https://actions-status.lerch.org/lobo/aws-sdk-for-zig/zig-mach)](https://git.lerch.org/lobo/aws-sdk-for-zig/actions?workflow=zig-mach.yaml&state=closed)
[Nightly Zig](https://ziglang.org/download/):
[![Build Status: Zig Nightly](https://actions-status.lerch.org/lobo/aws-sdk-for-zig/zig-nightly)](https://git.lerch.org/lobo/aws-sdk-for-zig/actions?workflow=zig-nightly.yaml&state=closed)
**NOTE: TLS 1.3 support is still deploying across AWS. Some services, especially S3,
may or may not be available without a proxy, depending on the region.
@ -11,15 +17,30 @@ Current executable size for the demo is 980k after compiling with -Doptimize=Rel
on x86_64-linux, and will vary based on services used. Tested targets:
* x86_64-linux
* riscv64-linux
* riscv64-linux\*
* aarch64-linux
* x86_64-windows
* x86_64-windows\*\*
* arm-linux
* aarch64-macos
* x86_64-macos
Tested targets are built, but not continuously tested, by CI.
\* On Zig 0.12, riscv64-linux tests take a significant time to compile (each aws.zig test takes approximately 1 minute 45 seconds to compile on a 10th generation Intel i9)
\*\* On Zig 0.12, x86_64-windows tests have one test skipped as LLVM consumes all available RAM on the system
Zig-Develop Branch
------------------
This branch is intended for use with the in-development version of Zig,
starting with 0.12.0-dev.3180+83e578a18. I will try to keep this branch up to
date with the latest, with a special eye towards aligning with [Mach Engine's
Nominated Zig Versions](https://machengine.org/about/nominated-zig/). As
nightly zig versions disappear off the downloads page (and back-end server),
we can use the mirroring that the Mach Engine participates in to pull these
versions.
Building
--------
@ -50,7 +71,7 @@ Limitations
The zig 0.11 HTTP client supports TLS 1.3 only. AWS has committed to
[TLS 1.3 support across all services by the end of 2023](https://aws.amazon.com/blogs/security/faster-aws-cloud-connections-with-tls-1-3/),
but a few services as of February 28, 2024 have not been upgraded, and S3 is
but a few services as of April 1, 2024 have not been upgraded, and S3 is
a bit intermittent. Proxy support has been added, so to get to the services that
do not yet support TLS 1.3, you can use something like [mitmproxy](https://mitmproxy.org/)
to proxy those requests until roll out is complete.
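
A sketch of what that can look like from calling code, assuming the
`ClientOptions.proxy` field added on this branch (see the src/aws.zig diff
below); the `std.http.Client.Proxy` field names are my assumptions about the
0.12 stdlib, so verify them against your Zig version:

```zig
// Hypothetical: route SDK calls through a local mitmproxy instance.
var client = aws.Client.init(allocator, .{
    .proxy = .{
        .protocol = .plain, // assumed tag for a non-TLS proxy connection
        .host = "127.0.0.1",
        .port = 8080,
        .authorization = null, // assumed field
        .supports_connect = true, // assumed field
    },
});
defer client.deinit();
```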
@ -81,13 +102,13 @@ this point.
NOTE ON S3: For me, S3 is currently intermittently available using TLS 1.3, so
it appears deployments are in progress. The last couple days it has been
available consistently, so I have removed it from the list.
not been available consistently, so I have added it back to the list.
```
data.iot
models.lex
opsworks
support
s3
```
Dependency tree

build.zig (121 lines changed)

@ -1,12 +1,8 @@
const std = @import("std");
const builtin = @import("builtin");
const Builder = @import("std").build.Builder;
const Package = @import("Package.zig");
const Builder = @import("std").Build;
const models_url = "https://github.com/aws/aws-sdk-go-v2/archive/58cf6509525a12d64fd826da883bfdbacbd2f00e.tar.gz";
const models_hash: ?[]const u8 = "122017a2f3081ce83c23e0c832feb1b8b4176d507b6077f522855dc774bcf83ee315";
const models_subdir = "codegen/sdk-codegen/aws-models/"; // note will probably not work on windows
const models_dir = "p" ++ std.fs.path.sep_str ++ (models_hash orelse "") ++ std.fs.path.sep_str ++ models_subdir;
const test_targets = [_]std.zig.CrossTarget{
.{}, // native
@ -18,10 +14,12 @@ const test_targets = [_]std.zig.CrossTarget{
.cpu_arch = .aarch64,
.os_tag = .linux,
},
.{
.cpu_arch = .riscv64,
.os_tag = .linux,
},
// // The test executable just spins forever in LLVM using nominated zig 0.12 March 2024
// // This is likely an LLVM problem, unlikely to be fixed in zig 0.12
// .{
// .cpu_arch = .riscv64,
// .os_tag = .linux,
// },
.{
.cpu_arch = .arm,
.os_tag = .linux,
@ -79,22 +77,18 @@ pub fn build(b: *Builder) !void {
.optimize = optimize,
});
const smithy_module = smithy_dep.module("smithy");
exe.addModule("smithy", smithy_module); // not sure this should be here...
exe.root_module.addImport("smithy", smithy_module); // not sure this should be here...
// Expose module to others
_ = b.addModule("aws", .{
.source_file = .{ .path = "src/aws.zig" },
.dependencies = &[_]std.build.ModuleDependency{
.{ .name = "smithy", .module = smithy_module },
},
.root_source_file = .{ .path = "src/aws.zig" },
.imports = &.{.{ .name = "smithy", .module = smithy_module }},
});
// Expose module to others
_ = b.addModule("aws-signing", .{
.source_file = .{ .path = "src/aws_signing.zig" },
.dependencies = &[_]std.build.ModuleDependency{
.{ .name = "smithy", .module = smithy_module },
},
.root_source_file = .{ .path = "src/aws_signing.zig" },
.imports = &.{.{ .name = "smithy", .module = smithy_module }},
});
// TODO: This does not work correctly due to https://github.com/ziglang/zig/issues/16354
//
@ -118,9 +112,6 @@ pub fn build(b: *Builder) !void {
const run_step = b.step("run", "Run the app");
run_step.dependOn(&run_cmd.step);
const fm = b.step("fetch", "Fetch model files");
var fetch_step = FetchStep.create(b, models_url, models_hash);
fm.dependOn(&fetch_step.step);
const gen_step = blk: {
const cg = b.step("gen", "Generate zig service code from smithy models");
@ -128,21 +119,35 @@ pub fn build(b: *Builder) !void {
.name = "codegen",
.root_source_file = .{ .path = "codegen/src/main.zig" },
// We need this generated for the host, not the real target
// .target = target,
.target = b.host,
.optimize = if (b.verbose) .Debug else .ReleaseSafe,
});
cg_exe.addModule("smithy", smithy_dep.module("smithy"));
cg_exe.root_module.addImport("smithy", smithy_dep.module("smithy"));
var cg_cmd = b.addRunArtifact(cg_exe);
cg_cmd.addArg("--models");
const hash = hash_blk: {
for (b.available_deps) |dep| {
const dep_name = dep.@"0";
const dep_hash = dep.@"1";
if (std.mem.eql(u8, dep_name, "models"))
break :hash_blk dep_hash;
}
return error.DependencyNamedModelsNotFoundInBuildZigZon;
};
cg_cmd.addArg(try std.fs.path.join(
b.allocator,
&[_][]const u8{ b.global_cache_root.path.?, models_dir },
&[_][]const u8{
b.graph.global_cache_root.path.?,
"p",
hash,
models_subdir,
},
));
cg_cmd.addArg("--output");
cg_cmd.addDirectoryArg(std.Build.FileSource.relative("src/models"));
cg_cmd.addDirectoryArg(b.path("src/models"));
if (b.verbose)
cg_cmd.addArg("--verbose");
cg_cmd.step.dependOn(&fetch_step.step);
// cg_cmd.step.dependOn(&fetch_step.step);
// TODO: this should use zig_exe from std.Build
// codegen should store a hash in a comment
// this would be hash of the exe that created the file
@ -168,15 +173,29 @@ pub fn build(b: *Builder) !void {
// running the unit tests.
const test_step = b.step("test", "Run unit tests");
// // Creates a step for unit testing. This only builds the test executable
// // but does not run it.
// const unit_tests = b.addTest(.{
// .root_source_file = .{ .path = "src/aws.zig" },
// .target = target,
// .optimize = optimize,
// });
// unit_tests.root_module.addImport("smithy", smithy_dep.module("smithy"));
// unit_tests.step.dependOn(gen_step);
//
// const run_unit_tests = b.addRunArtifact(unit_tests);
// run_unit_tests.skip_foreign_checks = true;
// test_step.dependOn(&run_unit_tests.step);
for (test_targets) |t| {
// Creates a step for unit testing. This only builds the test executable
// but does not run it.
const unit_tests = b.addTest(.{
.root_source_file = .{ .path = "src/aws.zig" },
.target = t,
.target = b.resolveTargetQuery(t),
.optimize = optimize,
});
unit_tests.addModule("smithy", smithy_dep.module("smithy"));
unit_tests.root_module.addImport("smithy", smithy_dep.module("smithy"));
unit_tests.step.dependOn(gen_step);
const run_unit_tests = b.addRunArtifact(unit_tests);
@ -186,49 +205,3 @@ pub fn build(b: *Builder) !void {
}
b.installArtifact(exe);
}
const FetchStep = struct {
step: std.Build.Step,
url: []const u8,
hash: ?[]const u8,
pub fn create(owner: *std.Build, url: []const u8, hash: ?[]const u8) *FetchStep {
const fs = owner.allocator.create(FetchStep) catch @panic("OOM");
fs.* = .{
.step = std.Build.Step.init(.{
.id = .custom,
.name = "FetchStep",
.owner = owner,
.makeFn = make,
}),
.url = url,
.hash = hash,
};
return fs;
}
fn make(step: *std.Build.Step, prog_node: *std.Progress.Node) !void {
const b = step.owner;
const self = @fieldParentPtr(FetchStep, "step", step);
const alloc = b.allocator;
var http_client: std.http.Client = .{ .allocator = alloc };
defer http_client.deinit();
var thread_pool: std.Thread.Pool = undefined;
try thread_pool.init(.{ .allocator = alloc });
defer thread_pool.deinit();
const pkg = try Package.fetchAndUnpack(
&thread_pool,
&http_client,
b.global_cache_root,
.{
.url = self.url,
.hash = self.hash,
},
self.url,
prog_node,
);
defer alloc.destroy(pkg);
defer pkg.deinit();
}
};


@ -1,11 +1,16 @@
.{
.name = "aws-zig",
.version = "0.0.1",
.paths = .{""},
.dependencies = .{
.smithy = .{
.url = "https://git.lerch.org/lobo/smithy/archive/d6b6331defdfd33f36258caf26b0b82ce794cd28.tar.gz",
.hash = "1220695f5be11b7bd714f6181c60b0e590da5da7411de111ca51cacf1ea4a8169669",
.url = "https://git.lerch.org/lobo/smithy/archive/1e534201c4df5ea4f615faeedc69d414adbec0b1.tar.gz",
.hash = "1220af63ae0498010004af79936cedf3fe6702f516daab77ebbd97a274eba1b42aad",
},
.models = .{
.url = "https://github.com/aws/aws-sdk-go-v2/archive/58cf6509525a12d64fd826da883bfdbacbd2f00e.tar.gz",
.hash = "122017a2f3081ce83c23e0c832feb1b8b4176d507b6077f522855dc774bcf83ee315",
},
},
}


@ -77,13 +77,13 @@ pub fn hex64(x: u64) [16]u8 {
return result;
}
pub const walkerFn = *const fn (std.fs.IterableDir.Walker.WalkerEntry) bool;
pub const walkerFn = *const fn (std.fs.Dir.Walker.Entry) bool;
fn included(entry: std.fs.IterableDir.Walker.WalkerEntry) bool {
fn included(entry: std.fs.Dir.Walker.Entry) bool {
_ = entry;
return true;
}
fn excluded(entry: std.fs.IterableDir.Walker.WalkerEntry) bool {
fn excluded(entry: std.fs.Dir.Walker.Entry) bool {
_ = entry;
return false;
}
@ -96,7 +96,7 @@ pub const ComputeDirectoryOptions = struct {
pub fn computeDirectoryHash(
thread_pool: *std.Thread.Pool,
dir: std.fs.IterableDir,
dir: std.fs.Dir,
options: *ComputeDirectoryOptions,
) ![Hash.digest_length]u8 {
const gpa = thread_pool.allocator;
@ -138,7 +138,7 @@ pub fn computeDirectoryHash(
.failure = undefined, // to be populated by the worker
};
wait_group.start();
try thread_pool.spawn(workerHashFile, .{ dir.dir, hashed_file, &wait_group });
try thread_pool.spawn(workerHashFile, .{ dir, hashed_file, &wait_group });
try all_files.append(hashed_file);
}
@ -206,6 +206,6 @@ fn isExecutable(file: std.fs.File) !bool {
return false;
} else {
const stat = try file.stat();
return (stat.mode & std.os.S.IXUSR) != 0;
return (stat.mode & std.posix.S.IXUSR) != 0;
}
}


@ -17,7 +17,7 @@ pub fn main() anyerror!void {
var output_dir = std.fs.cwd();
defer if (output_dir.fd > 0) output_dir.close();
var models_dir: ?std.fs.IterableDir = null;
var models_dir: ?std.fs.Dir = null;
defer if (models_dir) |*m| m.close();
for (args, 0..) |arg, i| {
if (std.mem.eql(u8, "--help", arg) or
@ -31,7 +31,7 @@ pub fn main() anyerror!void {
if (std.mem.eql(u8, "--output", arg))
output_dir = try output_dir.makeOpenPath(args[i + 1], .{});
if (std.mem.eql(u8, "--models", arg))
models_dir = try std.fs.cwd().openIterableDir(args[i + 1], .{});
models_dir = try std.fs.cwd().openDir(args[i + 1], .{ .iterate = true });
}
// TODO: Seems like we should remove this in favor of a package
try output_dir.writeFile("json.zig", json_zig);
@ -75,7 +75,7 @@ pub fn main() anyerror!void {
defer cwd.close();
defer cwd.setAsCwd() catch unreachable;
try m.dir.setAsCwd();
try m.setAsCwd();
try processDirectories(m, output_dir);
}
}
@ -87,7 +87,7 @@ const OutputManifest = struct {
model_dir_hash_digest: [Hasher.hex_multihash_len]u8,
output_dir_hash_digest: [Hasher.hex_multihash_len]u8,
};
fn processDirectories(models_dir: std.fs.IterableDir, output_dir: std.fs.Dir) !void {
fn processDirectories(models_dir: std.fs.Dir, output_dir: std.fs.Dir) !void {
// Let's get ready to hash!!
var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
defer arena.deinit();
@ -131,15 +131,15 @@ fn processDirectories(models_dir: std.fs.IterableDir, output_dir: std.fs.Dir) !v
}
var model_digest: ?[Hasher.hex_multihash_len]u8 = null;
fn calculateDigests(models_dir: std.fs.IterableDir, output_dir: std.fs.Dir, thread_pool: *std.Thread.Pool) !OutputManifest {
fn calculateDigests(models_dir: std.fs.Dir, output_dir: std.fs.Dir, thread_pool: *std.Thread.Pool) !OutputManifest {
const model_hash = if (model_digest) |m| m[0..Hasher.digest_len].* else try Hasher.computeDirectoryHash(thread_pool, models_dir, @constCast(&Hasher.ComputeDirectoryOptions{
.isIncluded = struct {
pub fn include(entry: std.fs.IterableDir.Walker.WalkerEntry) bool {
pub fn include(entry: std.fs.Dir.Walker.Entry) bool {
return std.mem.endsWith(u8, entry.basename, ".json");
}
}.include,
.isExcluded = struct {
pub fn exclude(entry: std.fs.IterableDir.Walker.WalkerEntry) bool {
pub fn exclude(entry: std.fs.Dir.Walker.Entry) bool {
_ = entry;
return false;
}
@ -148,14 +148,14 @@ fn calculateDigests(models_dir: std.fs.IterableDir, output_dir: std.fs.Dir, thre
}));
if (verbose) std.log.info("Model directory hash: {s}", .{model_digest orelse Hasher.hexDigest(model_hash)});
const output_hash = try Hasher.computeDirectoryHash(thread_pool, try output_dir.openIterableDir(".", .{}), @constCast(&Hasher.ComputeDirectoryOptions{
const output_hash = try Hasher.computeDirectoryHash(thread_pool, try output_dir.openDir(".", .{ .iterate = true }), @constCast(&Hasher.ComputeDirectoryOptions{
.isIncluded = struct {
pub fn include(entry: std.fs.IterableDir.Walker.WalkerEntry) bool {
pub fn include(entry: std.fs.Dir.Walker.Entry) bool {
return std.mem.endsWith(u8, entry.basename, ".zig");
}
}.include,
.isExcluded = struct {
pub fn exclude(entry: std.fs.IterableDir.Walker.WalkerEntry) bool {
pub fn exclude(entry: std.fs.Dir.Walker.Entry) bool {
_ = entry;
return false;
}


@ -30,12 +30,13 @@ pub fn build(b: *std.Build) void {
// .optimize = optimize,
// });
// exe.addModule("smithy", smithy_dep.module("smithy"));
const aws_dep = b.dependency("aws", .{
const aws_dep = b.dependency("aws-zig", .{
// These are the two arguments to the dependency. It expects a target and optimization level.
.target = target,
.optimize = optimize,
});
exe.addModule("aws", aws_dep.module("aws"));
const aws_module = aws_dep.module("aws");
exe.root_module.addImport("aws", aws_module);
// This declares intent for the executable to be installed into the
// standard location when the user invokes the "install" step (the default
// step when running `zig build`).


@ -1,15 +1,16 @@
.{
.name = "myapp",
.version = "0.0.1",
.paths = .{""},
.dependencies = .{
.aws = .{
.url = "https://git.lerch.org/api/packages/lobo/generic/aws-sdk-with-models/d08d0f338fb86f7d679a998ff4f65f4e2d0db595/d08d0f338fb86f7d679a998ff4f65f4e2d0db595-with-models.tar.gz",
.hash = "1220c8871d93592680ea2dcc88cc52fb4f0effabeed0584d2a5c54f93825741b7c66",
},
.smithy = .{
.url = "https://git.lerch.org/lobo/smithy/archive/41b61745d25a65817209dd5dddbb5f9b66896a99.tar.gz",
.hash = "122087deb0ae309b2258d59b40d82fe5921fdfc35b420bb59033244851f7f276fa34",
.url = "https://git.lerch.org/lobo/smithy/archive/1e534201c4df5ea4f615faeedc69d414adbec0b1.tar.gz",
.hash = "1220af63ae0498010004af79936cedf3fe6702f516daab77ebbd97a274eba1b42aad",
},
.@"aws-zig" = .{
.url = "https://git.lerch.org/api/packages/lobo/generic/aws-sdk-with-models/b1b2a6cc7a6104f5e1f6ee4cae33e6ef3ed2ec1c/b1b2a6cc7a6104f5e1f6ee4cae33e6ef3ed2ec1c-with-models.tar.gz",
.hash = "12203d7c7aea80fd5e1f892b4928b89b2c70d5688cf1843e0d03e5af79632c5a9146",
},
},
}


@ -1,14 +1,14 @@
const std = @import("std");
const aws = @import("aws");
pub const std_options = struct {
pub const log_level: std.log.Level = .info;
pub const std_options: std.Options = .{
.log_level = .info,
// usually log_level is enough, but log_scope_levels can be used
// for finer grained control
pub const log_scope_levels = &[_]std.log.ScopeLevel{
.log_scope_levels = &[_]std.log.ScopeLevel{
.{ .scope = .awshttp, .level = .warn },
};
},
};
pub fn main() anyerror!void {


@ -1,3 +1,4 @@
const builtin = @import("builtin");
const std = @import("std");
const awshttp = @import("aws_http.zig");
@ -31,7 +32,7 @@ pub const services = servicemodel.services;
pub const Services = servicemodel.Services;
pub const ClientOptions = struct {
proxy: ?std.http.Client.HttpProxy = null,
proxy: ?std.http.Client.Proxy = null,
};
pub const Client = struct {
allocator: std.mem.Allocator,
@ -226,7 +227,7 @@ pub fn Request(comptime request_action: anytype) type {
defer buffer.deinit();
const writer = buffer.writer();
try url.encode(options.client.allocator, request, writer, .{
.field_name_transformer = &queryFieldTransformer,
.field_name_transformer = queryFieldTransformer,
});
const continuation = if (buffer.items.len > 0) "&" else "";
@ -734,7 +735,7 @@ fn headersFor(allocator: std.mem.Allocator, request: anytype) ![]awshttp.Header
return headers.toOwnedSlice();
}
fn freeHeadersFor(allocator: std.mem.Allocator, request: anytype, headers: []awshttp.Header) void {
fn freeHeadersFor(allocator: std.mem.Allocator, request: anytype, headers: []const awshttp.Header) void {
if (!@hasDecl(@TypeOf(request), "http_header")) return;
const http_header = @TypeOf(request).http_header;
const fields = std.meta.fields(@TypeOf(http_header));
@ -761,7 +762,7 @@ fn firstJsonKey(data: []const u8) []const u8 {
log.debug("First json key: {s}", .{key});
return key;
}
fn isJsonResponse(headers: []awshttp.Header) !bool {
fn isJsonResponse(headers: []const awshttp.Header) !bool {
// EC2 ignores our accept type, but technically query protocol only
// returns XML as well. So, we'll ignore the protocol here and just
// look at the return type
@ -826,7 +827,7 @@ fn ServerResponse(comptime action: anytype) type {
};
const Result = @Type(.{
.Struct = .{
.layout = .Auto,
.layout = .auto,
.fields = &[_]std.builtin.Type.StructField{
.{
.name = action.action_name ++ "Result",
@ -849,7 +850,7 @@ fn ServerResponse(comptime action: anytype) type {
});
return @Type(.{
.Struct = .{
.layout = .Auto,
.layout = .auto,
.fields = &[_]std.builtin.Type.StructField{
.{
.name = action.action_name ++ "Response",
@ -919,8 +920,7 @@ fn safeFree(allocator: std.mem.Allocator, obj: anytype) void {
else => {},
}
}
fn queryFieldTransformer(allocator: std.mem.Allocator, field_name: []const u8, options: url.EncodingOptions) anyerror![]const u8 {
_ = options;
fn queryFieldTransformer(allocator: std.mem.Allocator, field_name: []const u8) anyerror![]const u8 {
return try case.snakeToPascal(allocator, field_name);
}
@ -1363,16 +1363,17 @@ test {
}
const TestOptions = struct {
allocator: std.mem.Allocator,
arena: ?*std.heap.ArenaAllocator = null,
server_port: ?u16 = null,
server_remaining_requests: usize = 1,
server_response: []const u8 = "unset",
server_response_status: std.http.Status = .ok,
server_response_headers: [][2][]const u8 = &[_][2][]const u8{},
server_response_headers: []const std.http.Header = &.{},
server_response_transfer_encoding: ?std.http.TransferEncoding = null,
request_body: []u8 = "",
request_method: std.http.Method = undefined,
request_target: []const u8 = undefined,
request_headers: *std.http.Headers = undefined,
request_headers: []std.http.Header = undefined,
test_server_runtime_uri: ?[]u8 = null,
server_ready: bool = false,
requests_processed: usize = 0,
@ -1380,7 +1381,7 @@ const TestOptions = struct {
const Self = @This();
fn expectHeader(self: *Self, name: []const u8, value: []const u8) !void {
for (self.request_headers.list.items) |h|
for (self.request_headers) |h|
if (std.ascii.eqlIgnoreCase(name, h.name) and
std.mem.eql(u8, value, h.value)) return;
return error.HeaderOrValueNotFound;
@ -1391,17 +1392,6 @@ const TestOptions = struct {
while (!self.server_ready)
std.time.sleep(100);
}
fn deinit(self: Self) void {
if (self.requests_processed > 0) {
self.allocator.free(self.request_body);
self.allocator.free(self.request_target);
self.request_headers.deinit();
self.allocator.destroy(self.request_headers);
}
if (self.test_server_runtime_uri) |_|
self.allocator.free(self.test_server_runtime_uri.?);
}
};
/// This starts a test server. We're not testing the server itself,
@ -1409,16 +1399,19 @@ const TestOptions = struct {
/// whole thing so we can just deallocate everything at once at the end,
/// leaks be damned
fn threadMain(options: *TestOptions) !void {
var server = std.http.Server.init(options.allocator, .{ .reuse_address = true });
// defer server.deinit();
// https://github.com/ziglang/zig/blob/d2be725e4b14c33dbd39054e33d926913eee3cd4/lib/compiler/std-docs.zig#L22-L54
options.arena = try options.allocator.create(std.heap.ArenaAllocator);
options.arena.?.* = std.heap.ArenaAllocator.init(options.allocator);
const allocator = options.arena.?.allocator();
options.allocator = allocator;
const address = try std.net.Address.parseIp("127.0.0.1", 0);
try server.listen(address);
options.server_port = server.socket.listen_address.in.getPort();
var http_server = try address.listen(.{});
options.server_port = http_server.listen_address.in.getPort();
// TODO: remove
options.test_server_runtime_uri = try std.fmt.allocPrint(options.allocator, "http://127.0.0.1:{d}", .{options.server_port.?});
log.debug("server listening at {s}", .{options.test_server_runtime_uri.?});
defer server.deinit();
log.info("starting server thread, tid {d}", .{std.Thread.getCurrentId()});
// var arena = std.heap.ArenaAllocator.init(options.allocator);
// defer arena.deinit();
@ -1427,7 +1420,7 @@ fn threadMain(options: *TestOptions) !void {
// when it's time to shut down
while (options.server_remaining_requests > 0) {
options.server_remaining_requests -= 1;
processRequest(options, &server) catch |e| {
processRequest(options, &http_server) catch |e| {
log.err("Unexpected error processing request: {any}", .{e});
if (@errorReturnTrace()) |trace| {
std.debug.dumpStackTrace(trace.*);
@ -1436,76 +1429,63 @@ fn threadMain(options: *TestOptions) !void {
}
}
fn processRequest(options: *TestOptions, server: *std.http.Server) !void {
fn processRequest(options: *TestOptions, net_server: *std.net.Server) !void {
options.server_ready = true;
errdefer options.server_ready = false;
log.debug(
"tid {d} (server): server waiting to accept. requests remaining: {d}",
.{ std.Thread.getCurrentId(), options.server_remaining_requests + 1 },
);
var res = try server.accept(.{ .allocator = options.allocator });
options.server_ready = false;
defer res.deinit();
defer if (res.headers.owned and res.headers.list.items.len > 0) res.headers.deinit();
defer _ = res.reset();
try res.wait(); // wait for client to send a complete request head
var connection = try net_server.accept();
defer connection.stream.close();
var read_buffer: [1024 * 16]u8 = undefined;
var http_server = std.http.Server.init(connection, &read_buffer);
while (http_server.state == .ready) {
var request = http_server.receiveHead() catch |err| switch (err) {
error.HttpConnectionClosing => return,
else => {
std.log.err("closing http connection: {s}", .{@errorName(err)});
std.log.debug("Error occurred from this request: \n{s}", .{read_buffer[0..http_server.read_buffer_len]});
return;
},
};
try serveRequest(options, &request);
}
}
const errstr = "Internal Server Error\n";
var errbuf: [errstr.len]u8 = undefined;
@memcpy(&errbuf, errstr);
var response_bytes: []const u8 = errbuf[0..];
fn serveRequest(options: *TestOptions, request: *std.http.Server.Request) !void {
options.server_ready = false;
options.requests_processed += 1;
if (res.request.content_length) |l|
options.request_body = try res.reader().readAllAlloc(options.allocator, @as(usize, @intCast(l)))
else
options.request_body = try options.allocator.dupe(u8, "");
options.request_method = res.request.method;
options.request_target = try options.allocator.dupe(u8, res.request.target);
options.request_headers = try options.allocator.create(std.http.Headers);
options.request_headers.allocator = options.allocator;
options.request_headers.list = .{};
options.request_headers.index = .{};
options.request_headers.owned = true;
for (res.request.headers.list.items) |f|
try options.request_headers.append(f.name, f.value);
options.request_body = try (try request.reader()).readAllAlloc(options.allocator, std.math.maxInt(usize));
options.request_method = request.head.method;
options.request_target = try options.allocator.dupe(u8, request.head.target);
var req_headers = std.ArrayList(std.http.Header).init(options.allocator);
defer req_headers.deinit();
var it = request.iterateHeaders();
while (it.next()) |f| {
// dupe into the test allocator; appending by value, so no separate create (which would leak) is needed
try req_headers.append(.{ .name = try options.allocator.dupe(u8, f.name), .value = try options.allocator.dupe(u8, f.value) });
}
options.request_headers = try req_headers.toOwnedSlice();
log.debug(
"tid {d} (server): {d} bytes read from request",
.{ std.Thread.getCurrentId(), options.request_body.len },
);
// try response.headers.append("content-type", "text/plain");
response_bytes = serve(options, &res) catch |e| brk: {
res.status = .internal_server_error;
// TODO: more about this particular request
log.err("Unexpected error from executor processing request: {any}", .{e});
if (@errorReturnTrace()) |trace| {
std.debug.dumpStackTrace(trace.*);
}
break :brk "Unexpected error generating request to lambda";
};
if (options.server_response_transfer_encoding == null)
res.transfer_encoding = .{ .content_length = response_bytes.len }
else
res.transfer_encoding = .chunked;
try request.respond(options.server_response, .{
.status = options.server_response_status,
.extra_headers = options.server_response_headers,
});
try res.do();
_ = try res.writer().writeAll(response_bytes);
try res.finish();
log.debug(
"tid {d} (server): sent response",
.{std.Thread.getCurrentId()},
);
}
fn serve(options: *TestOptions, res: *std.http.Server.Response) ![]const u8 {
res.status = options.server_response_status;
for (options.server_response_headers) |h|
try res.headers.append(h[0], h[1]);
// try res.headers.append("content-length", try std.fmt.allocPrint(allocator, "{d}", .{server_response.len}));
return options.server_response;
}
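For anyone tracking the 0.11 → 0.12 migration in this hunk: the http server no longer owns the socket. The harness accepts a std.net.Server connection itself, hands it to std.http.Server.init together with a caller-owned read buffer, then pulls requests with receiveHead and answers with respond. A minimal sketch of that pattern; the buffer size, function name, and response body are illustrative only:
const std = @import("std");
// Minimal sketch of the Zig 0.12 server-side pattern shown above.
fn serveConnection(net_server: *std.net.Server) !void {
    var connection = try net_server.accept();
    defer connection.stream.close();
    var read_buffer: [16 * 1024]u8 = undefined; // arbitrary size for the sketch
    var http_server = std.http.Server.init(connection, &read_buffer);
    while (http_server.state == .ready) {
        var request = http_server.receiveHead() catch |err| switch (err) {
            error.HttpConnectionClosing => return, // client closed; nothing left to do
            else => return err,
        };
        try request.respond("ok", .{ .status = .ok });
    }
}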
////////////////////////////////////////////////////////////////////////
// These will replicate the tests that were in src/main.zig
// The server_response and server_response_headers come from logs of
@ -1527,10 +1507,10 @@ const TestSetup = struct {
const signing_time =
date.dateTimeToTimestamp(date.parseIso8601ToDateTime("20230908T170252Z") catch @compileError("Cannot parse date")) catch @compileError("Cannot parse date");
fn init(allocator: std.mem.Allocator, options: TestOptions) Self {
fn init(options: TestOptions) Self {
return .{
.allocator = allocator,
.request_options = options,
.allocator = options.allocator,
};
}
@ -1542,7 +1522,10 @@ const TestSetup = struct {
);
self.started = true;
try self.request_options.waitForReady();
// Not sure why we're getting sprayed here, but we have an arena allocator, and this
// is testing, so yolo
awshttp.endpoint_override = self.request_options.test_server_runtime_uri;
log.debug("endpoint override set to {?s}", .{awshttp.endpoint_override});
self.creds = aws_auth.Credentials.init(
self.allocator,
try self.allocator.dupe(u8, "ACCESS"),
@ -1563,9 +1546,11 @@ const TestSetup = struct {
self.server_thread.join();
}
fn deinit(self: Self) void {
self.request_options.deinit();
fn deinit(self: *Self) void {
if (self.request_options.arena) |a| {
a.deinit();
self.allocator.destroy(a);
}
if (!self.started) return;
awshttp.endpoint_override = null;
// creds.deinit(); Creds will get deinited in the course of the call. We don't want to do it twice
@ -1576,15 +1561,15 @@ const TestSetup = struct {
test "query_no_input: sts getCallerIdentity comptime" {
const allocator = std.testing.allocator;
var test_harness = TestSetup.init(allocator, .{
var test_harness = TestSetup.init(.{
.allocator = allocator,
.server_response =
\\{"GetCallerIdentityResponse":{"GetCallerIdentityResult":{"Account":"123456789012","Arn":"arn:aws:iam::123456789012:user/admin","UserId":"AIDAYAM4POHXHRVANDQBQ"},"ResponseMetadata":{"RequestId":"8f0d54da-1230-40f7-b4ac-95015c4b84cd"}}}
,
.server_response_headers = @constCast(&[_][2][]const u8{
.{ "Content-Type", "application/json" },
.{ "x-amzn-RequestId", "8f0d54da-1230-40f7-b4ac-95015c4b84cd" },
}),
.server_response_headers = &.{
.{ .name = "Content-Type", .value = "application/json" },
.{ .name = "x-amzn-RequestId", .value = "8f0d54da-1230-40f7-b4ac-95015c4b84cd" },
},
});
defer test_harness.deinit();
const options = try test_harness.start();
@ -1611,7 +1596,7 @@ test "query_no_input: sts getCallerIdentity comptime" {
test "query_with_input: sts getAccessKeyInfo runtime" {
// sqs switched from query to json in aws sdk for go v2 commit f5a08768ef820ff5efd62a49ba50c61c9ca5dbcb
const allocator = std.testing.allocator;
var test_harness = TestSetup.init(allocator, .{
var test_harness = TestSetup.init(.{
.allocator = allocator,
.server_response =
\\<GetAccessKeyInfoResponse xmlns="https://sts.amazonaws.com/doc/2011-06-15/">
@ -1623,10 +1608,10 @@ test "query_with_input: sts getAccessKeyInfo runtime" {
\\ </ResponseMetadata>
\\</GetAccessKeyInfoResponse>
,
.server_response_headers = @constCast(&[_][2][]const u8{
.{ "Content-Type", "text/xml" },
.{ "x-amzn-RequestId", "ec85bf29-1ef0-459a-930e-6446dd14a286" },
}),
.server_response_headers = &.{
.{ .name = "Content-Type", .value = "text/xml" },
.{ .name = "x-amzn-RequestId", .value = "ec85bf29-1ef0-459a-930e-6446dd14a286" },
},
});
defer test_harness.deinit();
const options = try test_harness.start();
@ -1649,15 +1634,15 @@ test "query_with_input: sts getAccessKeyInfo runtime" {
}
test "json_1_0_query_with_input: dynamodb listTables runtime" {
const allocator = std.testing.allocator;
var test_harness = TestSetup.init(allocator, .{
var test_harness = TestSetup.init(.{
.allocator = allocator,
.server_response =
\\{"LastEvaluatedTableName":"Customer","TableNames":["Customer"]}
,
.server_response_headers = @constCast(&[_][2][]const u8{
.{ "Content-Type", "application/json" },
.{ "x-amzn-RequestId", "QBI72OUIN8U9M9AG6PCSADJL4JVV4KQNSO5AEMVJF66Q9ASUAAJG" },
}),
.server_response_headers = &.{
.{ .name = "Content-Type", .value = "application/json" },
.{ .name = "x-amzn-RequestId", .value = "QBI72OUIN8U9M9AG6PCSADJL4JVV4KQNSO5AEMVJF66Q9ASUAAJG" },
},
});
defer test_harness.deinit();
const options = try test_harness.start();
@ -1685,15 +1670,15 @@ test "json_1_0_query_with_input: dynamodb listTables runtime" {
test "json_1_0_query_no_input: dynamodb listTables runtime" {
const allocator = std.testing.allocator;
var test_harness = TestSetup.init(allocator, .{
var test_harness = TestSetup.init(.{
.allocator = allocator,
.server_response =
\\{"AccountMaxReadCapacityUnits":80000,"AccountMaxWriteCapacityUnits":80000,"TableMaxReadCapacityUnits":40000,"TableMaxWriteCapacityUnits":40000}
,
.server_response_headers = @constCast(&[_][2][]const u8{
.{ "Content-Type", "application/json" },
.{ "x-amzn-RequestId", "QBI72OUIN8U9M9AG6PCSADJL4JVV4KQNSO5AEMVJF66Q9ASUAAJG" },
}),
.server_response_headers = &.{
.{ .name = "Content-Type", .value = "application/json" },
.{ .name = "x-amzn-RequestId", .value = "QBI72OUIN8U9M9AG6PCSADJL4JVV4KQNSO5AEMVJF66Q9ASUAAJG" },
},
});
defer test_harness.deinit();
const options = try test_harness.start();
@ -1714,15 +1699,15 @@ test "json_1_0_query_no_input: dynamodb listTables runtime" {
}
test "json_1_1_query_with_input: ecs listClusters runtime" {
const allocator = std.testing.allocator;
var test_harness = TestSetup.init(allocator, .{
var test_harness = TestSetup.init(.{
.allocator = allocator,
.server_response =
\\{"clusterArns":["arn:aws:ecs:us-west-2:550620852718:cluster/web-applicationehjaf-cluster"],"nextToken":"czE0Og=="}
,
.server_response_headers = @constCast(&[_][2][]const u8{
.{ "Content-Type", "application/json" },
.{ "x-amzn-RequestId", "b2420066-ff67-4237-b782-721c4df60744" },
}),
.server_response_headers = &.{
.{ .name = "Content-Type", .value = "application/json" },
.{ .name = "x-amzn-RequestId", .value = "b2420066-ff67-4237-b782-721c4df60744" },
},
});
defer test_harness.deinit();
const options = try test_harness.start();
@ -1748,16 +1733,19 @@ test "json_1_1_query_with_input: ecs listClusters runtime" {
try std.testing.expectEqualStrings("arn:aws:ecs:us-west-2:550620852718:cluster/web-applicationehjaf-cluster", call.response.cluster_arns.?[0]);
}
test "json_1_1_query_no_input: ecs listClusters runtime" {
// const old = std.testing.log_level;
// defer std.testing.log_level = old;
// std.testing.log_level = .debug;
const allocator = std.testing.allocator;
var test_harness = TestSetup.init(allocator, .{
var test_harness = TestSetup.init(.{
.allocator = allocator,
.server_response =
\\{"clusterArns":["arn:aws:ecs:us-west-2:550620852718:cluster/web-applicationehjaf-cluster"],"nextToken":"czE0Og=="}
,
.server_response_headers = @constCast(&[_][2][]const u8{
.{ "Content-Type", "application/json" },
.{ "x-amzn-RequestId", "e65322b2-0065-45f2-ba37-f822bb5ce395" },
}),
.server_response_headers = &.{
.{ .name = "Content-Type", .value = "application/json" },
.{ .name = "x-amzn-RequestId", .value = "e65322b2-0065-45f2-ba37-f822bb5ce395" },
},
});
defer test_harness.deinit();
const options = try test_harness.start();
@ -1782,15 +1770,15 @@ test "json_1_1_query_no_input: ecs listClusters runtime" {
}
test "rest_json_1_query_with_input: lambda listFunctions runtime" {
const allocator = std.testing.allocator;
var test_harness = TestSetup.init(allocator, .{
var test_harness = TestSetup.init(.{
.allocator = allocator,
.server_response =
\\{"Functions":[{"Description":"AWS CDK resource provider framework - onEvent (DevelopmentFrontendStack-g650u/com.amazonaws.cdk.custom-resources.amplify-asset-deployment-provider/amplify-asset-deployment-handler-provider)","TracingConfig":{"Mode":"PassThrough"},"VpcConfig":null,"SigningJobArn":null,"SnapStart":{"OptimizationStatus":"Off","ApplyOn":"None"},"RevisionId":"0c62fc74-a692-403d-9206-5fcbad406424","LastModified":"2023-03-01T18:13:15.704+0000","FileSystemConfigs":null,"FunctionName":"DevelopmentFrontendStack--amplifyassetdeploymentha-aZqB9IbZLIKU","Runtime":"nodejs14.x","Version":"$LATEST","PackageType":"Zip","LastUpdateStatus":null,"Layers":null,"FunctionArn":"arn:aws:lambda:us-west-2:550620852718:function:DevelopmentFrontendStack--amplifyassetdeploymentha-aZqB9IbZLIKU","KMSKeyArn":null,"MemorySize":128,"ImageConfigResponse":null,"LastUpdateStatusReason":null,"DeadLetterConfig":null,"Timeout":900,"Handler":"framework.onEvent","CodeSha256":"m4tt+M0l3p8bZvxIDj83dwGrwRW6atCfS/q8AiXCD3o=","Role":"arn:aws:iam::550620852718:role/DevelopmentFrontendStack-amplifyassetdeploymentha-1782JF7WAPXZ3","SigningProfileVersionArn":null,"MasterArn":null,"RuntimeVersionConfig":null,"CodeSize":4307,"State":null,"StateReason":null,"Environment":{"Variables":{"USER_ON_EVENT_FUNCTION_ARN":"arn:aws:lambda:us-west-2:550620852718:function:DevelopmentFrontendStack--amplifyassetdeploymenton-X9iZJSCSPYDH","WAITER_STATE_MACHINE_ARN":"arn:aws:states:us-west-2:550620852718:stateMachine:amplifyassetdeploymenthandlerproviderwaiterstatemachineB3C2FCBE-Ltggp5wBcHWO","USER_IS_COMPLETE_FUNCTION_ARN":"arn:aws:lambda:us-west-2:550620852718:function:DevelopmentFrontendStack--amplifyassetdeploymentis-jaHopLrSSARV"},"Error":null},"EphemeralStorage":{"Size":512},"StateReasonCode":null,"LastUpdateStatusReasonCode":null,"Architectures":["x86_64"]}],"NextMarker":"lslTXFcbLQKkb0vP9Kgh5hUL7C3VghELNGbWgZfxrRCk3eiDRMkct7D8EmptWfHSXssPdS7Bo66iQPTMpVOHZgANewpgGgFGGr4pVjd6VgLUO6qPe2EMAuNDBjUTxm8z6N28yhlUwEmKbrAV/m0k5qVzizwoxFwvyruMbuMx9kADFACSslcabxXl3/jDI4rfFnIsUVdzTLBgPF1hzwrE1f3lcdkBvUp+QgY+Pn3w5QuJmwsp/di8COzFemY89GgOHbLNqsrBsgR/ee2eXoJp0ZkKM4EcBK3HokqBzefLfgR02PnfNOdXwqTlhkSPW0TKiKGIYu3Bw7lSNrLd+q3+wEr7ZakqOQf0BVo3FMRhMHlVYgwUJzwi3ActyH2q6fuqGG1sS0B8Oa/prUpe5fmp3VaA3WpazioeHtrKF78JwCi6/nfQsrj/8ZtXGQOxlwEgvT1CIUaF+CdHY3biezrK0tRZNpkCtHnkPtF9lq2U7+UiKXSW9yzxT8P2b0M/Qh4IVdnw4rncQK/doYriAeOdrs1wjMEJnHWq9lAaEyipoxYcVr/z5+yaC6Gwxdg45p9X1vIAaYMf6IZxyFuua43SYi0Ls+IBk4VvpR2io7T0dCxHAr3WAo3D2dm0y8OsbM59"}
,
.server_response_headers = @constCast(&[_][2][]const u8{
.{ "Content-Type", "application/json" },
.{ "x-amzn-RequestId", "c4025199-226f-4a16-bb1f-48618e9d2ea6" },
}),
.server_response_headers = &.{
.{ .name = "Content-Type", .value = "application/json" },
.{ .name = "x-amzn-RequestId", .value = "c4025199-226f-4a16-bb1f-48618e9d2ea6" },
},
});
defer test_harness.deinit();
const options = try test_harness.start();
@ -1816,13 +1804,13 @@ test "rest_json_1_query_with_input: lambda listFunctions runtime" {
}
test "rest_json_1_query_no_input: lambda listFunctions runtime" {
const allocator = std.testing.allocator;
var test_harness = TestSetup.init(allocator, .{
var test_harness = TestSetup.init(.{
.allocator = allocator,
.server_response = @embedFile("test_rest_json_1_query_no_input.response"),
.server_response_headers = @constCast(&[_][2][]const u8{
.{ "Content-Type", "application/json" },
.{ "x-amzn-RequestId", "b2aad11f-36fc-4d0d-ae92-fe0167fb0f40" },
}),
.server_response_headers = &.{
.{ .name = "Content-Type", .value = "application/json" },
.{ .name = "x-amzn-RequestId", .value = "b2aad11f-36fc-4d0d-ae92-fe0167fb0f40" },
},
});
defer test_harness.deinit();
const options = try test_harness.start();
@ -1850,14 +1838,14 @@ test "rest_json_1_query_no_input: lambda listFunctions runtime" {
}
test "rest_json_1_work_with_lambda: lambda tagResource (only), to excercise zig issue 17015" {
const allocator = std.testing.allocator;
var test_harness = TestSetup.init(allocator, .{
var test_harness = TestSetup.init(.{
.allocator = allocator,
.server_response = "",
.server_response_status = .no_content,
.server_response_headers = @constCast(&[_][2][]const u8{
.{ "Content-Type", "application/json" },
.{ "x-amzn-RequestId", "a521e152-6e32-4e67-9fb3-abc94e34551b" },
}),
.server_response_headers = &.{
.{ .name = "Content-Type", .value = "application/json" },
.{ .name = "x-amzn-RequestId", .value = "a521e152-6e32-4e67-9fb3-abc94e34551b" },
},
});
defer test_harness.deinit();
const options = try test_harness.start();
@ -1886,13 +1874,13 @@ test "rest_json_1_work_with_lambda: lambda tagResource (only), to excercise zig
}
test "ec2_query_no_input: EC2 describe regions" {
const allocator = std.testing.allocator;
var test_harness = TestSetup.init(allocator, .{
var test_harness = TestSetup.init(.{
.allocator = allocator,
.server_response = @embedFile("test_ec2_query_no_input.response"),
.server_response_headers = @constCast(&[_][2][]const u8{
.{ "Content-Type", "text/xml;charset=UTF-8" },
.{ "x-amzn-RequestId", "4cdbdd69-800c-49b5-8474-ae4c17709782" },
}),
.server_response_headers = &.{
.{ .name = "Content-Type", .value = "text/xml;charset=UTF-8" },
.{ .name = "x-amzn-RequestId", .value = "4cdbdd69-800c-49b5-8474-ae4c17709782" },
},
.server_response_transfer_encoding = .chunked,
});
defer test_harness.deinit();
@ -1911,15 +1899,21 @@ test "ec2_query_no_input: EC2 describe regions" {
try std.testing.expectEqualStrings("4cdbdd69-800c-49b5-8474-ae4c17709782", call.response_metadata.request_id);
try std.testing.expectEqual(@as(usize, 17), call.response.regions.?.len);
}
// LLVM hates this test. Depending on the platform, it will consume all memory
// on the compilation host. Windows x86_64 and Linux riscv64 are the problem platforms so far.
// riscv64-linux also appears to hang LLVM indefinitely during compilation.
// My guess is that the @embedFile is overwhelming LLVM
test "ec2_query_with_input: EC2 describe instances" {
if (builtin.cpu.arch == .x86_64 and builtin.os.tag == .windows) return error.SkipZigTest;
if (builtin.cpu.arch == .riscv64 and builtin.os.tag == .linux) return error.SkipZigTest;
const allocator = std.testing.allocator;
var test_harness = TestSetup.init(allocator, .{
var test_harness = TestSetup.init(.{
.allocator = allocator,
.server_response = @embedFile("test_ec2_query_with_input.response"),
.server_response_headers = @constCast(&[_][2][]const u8{
.{ "Content-Type", "text/xml;charset=UTF-8" },
.{ "x-amzn-RequestId", "150a14cc-785d-476f-a4c9-2aa4d03b14e2" },
}),
.server_response_headers = &.{
.{ .name = "Content-Type", .value = "text/xml;charset=UTF-8" },
.{ .name = "x-amzn-RequestId", .value = "150a14cc-785d-476f-a4c9-2aa4d03b14e2" },
},
});
defer test_harness.deinit();
const options = try test_harness.start();
@ -1943,15 +1937,15 @@ test "ec2_query_with_input: EC2 describe instances" {
}
test "rest_xml_no_input: S3 list buckets" {
const allocator = std.testing.allocator;
var test_harness = TestSetup.init(allocator, .{
var test_harness = TestSetup.init(.{
.allocator = allocator,
.server_response =
\\<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>3367189aa775bd98da38e55093705f2051443c1e775fc0971d6d77387a47c8d0</ID><DisplayName>emilerch+sub1</DisplayName></Owner><Buckets><Bucket><Name>550620852718-backup</Name><CreationDate>2020-06-17T16:26:51.000Z</CreationDate></Bucket><Bucket><Name>amplify-letmework-staging-185741-deployment</Name><CreationDate>2023-03-10T18:57:49.000Z</CreationDate></Bucket><Bucket><Name>aws-cloudtrail-logs-550620852718-224022a7</Name><CreationDate>2021-06-21T18:32:44.000Z</CreationDate></Bucket><Bucket><Name>aws-sam-cli-managed-default-samclisourcebucket-1gy0z00mj47xe</Name><CreationDate>2021-10-05T16:38:07.000Z</CreationDate></Bucket><Bucket><Name>awsomeprojectstack-pipelineartifactsbucketaea9a05-1uzwo6c86ecr</Name><CreationDate>2021-10-05T22:55:09.000Z</CreationDate></Bucket><Bucket><Name>cdk-hnb659fds-assets-550620852718-us-west-2</Name><CreationDate>2023-02-28T21:49:36.000Z</CreationDate></Bucket><Bucket><Name>cf-templates-12iy6putgdxtk-us-west-2</Name><CreationDate>2020-06-26T02:31:59.000Z</CreationDate></Bucket><Bucket><Name>codepipeline-us-west-2-46714083637</Name><CreationDate>2021-09-14T18:43:07.000Z</CreationDate></Bucket><Bucket><Name>elasticbeanstalk-us-west-2-550620852718</Name><CreationDate>2022-04-15T16:22:42.000Z</CreationDate></Bucket><Bucket><Name>lobo-west</Name><CreationDate>2021-06-21T17:17:22.000Z</CreationDate></Bucket><Bucket><Name>lobo-west-2</Name><CreationDate>2021-11-19T20:12:31.000Z</CreationDate></Bucket><Bucket><Name>logging-backup-550620852718-us-east-2</Name><CreationDate>2022-05-29T21:55:16.000Z</CreationDate></Bucket><Bucket><Name>mysfitszj3t6webstack-hostingbucketa91a61fe-1ep3ezkgwpxr0</Name><CreationDate>2023-03-01T04:53:55.000Z</CreationDate></Bucket></Buckets></ListAllMyBucketsResult>
,
.server_response_headers = @constCast(&[_][2][]const u8{
.{ "Content-Type", "application/xml" },
.{ "x-amzn-RequestId", "9PEYBAZ9J7TPRX43" },
}),
.server_response_headers = &.{
.{ .name = "Content-Type", .value = "application/xml" },
.{ .name = "x-amzn-RequestId", .value = "9PEYBAZ9J7TPRX43" },
},
});
defer test_harness.deinit();
const options = try test_harness.start();
@ -1974,15 +1968,15 @@ test "rest_xml_no_input: S3 list buckets" {
}
test "rest_xml_anything_but_s3: CloudFront list key groups" {
const allocator = std.testing.allocator;
var test_harness = TestSetup.init(allocator, .{
var test_harness = TestSetup.init(.{
.allocator = allocator,
.server_response =
\\{"Items":null,"MaxItems":100,"NextMarker":null,"Quantity":0}
,
.server_response_headers = @constCast(&[_][2][]const u8{
.{ "Content-Type", "application/json" },
.{ "x-amzn-RequestId", "d3382082-5291-47a9-876b-8df3accbb7ea" },
}),
.server_response_headers = &.{
.{ .name = "Content-Type", .value = "application/json" },
.{ .name = "x-amzn-RequestId", .value = "d3382082-5291-47a9-876b-8df3accbb7ea" },
},
});
defer test_harness.deinit();
const options = try test_harness.start();
@ -2000,16 +1994,16 @@ test "rest_xml_anything_but_s3: CloudFront list key groups" {
}
test "rest_xml_with_input: S3 put object" {
const allocator = std.testing.allocator;
var test_harness = TestSetup.init(allocator, .{
var test_harness = TestSetup.init(.{
.allocator = allocator,
.server_response = "",
.server_response_headers = @constCast(&[_][2][]const u8{
.server_response_headers = &.{
// .{ "Content-Type", "application/xml" },
.{ "x-amzn-RequestId", "9PEYBAZ9J7TPRX43" },
.{ "x-amz-id-2", "jdRDo30t7Ge9lf6F+4WYpg+YKui8z0mz2+rwinL38xDZzvloJqrmpCAiKG375OSvHA9OBykJS44=" },
.{ "x-amz-server-side-encryption", "AES256" },
.{ "ETag", "37b51d194a7513e45b56f6524f2d51f2" },
}),
.{ .name = "x-amzn-RequestId", .value = "9PEYBAZ9J7TPRX43" },
.{ .name = "x-amz-id-2", .value = "jdRDo30t7Ge9lf6F+4WYpg+YKui8z0mz2+rwinL38xDZzvloJqrmpCAiKG375OSvHA9OBykJS44=" },
.{ .name = "x-amz-server-side-encryption", .value = "AES256" },
.{ .name = "ETag", .value = "37b51d194a7513e45b56f6524f2d51f2" },
},
});
defer test_harness.deinit();
const options = try test_harness.start();
@ -2018,7 +2012,6 @@ test "rest_xml_with_input: S3 put object" {
.client = options.client,
.signing_time = TestSetup.signing_time,
};
// std.testing.log_level = .debug;
const result = try Request(services.s3.put_object).call(.{
.bucket = "mysfitszj3t6webstack-hostingbucketa91a61fe-1ep3ezkgwpxr0",
.key = "i/am/a/teapot/foo",
@ -2026,7 +2019,7 @@ test "rest_xml_with_input: S3 put object" {
.body = "bar",
.storage_class = "STANDARD",
}, s3opts);
for (test_harness.request_options.request_headers.list.items) |header| {
for (test_harness.request_options.request_headers) |header| {
std.log.info("Request header: {s}: {s}", .{ header.name, header.value });
}
std.log.info("PutObject Request id: {s}", .{result.response_metadata.request_id});

View File

@ -122,29 +122,22 @@ fn getContainerCredentials(allocator: std.mem.Allocator) !?auth.Credentials {
const container_uri = try std.fmt.allocPrint(allocator, "http://169.254.170.2{s}", .{container_relative_uri});
defer allocator.free(container_uri);
var empty_headers = std.http.Headers.init(allocator);
defer empty_headers.deinit();
var cl = std.http.Client{ .allocator = allocator };
defer cl.deinit(); // I don't believe connection pooling would help much here as it's non-ssl and local
var req = try cl.request(.GET, try std.Uri.parse(container_uri), empty_headers, .{});
defer req.deinit();
try req.start();
try req.wait();
if (req.response.status != .ok and req.response.status != .not_found) {
log.warn("Bad status code received from container credentials endpoint: {}", .{@intFromEnum(req.response.status)});
var resp_payload = std.ArrayList(u8).init(allocator);
defer resp_payload.deinit();
const req = try cl.fetch(.{
.location = .{ .url = container_uri },
.response_storage = .{ .dynamic = &resp_payload },
});
if (req.status != .ok and req.status != .not_found) {
log.warn("Bad status code received from container credentials endpoint: {}", .{@intFromEnum(req.status)});
return null;
}
if (req.response.status == .not_found) return null;
if (req.response.content_length == null or req.response.content_length.? == 0) return null;
if (req.status == .not_found) return null;
var resp_payload = try std.ArrayList(u8).initCapacity(allocator, @intCast(req.response.content_length.?));
defer resp_payload.deinit();
try resp_payload.resize(@intCast(req.response.content_length.?));
const response_data = try resp_payload.toOwnedSlice();
defer allocator.free(response_data);
_ = try req.readAll(response_data);
log.debug("Read {d} bytes from container credentials endpoint", .{response_data.len});
if (response_data.len == 0) return null;
log.debug("Read {d} bytes from container credentials endpoint", .{resp_payload.items.len});
if (resp_payload.items.len == 0) return null;
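The calls above use the 0.12 fetch API, which replaces the old request/start/wait sequence: the response body accumulates into a caller-provided ArrayList via response_storage, and the status comes back on the fetch result. A minimal sketch of the same pattern; the URL and header here are illustrative, not the real endpoints:
var cl = std.http.Client{ .allocator = allocator };
defer cl.deinit();
var body = std.ArrayList(u8).init(allocator);
defer body.deinit();
const res = try cl.fetch(.{
    .method = .GET,
    .location = .{ .url = "http://169.254.169.254/example" }, // illustrative URL
    .extra_headers = &[_]std.http.Header{
        .{ .name = "X-Example-Header", .value = "demo" }, // illustrative header
    },
    .response_storage = .{ .dynamic = &body },
});
if (res.status == .ok) {
    // body.items holds the complete response payload
}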
const CredsResponse = struct {
AccessKeyId: []const u8,
@ -154,8 +147,8 @@ fn getContainerCredentials(allocator: std.mem.Allocator) !?auth.Credentials {
Token: []const u8,
};
const creds_response = blk: {
const res = std.json.parseFromSlice(CredsResponse, allocator, response_data, .{}) catch |e| {
log.err("Unexpected Json response from container credentials endpoint: {s}", .{response_data});
const res = std.json.parseFromSlice(CredsResponse, allocator, resp_payload.items, .{}) catch |e| {
log.err("Unexpected Json response from container credentials endpoint: {s}", .{resp_payload.items});
log.err("Error parsing json: {}", .{e});
if (@errorReturnTrace()) |trace| {
std.debug.dumpStackTrace(trace.*);
@ -182,28 +175,27 @@ fn getImdsv2Credentials(allocator: std.mem.Allocator) !?auth.Credentials {
defer cl.deinit(); // I don't believe connection pooling would help much here as it's non-ssl and local
// Get token
{
var headers = std.http.Headers.init(allocator);
defer headers.deinit();
try headers.append("X-aws-ec2-metadata-token-ttl-seconds", "21600");
var req = try cl.request(.PUT, try std.Uri.parse("http://169.254.169.254/latest/api/token"), headers, .{});
defer req.deinit();
try req.start();
try req.wait();
if (req.response.status != .ok) {
log.warn("Bad status code received from IMDS v2: {}", .{@intFromEnum(req.response.status)});
var resp_payload = std.ArrayList(u8).init(allocator);
defer resp_payload.deinit();
const req = try cl.fetch(.{
.method = .PUT,
.location = .{ .url = "http://169.254.169.254/latest/api/token" },
.extra_headers = &[_]std.http.Header{
.{ .name = "X-aws-ec2-metadata-token-ttl-seconds", .value = "21600" },
},
.response_storage = .{ .dynamic = &resp_payload },
});
if (req.status != .ok) {
log.warn("Bad status code received from IMDS v2: {}", .{@intFromEnum(req.status)});
return null;
}
if (req.response.content_length == null or req.response.content_length == 0) {
if (resp_payload.items.len == 0) {
log.warn("Unexpected zero response from IMDS v2", .{});
return null;
}
var resp_payload = try std.ArrayList(u8).initCapacity(allocator, @intCast(req.response.content_length.?));
defer resp_payload.deinit();
try resp_payload.resize(@intCast(req.response.content_length.?));
token = try resp_payload.toOwnedSlice();
errdefer if (token) |t| allocator.free(t);
_ = try req.readAll(token.?);
}
std.debug.assert(token != null);
log.debug("Got token from IMDSv2: {s}", .{token.?});
@ -224,28 +216,26 @@ fn getImdsRoleName(allocator: std.mem.Allocator, client: *std.http.Client, imds_
// "InstanceProfileArn" : "arn:aws:iam::550620852718:instance-profile/ec2-dev",
// "InstanceProfileId" : "AIPAYAM4POHXCFNKZ7HU2"
// }
var headers = std.http.Headers.init(allocator);
defer headers.deinit();
try headers.append("X-aws-ec2-metadata-token", imds_token);
var resp_payload = std.ArrayList(u8).init(allocator);
defer resp_payload.deinit();
const req = try client.fetch(.{
.method = .GET,
.location = .{ .url = "http://169.254.169.254/latest/meta-data/iam/info" },
.extra_headers = &[_]std.http.Header{
.{ .name = "X-aws-ec2-metadata-token", .value = imds_token },
},
.response_storage = .{ .dynamic = &resp_payload },
});
var req = try client.request(.GET, try std.Uri.parse("http://169.254.169.254/latest/meta-data/iam/info"), headers, .{});
defer req.deinit();
try req.start();
try req.wait();
if (req.response.status != .ok and req.response.status != .not_found) {
log.warn("Bad status code received from IMDS iam endpoint: {}", .{@intFromEnum(req.response.status)});
if (req.status != .ok and req.status != .not_found) {
log.warn("Bad status code received from IMDS iam endpoint: {}", .{@intFromEnum(req.status)});
return null;
}
if (req.response.status == .not_found) return null;
if (req.response.content_length == null or req.response.content_length.? == 0) {
if (req.status == .not_found) return null;
if (resp_payload.items.len == 0) {
log.warn("Unexpected empty response from IMDS endpoint post token", .{});
return null;
}
const resp = try allocator.alloc(u8, @intCast(req.response.content_length.?));
defer allocator.free(resp);
_ = try req.readAll(resp);
const ImdsResponse = struct {
Code: []const u8,
@ -253,8 +243,8 @@ fn getImdsRoleName(allocator: std.mem.Allocator, client: *std.http.Client, imds_
InstanceProfileArn: []const u8,
InstanceProfileId: []const u8,
};
const imds_response = std.json.parseFromSlice(ImdsResponse, allocator, resp, .{}) catch |e| {
log.err("Unexpected Json response from IMDS endpoint: {s}", .{resp});
const imds_response = std.json.parseFromSlice(ImdsResponse, allocator, resp_payload.items, .{}) catch |e| {
log.err("Unexpected Json response from IMDS endpoint: {s}", .{resp_payload.items});
log.err("Error parsing json: {}", .{e});
if (@errorReturnTrace()) |trace| {
std.debug.dumpStackTrace(trace.*);
@ -274,31 +264,28 @@ fn getImdsRoleName(allocator: std.mem.Allocator, client: *std.http.Client, imds_
/// Note - this internal function assumes zfetch is initialized prior to use
fn getImdsCredentials(allocator: std.mem.Allocator, client: *std.http.Client, role_name: []const u8, imds_token: []u8) !?auth.Credentials {
var headers = std.http.Headers.init(allocator);
defer headers.deinit();
try headers.append("X-aws-ec2-metadata-token", imds_token);
const url = try std.fmt.allocPrint(allocator, "http://169.254.169.254/latest/meta-data/iam/security-credentials/{s}/", .{role_name});
defer allocator.free(url);
var resp_payload = std.ArrayList(u8).init(allocator);
defer resp_payload.deinit();
const req = try client.fetch(.{
.method = .GET,
.location = .{ .url = url },
.extra_headers = &[_]std.http.Header{
.{ .name = "X-aws-ec2-metadata-token", .value = imds_token },
},
.response_storage = .{ .dynamic = &resp_payload },
});
var req = try client.request(.GET, try std.Uri.parse(url), headers, .{});
defer req.deinit();
try req.start();
try req.wait();
if (req.response.status != .ok and req.response.status != .not_found) {
log.warn("Bad status code received from IMDS role endpoint: {}", .{@intFromEnum(req.response.status)});
if (req.status != .ok and req.status != .not_found) {
log.warn("Bad status code received from IMDS role endpoint: {}", .{@intFromEnum(req.status)});
return null;
}
if (req.response.status == .not_found) return null;
if (req.response.content_length == null or req.response.content_length.? == 0) {
if (req.status == .not_found) return null;
if (resp_payload.items.len == 0) {
log.warn("Unexpected empty response from IMDS role endpoint", .{});
return null;
}
const resp = try allocator.alloc(u8, @intCast(req.response.content_length.?));
defer allocator.free(resp);
_ = try req.readAll(resp);
// log.debug("Read {d} bytes from imds v2 credentials endpoint", .{read});
const ImdsResponse = struct {
@ -310,8 +297,8 @@ fn getImdsCredentials(allocator: std.mem.Allocator, client: *std.http.Client, ro
Token: []const u8,
Expiration: []const u8,
};
const imds_response = std.json.parseFromSlice(ImdsResponse, allocator, resp, .{}) catch |e| {
log.err("Unexpected Json response from IMDS endpoint: {s}", .{resp});
const imds_response = std.json.parseFromSlice(ImdsResponse, allocator, resp_payload.items, .{}) catch |e| {
log.err("Unexpected Json response from IMDS endpoint: {s}", .{resp_payload.items});
log.err("Error parsing json: {}", .{e});
if (@errorReturnTrace()) |trace| {
std.debug.dumpStackTrace(trace.*);
@ -581,32 +568,13 @@ fn getDefaultPath(allocator: std.mem.Allocator, home_dir: ?[]const u8, dir: []co
fn getHomeDir(allocator: std.mem.Allocator) ![]const u8 {
switch (builtin.os.tag) {
.windows => {
var dir_path_ptr: [*:0]u16 = undefined;
// https://docs.microsoft.com/en-us/windows/win32/shell/knownfolderid
const FOLDERID_Profile = std.os.windows.GUID.parse("{5E6C858F-0E22-4760-9AFE-EA3317B67173}");
switch (std.os.windows.shell32.SHGetKnownFolderPath(
&FOLDERID_Profile,
std.os.windows.KF_FLAG_CREATE,
null,
&dir_path_ptr,
)) {
std.os.windows.S_OK => {
defer std.os.windows.ole32.CoTaskMemFree(@as(*anyopaque, @ptrCast(dir_path_ptr)));
const global_dir = std.unicode.utf16leToUtf8Alloc(allocator, std.mem.sliceTo(dir_path_ptr, 0)) catch |err| switch (err) {
error.UnexpectedSecondSurrogateHalf => return error.HomeDirUnavailable,
error.ExpectedSecondSurrogateHalf => return error.HomeDirUnavailable,
error.DanglingSurrogateHalf => return error.HomeDirUnavailable,
error.OutOfMemory => return error.OutOfMemory,
};
return global_dir;
// defer allocator.free(global_dir);
},
std.os.windows.E_OUTOFMEMORY => return error.OutOfMemory,
return std.process.getEnvVarOwned(allocator, "USERPROFILE") catch |err| switch (err) {
error.OutOfMemory => |e| return e,
else => return error.HomeDirUnavailable,
}
};
},
.macos, .linux, .freebsd, .netbsd, .dragonfly, .openbsd, .solaris => {
const home_dir = std.os.getenv("HOME") orelse {
const home_dir = std.posix.getenv("HOME") orelse {
// TODO look in /etc/passwd
return error.HomeDirUnavailable;
};

View File

@ -44,7 +44,7 @@ pub const Options = struct {
signing_time: ?i64 = null,
};
pub const Header = base.Header;
pub const Header = std.http.Header;
pub const HttpRequest = base.Request;
pub const HttpResult = base.Result;
@ -64,11 +64,11 @@ const EndPoint = struct {
};
pub const AwsHttp = struct {
allocator: std.mem.Allocator,
proxy: ?std.http.Client.HttpProxy,
proxy: ?std.http.Client.Proxy,
const Self = @This();
pub fn init(allocator: std.mem.Allocator, proxy: ?std.http.Client.HttpProxy) Self {
pub fn init(allocator: std.mem.Allocator, proxy: ?std.http.Client.Proxy) Self {
return Self{
.allocator = allocator,
.proxy = proxy,
@ -149,7 +149,7 @@ pub const AwsHttp = struct {
// We will use endpoint instead
request_cp.path = endpoint.path;
var request_headers = std.ArrayList(base.Header).init(self.allocator);
var request_headers = std.ArrayList(std.http.Header).init(self.allocator);
defer request_headers.deinit();
const len = try addHeaders(self.allocator, &request_headers, endpoint.host, request_cp.body, request_cp.content_type, request_cp.headers);
@ -163,108 +163,75 @@ pub const AwsHttp = struct {
}
}
var headers = std.http.Headers.init(self.allocator);
var headers = std.ArrayList(std.http.Header).init(self.allocator);
defer headers.deinit();
for (request_cp.headers) |header|
try headers.append(header.name, header.value);
try headers.append(.{ .name = header.name, .value = header.value });
log.debug("All Request Headers:", .{});
for (headers.list.items) |h| {
for (headers.items) |h| {
log.debug("\t{s}: {s}", .{ h.name, h.value });
}
const url = try std.fmt.allocPrint(self.allocator, "{s}{s}{s}", .{ endpoint.uri, request_cp.path, request_cp.query });
defer self.allocator.free(url);
log.debug("Request url: {s}", .{url});
var cl = std.http.Client{ .allocator = self.allocator, .proxy = self.proxy };
// TODO: Fix this proxy stuff. This is all a kludge just to compile, but std.http.Client has it all built in now
var cl = std.http.Client{ .allocator = self.allocator, .https_proxy = if (self.proxy) |*p| @constCast(p) else null };
defer cl.deinit(); // TODO: Connection pooling
//
// var req = try zfetch.Request.init(self.allocator, url, self.trust_chain);
// defer req.deinit();
const method = std.meta.stringToEnum(std.http.Method, request_cp.method).?;
// std.Uri has a format function here that is used by start() (below)
// to escape the string we're about to send. But we don't want that...
// we need the control, because the signing above relies on the url above.
// We can't seem to have our cake and eat it too, because we need escaped
// ':' characters, but if we escape them, we'll get them double encoded.
// If we don't escape them, they won't get encoded at all. I believe the
// only answer may be to copy the Request.start function from the
// standard library and tweak the print statements such that they don't
// escape (but do still handle full uri (in proxy) vs path only (normal)
var server_header_buffer: [16 * 1024]u8 = undefined;
var resp_payload = std.ArrayList(u8).init(self.allocator);
defer resp_payload.deinit();
const req = try cl.fetch(.{
.server_header_buffer = &server_header_buffer,
.method = method,
.payload = if (request_cp.body.len > 0) request_cp.body else null,
.response_storage = .{ .dynamic = &resp_payload },
.raw_uri = true,
.location = .{ .url = url },
.extra_headers = headers.items,
});
// TODO: Need to test for payloads > 2^14. I believe one of our tests does this, but not sure
// if (request_cp.body.len > 0) {
// // Workaround for https://github.com/ziglang/zig/issues/15626
// const max_bytes: usize = 1 << 14;
// var inx: usize = 0;
// while (request_cp.body.len > inx) {
// try req.writeAll(request_cp.body[inx..@min(request_cp.body.len, inx + max_bytes)]);
// inx += max_bytes;
// }
//
// Bug report filed here:
// https://github.com/ziglang/zig/issues/17015
//
// https://github.com/ziglang/zig/blob/0.11.0/lib/std/http/Client.zig#L538-L636
//
// Look at lines 551 and 553:
// https://github.com/ziglang/zig/blob/0.11.0/lib/std/http/Client.zig#L551
//
// This ends up executing the format function here:
// https://github.com/ziglang/zig/blob/0.11.0/lib/std/http/Client.zig#L551
//
// Which is basically the what we want, without the escaping on lines
// 249, 254, and 260:
// https://github.com/ziglang/zig/blob/0.11.0/lib/std/Uri.zig#L249
//
// const unescaped_url = try std.Uri.unescapeString(self.allocator, url);
// defer self.allocator.free(unescaped_url);
var req = try cl.request(method, try std.Uri.parse(url), headers, .{});
defer req.deinit();
if (request_cp.body.len > 0)
req.transfer_encoding = .{ .content_length = request_cp.body.len };
try @import("http_client_17015_issue.zig").start(&req);
// try req.start();
if (request_cp.body.len > 0) {
// Workaround for https://github.com/ziglang/zig/issues/15626
const max_bytes: usize = 1 << 14;
var inx: usize = 0;
while (request_cp.body.len > inx) {
try req.writeAll(request_cp.body[inx..@min(request_cp.body.len, inx + max_bytes)]);
inx += max_bytes;
}
try req.finish();
}
try req.wait();
// try req.finish();
// }
// try req.wait();
// TODO: Timeout - is this now above us?
log.debug(
"Request Complete. Response code {d}: {?s}",
.{ @intFromEnum(req.response.status), req.response.status.phrase() },
.{ @intFromEnum(req.status), req.status.phrase() },
);
log.debug("Response headers:", .{});
var resp_headers = try std.ArrayList(Header).initCapacity(
var resp_headers = std.ArrayList(Header).init(
self.allocator,
req.response.headers.list.items.len,
);
defer resp_headers.deinit();
var content_length: usize = 0;
for (req.response.headers.list.items) |h| {
var it = std.http.HeaderIterator.init(server_header_buffer[0..]);
while (it.next()) |h| { // even though we don't expect to fill the buffer,
// we don't get a length back from fetch, but from reading the stdlib source
// it should be ok to call next over the undefined tail of the buffer
log.debug(" {s}: {s}", .{ h.name, h.value });
resp_headers.appendAssumeCapacity(.{
try resp_headers.append(.{
.name = try (self.allocator.dupe(u8, h.name)),
.value = try (self.allocator.dupe(u8, h.value)),
});
if (content_length == 0 and std.ascii.eqlIgnoreCase("content-length", h.name))
content_length = std.fmt.parseInt(usize, h.value, 10) catch 0;
}
var response_data: []u8 =
if (req.response.transfer_encoding) |_| // the only value here is "chunked"
try req.reader().readAllAlloc(self.allocator, std.math.maxInt(usize))
else blk: {
// content length
const tmp_data = try self.allocator.alloc(u8, content_length);
errdefer self.allocator.free(tmp_data);
_ = try req.readAll(tmp_data);
break :blk tmp_data;
};
log.debug("raw response body:\n{s}", .{response_data});
log.debug("raw response body:\n{s}", .{resp_payload.items});
const rc = HttpResult{
.response_code = @intFromEnum(req.response.status),
.body = response_data,
.response_code = @intFromEnum(req.status),
.body = try resp_payload.toOwnedSlice(),
.headers = try resp_headers.toOwnedSlice(),
.allocator = self.allocator,
};
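Since fetch no longer returns a headers map, response headers are recovered by iterating the raw bytes the client wrote into the caller-supplied server_header_buffer, as done above. A compact sketch of just that piece, assuming a fetch call like the one in this hunk:
var server_header_buffer: [16 * 1024]u8 = undefined;
// pass `.server_header_buffer = &server_header_buffer` in the cl.fetch(...) options,
// then walk whatever the client wrote into it:
var it = std.http.HeaderIterator.init(server_header_buffer[0..]);
while (it.next()) |h| {
    std.log.debug("{s}: {s}", .{ h.name, h.value });
}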
@ -277,7 +244,16 @@ fn getRegion(service: []const u8, region: []const u8) []const u8 {
return region;
}
fn addHeaders(allocator: std.mem.Allocator, headers: *std.ArrayList(base.Header), host: []const u8, body: []const u8, content_type: []const u8, additional_headers: []Header) !?[]const u8 {
fn addHeaders(allocator: std.mem.Allocator, headers: *std.ArrayList(std.http.Header), host: []const u8, body: []const u8, content_type: []const u8, additional_headers: []const Header) !?[]const u8 {
// We no longer need allocator and body; they existed only to add a
// Content-Length header, which the client send() function now adds
// itself, so we don't want it on the request twice. That said, send()
// really should give us control here: if we don't add the header
// ourselves, it won't get signed, and we would much prefer that it be
// signed. So, we will wait and watch for this situation to change in
// stdlib
_ = allocator;
_ = body;
var has_content_type = false;
for (additional_headers) |h| {
if (std.ascii.eqlIgnoreCase(h.name, "Content-Type")) {
@ -291,11 +267,6 @@ fn addHeaders(allocator: std.mem.Allocator, headers: *std.ArrayList(base.Header)
if (!has_content_type)
try headers.append(.{ .name = "Content-Type", .value = content_type });
try headers.appendSlice(additional_headers);
if (body.len > 0) {
const len = try std.fmt.allocPrint(allocator, "{d}", .{body.len});
try headers.append(.{ .name = "Content-Length", .value = len });
return len;
}
return null;
}

View File

@ -7,12 +7,12 @@ pub const Request = struct {
body: []const u8 = "",
method: []const u8 = "POST",
content_type: []const u8 = "application/json", // Can we get away with this?
headers: []Header = &[_]Header{},
headers: []const std.http.Header = &.{},
};
pub const Result = struct {
response_code: u16, // actually 3 digits can fit in u10
body: []const u8,
headers: []Header,
headers: []const std.http.Header,
allocator: std.mem.Allocator,
pub fn deinit(self: Result) void {
@ -26,8 +26,3 @@ pub const Result = struct {
return;
}
};
pub const Header = struct {
name: []const u8,
value: []const u8,
};
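The local Header struct above could be deleted because std.http.Header in 0.12 carries the identical two fields, so call sites migrate mechanically:
const std = @import("std");
// shape-compatible replacement; the values here are illustrative
const h: std.http.Header = .{ .name = "Content-Type", .value = "application/json" };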

View File

@ -169,19 +169,19 @@ pub fn signRequest(allocator: std.mem.Allocator, request: base.Request, config:
additional_header_count += 1;
if (config.signed_body_header == .none)
additional_header_count -= 1;
const newheaders = try allocator.alloc(base.Header, rc.headers.len + additional_header_count);
const newheaders = try allocator.alloc(std.http.Header, rc.headers.len + additional_header_count);
errdefer allocator.free(newheaders);
const oldheaders = rc.headers;
if (config.credentials.session_token) |t| {
newheaders[newheaders.len - additional_header_count] = base.Header{
newheaders[newheaders.len - additional_header_count] = std.http.Header{
.name = "X-Amz-Security-Token",
.value = try allocator.dupe(u8, t),
};
additional_header_count -= 1;
}
errdefer freeSignedRequest(allocator, &rc, config);
std.mem.copy(base.Header, newheaders, oldheaders);
newheaders[newheaders.len - additional_header_count] = base.Header{
@memcpy(newheaders[0..oldheaders.len], oldheaders);
newheaders[newheaders.len - additional_header_count] = std.http.Header{
.name = "X-Amz-Date",
.value = signing_iso8601,
};
@ -200,7 +200,7 @@ pub fn signRequest(allocator: std.mem.Allocator, request: base.Request, config:
// may not add this header
// This will be freed in freeSignedRequest
// defer allocator.free(payload_hash);
newheaders[newheaders.len - additional_header_count] = base.Header{
newheaders[newheaders.len - additional_header_count] = std.http.Header{
.name = "x-amz-content-sha256",
.value = payload_hash,
};
@ -259,7 +259,7 @@ pub fn signRequest(allocator: std.mem.Allocator, request: base.Request, config:
const signature = try hmac(allocator, signing_key, string_to_sign);
defer allocator.free(signature);
newheaders[newheaders.len - 1] = base.Header{
newheaders[newheaders.len - 1] = std.http.Header{
.name = "Authorization",
.value = try std.fmt.allocPrint(
allocator,
@ -299,27 +299,51 @@ pub fn freeSignedRequest(allocator: std.mem.Allocator, request: *base.Request, c
pub const credentialsFn = *const fn ([]const u8) ?Credentials;
pub fn verifyServerRequest(allocator: std.mem.Allocator, request: std.http.Server.Request, request_body_reader: anytype, credentials_fn: credentialsFn) !bool {
const unverified_request = UnverifiedRequest{
.headers = request.headers,
.target = request.target,
.method = request.method,
};
pub fn verifyServerRequest(allocator: std.mem.Allocator, request: *std.http.Server.Request, request_body_reader: anytype, credentials_fn: credentialsFn) !bool {
var unverified_request = try UnverifiedRequest.init(allocator, request);
defer unverified_request.deinit();
return verify(allocator, unverified_request, request_body_reader, credentials_fn);
}
pub const UnverifiedRequest = struct {
headers: std.http.Headers,
headers: []std.http.Header,
target: []const u8,
method: std.http.Method,
allocator: std.mem.Allocator,
pub fn init(allocator: std.mem.Allocator, request: *std.http.Server.Request) !UnverifiedRequest {
var al = std.ArrayList(std.http.Header).init(allocator);
defer al.deinit();
var it = request.iterateHeaders();
while (it.next()) |h| try al.append(h);
return .{
.target = request.head.target,
.method = request.head.method,
.headers = try al.toOwnedSlice(),
.allocator = allocator,
};
}
pub fn getFirstHeaderValue(self: UnverifiedRequest, name: []const u8) ?[]const u8 {
for (self.headers) |*h| {
if (std.ascii.eqlIgnoreCase(name, h.name))
return h.value; // I don't think this is the whole story here, but should suffice for now
// We need to return the value before the first ';' IIRC
}
return null;
}
pub fn deinit(self: *UnverifiedRequest) void {
self.allocator.free(self.headers);
}
};
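verifyServerRequest now takes the *std.http.Server.Request directly and snapshots its headers through UnverifiedRequest.init. A sketch of the call from a handler; the reader wiring and callback name are illustrative:
// with `request: *std.http.Server.Request` in scope inside a handler:
// const body_reader = try request.reader();
// const valid = try verifyServerRequest(allocator, request, body_reader, lookup);
// where `lookup` (name illustrative) is any credentialsFn:
// *const fn ([]const u8) ?Credentials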
pub fn verify(allocator: std.mem.Allocator, request: UnverifiedRequest, request_body_reader: anytype, credentials_fn: credentialsFn) !bool {
var arena = std.heap.ArenaAllocator.init(allocator);
defer arena.deinit();
var aa = arena.allocator();
const aa = arena.allocator();
// Authorization: AWS4-HMAC-SHA256 Credential=ACCESS/20230908/us-west-2/s3/aws4_request, SignedHeaders=accept;content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-storage-class, Signature=fcc43ce73a34c9bd1ddf17e8a435f46a859812822f944f9eeb2aabcd64b03523
const auth_header_or_null = request.headers.getFirstValue("Authorization");
const auth_header_or_null = request.getFirstHeaderValue("Authorization");
const auth_header = if (auth_header_or_null) |a| a else return error.AuthorizationHeaderMissing;
if (!std.mem.startsWith(u8, auth_header, "AWS4-HMAC-SHA256")) return error.UnsupportedAuthorizationType;
var credential: ?[]const u8 = null;
@ -373,8 +397,8 @@ fn verifyParsedAuthorization(
const credentials = credentials_fn(access_key) orelse return error.CredentialsNotFound;
// TODO: https://stackoverflow.com/questions/29276609/aws-authentication-requires-a-valid-date-or-x-amz-date-header-curl
// For now I want to see this test pass
const normalized_iso_date = request.headers.getFirstValue("x-amz-date") orelse
request.headers.getFirstValue("Date").?;
const normalized_iso_date = request.getFirstHeaderValue("x-amz-date") orelse
request.getFirstHeaderValue("Date").?;
log.debug("Got date: {s}", .{normalized_iso_date});
_ = credential_iterator.next().?; // skip the date...I don't think we need this
const region = credential_iterator.next().?;
@ -392,7 +416,7 @@ fn verifyParsedAuthorization(
.signing_time = try date.dateTimeToTimestamp(try date.parseIso8601ToDateTime(normalized_iso_date)),
};
var headers = try allocator.alloc(base.Header, std.mem.count(u8, signed_headers, ";") + 1);
var headers = try allocator.alloc(std.http.Header, std.mem.count(u8, signed_headers, ";") + 1);
defer allocator.free(headers);
var signed_headers_iterator = std.mem.splitSequence(u8, signed_headers, ";");
var inx: usize = 0;
@ -409,7 +433,7 @@ fn verifyParsedAuthorization(
if (is_forbidden) continue;
headers[inx] = .{
.name = signed_header,
.value = request.headers.getFirstValue(signed_header).?,
.value = request.getFirstHeaderValue(signed_header).?,
};
inx += 1;
}
@ -418,7 +442,7 @@ fn verifyParsedAuthorization(
.path = target_iterator.first(),
.headers = headers[0..inx],
.method = @tagName(request.method),
.content_type = request.headers.getFirstValue("content-type").?,
.content_type = request.getFirstHeaderValue("content-type").?,
};
signed_request.query = request.target[signed_request.path.len..]; // TODO: should this be +1? query here would include '?'
signed_request.body = try request_body_reader.readAllAlloc(allocator, std.math.maxInt(usize));
@ -780,7 +804,7 @@ const CanonicalHeaders = struct {
str: []const u8,
signed_headers: []const u8,
};
fn canonicalHeaders(allocator: std.mem.Allocator, headers: []base.Header, service: []const u8) !CanonicalHeaders {
fn canonicalHeaders(allocator: std.mem.Allocator, headers: []const std.http.Header, service: []const u8) !CanonicalHeaders {
//
// Doc example. Original:
//
@ -796,7 +820,7 @@ fn canonicalHeaders(allocator: std.mem.Allocator, headers: []base.Header, servic
// my-header1:a b c\n
// my-header2:"a b c"\n
// x-amz-date:20150830T123600Z\n
var dest = try std.ArrayList(base.Header).initCapacity(allocator, headers.len);
var dest = try std.ArrayList(std.http.Header).initCapacity(allocator, headers.len);
defer {
for (dest.items) |h| {
allocator.free(h.name);
@ -835,7 +859,7 @@ fn canonicalHeaders(allocator: std.mem.Allocator, headers: []base.Header, servic
try dest.append(.{ .name = n, .value = v });
}
std.sort.pdq(base.Header, dest.items, {}, lessThan);
std.sort.pdq(std.http.Header, dest.items, {}, lessThan);
var dest_str = try std.ArrayList(u8).initCapacity(allocator, total_len);
defer dest_str.deinit();
@ -883,7 +907,7 @@ fn canonicalHeaderValue(allocator: std.mem.Allocator, value: []const u8) ![]cons
_ = allocator.resize(rc, rc_inx);
return rc[0..rc_inx];
}
fn lessThan(context: void, lhs: base.Header, rhs: base.Header) bool {
fn lessThan(context: void, lhs: std.http.Header, rhs: std.http.Header) bool {
_ = context;
return std.ascii.lessThanIgnoreCase(lhs.name, rhs.name);
}
@ -935,7 +959,7 @@ test "canonical query" {
}
test "canonical headers" {
const allocator = std.testing.allocator;
var headers = try std.ArrayList(base.Header).initCapacity(allocator, 5);
var headers = try std.ArrayList(std.http.Header).initCapacity(allocator, 5);
defer headers.deinit();
try headers.append(.{ .name = "Host", .value = "iam.amazonaws.com" });
try headers.append(.{ .name = "Content-Type", .value = "application/x-www-form-urlencoded; charset=utf-8" });
@ -960,7 +984,7 @@ test "canonical headers" {
test "canonical request" {
const allocator = std.testing.allocator;
var headers = try std.ArrayList(base.Header).initCapacity(allocator, 5);
var headers = try std.ArrayList(std.http.Header).initCapacity(allocator, 5);
defer headers.deinit();
try headers.append(.{ .name = "User-agent", .value = "c sdk v1.0" });
// In contrast to AWS CRT (aws-c-auth), we add the date as part of the
@ -1020,7 +1044,7 @@ test "can sign" {
// [debug] (awshttp): Content-Length: 43
const allocator = std.testing.allocator;
var headers = try std.ArrayList(base.Header).initCapacity(allocator, 5);
var headers = try std.ArrayList(std.http.Header).initCapacity(allocator, 5);
defer headers.deinit();
try headers.append(.{ .name = "Content-Type", .value = "application/x-www-form-urlencoded; charset=utf-8" });
try headers.append(.{ .name = "Content-Length", .value = "13" });
@ -1077,34 +1101,39 @@ test "can verify server request" {
test_credential = Credentials.init(allocator, access_key, secret_key, null);
defer test_credential.?.deinit();
var headers = std.http.Headers.init(allocator);
defer headers.deinit();
try headers.append("Connection", "keep-alive");
try headers.append("Accept-Encoding", "gzip, deflate, zstd");
try headers.append("TE", "gzip, deflate, trailers");
try headers.append("Accept", "application/json");
try headers.append("Host", "127.0.0.1");
try headers.append("User-Agent", "zig-aws 1.0");
try headers.append("Content-Type", "text/plain");
try headers.append("x-amz-storage-class", "STANDARD");
try headers.append("Content-Length", "3");
try headers.append("X-Amz-Date", "20230908T170252Z");
try headers.append("x-amz-content-sha256", "fcde2b2edba56bf408601fb721fe9b5c338d10ee429ea04fae5511b68fbf8fb9");
try headers.append("Authorization", "AWS4-HMAC-SHA256 Credential=ACCESS/20230908/us-west-2/s3/aws4_request, SignedHeaders=accept;content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-storage-class, Signature=fcc43ce73a34c9bd1ddf17e8a435f46a859812822f944f9eeb2aabcd64b03523");
var buf = "bar".*;
var fis = std.io.fixedBufferStream(&buf);
const request = std.http.Server.Request{
.method = std.http.Method.PUT,
.target = "/mysfitszj3t6webstack-hostingbucketa91a61fe-1ep3ezkgwpxr0/i/am/a/teapot/foo?x-id=PutObject",
.version = .@"HTTP/1.1",
.content_length = 3,
.headers = headers,
.parser = std.http.protocol.HeadersParser.initDynamic(std.math.maxInt(usize)),
const req =
"PUT /mysfitszj3t6webstack-hostingbucketa91a61fe-1ep3ezkgwpxr0/i/am/a/teapot/foo?x-id=PutObject HTTP/1.1\r\n" ++
"Connection: keep-alive\r\n" ++
"Accept-Encoding: gzip, deflate, zstd\r\n" ++
"TE: gzip, deflate, trailers\r\n" ++
"Accept: application/json\r\n" ++
"Host: 127.0.0.1\r\n" ++
"User-Agent: zig-aws 1.0\r\n" ++
"Content-Type: text/plain\r\n" ++
"x-amz-storage-class: STANDARD\r\n" ++
"Content-Length: 3\r\n" ++
"X-Amz-Date: 20230908T170252Z\r\n" ++
"x-amz-content-sha256: fcde2b2edba56bf408601fb721fe9b5c338d10ee429ea04fae5511b68fbf8fb9\r\n" ++
"Authorization: AWS4-HMAC-SHA256 Credential=ACCESS/20230908/us-west-2/s3/aws4_request, SignedHeaders=accept;content-length;content-type;host;x-amz-content-sha256;x-amz-date;x-amz-storage-class, Signature=fcc43ce73a34c9bd1ddf17e8a435f46a859812822f944f9eeb2aabcd64b03523\r\n\r\nbar";
var read_buffer: [1024]u8 = undefined;
@memcpy(read_buffer[0..req.len], req);
var server: std.http.Server = .{
.connection = undefined,
.state = .ready,
.read_buffer = &read_buffer,
.read_buffer_len = req.len,
.next_request_start = 0,
};
var request: std.http.Server.Request = .{
.server = &server,
.head_end = req.len - 3,
.head = try std.http.Server.Request.Head.parse(read_buffer[0 .. req.len - 3]),
.reader_state = undefined,
};
// std.testing.log_level = .debug;
try std.testing.expect(try verifyServerRequest(allocator, request, fis.reader(), struct {
var fbs = std.io.fixedBufferStream("bar");
try std.testing.expect(try verifyServerRequest(allocator, &request, fbs.reader(), struct {
cred: Credentials,
const Self = @This();
@ -1122,34 +1151,51 @@ test "can verify server request without x-amz-content-sha256" {
test_credential = Credentials.init(allocator, access_key, secret_key, null);
defer test_credential.?.deinit();
var headers = std.http.Headers.init(allocator);
defer headers.deinit();
try headers.append("Connection", "keep-alive");
try headers.append("Accept-Encoding", "gzip, deflate, zstd");
try headers.append("TE", "gzip, deflate, trailers");
try headers.append("Accept", "application/json");
try headers.append("X-Amz-Target", "DynamoDB_20120810.CreateTable");
try headers.append("Host", "dynamodb.us-west-2.amazonaws.com");
try headers.append("User-Agent", "zig-aws 1.0");
try headers.append("Content-Type", "application/x-amz-json-1.0");
try headers.append("Content-Length", "403");
try headers.append("X-Amz-Date", "20240224T154944Z");
try headers.append("x-amz-content-sha256", "fcde2b2edba56bf408601fb721fe9b5c338d10ee429ea04fae5511b68fbf8fb9");
try headers.append("Authorization", "AWS4-HMAC-SHA256 Credential=ACCESS/20240224/us-west-2/dynamodb/aws4_request, SignedHeaders=content-type;host;x-amz-date;x-amz-target, Signature=8fd23dc7dbcb36c4aa54207a7118f8b9fcd680da73a0590b498e9577ff68ec33");
const head =
"POST / HTTP/1.1\r\n" ++
"Connection: keep-alive\r\n" ++
"Accept-Encoding: gzip, deflate, zstd\r\n" ++
"TE: gzip, deflate, trailers\r\n" ++
"Accept: application/json\r\n" ++
"X-Amz-Target: DynamoDB_20120810.CreateTable\r\n" ++
"Host: dynamodb.us-west-2.amazonaws.com\r\n" ++
"User-Agent: zig-aws 1.0\r\n" ++
"Content-Type: application/x-amz-json-1.0\r\n" ++
"Content-Length: 403\r\n" ++
"X-Amz-Date: 20240224T154944Z\r\n" ++
"x-amz-content-sha256: fcde2b2edba56bf408601fb721fe9b5c338d10ee429ea04fae5511b68fbf8fb9\r\n" ++
"Authorization: AWS4-HMAC-SHA256 Credential=ACCESS/20240224/us-west-2/dynamodb/aws4_request, SignedHeaders=content-type;host;x-amz-date;x-amz-target, Signature=8fd23dc7dbcb36c4aa54207a7118f8b9fcd680da73a0590b498e9577ff68ec33\r\n\r\n";
const body =
\\{"AttributeDefinitions": [{"AttributeName": "Artist", "AttributeType": "S"}, {"AttributeName": "SongTitle", "AttributeType": "S"}], "TableName": "MusicCollection", "KeySchema": [{"AttributeName": "Artist", "KeyType": "HASH"}, {"AttributeName": "SongTitle", "KeyType": "RANGE"}], "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5}, "Tags": [{"Key": "Owner", "Value": "blueTeam"}]}
;
const req_data = head ++ body;
var read_buffer: [2048]u8 = undefined;
@memcpy(read_buffer[0..req_data.len], req_data);
var server: std.http.Server = .{
.connection = undefined,
.state = .ready,
.read_buffer = &read_buffer,
.read_buffer_len = req_data.len,
.next_request_start = 0,
};
var request: std.http.Server.Request = .{
.server = &server,
.head_end = head.len,
.head = try std.http.Server.Request.Head.parse(read_buffer[0..head.len]),
.reader_state = undefined,
};
{
var h = try std.ArrayList(base.Header).initCapacity(allocator, headers.list.items.len);
var h = std.ArrayList(std.http.Header).init(allocator);
defer h.deinit();
const signed_headers = &[_][]const u8{ "content-type", "host", "x-amz-date", "x-amz-target" };
for (headers.list.items) |source| {
var it = request.iterateHeaders();
while (it.next()) |source| {
var match = false;
for (signed_headers) |s| {
match = std.ascii.eqlIgnoreCase(s, source.name);
if (match) break;
}
if (match) h.appendAssumeCapacity(.{ .name = source.name, .value = source.value });
if (match) try h.append(.{ .name = source.name, .value = source.value });
}
const req = base.Request{
.path = "/",
@@ -1187,16 +1233,8 @@ test "can verify server request without x-amz-content-sha256" {
{ // verification
var fis = std.io.fixedBufferStream(body[0..]);
const request = std.http.Server.Request{
.method = std.http.Method.POST,
.target = "/",
.version = .@"HTTP/1.1",
.content_length = 403,
.headers = headers,
.parser = std.http.protocol.HeadersParser.initDynamic(std.math.maxInt(usize)),
};
try std.testing.expect(try verifyServerRequest(allocator, request, fis.reader(), struct {
try std.testing.expect(try verifyServerRequest(allocator, &request, fis.reader(), struct {
cred: Credentials,
const Self = @This();
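For orientation, a minimal self-contained sketch of the case-insensitive signed-header filter used in the test above, assuming only the standard library (the name isSignedHeader is illustrative, not part of the diff):

    const std = @import("std");

    // Return true when `name` matches one of the SigV4 signed headers,
    // compared case-insensitively as in the test above.
    fn isSignedHeader(name: []const u8, signed: []const []const u8) bool {
        for (signed) |s| {
            if (std.ascii.eqlIgnoreCase(s, name)) return true;
        }
        return false;
    }

    test "signed header filter" {
        const signed = [_][]const u8{ "content-type", "host", "x-amz-date", "x-amz-target" };
        try std.testing.expect(isSignedHeader("Host", &signed));
        try std.testing.expect(!isSignedHeader("Accept-Encoding", &signed));
    }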

View File

@@ -1,155 +0,0 @@
const std = @import("std");
const Uri = std.Uri;
///////////////////////////////////////////////////////////////////////////
/// This function imported from:
/// https://github.com/ziglang/zig/blob/0.11.0/lib/std/http/Client.zig#L538-L636
///
/// The first commit of this file will be unchanged from 0.11.0 to more
/// clearly indicate changes moving forward. The plan is to change
/// only the two w.print lines for req.uri, 16 and 18 lines below this comment
///////////////////////////////////////////////////////////////////////////
/// Send the request to the server.
pub fn start(req: *std.http.Client.Request) std.http.Client.Request.StartError!void {
var buffered = std.io.bufferedWriter(req.connection.?.data.writer());
const w = buffered.writer();
try w.writeAll(@tagName(req.method));
try w.writeByte(' ');
if (req.method == .CONNECT) {
try w.writeAll(req.uri.host.?);
try w.writeByte(':');
try w.print("{}", .{req.uri.port.?});
} else if (req.connection.?.data.proxied) {
// proxied connections require the full uri
try format(req.uri, "+/", .{}, w);
} else {
try format(req.uri, "/", .{}, w);
}
try w.writeByte(' ');
try w.writeAll(@tagName(req.version));
try w.writeAll("\r\n");
if (!req.headers.contains("host")) {
try w.writeAll("Host: ");
try w.writeAll(req.uri.host.?);
try w.writeAll("\r\n");
}
if (!req.headers.contains("user-agent")) {
try w.writeAll("User-Agent: zig/");
try w.writeAll(@import("builtin").zig_version_string);
try w.writeAll(" (std.http)\r\n");
}
if (!req.headers.contains("connection")) {
try w.writeAll("Connection: keep-alive\r\n");
}
if (!req.headers.contains("accept-encoding")) {
try w.writeAll("Accept-Encoding: gzip, deflate, zstd\r\n");
}
if (!req.headers.contains("te")) {
try w.writeAll("TE: gzip, deflate, trailers\r\n");
}
const has_transfer_encoding = req.headers.contains("transfer-encoding");
const has_content_length = req.headers.contains("content-length");
if (!has_transfer_encoding and !has_content_length) {
switch (req.transfer_encoding) {
.chunked => try w.writeAll("Transfer-Encoding: chunked\r\n"),
.content_length => |content_length| try w.print("Content-Length: {d}\r\n", .{content_length}),
.none => {},
}
} else {
if (has_content_length) {
const content_length = std.fmt.parseInt(u64, req.headers.getFirstValue("content-length").?, 10) catch return error.InvalidContentLength;
req.transfer_encoding = .{ .content_length = content_length };
} else if (has_transfer_encoding) {
const transfer_encoding = req.headers.getFirstValue("transfer-encoding").?;
if (std.mem.eql(u8, transfer_encoding, "chunked")) {
req.transfer_encoding = .chunked;
} else {
return error.UnsupportedTransferEncoding;
}
} else {
req.transfer_encoding = .none;
}
}
try w.print("{}", .{req.headers});
try w.writeAll("\r\n");
try buffered.flush();
}
///////////////////////////////////////////////////////////////////////////
/// This function imported from:
/// https://github.com/ziglang/zig/blob/0.11.0/lib/std/Uri.zig#L209-L264
///
/// The first commit of this file will be unchanged from 0.11.0 to more
/// clearly indicate changes moving forward. The plan is to change
/// only the writeEscapedPath call 42 lines down from this comment
///////////////////////////////////////////////////////////////////////////
pub fn format(
uri: Uri,
comptime fmt: []const u8,
options: std.fmt.FormatOptions,
writer: anytype,
) @TypeOf(writer).Error!void {
_ = options;
const needs_absolute = comptime std.mem.indexOf(u8, fmt, "+") != null;
const needs_path = comptime std.mem.indexOf(u8, fmt, "/") != null or fmt.len == 0;
const needs_fragment = comptime std.mem.indexOf(u8, fmt, "#") != null;
if (needs_absolute) {
try writer.writeAll(uri.scheme);
try writer.writeAll(":");
if (uri.host) |host| {
try writer.writeAll("//");
if (uri.user) |user| {
try writer.writeAll(user);
if (uri.password) |password| {
try writer.writeAll(":");
try writer.writeAll(password);
}
try writer.writeAll("@");
}
try writer.writeAll(host);
if (uri.port) |port| {
try writer.writeAll(":");
try std.fmt.formatInt(port, 10, .lower, .{}, writer);
}
}
}
if (needs_path) {
if (uri.path.len == 0) {
try writer.writeAll("/");
} else {
try writer.writeAll(uri.path); // do not mess with our path
}
if (uri.query) |q| {
try writer.writeAll("?");
try Uri.writeEscapedQuery(writer, q);
}
if (needs_fragment) {
if (uri.fragment) |f| {
try writer.writeAll("#");
try Uri.writeEscapedQuery(writer, f);
}
}
}
}

View File

@@ -1361,7 +1361,7 @@ test "Value.jsonStringify" {
var buffer: [10]u8 = undefined;
var fbs = std.io.fixedBufferStream(&buffer);
try (Value{ .Float = 42 }).jsonStringify(.{}, fbs.writer());
try testing.expectEqualSlices(u8, fbs.getWritten(), "4.2e+01");
try testing.expectEqualSlices(u8, fbs.getWritten(), "4.2e1");
}
{
var buffer: [10]u8 = undefined;
@@ -1762,7 +1762,7 @@ fn parseInternal(comptime T: type, token: Token, tokens: *TokenStream, options:
var r: T = undefined;
const source_slice = stringToken.slice(tokens.slice, tokens.i - 1);
switch (stringToken.escapes) {
.None => mem.copy(u8, &r, source_slice),
.None => @memcpy(&r, source_slice),
.Some => try unescapeValidString(&r, source_slice),
}
return r;
@@ -2019,7 +2019,7 @@ test "parse into tagged union" {
}
{ // failing allocations should be bubbled up instantly without trying next member
var fail_alloc = testing.FailingAllocator.init(testing.allocator, 0);
var fail_alloc = testing.FailingAllocator.init(testing.allocator, .{ .fail_index = 0 });
const options = ParseOptions{ .allocator = fail_alloc.allocator() };
const T = union(enum) {
// both fields here match the input
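As a side note on the FailingAllocator change above: in 0.12 the fail index moved into an options struct. A minimal sketch, assuming only stdlib:

    const std = @import("std");
    const testing = std.testing;

    test "first allocation fails" {
        // 0.11 took the index positionally; 0.12 takes .{ .fail_index = N }.
        var fail_alloc = testing.FailingAllocator.init(testing.allocator, .{ .fail_index = 0 });
        const alloc = fail_alloc.allocator();
        // The allocation at fail_index returns error.OutOfMemory.
        try testing.expectError(error.OutOfMemory, alloc.alloc(u8, 16));
    }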
@@ -2067,7 +2067,7 @@ test "parse union bubbles up AllocatorRequired" {
}
test "parseFree descends into tagged union" {
var fail_alloc = testing.FailingAllocator.init(testing.allocator, 1);
var fail_alloc = testing.FailingAllocator.init(testing.allocator, .{ .fail_index = 1 });
const options = ParseOptions{ .allocator = fail_alloc.allocator() };
const T = union(enum) {
int: i32,
@@ -2808,7 +2808,7 @@ pub fn stringify(
const T = @TypeOf(value);
switch (@typeInfo(T)) {
.Float, .ComptimeFloat => {
return std.fmt.formatFloatScientific(value, std.fmt.FormatOptions{}, out_stream);
return std.fmt.format(out_stream, "{e}", .{value});
},
.Int, .ComptimeInt => {
return std.fmt.formatIntValue(value, "", std.fmt.FormatOptions{}, out_stream);
@@ -2827,14 +2827,14 @@ pub fn stringify(
}
},
.Enum => {
if (comptime std.meta.trait.hasFn("jsonStringify")(T)) {
if (comptime std.meta.hasFn(T, "jsonStringify")) {
return value.jsonStringify(options, out_stream);
}
@compileError("Unable to stringify enum '" ++ @typeName(T) ++ "'");
},
.Union => {
if (comptime std.meta.trait.hasFn("jsonStringify")(T)) {
if (comptime std.meta.hasFn(T, "jsonStringify")) {
return value.jsonStringify(options, out_stream);
}
@@ -2850,7 +2850,7 @@ pub fn stringify(
}
},
.Struct => |S| {
if (comptime std.meta.trait.hasFn("jsonStringify")(T)) {
if (comptime std.meta.hasFn(T, "jsonStringify")) {
return value.jsonStringify(options, out_stream);
}
@@ -2874,11 +2874,11 @@ pub fn stringify(
try child_whitespace.outputIndent(out_stream);
}
var field_written = false;
if (comptime std.meta.trait.hasFn("jsonStringifyField")(T))
if (comptime std.meta.hasFn(T, "jsonStringifyField"))
field_written = try value.jsonStringifyField(Field.name, child_options, out_stream);
if (!field_written) {
if (comptime std.meta.trait.hasFn("fieldNameFor")(T)) {
if (comptime std.meta.hasFn(T, "fieldNameFor")) {
const name = value.fieldNameFor(Field.name);
try stringify(name, options, out_stream);
} else {
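The recurring edit above reflects the removal of std.meta.trait in 0.12: `std.meta.trait.hasFn("name")(T)` becomes `std.meta.hasFn(T, "name")`. A minimal sketch, assuming stdlib only (the Widget type is illustrative):

    const std = @import("std");

    const Widget = struct {
        pub fn jsonStringify(self: @This(), options: anytype, out_stream: anytype) !void {
            _ = self;
            _ = options;
            try out_stream.writeAll("\"widget\"");
        }
    };

    comptime {
        // hasFn only checks that the declaration exists on the type.
        std.debug.assert(std.meta.hasFn(Widget, "jsonStringify"));
    }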
@@ -3057,11 +3057,11 @@ test "stringify basic types" {
try teststringify("null", @as(?u8, null), StringifyOptions{});
try teststringify("null", @as(?*u32, null), StringifyOptions{});
try teststringify("42", 42, StringifyOptions{});
try teststringify("4.2e+01", 42.0, StringifyOptions{});
try teststringify("4.2e1", 42.0, StringifyOptions{});
try teststringify("42", @as(u8, 42), StringifyOptions{});
try teststringify("42", @as(u128, 42), StringifyOptions{});
try teststringify("4.2e+01", @as(f32, 42), StringifyOptions{});
try teststringify("4.2e+01", @as(f64, 42), StringifyOptions{});
try teststringify("4.2e1", @as(f32, 42), StringifyOptions{});
try teststringify("4.2e1", @as(f64, 42), StringifyOptions{});
try teststringify("\"ItBroke\"", @as(anyerror, error.ItBroke), StringifyOptions{});
}
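The "4.2e+01" to "4.2e1" expectation changes come from 0.12's new float formatting, which prints the shortest scientific form for "{e}". A minimal check, assuming stdlib only:

    const std = @import("std");

    test "scientific float format on 0.12" {
        var buf: [32]u8 = undefined;
        const s = try std.fmt.bufPrint(&buf, "{e}", .{@as(f64, 42.0)});
        try std.testing.expectEqualStrings("4.2e1", s);
    }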

View File

@@ -38,8 +38,8 @@ pub fn log(
nosuspend stderr.print(prefix ++ format ++ "\n", args) catch return;
}
pub const std_options = struct {
pub const logFn = log;
pub const std_options = std.Options{
.logFn = log,
};
const Tests = enum {
query_no_input,
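In 0.12, std_options is a value of type std.Options declared in the root source file, rather than a namespace of declarations. A minimal sketch, assuming a log function like the one above:

    const std = @import("std");

    pub fn log(
        comptime level: std.log.Level,
        comptime scope: @TypeOf(.EnumLiteral),
        comptime format: []const u8,
        args: anytype,
    ) void {
        _ = level;
        _ = scope;
        std.debug.print(format ++ "\n", args);
    }

    pub const std_options = std.Options{
        .logFn = log, // unset fields keep their defaults
    };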
@@ -71,7 +71,7 @@ pub fn main() anyerror!void {
defer bw.flush() catch unreachable;
const stdout = bw.writer();
var arg0: ?[]const u8 = null;
var proxy: ?std.http.Client.HttpProxy = null;
var proxy: ?std.http.Client.Proxy = null;
while (args.next()) |arg| {
if (arg0 == null) arg0 = arg;
if (std.mem.eql(u8, "-h", arg) or std.mem.eql(u8, "--help", arg)) {
@@ -353,17 +353,22 @@ pub fn main() anyerror!void {
std.log.info("===== Tests complete =====", .{});
}
fn proxyFromString(string: []const u8) !std.http.Client.HttpProxy {
var rc = std.http.Client.HttpProxy{
fn proxyFromString(string: []const u8) !std.http.Client.Proxy {
var rc = std.http.Client.Proxy{
.protocol = undefined,
.host = undefined,
.authorization = null,
.port = undefined,
.supports_connect = true, // TODO: Is this a good default?
};
var remaining: []const u8 = string;
if (std.mem.startsWith(u8, string, "http://")) {
remaining = remaining["http://".len..];
rc.protocol = .plain;
rc.port = 80;
} else if (std.mem.startsWith(u8, string, "https://")) {
remaining = remaining["https://".len..];
rc.port = 443;
rc.protocol = .tls;
} else return error.InvalidScheme;
var split_iterator = std.mem.split(u8, remaining, ":");
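For reference, constructing the 0.12 std.http.Client.Proxy directly; this mirrors the five fields initialized in proxyFromString above, with illustrative values (the port-override branch after the ":" split is cut off in this hunk):

    const std = @import("std");

    // A plain-HTTP proxy on localhost; .supports_connect mirrors the
    // default chosen in proxyFromString above.
    const proxy = std.http.Client.Proxy{
        .protocol = .plain,
        .host = "localhost",
        .authorization = null,
        .port = 3128,
        .supports_connect = true,
    };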

View File

@@ -21,7 +21,7 @@ pub fn Services(comptime service_imports: anytype) type {
// finally, generate the type
return @Type(.{
.Struct = .{
.layout = .Auto,
.layout = .auto,
.fields = &fields,
.decls = &[_]std.builtin.Type.Declaration{},
.is_tuple = false,

View File

@@ -1,15 +1,14 @@
const std = @import("std");
fn defaultTransformer(allocator: std.mem.Allocator, field_name: []const u8, options: EncodingOptions) anyerror![]const u8 {
_ = options;
fn defaultTransformer(allocator: std.mem.Allocator, field_name: []const u8) anyerror![]const u8 {
_ = allocator;
return field_name;
}
pub const fieldNameTransformerFn = *const fn (std.mem.Allocator, []const u8, EncodingOptions) anyerror![]const u8;
pub const fieldNameTransformerFn = *const fn (std.mem.Allocator, []const u8) anyerror![]const u8;
pub const EncodingOptions = struct {
field_name_transformer: fieldNameTransformerFn = &defaultTransformer,
field_name_transformer: fieldNameTransformerFn = defaultTransformer,
};
pub fn encode(allocator: std.mem.Allocator, obj: anytype, writer: anytype, comptime options: EncodingOptions) !void {
@@ -26,7 +25,7 @@ fn encodeStruct(
) !bool {
var rc = first;
inline for (@typeInfo(@TypeOf(obj)).Struct.fields) |field| {
const field_name = try options.field_name_transformer(allocator, field.name, options);
const field_name = try options.field_name_transformer(allocator, field.name);
defer if (options.field_name_transformer.* != defaultTransformer)
allocator.free(field_name);
// @compileLog(@typeInfo(field.field_type).Pointer);
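A hedged sketch of a caller-supplied transformer under the new two-argument signature (the name upperFirst is illustrative, not part of the diff); because it is not defaultTransformer, encodeStruct frees its result:

    const std = @import("std");

    // Allocate a copy so the caller-visible field name can be modified;
    // encodeStruct's defer frees it for non-default transformers.
    fn upperFirst(allocator: std.mem.Allocator, field_name: []const u8) anyerror![]const u8 {
        const out = try allocator.dupe(u8, field_name);
        if (out.len > 0) out[0] = std.ascii.toUpper(out[0]);
        return out;
    }

    // const options = EncodingOptions{ .field_name_transformer = upperFirst };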

View File

@@ -219,9 +219,9 @@ fn parseInternal(comptime T: type, element: *xml.Element, options: ParseOptions)
log.debug("Processing fields in struct: {s}", .{@typeName(T)});
inline for (struct_info.fields, 0..) |field, i| {
var name = field.name;
var name: []const u8 = field.name;
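// explicit annotation needed in 0.12: field.name is now sentinel-terminated
// ([:0]const u8), while fieldNameFor() below returns []const u8, so the
// inferred type would no longer accept the reassignment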
var found_value = false;
if (comptime std.meta.trait.hasFn("fieldNameFor")(T))
if (comptime std.meta.hasFn(T, "fieldNameFor"))
name = r.fieldNameFor(field.name);
log.debug("Field name: {s}, Element: {s}, Adjusted field name: {s}", .{ field.name, element.tag, name });
var iterator = element.findChildrenByTag(name);