Compare commits
No commits in common. "5cad3926d8de9018f218f5eff6779e339af21a2a" and "5f1b1a52beea841e130ea4d878437f9488da0eb7" have entirely different histories.
5cad3926d8 ... 5f1b1a52be
README.md (69 changed lines)
@@ -1,7 +1,7 @@
 "Univeral Lambda" for Zig
 =========================
 
-This a Zig 0.12 project intended to be used as a package to turn a zig program
+This a Zig 0.11 project intended to be used as a package to turn a zig program
 into a function that can be run as:
 
 * A command line executable
@@ -14,57 +14,69 @@ into a function that can be run as:
 Usage - Development
 -------------------
 
-From an empty directory, with Zig 0.12 installed:
+From an empty directory, with Zig 0.11 installed:
 
+`zig init-exe`
+
+Create a `build.zig.zon` with the following contents:
+
-```sh
-zig init-exe
-zig fetch --save https://git.lerch.org/lobo/universal-lambda-zig/archive/9b4e1cb5bc0513f0a0037b76a3415a357e8db427.tar.gz
 ```
+.{
+    .name = "univeral-zig-example",
+    .version = "0.0.1",
+
+    .dependencies = .{
+        .universal_lambda_build = .{
+            .url = "https://git.lerch.org/lobo/universal-lambda-zig/archive/07366606696081f324591b66ab7a9a176a38424c.tar.gz",
+            .hash = "122049daa19f61d778a79ffb82c64775ca5132ee5c4797d7f7d76667ab82593917cd",
+        },
+        .flexilib = .{
+            .url = "https://git.lerch.org/lobo/flexilib/archive/c44ad2ba84df735421bef23a2ad612968fb50f06.tar.gz",
+            .hash = "122051fdfeefdd75653d3dd678c8aa297150c2893f5fad0728e0d953481383690dbc",
+        },
+    },
+}
+```
+
+Due to limitations in the build apis related to relative file paths, the
+dependency name currently must be "universal_lambda_build". Also, note that
+the flexilib dependency is required at all times. This requirement may go away
+with zig 0.12 (see [#17135](https://github.com/ziglang/zig/issues/17135))
+and/or changes to this library.
 
 **Build.zig:**
 
 * Add an import at the top:
 
 ```zig
-const universal_lambda = @import("universal-lambda-zig");
+const universal_lambda = @import("universal_lambda_build");
 ```
 
 * Set the return of the build function to return `!void` rather than `void`
 * Add a line to the build script, after any modules are used, but otherwise just
-  after adding the exe is fine. Imports will also be added through universal_lambda:
+  after adding the exe is fine:
 
 ```zig
-const univeral_lambda_dep = b.dependency("universal-lambda-zig", .{
-    .target = target,
-    .optimize = optimize,
-});
-try universal_lambda.configureBuild(b, exe, univeral_lambda_dep);
-_ = universal_lambda.addImports(b, exe, univeral_lambda_dep);
+try universal_lambda.configureBuild(b, exe);
 ```
 
 This will provide most of the magic functionality of the package, including
-several new build steps to manage the system, as well as imports necessary
-for each of the providers. Note that addImports should also be called for
-unit tests.
+several new build steps to manage the system, and a new import to be used. For
+testing, it is also advisable to add the modules to your tests by adding a line
+like so:
 
 ```zig
-_ = universal_lambda.addImports(b, unit_tests, univeral_lambda_dep);
+_ = try universal_lambda.addModules(b, main_tests);
 ```
 
 **main.zig**
 
-`addImports` will make the following primary imports available for use:
+The build changes above will add several modules:
 
 * universal_lambda_handler: Main import, used to register your handler
 * universal_lambda_interface: Contains the context type used in the handler function
-
-Additional imports are available and used by the universal lambda runtime, but
-should not normally be needed for direct use:
-
-* flexilib-interface: Used as a dependency of the handler
-* universal_lambda_build_options: Provides the ability to determine which provider is used
-  The build type is stored under a `build_type` variable.
-* aws_lambda_runtime: Provides the aws lambda provider access to the underlying library
+* flexilib-interface: Used as a dependency of the handler. Not normally needed
 
 Add imports for the handler registration and interface:
@@ -73,6 +85,9 @@ const universal_lambda = @import("universal_lambda_handler");
 const universal_lambda_interface = @import("universal_lambda_interface");
 ```
 
+Another module `universal_lambda_build_options` is available if access to the
+environment is needed. The build type is stored under a `build_type` variable.
+
 Add a handler to be executed. The handler must follow this signature:
 
 ```zig
@@ -85,7 +100,7 @@ Your main function should return `!u8`. Let the package know about your handler
 return try universal_lambda.run(null, handler);
 ```
 
-The first parameter above is an allocator. If you have a specific allocator you
+The first parameter above is an allocator. If you have a specific handler you
 would like to use, you may specify it. Otherwise, an appropriate allocator
 will be created and used. Currently this is an ArenaAllocator wrapped around
 an appropriate base allocator, so your handler does not require deallocation.
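Pulling the README excerpts in the diff above together, a minimal downstream main.zig would look roughly like the following. This is a sketch assembled from the quoted fragments; the exact shape of the `Context` type lives in `universal_lambda_interface` and is not shown in this compare, so the handler below only echoes its event data.

```zig
const std = @import("std");
const universal_lambda = @import("universal_lambda_handler");
const universal_lambda_interface = @import("universal_lambda_interface");

// Handler signature per the README: allocator, raw event data, and a context
fn handler(allocator: std.mem.Allocator, event_data: []const u8, context: universal_lambda_interface.Context) ![]const u8 {
    _ = allocator;
    _ = context;
    return event_data; // echo the event back
}

pub fn main() !u8 {
    // Passing null lets the library create an ArenaAllocator-backed
    // allocator, so the handler does not need to free what it allocates
    return try universal_lambda.run(null, handler);
}
```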
build.zig (66 changed lines)
@@ -58,66 +58,28 @@ pub fn build(b: *std.Build) !void {
     });
     const universal_lambda = @import("src/universal_lambda_build.zig");
     universal_lambda.module_root = b.build_root.path;
+    _ = try universal_lambda.addModules(b, lib);
-    // re-expose flexilib-interface and aws_lambda modules downstream
-    const flexilib_dep = b.dependency("flexilib", .{
-        .target = target,
-        .optimize = optimize,
-    });
-    const flexilib_module = flexilib_dep.module("flexilib-interface");
-    _ = b.addModule("flexilib-interface", .{
-        .root_source_file = flexilib_module.root_source_file,
-        .target = target,
-        .optimize = optimize,
-    });
-    const aws_lambda_dep = b.dependency("lambda-zig", .{
-        .target = target,
-        .optimize = optimize,
-    });
-    const aws_lambda_module = aws_lambda_dep.module("lambda_runtime");
-    _ = b.addModule("aws_lambda_runtime", .{
-        .root_source_file = aws_lambda_module.root_source_file,
-        .target = target,
-        .optimize = optimize,
-    });
-
-    // Expose our own modules downstream
-    _ = b.addModule("universal_lambda_interface", .{
-        .root_source_file = b.path("src/interface.zig"),
-        .target = target,
-        .optimize = optimize,
-    });
-
-    _ = b.addModule("universal_lambda_handler", .{
-        .root_source_file = b.path("src/universal_lambda.zig"),
-        .target = target,
-        .optimize = optimize,
-    });
-    universal_lambda.addImports(b, lib, null);
 
     // This declares intent for the library to be installed into the standard
     // location when the user invokes the "install" step (the default step when
     // running `zig build`).
     b.installArtifact(lib);
 
-    const test_step = b.step("test", "Run tests (all architectures)");
-    const native_test_step = b.step("test-native", "Run tests (native only)");
+    const test_step = b.step("test", "Run library tests");
 
     for (test_targets) |t| {
         // Creates steps for unit testing. This only builds the test executable
         // but does not run it.
         const exe_tests = b.addTest(.{
-            .name = "test: as executable",
             .root_source_file = .{ .path = "src/test.zig" },
-            .target = b.resolveTargetQuery(t),
+            .target = t,
             .optimize = optimize,
         });
-        universal_lambda.addImports(b, exe_tests, null);
+        _ = try universal_lambda.addModules(b, exe_tests);
 
         var run_exe_tests = b.addRunArtifact(exe_tests);
         run_exe_tests.skip_foreign_checks = true;
         test_step.dependOn(&run_exe_tests.step);
-        if (t.cpu_arch == null) native_test_step.dependOn(&run_exe_tests.step);
 
         // Universal lambda can end up as an exe or a lib. When it is a library,
         // we end up changing the root source file away from downstream so we can
@@ -127,26 +89,24 @@ pub fn build(b: *std.Build) !void {
         // in the future. Scaleway, for instance, is another system that works
         // via shared library
         const lib_tests = b.addTest(.{
-            .name = "test: as library",
             .root_source_file = .{ .path = "src/flexilib.zig" },
-            .target = b.resolveTargetQuery(t),
+            .target = t,
             .optimize = optimize,
         });
-        universal_lambda.addImports(b, lib_tests, null);
+        _ = try universal_lambda.addModules(b, lib_tests);
 
         var run_lib_tests = b.addRunArtifact(lib_tests);
         run_lib_tests.skip_foreign_checks = true;
+        // This creates a build step. It will be visible in the `zig build --help` menu,
+        // and can be selected like this: `zig build test`
+        // This will evaluate the `test` step rather than the default, which is "install".
         test_step.dependOn(&run_lib_tests.step);
-        if (t.cpu_arch == null) native_test_step.dependOn(&run_lib_tests.step);
     }
 }
 
-pub fn configureBuild(b: *std.Build, cs: *std.Build.Step.Compile, universal_lambda_zig_dep: *std.Build.Dependency) !void {
-    try @import("src/universal_lambda_build.zig").configureBuild(b, cs, universal_lambda_zig_dep);
+pub fn configureBuild(b: *std.Build, cs: *std.Build.Step.Compile) !void {
+    try @import("src/universal_lambda_build.zig").configureBuild(b, cs);
 }
-pub fn addImports(b: *std.Build, cs: *std.Build.Step.Compile, universal_lambda_zig_dep: *std.Build.Dependency) void {
-    // The underlying call has an optional dependency here, but we do not.
-    // Downstream must provide the dependency, which will ensure that the
-    // modules we have exposed above do, in fact, get exposed
-    return @import("src/universal_lambda_build.zig").addImports(b, cs, universal_lambda_zig_dep);
+pub fn addModules(b: *std.Build, cs: *std.Build.Step.Compile) ![]const u8 {
+    return try @import("src/universal_lambda_build.zig").addModules(b, cs);
 }
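For reference, the left-hand (zig fetch based) side of the README diff above wires the package into a downstream build.zig roughly as follows. This is a sketch assembled from those quoted excerpts; the executable name, root source path, and the `target`/`optimize` locals are the usual boilerplate of a generated build script and are assumptions here, not part of the diff.

```zig
const std = @import("std");
const universal_lambda = @import("universal-lambda-zig");

pub fn build(b: *std.Build) !void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});
    const exe = b.addExecutable(.{
        .name = "example", // assumed name for illustration
        .root_source_file = b.path("src/main.zig"),
        .target = target,
        .optimize = optimize,
    });

    // After the exe exists: configure the universal lambda build steps and
    // add its imports, passing the dependency as the README's diff requires
    const univeral_lambda_dep = b.dependency("universal-lambda-zig", .{
        .target = target,
        .optimize = optimize,
    });
    try universal_lambda.configureBuild(b, exe, univeral_lambda_dep);
    _ = universal_lambda.addImports(b, exe, univeral_lambda_dep);

    b.installArtifact(exe);
}
```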
@@ -4,35 +4,8 @@
 
     .dependencies = .{
         .flexilib = .{
-            .url = "https://git.lerch.org/lobo/FlexiLib/archive/48c0e893da20c9821f6fe55516861e08b9cb6c77.tar.gz",
-            .hash = "1220a4db25529543a82d199a17495aedd7766fef4f5d0122893566663721167b6168",
+            .url = "https://git.lerch.org/lobo/flexilib/archive/3d3dab9c792651477932e2b61c9f4794ac694dcb.tar.gz",
+            .hash = "1220fd7a614fe3c9f6006b630bba528e2ec9dca9c66f5ff10f7e471ad2bdd41b6c89",
         },
-        .@"lambda-zig" = .{
-            .url = "https://git.lerch.org/lobo/lambda-zig/archive/9d1108697282d24509472d7019c97d1afe06ba1c.tar.gz",
-            .hash = "12205b8a2be03b63a1b07bde373e8293c96be53a68a58054abe8689f80d691633862",
-        },
-        .@"cloudflare-worker-deploy" = .{
-            .url = "https://git.lerch.org/lobo/cloudflare-worker-deploy/archive/0581221c11cf32dce48abc5b950d974e570afb0d.tar.gz",
-            .hash = "12204e7befb1691ac40c124fcb63edf3c792cae7b4b1974edc1da23a36ba6f089d02",
-        },
-    },
-
-    // This field is optional.
-    // This is currently advisory only; Zig does not yet do anything
-    // with this value.
-    .minimum_zig_version = "0.12.0",
-
-    // Specifies the set of files and directories that are included in this package.
-    // Only files and directories listed here are included in the `hash` that
-    // is computed for this package.
-    // Paths are relative to the build root. Use the empty string (`""`) to refer to
-    // the build root itself.
-    // A directory listed here means that all files within, recursively, are included.
-    .paths = .{
-        "build.zig",
-        "build.zig.zon",
-        "src",
-        "LICENSE",
-        "README.md",
     },
 }
src/CloudflareDeployStep.zig (new file, 100 lines)
@@ -0,0 +1,100 @@
const std = @import("std");
const cloudflare = @import("cloudflaredeploy.zig");
const CloudflareDeployStep = @This();

pub const base_id: std.Build.Step.Id = .custom;

step: std.Build.Step,
primary_javascript_file: std.Build.LazyPath,
worker_name: []const u8,
options: Options,

pub const Options = struct {
    /// if set, the primary file will not be read (and may not exist). This data
    /// will be used instead
    primary_file_data: ?[]const u8 = null,

    /// When set, the Javascript file will be searched/replaced with the target
    /// file name for import
    wasm_name: ?struct {
        search: []const u8,
        replace: []const u8,
    } = null,

    /// When set, the directory specified will be used rather than the current directory
    wasm_dir: ?[]const u8 = null,
};

pub fn create(
    owner: *std.Build,
    worker_name: []const u8,
    primary_javascript_file: std.Build.LazyPath,
    options: Options,
) *CloudflareDeployStep {
    const self = owner.allocator.create(CloudflareDeployStep) catch @panic("OOM");
    self.* = CloudflareDeployStep{
        .step = std.Build.Step.init(.{
            .id = base_id,
            .name = owner.fmt("cloudflare deploy {s}", .{primary_javascript_file.getDisplayName()}),
            .owner = owner,
            .makeFn = make,
        }),
        .primary_javascript_file = primary_javascript_file,
        .worker_name = worker_name,
        .options = options,
    };
    if (options.primary_file_data == null)
        primary_javascript_file.addStepDependencies(&self.step);
    return self;
}

fn make(step: *std.Build.Step, prog_node: *std.Progress.Node) !void {
    _ = prog_node;
    const b = step.owner;
    const self = @fieldParentPtr(CloudflareDeployStep, "step", step);

    var client = std.http.Client{ .allocator = b.allocator };
    defer client.deinit();
    var proxy_text = std.os.getenv("https_proxy") orelse std.os.getenv("HTTPS_PROXY");
    if (proxy_text) |p| {
        client.deinit();
        const proxy = try std.Uri.parse(p);
        client = std.http.Client{
            .allocator = b.allocator,
            .proxy = .{
                .protocol = if (std.ascii.eqlIgnoreCase(proxy.scheme, "http")) .plain else .tls,
                .host = proxy.host.?,
                .port = proxy.port,
            },
        };
    }

    const script = self.options.primary_file_data orelse
        try std.fs.cwd().readFileAlloc(b.allocator, self.primary_javascript_file.path, std.math.maxInt(usize));
    defer if (self.options.primary_file_data == null) b.allocator.free(script);

    var final_script = script;
    if (self.options.wasm_name) |n| {
        final_script = try std.mem.replaceOwned(u8, b.allocator, script, n.search, n.replace);
        if (self.options.primary_file_data == null) b.allocator.free(script);
    }
    defer if (self.options.wasm_name) |_| b.allocator.free(final_script);

    var al = std.ArrayList(u8).init(b.allocator);
    defer al.deinit();
    try cloudflare.pushWorker(
        b.allocator,
        &client,
        self.worker_name,
        self.options.wasm_dir orelse ".",
        final_script,
        al.writer(),
        std.io.getStdErr().writer(),
    );
    const start = std.mem.lastIndexOf(u8, al.items, "http").?;
    step.name = try std.fmt.allocPrint(
        b.allocator,
        "cloudflare deploy {s} to {s}",
        .{ self.primary_javascript_file.getDisplayName(), al.items[start .. al.items.len - 1] },
    );
}
@@ -1,83 +0,0 @@
const std = @import("std");
const interface = @import("universal_lambda_interface");
const lambda_zig = @import("aws_lambda_runtime");

const log = std.log.scoped(.awslambda);

threadlocal var universal_handler: interface.HandlerFn = undefined;

/// This is called by the aws lambda runtime (our import), and must
/// call the universal handler (our downstream client). The main job here
/// is to convert signatures, ore more specifically, build out the context
/// that the universal handler is expecting
fn lambdaHandler(arena: std.mem.Allocator, event_data: []const u8) ![]const u8 {
    const response_body: std.ArrayList(u8) = std.ArrayList(u8).init(arena);
    // Marshal lambda_zig data into a context
    // TODO: Maybe this should parse API Gateway data into a proper response
    // TODO: environment variables -> Headers?
    var response = interface.Response{
        .allocator = arena,
        .headers = &.{},
        .body = response_body,
        .request = .{
            .headers = &.{},
        },
    };

    // Call back to the handler that we were given
    const result = universal_handler(arena, event_data, &response);

    // TODO: If our universal handler writes to the response body, we should
    // handle that in a consistent way

    // Return result from our handler back to AWS lambda via the lambda module
    return result;
}

// AWS lambda zig handler: const HandlerFn = *const fn (std.mem.Allocator, []const u8) anyerror![]const u8;
// Our handler: pub const HandlerFn = *const fn (std.mem.Allocator, []const u8, Context) anyerror![]const u8;
pub fn run(allocator: ?std.mem.Allocator, event_handler: interface.HandlerFn) !u8 {
    universal_handler = event_handler;

    // pub fn run(allocator: ?std.mem.Allocator, event_handler: HandlerFn) !void { // TODO: remove inferred error set?
    try lambda_zig.run(allocator, lambdaHandler);
    return 0;
}

fn handler(allocator: std.mem.Allocator, event_data: []const u8) ![]const u8 {
    _ = allocator;
    return event_data;
}

////////////////////////////////////////////////////////////////////////
// All code below this line is for testing
////////////////////////////////////////////////////////////////////////
test {
    std.testing.refAllDecls(lambda_zig);
}
test "basic request" {
    // std.testing.log_level = .debug;
    const allocator = std.testing.allocator;
    const request =
        \\{"foo": "bar", "baz": "qux"}
    ;

    // This is what's actually coming back. Is this right?
    const expected_response =
        \\nothing but net
    ;
    const TestHandler = struct {
        pub fn handler(alloc: std.mem.Allocator, event_data: []const u8, context: interface.Context) ![]const u8 {
            _ = alloc;
            _ = event_data;
            _ = context;
            log.debug("in handler", .{});
            return "nothing but net";
        }
    };
    universal_handler = TestHandler.handler;
    const lambda_response = try lambda_zig.test_lambda_request(allocator, request, 1, lambdaHandler);
    defer lambda_zig.deinit();
    defer allocator.free(lambda_response);
    try std.testing.expectEqualStrings(expected_response, lambda_response);
}
src/cloudflare_build.zig (new file, 26 lines)
@@ -0,0 +1,26 @@
const std = @import("std");
const builtin = @import("builtin");
const CloudflareDeployStep = @import("CloudflareDeployStep.zig");

const script = @embedFile("index.js");

pub fn configureBuild(b: *std.build.Builder, cs: *std.Build.Step.Compile, function_name: []const u8) !void {
    const wasm_name = try std.fmt.allocPrint(b.allocator, "{s}.wasm", .{cs.name});
    const deploy_cmd = CloudflareDeployStep.create(
        b,
        function_name,
        .{ .path = "index.js" },
        .{
            .primary_file_data = script,
            .wasm_name = .{
                .search = "custom.wasm",
                .replace = wasm_name,
            },
            .wasm_dir = b.getInstallPath(.bin, "."),
        },
    );
    deploy_cmd.step.dependOn(b.getInstallStep());

    const deploy_step = b.step("cloudflare", "Deploy as Cloudflare worker (must be compiled with -Dtarget=wasm32-wasi)");
    deploy_step.dependOn(&deploy_cmd.step);
}
src/cloudflaredeploy.zig (new file, 377 lines)
@ -0,0 +1,377 @@
|
||||||
|
const std = @import("std");
|
||||||
|
|
||||||
|
var x_auth_token: ?[:0]const u8 = undefined;
|
||||||
|
var x_auth_email: ?[:0]const u8 = undefined;
|
||||||
|
var x_auth_key: ?[:0]const u8 = undefined;
|
||||||
|
var initialized = false;
|
||||||
|
|
||||||
|
const cf_api_base = "https://api.cloudflare.com/client/v4";
|
||||||
|
|
||||||
|
pub fn main() !u8 {
|
||||||
|
var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
|
||||||
|
defer arena.deinit();
|
||||||
|
const allocator = arena.allocator();
|
||||||
|
var client = std.http.Client{ .allocator = allocator };
|
||||||
|
defer client.deinit();
|
||||||
|
var proxy_text = std.os.getenv("https_proxy") orelse std.os.getenv("HTTPS_PROXY");
|
||||||
|
if (proxy_text) |p| {
|
||||||
|
client.deinit();
|
||||||
|
const proxy = try std.Uri.parse(p);
|
||||||
|
client = std.http.Client{
|
||||||
|
.allocator = allocator,
|
||||||
|
.proxy = .{
|
||||||
|
.protocol = if (std.ascii.eqlIgnoreCase(proxy.scheme, "http")) .plain else .tls,
|
||||||
|
.host = proxy.host.?,
|
||||||
|
.port = proxy.port,
|
||||||
|
},
|
||||||
|
};
|
||||||
|
}
|
||||||
|
|
||||||
|
const stdout_file = std.io.getStdOut().writer();
|
||||||
|
var bw = std.io.bufferedWriter(stdout_file);
|
||||||
|
const stdout = bw.writer();
|
||||||
|
|
||||||
|
var argIterator = try std.process.argsWithAllocator(allocator);
|
||||||
|
defer argIterator.deinit();
|
||||||
|
const exe_name = argIterator.next().?;
|
||||||
|
var maybe_name = argIterator.next();
|
||||||
|
if (maybe_name == null) {
|
||||||
|
try usage(std.io.getStdErr().writer(), exe_name);
|
||||||
|
return 1;
|
||||||
|
}
|
||||||
|
const worker_name = maybe_name.?;
|
||||||
|
if (std.mem.eql(u8, worker_name, "-h")) {
|
||||||
|
try usage(stdout, exe_name);
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
var maybe_script_name = argIterator.next();
|
||||||
|
if (maybe_script_name == null) {
|
||||||
|
try usage(std.io.getStdErr().writer(), exe_name);
|
||||||
|
return 1;
|
||||||
|
}
|
||||||
|
const script = std.fs.cwd().readFileAlloc(allocator, maybe_script_name.?, std.math.maxInt(usize)) catch |err| {
|
||||||
|
try usage(std.io.getStdErr().writer(), exe_name);
|
||||||
|
return err;
|
||||||
|
};
|
||||||
|
|
||||||
|
pushWorker(allocator, &client, worker_name, script, ".", stdout, std.io.getStdErr().writer()) catch return 1;
|
||||||
|
try bw.flush(); // don't forget to flush!
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
fn usage(writer: anytype, this: []const u8) !void {
|
||||||
|
try writer.print("usage: {s} <worker name> <script file>\n", .{this});
|
||||||
|
}
|
||||||
|
const Wasm = struct {
|
||||||
|
allocator: std.mem.Allocator,
|
||||||
|
name: []const u8,
|
||||||
|
data: []const u8,
|
||||||
|
|
||||||
|
const Self = @This();
|
||||||
|
|
||||||
|
pub fn deinit(self: *Self) void {
|
||||||
|
self.allocator.free(self.name);
|
||||||
|
self.allocator.free(self.data);
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
pub fn pushWorker(
|
||||||
|
allocator: std.mem.Allocator,
|
||||||
|
client: *std.http.Client,
|
||||||
|
worker_name: []const u8,
|
||||||
|
wasm_dir: []const u8,
|
||||||
|
script: []const u8,
|
||||||
|
writer: anytype,
|
||||||
|
err_writer: anytype,
|
||||||
|
) !void {
|
||||||
|
var wasm = try loadWasm(allocator, script, wasm_dir);
|
||||||
|
defer wasm.deinit();
|
||||||
|
|
||||||
|
var accountid = std.os.getenv("CLOUDFLARE_ACCOUNT_ID");
|
||||||
|
const account_id_free = accountid == null;
|
||||||
|
if (accountid == null) accountid = try getAccountId(allocator, client);
|
||||||
|
defer if (account_id_free) allocator.free(accountid.?);
|
||||||
|
|
||||||
|
try writer.print("Using Cloudflare account: {s}\n", .{accountid.?});
|
||||||
|
|
||||||
|
// Determine if worker exists. This lets us know if we need to enable it later
|
||||||
|
const worker_exists = try workerExists(allocator, client, accountid.?, worker_name);
|
||||||
|
try writer.print(
|
||||||
|
"{s}\n",
|
||||||
|
.{if (worker_exists) "Worker exists, will not re-enable" else "Worker is new. Will enable after code update"},
|
||||||
|
);
|
||||||
|
|
||||||
|
var worker = Worker{
|
||||||
|
.account_id = accountid.?,
|
||||||
|
.name = worker_name,
|
||||||
|
.wasm = wasm,
|
||||||
|
.main_module = script,
|
||||||
|
};
|
||||||
|
putNewWorker(allocator, client, &worker) catch |err| {
|
||||||
|
if (worker.errors == null) return err;
|
||||||
|
try err_writer.print("{d} errors returned from CloudFlare:\n\n", .{worker.errors.?.len});
|
||||||
|
for (worker.errors.?) |cf_err| {
|
||||||
|
try err_writer.print("{s}\n", .{cf_err});
|
||||||
|
allocator.free(cf_err);
|
||||||
|
}
|
||||||
|
return error.CloudFlareErrorResponse;
|
||||||
|
};
|
||||||
|
const subdomain = try getSubdomain(allocator, client, accountid.?);
|
||||||
|
defer allocator.free(subdomain);
|
||||||
|
try writer.print("Worker available at: https://{s}.{s}.workers.dev/\n", .{ worker_name, subdomain });
|
||||||
|
if (!worker_exists)
|
||||||
|
try enableWorker(allocator, client, accountid.?, worker_name);
|
||||||
|
}
|
||||||
|
|
||||||
|
fn loadWasm(allocator: std.mem.Allocator, script: []const u8, wasm_dir: []const u8) !Wasm {
|
||||||
|
    // Looking for a string like this: import demoWasm from "demo.wasm"
    // JavaScript may or may not have ; characters. We're not doing
    // a full JS parsing here, so this may not be the most robust
    var inx: usize = 0;
    var name: ?[]const u8 = null;
    while (true) {
        inx = std.mem.indexOf(u8, script[inx..], "import ") orelse if (inx == 0) return error.NoImportFound else break;
        inx += "import ".len;
        // oh god, we're not doing this: https://262.ecma-international.org/5.1/#sec-7.5
        // advance to next token - we don't care what the name is
        while (inx < script.len and script[inx] != ' ') inx += 1;
        // continue past space(s)
        while (inx < script.len and script[inx] == ' ') inx += 1;
        // We expect "from " to be next
        if (!std.mem.startsWith(u8, script[inx..], "from ")) continue;
        inx += "from ".len;
        // continue past space(s)
        while (inx < script.len and script[inx] == ' ') inx += 1;
        // We now expect the name of our file...
        if (script[inx] != '"' and script[inx] != '\'') continue; // we're not where we think we are
        const quote = script[inx]; // there are two possibilities here
        // we don't need to advance inx any more, as we're on the name, and if
        // we loop, that's ok
        inx += 1; // move off the quote onto the name
        const end_quote_inx = std.mem.indexOfScalar(u8, script[inx..], quote);
        if (end_quote_inx == null) continue;
        const candidate_name = script[inx .. inx + end_quote_inx.?];
        if (std.mem.endsWith(u8, candidate_name, ".wasm")) {
            if (name != null) // we are importing two wasm files, and we are now lost
                return error.MultipleWasmImportsUnsupported;
            name = candidate_name;
        }
    }
    if (name == null) return error.NoWasmImportFound;

    const nm = try allocator.dupe(u8, name.?);
    errdefer allocator.free(nm);
    const path = try std.fs.path.join(allocator, &[_][]const u8{ wasm_dir, nm });
    defer allocator.free(path);
    const data = try std.fs.cwd().readFileAlloc(allocator, path, std.math.maxInt(usize));
    return Wasm{
        .allocator = allocator,
        .name = nm,
        .data = data,
    };
}
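The scan above tolerates either quote style and rejects scripts that import more than one `.wasm` file. The same acceptance rules can be sketched with a regex (a hypothetical JavaScript illustration, not part of the repository):

```javascript
// Hypothetical illustration of the scan performed by the Zig code above:
// find `import <name> from "<file>"`, keeping only `.wasm` targets and
// rejecting scripts with more than one wasm import.
function findWasmImport(script) {
  const re = /import\s+\S+\s+from\s+(["'])([^"']+)\1/g;
  let wasmName = null;
  for (const m of script.matchAll(re)) {
    if (m[2].endsWith(".wasm")) {
      if (wasmName !== null) throw new Error("MultipleWasmImportsUnsupported");
      wasmName = m[2];
    }
  }
  if (wasmName === null) throw new Error("NoWasmImportFound");
  return wasmName;
}
```

Unlike the Zig version, this does not walk tokens by hand, but it accepts the same shape of input.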
fn getAccountId(allocator: std.mem.Allocator, client: *std.http.Client) ![:0]const u8 {
    const url = cf_api_base ++ "/accounts/";
    var headers = std.http.Headers.init(allocator);
    defer headers.deinit();
    try addAuthHeaders(&headers);
    var req = try client.request(.GET, try std.Uri.parse(url), headers, .{});
    defer req.deinit();
    try req.start();
    try req.wait();
    if (req.response.status != .ok) {
        std.debug.print("Status is {}\n", .{req.response.status});
        return error.RequestFailed;
    }
    var json_reader = std.json.reader(allocator, req.reader());
    defer json_reader.deinit();
    var body = try std.json.parseFromTokenSource(std.json.Value, allocator, &json_reader, .{});
    defer body.deinit();
    const arr = body.value.object.get("result").?.array.items;
    if (arr.len == 0) return error.NoAccounts;
    if (arr.len > 1) return error.TooManyAccounts;
    return try allocator.dupeZ(u8, arr[0].object.get("id").?.string);
}
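The JSON handling in getAccountId can be illustrated with a small sketch (hypothetical JavaScript mirroring the Zig logic): the Cloudflare `/accounts/` response carries the accounts under `result`, and exactly one account is expected.

```javascript
// Hypothetical sketch mirroring getAccountId's JSON handling: the
// /accounts/ response looks like { "result": [ { "id": "..." }, ... ] }
// and exactly one account must be present.
function extractAccountId(body) {
  const accounts = body.result;
  if (accounts.length === 0) throw new Error("NoAccounts");
  if (accounts.length > 1) throw new Error("TooManyAccounts");
  return accounts[0].id;
}
```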
fn enableWorker(allocator: std.mem.Allocator, client: *std.http.Client, account_id: []const u8, name: []const u8) !void {
    const enable_script = cf_api_base ++ "/accounts/{s}/workers/scripts/{s}/subdomain";
    const url = try std.fmt.allocPrint(allocator, enable_script, .{ account_id, name });
    defer allocator.free(url);
    var headers = std.http.Headers.init(allocator);
    defer headers.deinit();
    try addAuthHeaders(&headers);
    try headers.append("Content-Type", "application/json; charset=UTF-8");
    var req = try client.request(.POST, try std.Uri.parse(url), headers, .{});
    defer req.deinit();

    const request_payload =
        \\{ "enabled": true }
    ;
    req.transfer_encoding = .{ .content_length = @as(u64, request_payload.len) };
    try req.start();
    try req.writeAll(request_payload);
    try req.finish();
    try req.wait();
    if (req.response.status != .ok) {
        std.debug.print("Status is {}\n", .{req.response.status});
        return error.RequestFailed;
    }
}

/// Gets the subdomain for a worker. Caller owns memory
fn getSubdomain(allocator: std.mem.Allocator, client: *std.http.Client, account_id: []const u8) ![]const u8 {
    const get_subdomain = cf_api_base ++ "/accounts/{s}/workers/subdomain";
    const url = try std.fmt.allocPrint(allocator, get_subdomain, .{account_id});
    defer allocator.free(url);

    var headers = std.http.Headers.init(allocator);
    defer headers.deinit();
    try addAuthHeaders(&headers);
    var req = try client.request(.GET, try std.Uri.parse(url), headers, .{});
    defer req.deinit();
    try req.start();
    try req.wait();
    if (req.response.status != .ok) return error.RequestNotOk;
    var json_reader = std.json.reader(allocator, req.reader());
    defer json_reader.deinit();
    var body = try std.json.parseFromTokenSource(std.json.Value, allocator, &json_reader, .{});
    defer body.deinit();
    return try allocator.dupe(u8, body.value.object.get("result").?.object.get("subdomain").?.string);
}

const Worker = struct {
    account_id: []const u8,
    name: []const u8,
    main_module: []const u8,
    wasm: Wasm,
    errors: ?[][]const u8 = null,
};

fn putNewWorker(allocator: std.mem.Allocator, client: *std.http.Client, worker: *Worker) !void {
    const put_script = cf_api_base ++ "/accounts/{s}/workers/scripts/{s}?include_subdomain_availability=true&excludeScript=true";
    const url = try std.fmt.allocPrint(allocator, put_script, .{ worker.account_id, worker.name });
    defer allocator.free(url);
    const memfs = @embedFile("dist/memfs.wasm");
    const outer_script_shell = @embedFile("script_harness.js");
    const script = try std.fmt.allocPrint(allocator, "{s}{s}", .{ outer_script_shell, worker.main_module });
    defer allocator.free(script);
    const deploy_request =
        "------formdata-undici-032998177938\r\n" ++
        "Content-Disposition: form-data; name=\"metadata\"\r\n\r\n" ++
        "{{\"main_module\":\"index.js\",\"bindings\":[],\"compatibility_date\":\"2023-10-02\",\"compatibility_flags\":[]}}\r\n" ++
        "------formdata-undici-032998177938\r\n" ++
        "Content-Disposition: form-data; name=\"index.js\"; filename=\"index.js\"\r\n" ++
        "Content-Type: application/javascript+module\r\n" ++
        "\r\n" ++
        "{[script]s}\r\n" ++
        "------formdata-undici-032998177938\r\n" ++
        "Content-Disposition: form-data; name=\"./{[wasm_name]s}\"; filename=\"./{[wasm_name]s}\"\r\n" ++
        "Content-Type: application/wasm\r\n" ++
        "\r\n" ++
        "{[wasm]s}\r\n" ++
        "------formdata-undici-032998177938\r\n" ++
        "Content-Disposition: form-data; name=\"./c5f1acc97ad09df861eff9ef567c2186d4e38de3-memfs.wasm\"; filename=\"./c5f1acc97ad09df861eff9ef567c2186d4e38de3-memfs.wasm\"\r\n" ++
        "Content-Type: application/wasm\r\n" ++
        "\r\n" ++
        "{[memfs]s}\r\n" ++
        "------formdata-undici-032998177938--";

    var headers = std.http.Headers.init(allocator);
    defer headers.deinit();
    try addAuthHeaders(&headers);
    // TODO: fix this
    try headers.append("Content-Type", "multipart/form-data; boundary=----formdata-undici-032998177938");
    const request_payload = try std.fmt.allocPrint(allocator, deploy_request, .{
        .script = script,
        .wasm_name = worker.wasm.name,
        .wasm = worker.wasm.data,
        .memfs = memfs,
    });
    defer allocator.free(request_payload);
    // Get content length. For some reason it's forcing a chunked transfer type without this.
    // That's not entirely a bad idea, but for now I want to match what I see
    // coming through wrangler
    const cl = try std.fmt.allocPrint(allocator, "{d}", .{request_payload.len});
    defer allocator.free(cl);
    try headers.append("Content-Length", cl);
    var req = try client.request(.PUT, try std.Uri.parse(url), headers, .{});
    defer req.deinit();

    req.transfer_encoding = .{ .content_length = @as(u64, request_payload.len) };
    try req.start();

    // Workaround for https://github.com/ziglang/zig/issues/15626
    const max_bytes: usize = 1 << 14;
    var inx: usize = 0;
    while (request_payload.len > inx) {
        try req.writeAll(request_payload[inx..@min(request_payload.len, inx + max_bytes)]);
        inx += max_bytes;
    }
    try req.finish();
    try req.wait();
    // std.debug.print("Status is {}\n", .{req.response.status});
    // std.debug.print("Url is {s}\n", .{url});
    if (req.response.status != .ok) {
        std.debug.print("Status is {}\n", .{req.response.status});
        if (req.response.status == .bad_request)
            worker.errors = getErrors(allocator, &req) catch null;
        return error.RequestFailed;
    }
}
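The hand-built PUT body above follows standard multipart/form-data framing: each part is a boundary line, part headers, a blank line, and the content, with the final boundary terminated by `--`. A minimal sketch of that framing (hypothetical JavaScript, not part of the repository):

```javascript
// Illustrative multipart/form-data framing, matching the shape of the
// deploy request above: boundary + headers + blank line + body per part,
// and a trailing boundary terminated with "--".
function buildMultipart(boundary, parts) {
  let body = "";
  for (const p of parts) {
    body += `--${boundary}\r\n`;
    body += `Content-Disposition: form-data; name="${p.name}"`;
    if (p.filename) body += `; filename="${p.filename}"`;
    body += "\r\n";
    if (p.type) body += `Content-Type: ${p.type}\r\n`;
    body += `\r\n${p.content}\r\n`;
  }
  return body + `--${boundary}--`;
}
```

Note the leading `--` before each boundary, which is why the `----formdata-undici-…` boundary from the Content-Type header appears with six dashes in the body.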
fn getErrors(allocator: std.mem.Allocator, req: *std.http.Client.Request) !?[][]const u8 {
    var json_reader = std.json.reader(allocator, req.reader());
    defer json_reader.deinit();
    var body = try std.json.parseFromTokenSource(std.json.Value, allocator, &json_reader, .{});
    defer body.deinit();
    const arr = body.value.object.get("errors").?.array.items;
    if (arr.len == 0) return null;
    var error_list = try std.ArrayList([]const u8).initCapacity(allocator, arr.len);
    defer error_list.deinit();
    for (arr) |item| {
        // dupe each message: the parsed JSON (and the strings it owns) is
        // freed when body.deinit() runs, so the slices must outlive it
        error_list.appendAssumeCapacity(try allocator.dupe(u8, item.object.get("message").?.string));
    }
    return try error_list.toOwnedSlice();
}
fn workerExists(allocator: std.mem.Allocator, client: *std.http.Client, account_id: []const u8, name: []const u8) !bool {
    const existence_check = cf_api_base ++ "/accounts/{s}/workers/services/{s}";
    const url = try std.fmt.allocPrint(allocator, existence_check, .{ account_id, name });
    defer allocator.free(url);
    var headers = std.http.Headers.init(allocator);
    defer headers.deinit();
    try addAuthHeaders(&headers);
    var req = try client.request(.GET, try std.Uri.parse(url), headers, .{});
    defer req.deinit();
    try req.start();
    try req.wait();
    // std.debug.print("Status is {}\n", .{req.response.status});
    // std.debug.print("Url is {s}\n", .{url});
    return req.response.status == .ok;
}
threadlocal var auth_buf: [1024]u8 = undefined;

fn addAuthHeaders(headers: *std.http.Headers) !void {
    if (!initialized) {
        x_auth_email = std.os.getenv("CLOUDFLARE_EMAIL");
        x_auth_key = std.os.getenv("CLOUDFLARE_API_KEY");
        x_auth_token = std.os.getenv("CLOUDFLARE_API_TOKEN");
        initialized = true;
    }
    if (x_auth_token) |tok| {
        var auth = try std.fmt.bufPrint(auth_buf[0..], "Bearer {s}", .{tok});
        try headers.append("Authorization", auth);
        return;
    }
    if (x_auth_email) |email| {
        if (x_auth_key == null)
            return error.MissingCloudflareApiKeyEnvironmentVariable;
        try headers.append("X-Auth-Email", email);
        try headers.append("X-Auth-Key", x_auth_key.?);
        return; // without this, valid email/key auth would fall through to the error below
    }
    return error.NoCloudflareAuthenticationEnvironmentVariablesSet;
}
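addAuthHeaders encodes a precedence: an API token wins outright; otherwise email and key must be supplied together; otherwise it is an error. A sketch of that rule (hypothetical JavaScript, not part of the repository):

```javascript
// Hypothetical sketch of addAuthHeaders' precedence: CLOUDFLARE_API_TOKEN
// produces a Bearer header; otherwise CLOUDFLARE_EMAIL requires
// CLOUDFLARE_API_KEY as well; with neither set, it is an error.
function cloudflareAuthHeaders(env) {
  if (env.CLOUDFLARE_API_TOKEN)
    return { Authorization: `Bearer ${env.CLOUDFLARE_API_TOKEN}` };
  if (env.CLOUDFLARE_EMAIL) {
    if (!env.CLOUDFLARE_API_KEY)
      throw new Error("MissingCloudflareApiKeyEnvironmentVariable");
    return { "X-Auth-Email": env.CLOUDFLARE_EMAIL, "X-Auth-Key": env.CLOUDFLARE_API_KEY };
  }
  throw new Error("NoCloudflareAuthenticationEnvironmentVariablesSet");
}
```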
@@ -31,7 +31,7 @@ pub fn run(allocator: ?std.mem.Allocator, event_handler: interface.HandlerFn) !u
     defer response.deinit();
     var headers = try findHeaders(aa);
     defer headers.deinit();
-    response.request.headers = headers.http_headers;
+    response.request.headers = headers.http_headers.*;
     response.request.headers_owned = false;
     response.request.target = try findTarget(aa);
     response.request.method = try findMethod(aa);
@@ -107,7 +107,7 @@ fn findTarget(allocator: std.mem.Allocator) ![]const u8 {
             return error.CommandLineError;
         if (is_target_option)
             return arg;
-        return (try std.Uri.parse(arg)).path.raw;
+        return (try std.Uri.parse(arg)).path;
     }
     if (std.mem.startsWith(u8, arg, "-" ++ target_option.short) or
         std.mem.startsWith(u8, arg, "--" ++ target_option.long))
@@ -126,7 +126,7 @@ fn findTarget(allocator: std.mem.Allocator) ![]const u8 {
             var split = std.mem.splitSequence(u8, arg, "=");
             _ = split.next();
             const rest = split.rest();
-            if (split.next()) |_| return (try std.Uri.parse(rest)).path.raw; // found it
+            if (split.next()) |_| return (try std.Uri.parse(rest)).path; // found it
             is_url_option = true;
         }
     }
@@ -134,13 +134,13 @@ fn findTarget(allocator: std.mem.Allocator) ![]const u8 {
 }

 pub const Headers = struct {
-    http_headers: []std.http.Header,
+    http_headers: *std.http.Headers,
     owned: bool,
     allocator: std.mem.Allocator,

     const Self = @This();

-    pub fn init(allocator: std.mem.Allocator, headers: []std.http.Header, owned: bool) Self {
+    pub fn init(allocator: std.mem.Allocator, headers: *std.http.Headers, owned: bool) Self {
         return .{
             .http_headers = headers,
             .owned = owned,
@@ -150,7 +150,8 @@ pub const Headers = struct {

     pub fn deinit(self: *Self) void {
         if (self.owned) {
-            self.allocator.free(self.http_headers);
+            self.http_headers.deinit();
+            self.allocator.destroy(self.http_headers);
             self.http_headers = undefined;
         }
     }
@@ -159,8 +160,13 @@ pub const Headers = struct {
 // Get headers from request. Headers will be gathered from the command line
 // and include all environment variables
 pub fn findHeaders(allocator: std.mem.Allocator) !Headers {
-    var headers = std.ArrayList(std.http.Header).init(allocator);
-    defer headers.deinit();
+    var headers = try allocator.create(std.http.Headers);
+    errdefer allocator.destroy(headers);
+    headers.allocator = allocator;
+    headers.list = .{};
+    headers.index = .{};
+    headers.owned = true;
+    errdefer headers.deinit();

     // without context, we have environment variables
     // possibly event data (API Gateway does this if so configured),
@@ -179,7 +185,7 @@ pub fn findHeaders(allocator: std.mem.Allocator) !Headers {
             is_header_option = false;
             var split = std.mem.splitSequence(u8, arg, "=");
             const name = split.next().?;
-            try headers.append(.{ .name = name, .value = split.rest() });
+            try headers.append(name, split.rest());
         }
         if (std.mem.startsWith(u8, arg, "-" ++ header_option.short) or
             std.mem.startsWith(u8, arg, "--" ++ header_option.long))
@@ -190,30 +196,21 @@ pub fn findHeaders(allocator: std.mem.Allocator) !Headers {
             is_header_option = true;
         }
     }
-    if (is_test) return Headers.init(allocator, try headers.toOwnedSlice(), true);
+    if (is_test) return Headers.init(allocator, headers, true);

     // not found on command line. Let's check environment
-    // TODO: Get this under test - fake an environment map with deterministic values
     var map = try std.process.getEnvMap(allocator);
     defer map.deinit();
     var it = map.iterator();
     while (it.next()) |kvp| {
         // Do not allow environment variables to interfere with command line
-        var found = false;
-        const key = kvp.key_ptr.*;
-        for (headers.items) |h| {
-            if (std.ascii.eqlIgnoreCase(h.name, key)) {
-                found = true;
-                break;
-            }
-        }
-        if (!found)
-            try headers.append(.{
-                .name = try allocator.dupe(u8, key),
-                .value = try allocator.dupe(u8, kvp.value_ptr.*),
-            });
+        if (headers.getFirstValue(kvp.key_ptr.*) == null)
+            try headers.append(
+                kvp.key_ptr.*,
+                kvp.value_ptr.*,
+            );
     }
-    return Headers.init(allocator, try headers.toOwnedSlice(), true);
+    return Headers.init(allocator, headers, true);
 }

 test {
@@ -236,13 +233,13 @@ test "can get headers" {
     try test_args.append(allocator, "X-Foo=Bar");
     var headers = try findHeaders(allocator);
     defer headers.deinit();
-    try std.testing.expectEqual(@as(usize, 1), headers.http_headers.len);
+    try std.testing.expectEqual(@as(usize, 1), headers.http_headers.list.items.len);
 }

 fn testHandler(allocator: std.mem.Allocator, event_data: []const u8, context: interface.Context) ![]const u8 {
-    context.headers = &.{.{ .name = "X-custom-foo", .value = "bar" }};
+    try context.headers.append("X-custom-foo", "bar");
     try context.writeAll(event_data);
-    return std.fmt.allocPrint(allocator, "{d}", .{context.request.headers.len});
+    return std.fmt.allocPrint(allocator, "{d}", .{context.request.headers.list.items.len});
 }

 var test_args: std.SegmentedList([]const u8, 8) = undefined;
@@ -252,7 +249,7 @@ var test_output: std.ArrayList(u8) = undefined;
 test "handle_request" {
     var arena = std.heap.ArenaAllocator.init(std.testing.allocator);
     defer arena.deinit();
-    const aa = arena.allocator();
+    var aa = arena.allocator();
     test_args = .{};
     defer test_args.deinit(aa);
     try test_args.append(aa, "mainexe");
BIN
src/dist/memfs.wasm
vendored
Executable file
Binary file not shown.
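The findHeaders hunk above replaces a manual case-insensitive scan with `headers.getFirstValue`, but the merge rule is the same: environment variables are added only when no command-line header with that name already exists. A minimal sketch of that precedence (hypothetical JavaScript, not part of the repository):

```javascript
// Minimal sketch (hypothetical) of findHeaders' merge rule: environment
// variables are appended only when no command-line header with the same
// (case-insensitive) name is already present.
function mergeHeaders(cliHeaders, envVars) {
  const result = [...cliHeaders];
  for (const [name, value] of Object.entries(envVars)) {
    const exists = result.some((h) => h.name.toLowerCase() === name.toLowerCase());
    if (!exists) result.push({ name, value });
  }
  return result;
}
```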
@@ -27,7 +27,7 @@ const Application = if (@import("builtin").is_test) @This() else @import("flexil
 // }
 //
 comptime {
-    @export(interface.zigInit, .{ .name = "zigInit", .linkage = .strong });
+    @export(interface.zigInit, .{ .name = "zigInit", .linkage = .Strong });
 }

 /// handle_request will be called on a single request, but due to the preservation
@@ -73,17 +73,17 @@ fn handleRequest(allocator: std.mem.Allocator, response: *interface.ZigResponse)
     ul_response.request.headers = response.request.headers;
     ul_response.request.method = std.meta.stringToEnum(std.http.Method, response.request.method) orelse std.http.Method.GET;
     const builtin = @import("builtin");
-    const supports_getrusage = builtin.os.tag != .windows and @hasDecl(std.posix.system, "rusage"); // Is Windows it?
-    var rss: if (supports_getrusage) std.posix.rusage else void = undefined;
+    const supports_getrusage = builtin.os.tag != .windows and @hasDecl(std.os.system, "rusage"); // Is Windows it?
+    var rss: if (supports_getrusage) std.os.rusage else void = undefined;
     if (supports_getrusage and builtin.mode == .Debug)
-        rss = std.posix.getrusage(std.posix.rusage.SELF);
+        rss = std.os.getrusage(std.os.rusage.SELF);
     const response_content = try handler.?(
         allocator,
         response.request.content,
         &ul_response,
     );
     if (supports_getrusage and builtin.mode == .Debug) { // and debug mode) {
-        const rusage = std.posix.getrusage(std.posix.rusage.SELF);
+        const rusage = std.os.getrusage(std.os.rusage.SELF);
         log.debug(
             "Request complete, max RSS of process: {d}M. Incremental: {d}K, User: {d}μs, System: {d}μs",
             .{
@@ -96,8 +96,10 @@ fn handleRequest(allocator: std.mem.Allocator, response: *interface.ZigResponse)
             },
         );
     }
-    response.headers = try allocator.dupe(std.http.Header, ul_response.headers);
-    // response.headers = ul_response.headers;
+    // Copy any headers
+    for (ul_response.headers.list.items) |entry| {
+        try response.headers.append(entry.name, entry.value);
+    }
     // Anything manually written goes first
     try response_writer.writeAll(ul_response.body.items);
     // Now we right the official body (response from handler)
@@ -128,10 +130,10 @@ pub fn main() !u8 {
     register(testHandler);
     return 0;
 }
-fn testHandler(allocator: std.mem.Allocator, event_data: []const u8, context: universal_lambda_interface.Context) ![]const u8 {
-    context.headers = &.{.{ .name = "X-custom-foo", .value = "bar" }};
+fn testHandler(allocator: std.mem.Allocator, event_data: []const u8, context: @import("universal_lambda_interface").Context) ![]const u8 {
+    try context.headers.append("X-custom-foo", "bar");
     try context.writeAll(event_data);
-    return std.fmt.allocPrint(allocator, "{d}", .{context.request.headers.len});
+    return std.fmt.allocPrint(allocator, "{d}", .{context.request.headers.list.items.len});
 }
 // Need to figure out how tests would work
 test "handle_request" {
@@ -139,7 +141,7 @@ test "handle_request" {
     defer arena.deinit();
     var aa = arena.allocator();
     interface.zigInit(&aa);
-    const headers: []interface.Header = @constCast(&[_]interface.Header{.{
+    var headers: []interface.Header = @constCast(&[_]interface.Header{.{
         .name_ptr = @ptrCast(@constCast("GET".ptr)),
         .name_len = 3,
         .value_ptr = @ptrCast(@constCast("GET".ptr)),
@@ -3,43 +3,36 @@ const builtin = @import("builtin");

 /// flexilib will create a dynamic library for use with flexilib.
 /// Flexilib will need to get the exe compiled as a library
-pub fn configureBuild(b: *std.Build, cs: *std.Build.Step.Compile, universal_lambda_zig_dep: *std.Build.Dependency) !void {
+pub fn configureBuild(b: *std.build.Builder, cs: *std.Build.Step.Compile, build_root_src: []const u8) !void {
     const package_step = b.step("flexilib", "Create a flexilib dynamic library");

     const lib = b.addSharedLibrary(.{
         .name = cs.name,
-        .root_source_file = .{
-            .path = b.pathJoin(&[_][]const u8{
-                // root path comes from our dependency, which should be us,
-                // and if it's not, we'll just blow up here but it's not our fault ;-)
-                universal_lambda_zig_dep.builder.build_root.path.?,
-                "src",
-                "flexilib.zig",
-            }),
-        },
-        .target = cs.root_module.resolved_target.?,
-        .optimize = cs.root_module.optimize.?,
+        .root_source_file = .{ .path = b.pathJoin(&[_][]const u8{ build_root_src, "flexilib.zig" }) },
+        .target = cs.target,
+        .optimize = cs.optimize,
     });

+    // We will not free this, as the rest of the build system will use it.
+    // This should be ok because our allocator is, I believe, an arena
+    var module_dependencies = try b.allocator.alloc(std.Build.ModuleDependency, cs.modules.count());
+    var iterator = cs.modules.iterator();
+
+    var i: usize = 0;
+    while (iterator.next()) |entry| : (i += 1) {
+        module_dependencies[i] = .{
+            .name = entry.key_ptr.*,
+            .module = entry.value_ptr.*,
+        };
+        lib.addModule(entry.key_ptr.*, entry.value_ptr.*);
+    }
+
     // Add the downstream root source file back into the build as a module
     // that our new root source file can import
-    const flexilib_handler = b.createModule(.{
+    lib.addAnonymousModule("flexilib_handler", .{
         // Source file can be anywhere on disk, does not need to be a subdirectory
-        .root_source_file = cs.root_module.root_source_file,
+        .source_file = cs.root_src.?,
+        .dependencies = module_dependencies,
     });
-    lib.root_module.addImport("flexilib_handler", flexilib_handler);
-
-    // Now we need to get our imports added. Rather than reinvent the wheel, we'll
-    // utilize our addImports function, but tell it to work on our library
-    @import("universal_lambda_build.zig").addImports(b, lib, universal_lambda_zig_dep);
-
-    // flexilib_handler module needs imports to work...we are not in control
-    // of this file, so it could expect anything that's already imported. So
-    // we'll walk through the import table and simply add all the imports back in
-    var iterator = lib.root_module.import_table.iterator();
-    while (iterator.next()) |entry|
-        flexilib_handler.addImport(entry.key_ptr.*, entry.value_ptr.*);

     package_step.dependOn(&b.addInstallArtifact(lib, .{}).step);
 }
31
src/index.js
Normal file

@@ -0,0 +1,31 @@
+import customWasm from "custom.wasm";
+export default {
+  async fetch(request, _env2, ctx) {
+    const stdout = new TransformStream();
+    console.log(request);
+    console.log(_env2);
+    console.log(ctx);
+    let env = {};
+    request.headers.forEach((value, key) => {
+      env[key] = value;
+    });
+    const wasi = new WASI({
+      args: [
+        "./custom.wasm",
+        // In a CLI, the first arg is the name of the exe
+        "--url=" + request.url,
+        // this contains the target but is the full url, so we will use a different arg for this
+        "--method=" + request.method,
+        '-request="' + JSON.stringify(request) + '"'
+      ],
+      env,
+      stdin: request.body,
+      stdout: stdout.writable
+    });
+    const instance = new WebAssembly.Instance(customWasm, {
+      wasi_snapshot_preview1: wasi.wasiImport
+    });
+    ctx.waitUntil(wasi.start(instance));
+    return new Response(stdout.readable);
+  }
+};
@@ -4,7 +4,7 @@ pub const HandlerFn = *const fn (std.mem.Allocator, []const u8, Context) anyerro

 pub const Response = struct {
     allocator: std.mem.Allocator,
-    headers: []const std.http.Header,
+    headers: std.http.Headers,
     headers_owned: bool = true,
     status: std.http.Status = .ok,
     reason: ?[]const u8 = null,
@@ -15,7 +15,7 @@ pub const Response = struct {
     /// that here
     request: struct {
         target: []const u8 = "/",
-        headers: []const std.http.Header,
+        headers: std.http.Headers,
         headers_owned: bool = true,
         method: std.http.Method = .GET,
     },
@@ -37,9 +37,9 @@ pub const Response = struct {
     pub fn init(allocator: std.mem.Allocator) Response {
         return .{
             .allocator = allocator,
-            .headers = &.{},
+            .headers = .{ .allocator = allocator },
             .request = .{
-                .headers = &.{},
+                .headers = .{ .allocator = allocator },
             },
             .body = std.ArrayList(u8).init(allocator),
         };
@@ -61,8 +61,8 @@ pub const Response = struct {
     }
     pub fn deinit(res: *Response) void {
         res.body.deinit();
-        if (res.headers_owned) res.allocator.free(res.headers);
-        if (res.request.headers_owned) res.allocator.free(res.request.headers);
+        if (res.headers_owned) res.headers.deinit();
+        if (res.request.headers_owned) res.request.headers.deinit();
     }
 };
516
src/lambda.zig
Normal file

@@ -0,0 +1,516 @@
+const std = @import("std");
+const builtin = @import("builtin");
+
+const universal_lambda_interface = @import("universal_lambda_interface");
+const HandlerFn = universal_lambda_interface.HandlerFn;
+const Context = universal_lambda_interface.Context;
+const UniversalLambdaResponse = universal_lambda_interface.Response;
+
+const log = std.log.scoped(.lambda);
+
+var empty_headers: std.http.Headers = undefined;
+var client: ?std.http.Client = null;
+
+const prefix = "http://";
+const postfix = "/2018-06-01/runtime/invocation";
+
+pub fn deinit() void {
+    if (client) |*c| c.deinit();
+    client = null;
+}
+/// Starts the lambda framework. Handler will be called when an event is processing
+/// If an allocator is not provided, an appropriate allocator will be selected and used
+/// This function is intended to loop infinitely. If not used in this manner,
+/// make sure to call the deinit() function
+pub fn run(allocator: ?std.mem.Allocator, event_handler: HandlerFn) !u8 { // TODO: remove inferred error set?
+    const lambda_runtime_uri = std.os.getenv("AWS_LAMBDA_RUNTIME_API") orelse test_lambda_runtime_uri.?;
+    // TODO: If this is null, go into single use command line mode
+
+    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
+    defer _ = gpa.deinit();
+    const alloc = allocator orelse gpa.allocator();
+
+    const url = try std.fmt.allocPrint(alloc, "{s}{s}{s}/next", .{ prefix, lambda_runtime_uri, postfix });
+    defer alloc.free(url);
+    const uri = try std.Uri.parse(url);
+
+    // TODO: Simply adding this line without even using the client is enough
+    // to cause seg faults!?
+    // client = client orelse .{ .allocator = alloc };
+    // so we'll do this instead
+    if (client != null) return error.MustDeInitBeforeCallingRunAgain;
+    client = .{ .allocator = alloc };
+    empty_headers = std.http.Headers.init(alloc);
+    defer empty_headers.deinit();
+    log.info("tid {d} (lambda): Bootstrap initializing with event url: {s}", .{ std.Thread.getCurrentId(), url });
+
+    while (lambda_remaining_requests == null or lambda_remaining_requests.? > 0) {
+        if (lambda_remaining_requests) |*r| {
+            // we're under test
|
log.debug("lambda remaining requests: {d}", .{r.*});
|
||||||
|
r.* -= 1;
|
||||||
|
}
|
||||||
|
var req_alloc = std.heap.ArenaAllocator.init(alloc);
|
||||||
|
defer req_alloc.deinit();
|
||||||
|
const req_allocator = req_alloc.allocator();
|
||||||
|
|
||||||
|
// Fundamentally we're doing 3 things:
|
||||||
|
// 1. Get the next event from Lambda (event data and request id)
|
||||||
|
// 2. Call our handler to get the response
|
||||||
|
// 3. Post the response back to Lambda
|
||||||
|
var ev = getEvent(req_allocator, uri) catch |err| {
|
||||||
|
// Well, at this point all we can do is shout at the void
|
||||||
|
log.err("Error fetching event details: {}", .{err});
|
||||||
|
std.os.exit(1);
|
||||||
|
// continue;
|
||||||
|
};
|
||||||
|
if (ev == null) continue; // this gets logged in getEvent, and without
|
||||||
|
// a request id, we still can't do anything
|
||||||
|
// reasonable to report back
|
||||||
|
const event = ev.?;
|
||||||
|
defer ev.?.deinit();
|
||||||
|
// Lambda does not have context, just environment variables. API Gateway
|
||||||
|
// might be configured to pass in lots of context, but this comes through
|
||||||
|
// event data, not context. In this case, we lose:
|
||||||
|
//
|
||||||
|
// request headers
|
||||||
|
// request method
|
||||||
|
// request target
|
||||||
|
var response = UniversalLambdaResponse.init(req_allocator);
|
||||||
|
defer response.deinit();
|
||||||
|
const body_writer = std.io.getStdOut();
|
||||||
|
const event_response = event_handler(req_allocator, event.event_data, &response) catch |err| {
|
||||||
|
body_writer.writeAll(response.body.items) catch unreachable;
|
||||||
|
event.reportError(@errorReturnTrace(), err, lambda_runtime_uri) catch unreachable;
|
||||||
|
continue;
|
||||||
|
};
|
||||||
|
// No error during handler. Write anything sent to body to stdout instead
|
||||||
|
// I'm not totally sure this is the right behavior as it is a little inconsistent
|
||||||
|
// (flexilib and console both write to the same io stream as the main output)
|
||||||
|
body_writer.writeAll(response.body.items) catch unreachable;
|
||||||
|
event.postResponse(lambda_runtime_uri, event_response) catch |err| {
|
||||||
|
event.reportError(@errorReturnTrace(), err, lambda_runtime_uri) catch unreachable;
|
||||||
|
continue;
|
||||||
|
};
|
||||||
|
}
|
||||||
|
return 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
const Event = struct {
|
||||||
|
allocator: std.mem.Allocator,
|
||||||
|
event_data: []u8,
|
||||||
|
request_id: []u8,
|
||||||
|
|
||||||
|
const Self = @This();
|
||||||
|
|
||||||
|
pub fn init(allocator: std.mem.Allocator, event_data: []u8, request_id: []u8) Self {
|
||||||
|
return .{
|
||||||
|
.allocator = allocator,
|
||||||
|
.event_data = event_data,
|
||||||
|
.request_id = request_id,
|
||||||
|
};
|
||||||
|
}
|
||||||
|
pub fn deinit(self: *Self) void {
|
||||||
|
self.allocator.free(self.event_data);
|
||||||
|
self.allocator.free(self.request_id);
|
||||||
|
}
|
||||||
|
fn reportError(
|
||||||
|
self: Self,
|
||||||
|
return_trace: ?*std.builtin.StackTrace,
|
||||||
|
err: anytype,
|
||||||
|
lambda_runtime_uri: []const u8,
|
||||||
|
) !void {
|
||||||
|
// If we fail in this function, we're pretty hosed up
|
||||||
|
if (return_trace) |rt|
|
||||||
|
log.err("Caught error: {}. Return Trace: {any}", .{ err, rt })
|
||||||
|
else
|
||||||
|
log.err("Caught error: {}. No return trace available", .{err});
|
||||||
|
const err_url = try std.fmt.allocPrint(
|
||||||
|
self.allocator,
|
||||||
|
"{s}{s}{s}/{s}/error",
|
||||||
|
.{ prefix, lambda_runtime_uri, postfix, self.request_id },
|
||||||
|
);
|
||||||
|
defer self.allocator.free(err_url);
|
||||||
|
const err_uri = try std.Uri.parse(err_url);
|
||||||
|
const content =
|
||||||
|
\\{{
|
||||||
|
\\ "errorMessage": "{s}",
|
||||||
|
\\ "errorType": "HandlerReturnedError",
|
||||||
|
\\ "stackTrace": [ "{any}" ]
|
||||||
|
\\}}
|
||||||
|
;
|
||||||
|
const content_fmt = if (return_trace) |rt|
|
||||||
|
try std.fmt.allocPrint(self.allocator, content, .{ @errorName(err), rt })
|
||||||
|
else
|
||||||
|
try std.fmt.allocPrint(self.allocator, content, .{ @errorName(err), "no return trace available" });
|
||||||
|
defer self.allocator.free(content_fmt);
|
||||||
|
log.err("Posting to {s}: Data {s}", .{ err_url, content_fmt });
|
||||||
|
|
||||||
|
var err_headers = std.http.Headers.init(self.allocator);
|
||||||
|
defer err_headers.deinit();
|
||||||
|
err_headers.append(
|
||||||
|
"Lambda-Runtime-Function-Error-Type",
|
||||||
|
"HandlerReturned",
|
||||||
|
) catch |append_err| {
|
||||||
|
log.err("Error appending error header to post response for request id {s}: {}", .{ self.request_id, append_err });
|
||||||
|
std.os.exit(1);
|
||||||
|
};
|
||||||
|
// TODO: There is something up with using a shared client in this way
|
||||||
|
// so we're taking a perf hit in favor of stability. In a practical
|
||||||
|
// sense, without making HTTPS connections (lambda environment is
|
||||||
|
// non-ssl), this shouldn't be a big issue
|
||||||
|
var cl = std.http.Client{ .allocator = self.allocator };
|
||||||
|
defer cl.deinit();
|
||||||
|
var req = try cl.request(.POST, err_uri, empty_headers, .{});
|
||||||
|
// var req = try client.?.request(.POST, err_uri, empty_headers, .{});
|
||||||
|
// defer req.deinit();
|
||||||
|
req.transfer_encoding = .{ .content_length = content_fmt.len };
|
||||||
|
req.start() catch |post_err| { // Well, at this point all we can do is shout at the void
|
||||||
|
log.err("Error posting response (start) for request id {s}: {}", .{ self.request_id, post_err });
|
||||||
|
std.os.exit(1);
|
||||||
|
};
|
||||||
|
try req.writeAll(content_fmt);
|
||||||
|
try req.finish();
|
||||||
|
req.wait() catch |post_err| { // Well, at this point all we can do is shout at the void
|
||||||
|
log.err("Error posting response (wait) for request id {s}: {}", .{ self.request_id, post_err });
|
||||||
|
std.os.exit(1);
|
||||||
|
};
|
||||||
|
// TODO: Determine why this post is not returning
|
||||||
|
if (req.response.status != .ok) {
|
||||||
|
// Documentation says something about "exit immediately". The
|
||||||
|
// Lambda infrastrucutre restarts, so it's unclear if that's necessary.
|
||||||
|
// It seems as though a continue should be fine, and slightly faster
|
||||||
|
log.err("Get fail: {} {s}", .{
|
||||||
|
@intFromEnum(req.response.status),
|
||||||
|
req.response.status.phrase() orelse "",
|
||||||
|
});
|
||||||
|
std.os.exit(1);
|
||||||
|
}
|
||||||
|
log.err("Error reporting post complete", .{});
|
||||||
|
}
|
||||||
|
|
||||||
|
fn postResponse(self: Self, lambda_runtime_uri: []const u8, event_response: []const u8) !void {
|
||||||
|
const response_url = try std.fmt.allocPrint(
|
||||||
|
self.allocator,
|
||||||
|
"{s}{s}{s}/{s}/response",
|
||||||
|
.{ prefix, lambda_runtime_uri, postfix, self.request_id },
|
||||||
|
);
|
||||||
|
defer self.allocator.free(response_url);
|
||||||
|
const response_uri = try std.Uri.parse(response_url);
|
||||||
|
var cl = std.http.Client{ .allocator = self.allocator };
|
||||||
|
defer cl.deinit();
|
||||||
|
var req = try cl.request(.POST, response_uri, empty_headers, .{});
|
||||||
|
// var req = try client.?.request(.POST, response_uri, empty_headers, .{});
|
||||||
|
defer req.deinit();
|
||||||
|
// Lambda does different things, depending on the runtime. Go 1.x takes
|
||||||
|
// any return value but escapes double quotes. Custom runtimes can
|
||||||
|
// do whatever they want. node I believe wraps as a json object. We're
|
||||||
|
// going to leave the return value up to the handler, and they can
|
||||||
|
// use a seperate API for normalization so we're explicit.
|
||||||
|
const response_content = try std.fmt.allocPrint(
|
||||||
|
self.allocator,
|
||||||
|
"{s}",
|
||||||
|
.{event_response},
|
||||||
|
);
|
||||||
|
defer self.allocator.free(response_content);
|
||||||
|
|
||||||
|
req.transfer_encoding = .{ .content_length = response_content.len };
|
||||||
|
try req.start();
|
||||||
|
try req.writeAll(response_content);
|
||||||
|
try req.finish();
|
||||||
|
try req.wait();
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
fn getEvent(allocator: std.mem.Allocator, event_data_uri: std.Uri) !?Event {
|
||||||
|
// TODO: There is something up with using a shared client in this way
|
||||||
|
// so we're taking a perf hit in favor of stability. In a practical
|
||||||
|
// sense, without making HTTPS connections (lambda environment is
|
||||||
|
// non-ssl), this shouldn't be a big issue
|
||||||
|
var cl = std.http.Client{ .allocator = allocator };
|
||||||
|
defer cl.deinit();
|
||||||
|
var req = try cl.request(.GET, event_data_uri, empty_headers, .{});
|
||||||
|
// var req = try client.?.request(.GET, event_data_uri, empty_headers, .{});
|
||||||
|
// defer req.deinit();
|
||||||
|
|
||||||
|
try req.start();
|
||||||
|
try req.finish();
|
||||||
|
// Lambda freezes the process at this line of code. During warm start,
|
||||||
|
// the process will unfreeze and data will be sent in response to client.get
|
||||||
|
try req.wait();
|
||||||
|
if (req.response.status != .ok) {
|
||||||
|
// Documentation says something about "exit immediately". The
|
||||||
|
// Lambda infrastrucutre restarts, so it's unclear if that's necessary.
|
||||||
|
// It seems as though a continue should be fine, and slightly faster
|
||||||
|
// std.os.exit(1);
|
||||||
|
log.err("Lambda server event response returned bad error code: {} {s}", .{
|
||||||
|
@intFromEnum(req.response.status),
|
||||||
|
req.response.status.phrase() orelse "",
|
||||||
|
});
|
||||||
|
return error.EventResponseNotOkResponse;
|
||||||
|
}
|
||||||
|
|
||||||
|
var request_id: ?[]const u8 = null;
|
||||||
|
var content_length: ?usize = null;
|
||||||
|
for (req.response.headers.list.items) |h| {
|
||||||
|
if (std.ascii.eqlIgnoreCase(h.name, "Lambda-Runtime-Aws-Request-Id"))
|
||||||
|
request_id = h.value;
|
||||||
|
if (std.ascii.eqlIgnoreCase(h.name, "Content-Length")) {
|
||||||
|
content_length = std.fmt.parseUnsigned(usize, h.value, 10) catch null;
|
||||||
|
if (content_length == null)
|
||||||
|
log.warn("Error parsing content length value: '{s}'", .{h.value});
|
||||||
|
}
|
||||||
|
// TODO: XRay uses an environment variable to do its magic. It's our
|
||||||
|
// responsibility to set this, but no zig-native setenv(3)/putenv(3)
|
||||||
|
// exists. I would kind of rather not link in libc for this,
|
||||||
|
// so we'll hold for now and think on this
|
||||||
|
// if (std.mem.indexOf(u8, h.name.value, "Lambda-Runtime-Trace-Id")) |_|
|
||||||
|
// std.process.
|
||||||
|
// std.os.setenv("AWS_LAMBDA_RUNTIME_API");
|
||||||
|
}
|
||||||
|
if (request_id == null) {
|
||||||
|
// We can't report back an issue because the runtime error reporting endpoint
|
||||||
|
// uses request id in its path. So the best we can do is log the error and move
|
||||||
|
// on here.
|
||||||
|
log.err("Could not find request id: skipping request", .{});
|
||||||
|
return null;
|
||||||
|
}
|
||||||
|
if (content_length == null) {
|
||||||
|
// We can't report back an issue because the runtime error reporting endpoint
|
||||||
|
// uses request id in its path. So the best we can do is log the error and move
|
||||||
|
// on here.
|
||||||
|
log.err("No content length provided for event data", .{});
|
||||||
|
return null;
|
||||||
|
}
|
||||||
|
const req_id = request_id.?;
|
||||||
|
log.debug("got lambda request with id {s}", .{req_id});
|
||||||
|
|
||||||
|
var response_data: []u8 =
|
||||||
|
if (req.response.transfer_encoding) |_| // the only value here is "chunked"
|
||||||
|
try req.reader().readAllAlloc(allocator, std.math.maxInt(usize))
|
||||||
|
else blk: {
|
||||||
|
// content length
|
||||||
|
var tmp_data = try allocator.alloc(u8, content_length.?);
|
||||||
|
errdefer allocator.free(tmp_data);
|
||||||
|
_ = try req.readAll(tmp_data);
|
||||||
|
break :blk tmp_data;
|
||||||
|
};
|
||||||
|
|
||||||
|
return Event.init(
|
||||||
|
allocator,
|
||||||
|
response_data,
|
||||||
|
try allocator.dupe(u8, req_id),
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
////////////////////////////////////////////////////////////////////////
|
||||||
|
// All code below this line is for testing
|
||||||
|
////////////////////////////////////////////////////////////////////////
|
||||||
|
var server_port: ?u16 = null;
|
||||||
|
var server_remaining_requests: usize = 0;
|
||||||
|
var lambda_remaining_requests: ?usize = null;
|
||||||
|
var server_response: []const u8 = "unset";
|
||||||
|
var server_request_aka_lambda_response: []u8 = "";
|
||||||
|
var test_lambda_runtime_uri: ?[]u8 = null;
|
||||||
|
|
||||||
|
var server_ready = false;
|
||||||
|
/// This starts a test server. We're not testing the server itself,
|
||||||
|
/// so the main tests will start this thing up and create an arena around the
|
||||||
|
/// whole thing so we can just deallocate everything at once at the end,
|
||||||
|
/// leaks be damned
|
||||||
|
fn startServer(allocator: std.mem.Allocator) !std.Thread {
|
||||||
|
return try std.Thread.spawn(
|
||||||
|
.{},
|
||||||
|
threadMain,
|
||||||
|
.{allocator},
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
fn threadMain(allocator: std.mem.Allocator) !void {
|
||||||
|
var server = std.http.Server.init(allocator, .{ .reuse_address = true });
|
||||||
|
// defer server.deinit();
|
||||||
|
|
||||||
|
const address = try std.net.Address.parseIp("127.0.0.1", 0);
|
||||||
|
try server.listen(address);
|
||||||
|
server_port = server.socket.listen_address.in.getPort();
|
||||||
|
|
||||||
|
test_lambda_runtime_uri = try std.fmt.allocPrint(allocator, "127.0.0.1:{d}", .{server_port.?});
|
||||||
|
log.debug("server listening at {s}", .{test_lambda_runtime_uri.?});
|
||||||
|
defer server.deinit();
|
||||||
|
defer test_lambda_runtime_uri = null;
|
||||||
|
defer server_port = null;
|
||||||
|
log.info("starting server thread, tid {d}", .{std.Thread.getCurrentId()});
|
||||||
|
var arena = std.heap.ArenaAllocator.init(allocator);
|
||||||
|
defer arena.deinit();
|
||||||
|
var aa = arena.allocator();
|
||||||
|
// We're in control of all requests/responses, so this flag will tell us
|
||||||
|
// when it's time to shut down
|
||||||
|
while (server_remaining_requests > 0) {
|
||||||
|
server_remaining_requests -= 1;
|
||||||
|
// defer {
|
||||||
|
// if (!arena.reset(.{ .retain_capacity = {} })) {
|
||||||
|
// // reallocation failed, arena is degraded
|
||||||
|
// log.warn("Arena reset failed and is degraded. Resetting arena", .{});
|
||||||
|
// arena.deinit();
|
||||||
|
// arena = std.heap.ArenaAllocator.init(allocator);
|
||||||
|
// aa = arena.allocator();
|
||||||
|
// }
|
||||||
|
// }
|
||||||
|
|
||||||
|
processRequest(aa, &server) catch |e| {
|
||||||
|
log.err("Unexpected error processing request: {any}", .{e});
|
||||||
|
if (@errorReturnTrace()) |trace| {
|
||||||
|
std.debug.dumpStackTrace(trace.*);
|
||||||
|
}
|
||||||
|
};
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
fn processRequest(allocator: std.mem.Allocator, server: *std.http.Server) !void {
|
||||||
|
server_ready = true;
|
||||||
|
errdefer server_ready = false;
|
||||||
|
log.debug(
|
||||||
|
"tid {d} (server): server waiting to accept. requests remaining: {d}",
|
||||||
|
.{ std.Thread.getCurrentId(), server_remaining_requests + 1 },
|
||||||
|
);
|
||||||
|
var res = try server.accept(.{ .allocator = allocator });
|
||||||
|
server_ready = false;
|
||||||
|
defer res.deinit();
|
||||||
|
defer _ = res.reset();
|
||||||
|
try res.wait(); // wait for client to send a complete request head
|
||||||
|
|
||||||
|
const errstr = "Internal Server Error\n";
|
||||||
|
var errbuf: [errstr.len]u8 = undefined;
|
||||||
|
@memcpy(&errbuf, errstr);
|
||||||
|
var response_bytes: []const u8 = errbuf[0..];
|
||||||
|
|
||||||
|
if (res.request.content_length) |l|
|
||||||
|
server_request_aka_lambda_response = try res.reader().readAllAlloc(allocator, @as(usize, @intCast(l)));
|
||||||
|
|
||||||
|
log.debug(
|
||||||
|
"tid {d} (server): {d} bytes read from request",
|
||||||
|
.{ std.Thread.getCurrentId(), server_request_aka_lambda_response.len },
|
||||||
|
);
|
||||||
|
|
||||||
|
// try response.headers.append("content-type", "text/plain");
|
||||||
|
response_bytes = serve(allocator, &res) catch |e| brk: {
|
||||||
|
res.status = .internal_server_error;
|
||||||
|
// TODO: more about this particular request
|
||||||
|
log.err("Unexpected error from executor processing request: {any}", .{e});
|
||||||
|
if (@errorReturnTrace()) |trace| {
|
||||||
|
std.debug.dumpStackTrace(trace.*);
|
||||||
|
}
|
||||||
|
break :brk "Unexpected error generating request to lambda";
|
||||||
|
};
|
||||||
|
res.transfer_encoding = .{ .content_length = response_bytes.len };
|
||||||
|
try res.do();
|
||||||
|
_ = try res.writer().writeAll(response_bytes);
|
||||||
|
try res.finish();
|
||||||
|
log.debug(
|
||||||
|
"tid {d} (server): sent response",
|
||||||
|
.{std.Thread.getCurrentId()},
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
fn serve(allocator: std.mem.Allocator, res: *std.http.Server.Response) ![]const u8 {
|
||||||
|
_ = allocator;
|
||||||
|
// try res.headers.append("content-length", try std.fmt.allocPrint(allocator, "{d}", .{server_response.len}));
|
||||||
|
try res.headers.append("Lambda-Runtime-Aws-Request-Id", "69");
|
||||||
|
return server_response;
|
||||||
|
}
|
||||||
|
|
||||||
|
fn handler(allocator: std.mem.Allocator, event_data: []const u8, context: Context) ![]const u8 {
|
||||||
|
_ = allocator;
|
||||||
|
_ = context;
|
||||||
|
return event_data;
|
||||||
|
}
|
||||||
|
fn thread_run(allocator: ?std.mem.Allocator, event_handler: HandlerFn) !void {
|
||||||
|
_ = try run(allocator, event_handler);
|
||||||
|
}
|
||||||
|
fn test_run(allocator: std.mem.Allocator, event_handler: HandlerFn) !std.Thread {
|
||||||
|
return try std.Thread.spawn(
|
||||||
|
.{},
|
||||||
|
thread_run,
|
||||||
|
.{ allocator, event_handler },
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
fn lambda_request(allocator: std.mem.Allocator, request: []const u8, request_count: usize) ![]u8 {
|
||||||
|
var arena = std.heap.ArenaAllocator.init(allocator);
|
||||||
|
defer arena.deinit();
|
||||||
|
var aa = arena.allocator();
|
||||||
|
// Setup our server to run, and set the response for the server to the
|
||||||
|
// request. There is a cognitive disconnect here between mental model and
|
||||||
|
// physical model.
|
||||||
|
//
|
||||||
|
// Mental model:
|
||||||
|
//
|
||||||
|
// Lambda request -> λ -> Lambda response
|
||||||
|
//
|
||||||
|
// Physcial Model:
|
||||||
|
//
|
||||||
|
// 1. λ requests instructions from server
|
||||||
|
// 2. server provides "Lambda request"
|
||||||
|
// 3. λ posts response back to server
|
||||||
|
//
|
||||||
|
// So here we are setting up our server, then our lambda request loop,
|
||||||
|
// but it all needs to be in seperate threads so we can control startup
|
||||||
|
// and shut down. Both server and Lambda are set up to watch global variable
|
||||||
|
// booleans to know when to shut down. This function is designed for a
|
||||||
|
// single request/response pair only
|
||||||
|
|
||||||
|
lambda_remaining_requests = request_count;
|
||||||
|
server_remaining_requests = lambda_remaining_requests.? * 2; // Lambda functions
|
||||||
|
// fetch from the server,
|
||||||
|
// then post back. Always
|
||||||
|
// 2, no more, no less
|
||||||
|
server_response = request; // set our instructions to lambda, which in our
|
||||||
|
// physical model above, is the server response
|
||||||
|
defer server_response = "unset"; // set it back so we don't get confused later
|
||||||
|
// when subsequent tests fail
|
||||||
|
const server_thread = try startServer(aa); // start the server, get it ready
|
||||||
|
while (!server_ready)
|
||||||
|
std.time.sleep(100);
|
||||||
|
|
||||||
|
log.debug("tid {d} (main): server reports ready", .{std.Thread.getCurrentId()});
|
||||||
|
// we aren't testing the server,
|
||||||
|
// so we'll use the arena allocator
|
||||||
|
defer server_thread.join(); // we'll be shutting everything down before we exit
|
||||||
|
|
||||||
|
// Now we need to start the lambda framework, following a siimilar pattern
|
||||||
|
const lambda_thread = try test_run(allocator, handler); // We want our function under test to report leaks
|
||||||
|
lambda_thread.join();
|
||||||
|
return try allocator.dupe(u8, server_request_aka_lambda_response);
|
||||||
|
}
|
||||||
|
|
||||||
|
test "basic request" {
|
||||||
|
// std.testing.log_level = .debug;
|
||||||
|
const allocator = std.testing.allocator;
|
||||||
|
const request =
|
||||||
|
\\{"foo": "bar", "baz": "qux"}
|
||||||
|
;
|
||||||
|
|
||||||
|
const expected_response =
|
||||||
|
\\{"foo": "bar", "baz": "qux"}
|
||||||
|
;
|
||||||
|
const lambda_response = try lambda_request(allocator, request, 1);
|
||||||
|
defer deinit();
|
||||||
|
defer allocator.free(lambda_response);
|
||||||
|
try std.testing.expectEqualStrings(expected_response, lambda_response);
|
||||||
|
}
|
||||||
|
|
||||||
|
test "several requests do not fail" {
|
||||||
|
// std.testing.log_level = .debug;
|
||||||
|
const allocator = std.testing.allocator;
|
||||||
|
const request =
|
||||||
|
\\{"foo": "bar", "baz": "qux"}
|
||||||
|
;
|
||||||
|
|
||||||
|
const expected_response =
|
||||||
|
\\{"foo": "bar", "baz": "qux"}
|
||||||
|
;
|
||||||
|
const lambda_response = try lambda_request(allocator, request, 5);
|
||||||
|
defer deinit();
|
||||||
|
defer allocator.free(lambda_response);
|
||||||
|
try std.testing.expectEqualStrings(expected_response, lambda_response);
|
||||||
|
}
|
src/lambda_build.zig (new file, 198 lines)

```zig
const std = @import("std");
const builtin = @import("builtin");

fn fileExists(file_name: []const u8) bool {
    const file = std.fs.openFileAbsolute(file_name, .{}) catch return false;
    defer file.close();
    return true;
}
fn addArgs(allocator: std.mem.Allocator, original: []const u8, args: [][]const u8) ![]const u8 {
    var rc = original;
    for (args) |arg| {
        rc = try std.mem.concat(allocator, u8, &.{ rc, " ", arg });
    }
    return rc;
}

/// configureBuild will add four build steps to the build (if compiling
/// the code on a Linux host):
///
/// * awslambda_package: Packages the function for deployment to Lambda
///   (dependencies are the zip executable and a shell)
/// * awslambda_iam: Gets an IAM role for the Lambda function, and creates it if it does not exist
///   (dependencies are the AWS CLI, grep and a shell)
/// * awslambda_deploy: Deploys the lambda function to a live AWS environment
///   (dependencies are the AWS CLI, and a shell)
/// * awslambda_run: Runs the lambda function in a live AWS environment
///   (dependencies are the AWS CLI, and a shell)
///
/// awslambda_run depends on deploy
/// awslambda_deploy depends on iam and package
///
/// iam and package do not have any dependencies
pub fn configureBuild(b: *std.build.Builder, exe: *std.Build.Step.Compile, function_name: []const u8) !void {
    // The rest of this function is currently reliant on a Linux
    // system being used to build the lambda function
    //
    // It is likely that much of this will work on other Unix-like OSs, but
    // we will work this out later
    //
    // TODO: support other host OSs
    if (builtin.os.tag != .linux) return;

    // Package step
    const package_step = b.step("awslambda_package", "Package the function");
    const function_zip = b.getInstallPath(.bin, "function.zip");

    // TODO: Avoid use of system-installed zip, maybe using something like
    // https://github.com/hdorio/hwzip.zig/blob/master/src/hwzip.zig
    const zip = if (std.mem.eql(u8, "bootstrap", exe.out_filename))
        try std.fmt.allocPrint(b.allocator,
            \\zip -qj9 {s} {s}
        , .{
            function_zip,
            b.getInstallPath(.bin, "bootstrap"),
        })
    else
        // We need to copy stuff around
        try std.fmt.allocPrint(b.allocator,
            \\cp {s} {s} && \
            \\zip -qj9 {s} {s} && \
            \\rm {s}
        , .{
            b.getInstallPath(.bin, exe.out_filename),
            b.getInstallPath(.bin, "bootstrap"),
            function_zip,
            b.getInstallPath(.bin, "bootstrap"),
            b.getInstallPath(.bin, "bootstrap"),
        });
    // std.debug.print("\nzip cmdline: {s}", .{zip});
    defer b.allocator.free(zip);
    var zip_cmd = b.addSystemCommand(&.{ "/bin/sh", "-c", zip });
    zip_cmd.step.dependOn(b.getInstallStep());
    package_step.dependOn(&zip_cmd.step);

    // Deployment
    const deploy_step = b.step("awslambda_deploy", "Deploy the function");

    const iam_role_name = b.option(
        []const u8,
        "function-role",
        "IAM role name for function (will create if it does not exist) [lambda_basic_execution]",
    ) orelse "lambda_basic_execution";
    var iam_role_arn = b.option(
        []const u8,
        "function-arn",
        "Preexisting IAM role arn for function",
    );

    const iam_step = b.step("awslambda_iam", "Create/Get IAM role for function");
    deploy_step.dependOn(iam_step); // iam_step will either be a noop or all the stuff below
    const iam_role_param: []u8 = blk: {
        if (iam_role_arn != null)
            break :blk try std.fmt.allocPrint(b.allocator, "--role {s}", .{iam_role_arn.?});

        if (iam_role_name.len == 0)
            @panic("Either function-role or function-arn must be specified. function-arn will allow deployment without creating a role");

        // Now we have an iam role name to use, but no iam role arn. Let's go hunting
        // Once this is done once, we'll have a file with the arn in "cache"
        // The iam arn will reside in an 'iam_role' file in the bin directory

        // Build system command to create the role if necessary and get the role arn
        const iam_role_file = b.getInstallPath(.bin, "iam_role");

        if (!fileExists(iam_role_file)) {
            // std.debug.print("file does not exist", .{});
            // Our cache file does not exist on disk, so we'll create/get the role
            // arn using the AWS CLI and dump to disk here
            const ifstatement_fmt =
                \\ if aws iam get-role --role-name {s} 2>&1 |grep -q NoSuchEntity; then aws iam create-role --output text --query Role.Arn --role-name {s} --assume-role-policy-document '{{
                \\ "Version": "2012-10-17",
                \\ "Statement": [
                \\ {{
                \\ "Sid": "",
                \\ "Effect": "Allow",
                \\ "Principal": {{
                \\ "Service": "lambda.amazonaws.com"
                \\ }},
                \\ "Action": "sts:AssumeRole"
                \\ }}
                \\ ]}}' > /dev/null; fi && \
                \\ aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AWSLambdaExecute --role-name lambda_basic_execution && \
                \\ aws iam get-role --role-name lambda_basic_execution --query Role.Arn --output text > {s}
            ;
            const ifstatement = try std.fmt.allocPrint(
                b.allocator,
                ifstatement_fmt,
                .{ iam_role_name, iam_role_name, iam_role_file },
            );
            iam_step.dependOn(&b.addSystemCommand(&.{ "/bin/sh", "-c", ifstatement }).step);
        }

        break :blk try std.fmt.allocPrint(b.allocator, "--role \"$(cat {s})\"", .{iam_role_file});
    };
    const function_name_file = b.getInstallPath(.bin, function_name);
    const ifstatement = "if [ ! -f {s} ] || [ {s} -nt {s} ]; then if aws lambda get-function --function-name {s} 2>&1 |grep -q ResourceNotFoundException; then echo not found > /dev/null; {s}; else echo found > /dev/null; {s}; fi; fi";
    // The architectures option was introduced in 2.2.43 released 2021-10-01
    // We want to use arm64 here because it is both faster and cheaper for most workloads
    // Amazon Linux 2 is the only arm64 supported option
    // TODO: This should determine compilation target and use x86_64 if needed
    const not_found = "aws lambda create-function --architectures arm64 --runtime provided.al2 --function-name {s} --zip-file fileb://{s} --handler not_applicable {s} && touch {s}";
    const not_found_fmt = try std.fmt.allocPrint(b.allocator, not_found, .{ function_name, function_zip, iam_role_param, function_name_file });
    defer b.allocator.free(not_found_fmt);
    const found = "aws lambda update-function-code --function-name {s} --zip-file fileb://{s} && touch {s}";
    const found_fmt = try std.fmt.allocPrint(b.allocator, found, .{ function_name, function_zip, function_name_file });
    defer b.allocator.free(found_fmt);
    var found_final: []const u8 = undefined;
    var not_found_final: []const u8 = undefined;
    if (b.args) |args| {
        found_final = try addArgs(b.allocator, found_fmt, args);
        not_found_final = try addArgs(b.allocator, not_found_fmt, args);
    } else {
        found_final = found_fmt;
        not_found_final = not_found_fmt;
    }
    const cmd = try std.fmt.allocPrint(b.allocator, ifstatement, .{
        function_name_file,
        b.getInstallPath(.bin, exe.out_filename),
        function_name_file,
        function_name,
        not_found_fmt,
        found_fmt,
    });

    defer b.allocator.free(cmd);

    // std.debug.print("{s}\n", .{cmd});
    deploy_step.dependOn(package_step);
    deploy_step.dependOn(&b.addSystemCommand(&.{ "/bin/sh", "-c", cmd }).step);

    const payload = b.option([]const u8, "payload", "Lambda payload [{\"foo\":\"bar\", \"baz\": \"qux\"}]") orelse
        \\ {"foo": "bar", "baz": "qux"}
    ;

    const run_script =
        \\ f=$(mktemp) && \
        \\ logs=$(aws lambda invoke \
        \\ --cli-binary-format raw-in-base64-out \
        \\ --invocation-type RequestResponse \
        \\ --function-name {s} \
        \\ --payload '{s}' \
        \\ --log-type Tail \
        \\ --query LogResult \
        \\ --output text "$f" |base64 -d) && \
        \\ cat "$f" && rm "$f" && \
        \\ echo && echo && echo "$logs"
    ;
    const run_script_fmt = try std.fmt.allocPrint(b.allocator, run_script, .{ function_name, payload });
    defer b.allocator.free(run_script_fmt);
    const run_cmd = b.addSystemCommand(&.{ "/bin/sh", "-c", run_script_fmt });
    run_cmd.step.dependOn(deploy_step);
    if (b.args) |args| {
        run_cmd.addArgs(args);
    }

    const run_step = b.step("awslambda_run", "Run the app in AWS lambda");
    run_step.dependOn(&run_cmd.step);
}
```
777
src/script_harness.js
Normal file
@@ -0,0 +1,777 @@
// node_modules/@cloudflare/workers-wasi/dist/index.mjs
import wasm from "./c5f1acc97ad09df861eff9ef567c2186d4e38de3-memfs.wasm";
var __accessCheck = (obj, member, msg) => {
  if (!member.has(obj))
    throw TypeError("Cannot " + msg);
};
var __privateGet = (obj, member, getter) => {
  __accessCheck(obj, member, "read from private field");
  return getter ? getter.call(obj) : member.get(obj);
};
var __privateAdd = (obj, member, value) => {
  if (member.has(obj))
    throw TypeError("Cannot add the same private member more than once");
  member instanceof WeakSet ? member.add(obj) : member.set(obj, value);
};
var __privateSet = (obj, member, value, setter) => {
  __accessCheck(obj, member, "write to private field");
  setter ? setter.call(obj, value) : member.set(obj, value);
  return value;
};
var __privateMethod = (obj, member, method) => {
  __accessCheck(obj, member, "access private method");
  return method;
};
var Result;
(function(Result2) {
  Result2[Result2["SUCCESS"] = 0] = "SUCCESS";
  Result2[Result2["EBADF"] = 8] = "EBADF";
  Result2[Result2["EINVAL"] = 28] = "EINVAL";
  Result2[Result2["ENOENT"] = 44] = "ENOENT";
  Result2[Result2["ENOSYS"] = 52] = "ENOSYS";
  Result2[Result2["ENOTSUP"] = 58] = "ENOTSUP";
})(Result || (Result = {}));
var Clock;
(function(Clock2) {
  Clock2[Clock2["REALTIME"] = 0] = "REALTIME";
  Clock2[Clock2["MONOTONIC"] = 1] = "MONOTONIC";
  Clock2[Clock2["PROCESS_CPUTIME_ID"] = 2] = "PROCESS_CPUTIME_ID";
  Clock2[Clock2["THREAD_CPUTIME_ID"] = 3] = "THREAD_CPUTIME_ID";
})(Clock || (Clock = {}));
var iovViews = (view, iovs_ptr, iovs_len) => {
  let result = Array(iovs_len);
  for (let i = 0; i < iovs_len; i++) {
    const bufferPtr = view.getUint32(iovs_ptr, true);
    iovs_ptr += 4;
    const bufferLen = view.getUint32(iovs_ptr, true);
    iovs_ptr += 4;
    result[i] = new Uint8Array(view.buffer, bufferPtr, bufferLen);
  }
  return result;
};
var _instance;
var _hostMemory;
var _getInternalView;
var getInternalView_fn;
var _copyFrom;
var copyFrom_fn;
var MemFS = class {
  constructor(preopens, fs) {
    __privateAdd(this, _getInternalView);
    __privateAdd(this, _copyFrom);
    __privateAdd(this, _instance, void 0);
    __privateAdd(this, _hostMemory, void 0);
    __privateSet(this, _instance, new WebAssembly.Instance(wasm, {
      internal: {
        now_ms: () => Date.now(),
        trace: (isError, addr, size) => {
          const view = new Uint8Array(__privateMethod(this, _getInternalView, getInternalView_fn).call(this).buffer, addr, size);
          const s = new TextDecoder().decode(view);
          if (isError) {
            throw new Error(s);
          } else {
            console.info(s);
          }
        },
        copy_out: (srcAddr, dstAddr, size) => {
          const dst = new Uint8Array(__privateGet(this, _hostMemory).buffer, dstAddr, size);
          const src = new Uint8Array(__privateMethod(this, _getInternalView, getInternalView_fn).call(this).buffer, srcAddr, size);
          dst.set(src);
        },
        copy_in: (srcAddr, dstAddr, size) => {
          const src = new Uint8Array(__privateGet(this, _hostMemory).buffer, srcAddr, size);
          const dst = new Uint8Array(__privateMethod(this, _getInternalView, getInternalView_fn).call(this).buffer, dstAddr, size);
          dst.set(src);
        }
      },
      wasi_snapshot_preview1: {
        proc_exit: (_) => {
        },
        fd_seek: () => Result.ENOSYS,
        fd_write: () => Result.ENOSYS,
        fd_close: () => Result.ENOSYS
      }
    }));
    this.exports = __privateGet(this, _instance).exports;
    const start = __privateGet(this, _instance).exports._start;
    start();
    const data = new TextEncoder().encode(JSON.stringify({ preopens, fs }));
    const initialize_internal = __privateGet(this, _instance).exports.initialize_internal;
    initialize_internal(__privateMethod(this, _copyFrom, copyFrom_fn).call(this, data), data.byteLength);
  }
  initialize(hostMemory) {
    __privateSet(this, _hostMemory, hostMemory);
  }
};
_instance = /* @__PURE__ */ new WeakMap();
_hostMemory = /* @__PURE__ */ new WeakMap();
_getInternalView = /* @__PURE__ */ new WeakSet();
getInternalView_fn = function() {
  const memory = __privateGet(this, _instance).exports.memory;
  return new DataView(memory.buffer);
};
_copyFrom = /* @__PURE__ */ new WeakSet();
copyFrom_fn = function(src) {
  const dstAddr = __privateGet(this, _instance).exports.allocate(src.byteLength);
  new Uint8Array(__privateMethod(this, _getInternalView, getInternalView_fn).call(this).buffer, dstAddr, src.byteLength).set(src);
  return dstAddr;
};
var DATA_ADDR = 16;
var DATA_START = DATA_ADDR + 8;
var DATA_END = 1024;
var WRAPPED_EXPORTS = /* @__PURE__ */ new WeakMap();
var State = {
  None: 0,
  Unwinding: 1,
  Rewinding: 2
};
function isPromise(obj) {
  return !!obj && (typeof obj === "object" || typeof obj === "function") && typeof obj.then === "function";
}
function proxyGet(obj, transform) {
  return new Proxy(obj, {
    get: (obj2, name) => transform(obj2[name])
  });
}
var Asyncify = class {
  constructor() {
    this.value = void 0;
    this.exports = null;
  }
  getState() {
    return this.exports.asyncify_get_state();
  }
  assertNoneState() {
    let state = this.getState();
    if (state !== State.None) {
      throw new Error(`Invalid async state ${state}, expected 0.`);
    }
  }
  wrapImportFn(fn) {
    return (...args) => {
      if (this.getState() === State.Rewinding) {
        this.exports.asyncify_stop_rewind();
        return this.value;
      }
      this.assertNoneState();
      let value = fn(...args);
      if (!isPromise(value)) {
        return value;
      }
      this.exports.asyncify_start_unwind(DATA_ADDR);
      this.value = value;
    };
  }
  wrapModuleImports(module) {
    return proxyGet(module, (value) => {
      if (typeof value === "function") {
        return this.wrapImportFn(value);
      }
      return value;
    });
  }
  wrapImports(imports) {
    if (imports === void 0)
      return;
    return proxyGet(imports, (moduleImports = /* @__PURE__ */ Object.create(null)) => this.wrapModuleImports(moduleImports));
  }
  wrapExportFn(fn) {
    let newExport = WRAPPED_EXPORTS.get(fn);
    if (newExport !== void 0) {
      return newExport;
    }
    newExport = async (...args) => {
      this.assertNoneState();
      let result = fn(...args);
      while (this.getState() === State.Unwinding) {
        this.exports.asyncify_stop_unwind();
        this.value = await this.value;
        this.assertNoneState();
        this.exports.asyncify_start_rewind(DATA_ADDR);
        result = fn();
      }
      this.assertNoneState();
      return result;
    };
    WRAPPED_EXPORTS.set(fn, newExport);
    return newExport;
  }
  wrapExports(exports) {
    let newExports = /* @__PURE__ */ Object.create(null);
    for (let exportName in exports) {
      let value = exports[exportName];
      if (typeof value === "function" && !exportName.startsWith("asyncify_")) {
        value = this.wrapExportFn(value);
      }
      Object.defineProperty(newExports, exportName, {
        enumerable: true,
        value
      });
    }
    WRAPPED_EXPORTS.set(exports, newExports);
    return newExports;
  }
  init(instance, imports) {
    const { exports } = instance;
    const memory = exports.memory || imports.env && imports.env.memory;
    new Int32Array(memory.buffer, DATA_ADDR).set([DATA_START, DATA_END]);
    this.exports = this.wrapExports(exports);
    Object.setPrototypeOf(instance, Instance.prototype);
  }
};
var Instance = class extends WebAssembly.Instance {
  constructor(module, imports) {
    let state = new Asyncify();
    super(module, state.wrapImports(imports));
    state.init(this, imports);
  }
  get exports() {
    return WRAPPED_EXPORTS.get(super.exports);
  }
};
Object.defineProperty(Instance.prototype, "exports", { enumerable: true });
var DevNull = class {
  writev(iovs) {
    return iovs.map((iov) => iov.byteLength).reduce((prev, curr) => prev + curr);
  }
  readv(iovs) {
    return 0;
  }
  close() {
  }
  async preRun() {
  }
  async postRun() {
  }
};
var ReadableStreamBase = class {
  writev(iovs) {
    throw new Error("Attempting to call write on a readable stream");
  }
  close() {
  }
  async preRun() {
  }
  async postRun() {
  }
};
var _pending;
var _reader;
var AsyncReadableStreamAdapter = class extends ReadableStreamBase {
  constructor(reader) {
    super();
    __privateAdd(this, _pending, new Uint8Array());
    __privateAdd(this, _reader, void 0);
    __privateSet(this, _reader, reader);
  }
  async readv(iovs) {
    let read = 0;
    for (let iov of iovs) {
      while (iov.byteLength > 0) {
        if (__privateGet(this, _pending).byteLength === 0) {
          const result = await __privateGet(this, _reader).read();
          if (result.done) {
            return read;
          }
          __privateSet(this, _pending, result.value);
        }
        const bytes = Math.min(iov.byteLength, __privateGet(this, _pending).byteLength);
        iov.set(__privateGet(this, _pending).subarray(0, bytes));
        __privateSet(this, _pending, __privateGet(this, _pending).subarray(bytes));
        read += bytes;
        iov = iov.subarray(bytes);
      }
    }
    return read;
  }
};
_pending = /* @__PURE__ */ new WeakMap();
_reader = /* @__PURE__ */ new WeakMap();
var WritableStreamBase = class {
  readv(iovs) {
    throw new Error("Attempting to call read on a writable stream");
  }
  close() {
  }
  async preRun() {
  }
  async postRun() {
  }
};
var _writer;
var AsyncWritableStreamAdapter = class extends WritableStreamBase {
  constructor(writer) {
    super();
    __privateAdd(this, _writer, void 0);
    __privateSet(this, _writer, writer);
  }
  async writev(iovs) {
    let written = 0;
    for (const iov of iovs) {
      if (iov.byteLength === 0) {
        continue;
      }
      await __privateGet(this, _writer).write(iov);
      written += iov.byteLength;
    }
    return written;
  }
  async close() {
    await __privateGet(this, _writer).close();
  }
};
_writer = /* @__PURE__ */ new WeakMap();
var _writer2;
var _buffer;
var _bytesWritten;
var SyncWritableStreamAdapter = class extends WritableStreamBase {
  constructor(writer) {
    super();
    __privateAdd(this, _writer2, void 0);
    __privateAdd(this, _buffer, new Uint8Array(4096));
    __privateAdd(this, _bytesWritten, 0);
    __privateSet(this, _writer2, writer);
  }
  writev(iovs) {
    let written = 0;
    for (const iov of iovs) {
      if (iov.byteLength === 0) {
        continue;
      }
      const requiredCapacity = __privateGet(this, _bytesWritten) + iov.byteLength;
      if (requiredCapacity > __privateGet(this, _buffer).byteLength) {
        let desiredCapacity = __privateGet(this, _buffer).byteLength;
        while (desiredCapacity < requiredCapacity) {
          desiredCapacity *= 1.5;
        }
        const oldBuffer = __privateGet(this, _buffer);
        __privateSet(this, _buffer, new Uint8Array(desiredCapacity));
        __privateGet(this, _buffer).set(oldBuffer);
      }
      __privateGet(this, _buffer).set(iov, __privateGet(this, _bytesWritten));
      written += iov.byteLength;
      __privateSet(this, _bytesWritten, __privateGet(this, _bytesWritten) + iov.byteLength);
    }
    return written;
  }
  async postRun() {
    const slice = __privateGet(this, _buffer).subarray(0, __privateGet(this, _bytesWritten));
    await __privateGet(this, _writer2).write(slice);
    await __privateGet(this, _writer2).close();
  }
};
_writer2 = /* @__PURE__ */ new WeakMap();
_buffer = /* @__PURE__ */ new WeakMap();
_bytesWritten = /* @__PURE__ */ new WeakMap();
var _buffer2;
var _reader2;
var SyncReadableStreamAdapter = class extends ReadableStreamBase {
  constructor(reader) {
    super();
    __privateAdd(this, _buffer2, void 0);
    __privateAdd(this, _reader2, void 0);
    __privateSet(this, _reader2, reader);
  }
  readv(iovs) {
    let read = 0;
    for (const iov of iovs) {
      const bytes = Math.min(iov.byteLength, __privateGet(this, _buffer2).byteLength);
      if (bytes <= 0) {
        break;
      }
      iov.set(__privateGet(this, _buffer2).subarray(0, bytes));
      __privateSet(this, _buffer2, __privateGet(this, _buffer2).subarray(bytes));
      read += bytes;
    }
    return read;
  }
  async preRun() {
    const pending = [];
    let length = 0;
    for (; ; ) {
      const result2 = await __privateGet(this, _reader2).read();
      if (result2.done) {
        break;
      }
      const data = result2.value;
      pending.push(data);
      length += data.length;
    }
    let result = new Uint8Array(length);
    let offset = 0;
    pending.forEach((item) => {
      result.set(item, offset);
      offset += item.length;
    });
    __privateSet(this, _buffer2, result);
  }
};
_buffer2 = /* @__PURE__ */ new WeakMap();
_reader2 = /* @__PURE__ */ new WeakMap();
var fromReadableStream = (stream, supportsAsync) => {
  if (!stream) {
    return new DevNull();
  }
  if (supportsAsync) {
    return new AsyncReadableStreamAdapter(stream.getReader());
  }
  return new SyncReadableStreamAdapter(stream.getReader());
};
var fromWritableStream = (stream, supportsAsync) => {
  if (!stream) {
    return new DevNull();
  }
  if (supportsAsync) {
    return new AsyncWritableStreamAdapter(stream.getWriter());
  }
  return new SyncWritableStreamAdapter(stream.getWriter());
};
var ProcessExit = class extends Error {
  constructor(code) {
    super(`proc_exit=${code}`);
    this.code = code;
    Object.setPrototypeOf(this, ProcessExit.prototype);
  }
};
var _args;
var _env;
var _memory;
var _preopens;
var _returnOnExit;
var _streams;
var _memfs;
var _state;
var _asyncify;
var _view;
var view_fn;
var _fillValues;
var fillValues_fn;
var _fillSizes;
var fillSizes_fn;
var _args_get;
var args_get_fn;
var _args_sizes_get;
var args_sizes_get_fn;
var _clock_res_get;
var clock_res_get_fn;
var _clock_time_get;
var clock_time_get_fn;
var _environ_get;
var environ_get_fn;
var _environ_sizes_get;
var environ_sizes_get_fn;
var _fd_read;
var fd_read_fn;
var _fd_write;
var fd_write_fn;
var _poll_oneoff;
var poll_oneoff_fn;
var _proc_exit;
var proc_exit_fn;
var _proc_raise;
var proc_raise_fn;
var _random_get;
var random_get_fn;
var _sched_yield;
var sched_yield_fn;
var _sock_recv;
var sock_recv_fn;
var _sock_send;
var sock_send_fn;
var _sock_shutdown;
var sock_shutdown_fn;
var WASI = class {
  constructor(options) {
    __privateAdd(this, _view);
    __privateAdd(this, _fillValues);
    __privateAdd(this, _fillSizes);
    __privateAdd(this, _args_get);
    __privateAdd(this, _args_sizes_get);
    __privateAdd(this, _clock_res_get);
    __privateAdd(this, _clock_time_get);
    __privateAdd(this, _environ_get);
    __privateAdd(this, _environ_sizes_get);
    __privateAdd(this, _fd_read);
    __privateAdd(this, _fd_write);
    __privateAdd(this, _poll_oneoff);
    __privateAdd(this, _proc_exit);
    __privateAdd(this, _proc_raise);
    __privateAdd(this, _random_get);
    __privateAdd(this, _sched_yield);
    __privateAdd(this, _sock_recv);
    __privateAdd(this, _sock_send);
    __privateAdd(this, _sock_shutdown);
    __privateAdd(this, _args, void 0);
    __privateAdd(this, _env, void 0);
    __privateAdd(this, _memory, void 0);
    __privateAdd(this, _preopens, void 0);
    __privateAdd(this, _returnOnExit, void 0);
    __privateAdd(this, _streams, void 0);
    __privateAdd(this, _memfs, void 0);
    __privateAdd(this, _state, new Asyncify());
    __privateAdd(this, _asyncify, void 0);
    __privateSet(this, _args, options?.args ?? []);
    const env = options?.env ?? {};
    __privateSet(this, _env, Object.keys(env).map((key) => {
      return `${key}=${env[key]}`;
    }));
    __privateSet(this, _returnOnExit, options?.returnOnExit ?? false);
    __privateSet(this, _preopens, options?.preopens ?? []);
    __privateSet(this, _asyncify, options?.streamStdio ?? false);
    __privateSet(this, _streams, [
      fromReadableStream(options?.stdin, __privateGet(this, _asyncify)),
      fromWritableStream(options?.stdout, __privateGet(this, _asyncify)),
      fromWritableStream(options?.stderr, __privateGet(this, _asyncify))
    ]);
    __privateSet(this, _memfs, new MemFS(__privateGet(this, _preopens), options?.fs ?? {}));
  }
  async start(instance) {
    __privateSet(this, _memory, instance.exports.memory);
    __privateGet(this, _memfs).initialize(__privateGet(this, _memory));
    try {
      if (__privateGet(this, _asyncify)) {
        if (!instance.exports.asyncify_get_state) {
          throw new Error("streamStdio is requested but the module is missing 'Asyncify' exports, see https://github.com/GoogleChromeLabs/asyncify");
        }
        __privateGet(this, _state).init(instance);
      }
      await Promise.all(__privateGet(this, _streams).map((s) => s.preRun()));
      if (__privateGet(this, _asyncify)) {
        await __privateGet(this, _state).exports._start();
      } else {
        const entrypoint = instance.exports._start;
        entrypoint();
      }
    } catch (e) {
      if (!__privateGet(this, _returnOnExit)) {
        throw e;
      }
      if (e.message === "unreachable") {
        return 134;
      } else if (e instanceof ProcessExit) {
        return e.code;
      } else {
        throw e;
      }
    } finally {
      await Promise.all(__privateGet(this, _streams).map((s) => s.close()));
      await Promise.all(__privateGet(this, _streams).map((s) => s.postRun()));
    }
    return void 0;
  }
  get wasiImport() {
    const wrap = (f, self = this) => {
      const bound = f.bind(self);
      if (__privateGet(this, _asyncify)) {
        return __privateGet(this, _state).wrapImportFn(bound);
      }
      return bound;
    };
    return {
      args_get: wrap(__privateMethod(this, _args_get, args_get_fn)),
      args_sizes_get: wrap(__privateMethod(this, _args_sizes_get, args_sizes_get_fn)),
      clock_res_get: wrap(__privateMethod(this, _clock_res_get, clock_res_get_fn)),
      clock_time_get: wrap(__privateMethod(this, _clock_time_get, clock_time_get_fn)),
      environ_get: wrap(__privateMethod(this, _environ_get, environ_get_fn)),
      environ_sizes_get: wrap(__privateMethod(this, _environ_sizes_get, environ_sizes_get_fn)),
      fd_advise: wrap(__privateGet(this, _memfs).exports.fd_advise),
      fd_allocate: wrap(__privateGet(this, _memfs).exports.fd_allocate),
      fd_close: wrap(__privateGet(this, _memfs).exports.fd_close),
      fd_datasync: wrap(__privateGet(this, _memfs).exports.fd_datasync),
      fd_fdstat_get: wrap(__privateGet(this, _memfs).exports.fd_fdstat_get),
      fd_fdstat_set_flags: wrap(__privateGet(this, _memfs).exports.fd_fdstat_set_flags),
      fd_fdstat_set_rights: wrap(__privateGet(this, _memfs).exports.fd_fdstat_set_rights),
      fd_filestat_get: wrap(__privateGet(this, _memfs).exports.fd_filestat_get),
      fd_filestat_set_size: wrap(__privateGet(this, _memfs).exports.fd_filestat_set_size),
      fd_filestat_set_times: wrap(__privateGet(this, _memfs).exports.fd_filestat_set_times),
      fd_pread: wrap(__privateGet(this, _memfs).exports.fd_pread),
      fd_prestat_dir_name: wrap(__privateGet(this, _memfs).exports.fd_prestat_dir_name),
      fd_prestat_get: wrap(__privateGet(this, _memfs).exports.fd_prestat_get),
      fd_pwrite: wrap(__privateGet(this, _memfs).exports.fd_pwrite),
      fd_read: wrap(__privateMethod(this, _fd_read, fd_read_fn)),
      fd_readdir: wrap(__privateGet(this, _memfs).exports.fd_readdir),
      fd_renumber: wrap(__privateGet(this, _memfs).exports.fd_renumber),
      fd_seek: wrap(__privateGet(this, _memfs).exports.fd_seek),
      fd_sync: wrap(__privateGet(this, _memfs).exports.fd_sync),
      fd_tell: wrap(__privateGet(this, _memfs).exports.fd_tell),
      fd_write: wrap(__privateMethod(this, _fd_write, fd_write_fn)),
      path_create_directory: wrap(__privateGet(this, _memfs).exports.path_create_directory),
      path_filestat_get: wrap(__privateGet(this, _memfs).exports.path_filestat_get),
      path_filestat_set_times: wrap(__privateGet(this, _memfs).exports.path_filestat_set_times),
      path_link: wrap(__privateGet(this, _memfs).exports.path_link),
      path_open: wrap(__privateGet(this, _memfs).exports.path_open),
      path_readlink: wrap(__privateGet(this, _memfs).exports.path_readlink),
      path_remove_directory: wrap(__privateGet(this, _memfs).exports.path_remove_directory),
      path_rename: wrap(__privateGet(this, _memfs).exports.path_rename),
      path_symlink: wrap(__privateGet(this, _memfs).exports.path_symlink),
      path_unlink_file: wrap(__privateGet(this, _memfs).exports.path_unlink_file),
      poll_oneoff: wrap(__privateMethod(this, _poll_oneoff, poll_oneoff_fn)),
      proc_exit: wrap(__privateMethod(this, _proc_exit, proc_exit_fn)),
      proc_raise: wrap(__privateMethod(this, _proc_raise, proc_raise_fn)),
      random_get: wrap(__privateMethod(this, _random_get, random_get_fn)),
      sched_yield: wrap(__privateMethod(this, _sched_yield, sched_yield_fn)),
      sock_recv: wrap(__privateMethod(this, _sock_recv, sock_recv_fn)),
      sock_send: wrap(__privateMethod(this, _sock_send, sock_send_fn)),
      sock_shutdown: wrap(__privateMethod(this, _sock_shutdown, sock_shutdown_fn))
    };
  }
};
_args = /* @__PURE__ */ new WeakMap();
_env = /* @__PURE__ */ new WeakMap();
_memory = /* @__PURE__ */ new WeakMap();
_preopens = /* @__PURE__ */ new WeakMap();
_returnOnExit = /* @__PURE__ */ new WeakMap();
_streams = /* @__PURE__ */ new WeakMap();
_memfs = /* @__PURE__ */ new WeakMap();
_state = /* @__PURE__ */ new WeakMap();
_asyncify = /* @__PURE__ */ new WeakMap();
_view = /* @__PURE__ */ new WeakSet();
view_fn = function() {
  if (!__privateGet(this, _memory)) {
    throw new Error("this.memory not set");
  }
  return new DataView(__privateGet(this, _memory).buffer);
};
_fillValues = /* @__PURE__ */ new WeakSet();
fillValues_fn = function(values, iter_ptr_ptr, buf_ptr) {
  const encoder = new TextEncoder();
  const buffer = new Uint8Array(__privateGet(this, _memory).buffer);
  const view = __privateMethod(this, _view, view_fn).call(this);
  for (const value of values) {
    view.setUint32(iter_ptr_ptr, buf_ptr, true);
    iter_ptr_ptr += 4;
    const data = encoder.encode(`${value}\0`);
    buffer.set(data, buf_ptr);
    buf_ptr += data.length;
  }
  return Result.SUCCESS;
};
_fillSizes = /* @__PURE__ */ new WeakSet();
fillSizes_fn = function(values, count_ptr, buffer_size_ptr) {
  const view = __privateMethod(this, _view, view_fn).call(this);
  const encoder = new TextEncoder();
  const len = values.reduce((acc, value) => {
    return acc + encoder.encode(`${value}\0`).length;
  }, 0);
  view.setUint32(count_ptr, values.length, true);
  view.setUint32(buffer_size_ptr, len, true);
  return Result.SUCCESS;
};
_args_get = /* @__PURE__ */ new WeakSet();
args_get_fn = function(argv_ptr_ptr, argv_buf_ptr) {
  return __privateMethod(this, _fillValues, fillValues_fn).call(this, __privateGet(this, _args), argv_ptr_ptr, argv_buf_ptr);
};
_args_sizes_get = /* @__PURE__ */ new WeakSet();
args_sizes_get_fn = function(argc_ptr, argv_buf_size_ptr) {
  return __privateMethod(this, _fillSizes, fillSizes_fn).call(this, __privateGet(this, _args), argc_ptr, argv_buf_size_ptr);
};
_clock_res_get = /* @__PURE__ */ new WeakSet();
clock_res_get_fn = function(id, retptr0) {
  switch (id) {
    case Clock.REALTIME:
    case Clock.MONOTONIC:
    case Clock.PROCESS_CPUTIME_ID:
    case Clock.THREAD_CPUTIME_ID: {
      const view = __privateMethod(this, _view, view_fn).call(this);
      view.setBigUint64(retptr0, BigInt(1e6), true);
      return Result.SUCCESS;
    }
  }
  return Result.EINVAL;
};
_clock_time_get = /* @__PURE__ */ new WeakSet();
clock_time_get_fn = function(id, precision, retptr0) {
  switch (id) {
    case Clock.REALTIME:
    case Clock.MONOTONIC:
    case Clock.PROCESS_CPUTIME_ID:
    case Clock.THREAD_CPUTIME_ID: {
      const view = __privateMethod(this, _view, view_fn).call(this);
      view.setBigUint64(retptr0, BigInt(Date.now()) * BigInt(1e6), true);
      return Result.SUCCESS;
    }
  }
  return Result.EINVAL;
};
_environ_get = /* @__PURE__ */ new WeakSet();
environ_get_fn = function(env_ptr_ptr, env_buf_ptr) {
  return __privateMethod(this, _fillValues, fillValues_fn).call(this, __privateGet(this, _env), env_ptr_ptr, env_buf_ptr);
};
_environ_sizes_get = /* @__PURE__ */ new WeakSet();
environ_sizes_get_fn = function(env_ptr, env_buf_size_ptr) {
  return __privateMethod(this, _fillSizes, fillSizes_fn).call(this, __privateGet(this, _env), env_ptr, env_buf_size_ptr);
};
_fd_read = /* @__PURE__ */ new WeakSet();
fd_read_fn = function(fd, iovs_ptr, iovs_len, retptr0) {
  if (fd < 3) {
    const desc = __privateGet(this, _streams)[fd];
    const view = __privateMethod(this, _view, view_fn).call(this);
    const iovs = iovViews(view, iovs_ptr, iovs_len);
    const result = desc.readv(iovs);
    if (typeof result === "number") {
      view.setUint32(retptr0, result, true);
      return Result.SUCCESS;
    }
    const promise = result;
    return promise.then((read) => {
      view.setUint32(retptr0, read, true);
      return Result.SUCCESS;
    });
  }
  return __privateGet(this, _memfs).exports.fd_read(fd, iovs_ptr, iovs_len, retptr0);
};
_fd_write = /* @__PURE__ */ new WeakSet();
fd_write_fn = function(fd, ciovs_ptr, ciovs_len, retptr0) {
  if (fd < 3) {
    const desc = __privateGet(this, _streams)[fd];
    const view = __privateMethod(this, _view, view_fn).call(this);
    const iovs = iovViews(view, ciovs_ptr, ciovs_len);
    const result = desc.writev(iovs);
    if (typeof result === "number") {
      view.setUint32(retptr0, result, true);
      return Result.SUCCESS;
    }
    let promise = result;
    return promise.then((written) => {
      view.setUint32(retptr0, written, true);
      return Result.SUCCESS;
    });
  }
  return __privateGet(this, _memfs).exports.fd_write(fd, ciovs_ptr, ciovs_len, retptr0);
};
_poll_oneoff = /* @__PURE__ */ new WeakSet();
poll_oneoff_fn = function(in_ptr, out_ptr, nsubscriptions, retptr0) {
  return Result.ENOSYS;
};
_proc_exit = /* @__PURE__ */ new WeakSet();
proc_exit_fn = function(code) {
  throw new ProcessExit(code);
};
_proc_raise = /* @__PURE__ */ new WeakSet();
proc_raise_fn = function(signal) {
  return Result.ENOSYS;
};
_random_get = /* @__PURE__ */ new WeakSet();
random_get_fn = function(buffer_ptr, buffer_len) {
  const buffer = new Uint8Array(__privateGet(this, _memory).buffer, buffer_ptr, buffer_len);
|
||||||
|
crypto.getRandomValues(buffer);
|
||||||
|
return Result.SUCCESS;
|
||||||
|
};
|
||||||
|
_sched_yield = /* @__PURE__ */ new WeakSet();
|
||||||
|
sched_yield_fn = function() {
|
||||||
|
return Result.SUCCESS;
|
||||||
|
};
|
||||||
|
_sock_recv = /* @__PURE__ */ new WeakSet();
|
||||||
|
sock_recv_fn = function(fd, ri_data_ptr, ri_data_len, ri_flags, retptr0, retptr1) {
|
||||||
|
return Result.ENOSYS;
|
||||||
|
};
|
||||||
|
_sock_send = /* @__PURE__ */ new WeakSet();
|
||||||
|
sock_send_fn = function(fd, si_data_ptr, si_data_len, si_flags, retptr0) {
|
||||||
|
return Result.ENOSYS;
|
||||||
|
};
|
||||||
|
_sock_shutdown = /* @__PURE__ */ new WeakSet();
|
||||||
|
sock_shutdown_fn = function(fd, how) {
|
||||||
|
return Result.ENOSYS;
|
||||||
|
};
|
||||||
|
|
||||||
|
// src/index.js
|
|
@@ -1,207 +0,0 @@
const builtin = @import("builtin");
const std = @import("std");
const flexilib = @import("flexilib-interface");
const interface = @import("universal_lambda_interface");

const log = std.log.scoped(.standalone_server);

/// We need to create a child process to be able to deal with panics appropriately
pub fn runStandaloneServerParent(allocator: ?std.mem.Allocator, event_handler: interface.HandlerFn) !u8 {
    if (builtin.os.tag == .wasi) return error.WasiNotSupported;
    const alloc = allocator orelse std.heap.page_allocator;

    var arena = std.heap.ArenaAllocator.init(alloc);
    defer arena.deinit();

    const aa = arena.allocator();
    var al = std.ArrayList([]const u8).init(aa);
    defer al.deinit();
    var argi = std.process.args();
    // We do this first so it shows more prominently when looking at processes
    // Also it will be slightly faster for whatever that is worth
    const child_arg = "--child_of_standalone_server";
    if (argi.next()) |a| try al.append(a);
    try al.append(child_arg);
    while (argi.next()) |a| {
        if (std.mem.eql(u8, child_arg, a)) {
            // This should never actually return
            try runStandaloneServer(allocator, event_handler, 8080); // TODO: configurable port
            return 0;
        }
        try al.append(a);
    }
    // Parent
    const stdin = std.io.getStdIn();
    const stdout = std.io.getStdOut();
    const stderr = std.io.getStdErr();
    while (true) {
        var cp = std.ChildProcess.init(al.items, alloc);
        cp.stdin = stdin;
        cp.stdout = stdout;
        cp.stderr = stderr;
        _ = try cp.spawnAndWait();
        try stderr.writeAll("Caught abnormal process termination, relaunching server");
    }
}

/// Will create a web server and marshall all requests back to our event handler
/// To keep things simple, we'll have this on a single thread, at least for now
fn runStandaloneServer(allocator: ?std.mem.Allocator, event_handler: interface.HandlerFn, port: u16) !void {
    const alloc = allocator orelse std.heap.page_allocator;

    var arena = std.heap.ArenaAllocator.init(alloc);
    defer arena.deinit();

    var aa = arena.allocator();
    const address = try std.net.Address.parseIp("127.0.0.1", port);
    var net_server = try address.listen(.{ .reuse_address = true });
    const server_port = net_server.listen_address.in.getPort();
    _ = try std.fmt.bufPrint(&server_url, "http://127.0.0.1:{d}", .{server_port});
    log.info("server listening at {s}", .{server_url});
    if (builtin.is_test) server_ready = true;

    // No threads, maybe later
    //log.info("starting server thread, tid {d}", .{std.Thread.getCurrentId()});
    while (remaining_requests == null or remaining_requests.? > 0) {
        defer {
            if (remaining_requests) |*r| r.* -= 1;
            if (!arena.reset(.{ .retain_with_limit = 1024 * 1024 })) {
                // reallocation failed, arena is degraded
                log.warn("Arena reset failed and is degraded. Resetting arena", .{});
                arena.deinit();
                arena = std.heap.ArenaAllocator.init(alloc);
                aa = arena.allocator();
            }
        }
        const supports_getrusage = builtin.os.tag != .windows and @hasDecl(std.posix.system, "rusage"); // Is Windows it?
        var rss: if (supports_getrusage) std.posix.rusage else void = undefined;
        if (supports_getrusage and builtin.mode == .Debug)
            rss = std.posix.getrusage(std.posix.rusage.SELF);
        if (builtin.is_test) bytes_allocated = arena.queryCapacity();
        processRequest(aa, &net_server, event_handler) catch |e| {
            log.err("Unexpected error processing request: {any}", .{e});
            if (@errorReturnTrace()) |trace| {
                std.debug.dumpStackTrace(trace.*);
            }
        };
        if (builtin.is_test) bytes_allocated = arena.queryCapacity() - bytes_allocated;
        if (supports_getrusage and builtin.mode == .Debug) { // and debug mode) {
            const rusage = std.posix.getrusage(std.posix.rusage.SELF);
            log.debug(
                "Request complete, max RSS of process: {d}M. Incremental: {d}K, User: {d}μs, System: {d}μs",
                .{
                    @divTrunc(rusage.maxrss, 1024),
                    rusage.maxrss - rss.maxrss,
                    (rusage.utime.tv_sec - rss.utime.tv_sec) * std.time.us_per_s +
                        rusage.utime.tv_usec - rss.utime.tv_usec,
                    (rusage.stime.tv_sec - rss.stime.tv_sec) * std.time.us_per_s +
                        rusage.stime.tv_usec - rss.stime.tv_usec,
                },
            );
        }
    }
    return;
}

fn processRequest(aa: std.mem.Allocator, server: *std.net.Server, event_handler: interface.HandlerFn) !void {
    // This function is under test, but not the standalone server itself
    var connection = try server.accept();
    defer connection.stream.close();

    var read_buffer: [1024 * 16]u8 = undefined; // TODO: Fix this
    var server_connection = std.http.Server.init(connection, &read_buffer);
    var req = try server_connection.receiveHead();

    const request_body = try (try req.reader()).readAllAlloc(aa, @as(usize, std.math.maxInt(usize)));
    var request_headers = std.ArrayList(std.http.Header).init(aa);
    defer request_headers.deinit();
    var hi = req.iterateHeaders();
    while (hi.next()) |h| try request_headers.append(h);

    var response = interface.Response.init(aa);
    defer response.deinit();
    response.request.headers = request_headers.items;
    response.request.headers_owned = false;
    response.request.target = req.head.target;
    response.request.method = req.head.method;
    response.headers = &.{};
    response.headers_owned = false;

    var respond_options = std.http.Server.Request.RespondOptions{};
    const response_bytes = event_handler(aa, request_body, &response) catch |e| brk: {
        respond_options.status = response.status;
        respond_options.reason = response.reason;
        if (respond_options.status.class() == .success) {
            respond_options.status = .internal_server_error;
            respond_options.reason = null;
            response.body.items = "";
        }
        // TODO: stream body to client? or keep internal?
        // TODO: more about this particular request
        log.err("Unexpected error from executor processing request: {any}", .{e});
        if (@errorReturnTrace()) |trace| {
            std.debug.dumpStackTrace(trace.*);
        }
        break :brk "Unexpected error generating request to lambda";
    };

    const final_response = try std.mem.concat(aa, u8, &[_][]const u8{ response.body.items, response_bytes });
    try req.respond(final_response, respond_options);
}

var server_ready = false;
var remaining_requests: ?usize = null;
var server_url: ["http://127.0.0.1:99999".len]u8 = undefined;
var bytes_allocated: usize = 0;

fn testRequest(method: std.http.Method, target: []const u8, event_handler: interface.HandlerFn) !void {
    remaining_requests = 1;
    defer remaining_requests = null;
    const server_thread = try std.Thread.spawn(
        .{},
        runStandaloneServer,
        .{ null, event_handler, 0 },
    );
    var client = std.http.Client{ .allocator = std.testing.allocator };
    defer client.deinit();
    defer server_ready = false;
    while (!server_ready) std.time.sleep(1);

    const url = try std.mem.concat(std.testing.allocator, u8, &[_][]const u8{ server_url[0..], target });
    defer std.testing.allocator.free(url);
    log.debug("fetch from url: {s}", .{url});

    var response_data = std.ArrayList(u8).init(std.testing.allocator);
    defer response_data.deinit();

    const resp = try client.fetch(.{
        .response_storage = .{ .dynamic = &response_data },
        .method = method,
        .location = .{ .url = url },
    });

    server_thread.join();
    log.debug("Bytes allocated during request: {d}", .{bytes_allocated});
    log.debug("Response status: {}", .{resp.status});
    log.debug("Response: {s}", .{response_data.items});
}

fn testGet(comptime path: []const u8, event_handler: interface.HandlerFn) !void {
    try testRequest(.GET, path, event_handler);
}
test "can make a request" {
    // std.testing.log_level = .debug;
    if (@import("builtin").os.tag == .wasi) return error.SkipZigTest;
    const HandlerClosure = struct {
        var data_received: []const u8 = undefined;
        var context_received: interface.Context = undefined;
        const Self = @This();
        pub fn handler(allocator: std.mem.Allocator, event_data: []const u8, context: interface.Context) ![]const u8 {
            _ = allocator;
            data_received = event_data;
            context_received = context;
            return "success";
        }
    };
    try testGet("/", HandlerClosure.handler);
}
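For context, a downstream program drives the server above only indirectly, through the package's `run` entry point, passing a handler with the same signature the test handler uses. A minimal consumer `main.zig` might look like the sketch below; the `universal_lambda_handler` and `universal_lambda_interface` module names are the ones this repo's build code registers, and the echo behavior is purely illustrative.

```zig
const std = @import("std");
// Module names here are the ones registered by this repo's build files;
// adjust them if your build wires the modules differently.
const universal_lambda = @import("universal_lambda_handler");
const interface = @import("universal_lambda_interface");

pub fn main() !u8 {
    // Passing null for the allocator lets the library fall back to
    // std.heap.page_allocator, as runStandaloneServerParent does above.
    return try universal_lambda.run(null, handler);
}

fn handler(allocator: std.mem.Allocator, event_data: []const u8, context: interface.Context) ![]const u8 {
    _ = allocator;
    _ = context;
    // Echo the event back; a real handler would build a response here.
    return event_data;
}
```

The same handler then runs unchanged as a console app, AWS Lambda function, or standalone web server, depending on the build step selected.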
@@ -5,7 +5,7 @@ const builtin = @import("builtin");
 ///
 /// * standalone_server: This will run the handler as a standalone web server
 ///
-pub fn configureBuild(b: *std.Build, exe: *std.Build.Step.Compile) !void {
+pub fn configureBuild(b: *std.build.Builder, exe: *std.Build.Step.Compile) !void {
     _ = exe;
     // We don't actually need to do much here. Basically we need a dummy step,
     // but one which the user will select, which will allow our options mechanism
@@ -1,13 +1,14 @@
-const build_options = @import("universal_lambda_build_options");
 const std = @import("std");
+const build_options = @import("universal_lambda_build_options");
+const flexilib = @import("flexilib-interface");
 const interface = @import("universal_lambda_interface");

 const log = std.log.scoped(.universal_lambda);

 const runFn = blk: {
     switch (build_options.build_type) {
-        .awslambda => break :blk @import("awslambda.zig").run,
-        .standalone_server => break :blk @import("standalone_server.zig").runStandaloneServerParent,
+        .awslambda => break :blk @import("lambda.zig").run,
+        .standalone_server => break :blk runStandaloneServerParent,
         // In the case of flexilib, our root module is actually flexilib.zig
         // so we need to import that, otherwise we risk the dreaded "file exists
         // in multiple modules" problem
@@ -26,11 +27,163 @@ pub fn run(allocator: ?std.mem.Allocator, event_handler: interface.HandlerFn) !u
     return try runFn(allocator, event_handler);
 }

+/// We need to create a child process to be able to deal with panics appropriately
+fn runStandaloneServerParent(allocator: ?std.mem.Allocator, event_handler: interface.HandlerFn) !u8 {
+    const alloc = allocator orelse std.heap.page_allocator;
+
+    var arena = std.heap.ArenaAllocator.init(alloc);
+    defer arena.deinit();
+
+    var aa = arena.allocator();
+    var al = std.ArrayList([]const u8).init(aa);
+    defer al.deinit();
+    var argi = std.process.args();
+    // We do this first so it shows more prominently when looking at processes
+    // Also it will be slightly faster for whatever that is worth
+    const child_arg = "--child_of_standalone_server";
+    if (argi.next()) |a| try al.append(a);
+    try al.append(child_arg);
+    while (argi.next()) |a| {
+        if (std.mem.eql(u8, child_arg, a)) {
+            // This should never actually return
+            return try runStandaloneServer(allocator, event_handler);
+        }
+        try al.append(a);
+    }
+    // Parent
+    const stdin = std.io.getStdIn();
+    const stdout = std.io.getStdOut();
+    const stderr = std.io.getStdErr();
+    while (true) {
+        var cp = std.ChildProcess.init(al.items, alloc);
+        cp.stdin = stdin;
+        cp.stdout = stdout;
+        cp.stderr = stderr;
+        _ = try cp.spawnAndWait();
+        try stderr.writeAll("Caught abnormal process termination, relaunching server");
+    }
+}
+
+/// Will create a web server and marshall all requests back to our event handler
+/// To keep things simple, we'll have this on a single thread, at least for now
+fn runStandaloneServer(allocator: ?std.mem.Allocator, event_handler: interface.HandlerFn) !u8 {
+    const alloc = allocator orelse std.heap.page_allocator;
+
+    var arena = std.heap.ArenaAllocator.init(alloc);
+    defer arena.deinit();
+
+    var aa = arena.allocator();
+    var server = std.http.Server.init(aa, .{ .reuse_address = true });
+    defer server.deinit();
+    const address = try std.net.Address.parseIp("127.0.0.1", 8080); // TODO: allow config
+    try server.listen(address);
+    const server_port = server.socket.listen_address.in.getPort();
+    var uri: ["http://127.0.0.1:99999".len]u8 = undefined;
+    _ = try std.fmt.bufPrint(&uri, "http://127.0.0.1:{d}", .{server_port});
+    log.info("server listening at {s}", .{uri});
+
+    // No threads, maybe later
+    //log.info("starting server thread, tid {d}", .{std.Thread.getCurrentId()});
+    while (true) {
+        defer {
+            if (!arena.reset(.{ .retain_with_limit = 1024 * 1024 })) {
+                // reallocation failed, arena is degraded
+                log.warn("Arena reset failed and is degraded. Resetting arena", .{});
+                arena.deinit();
+                arena = std.heap.ArenaAllocator.init(alloc);
+                aa = arena.allocator();
+            }
+        }
+        const builtin = @import("builtin");
+        const supports_getrusage = builtin.os.tag != .windows and @hasDecl(std.os.system, "rusage"); // Is Windows it?
+        var rss: if (supports_getrusage) std.os.rusage else void = undefined;
+        if (supports_getrusage and builtin.mode == .Debug)
+            rss = std.os.getrusage(std.os.rusage.SELF);
+        processRequest(aa, &server, event_handler) catch |e| {
+            log.err("Unexpected error processing request: {any}", .{e});
+            if (@errorReturnTrace()) |trace| {
+                std.debug.dumpStackTrace(trace.*);
+            }
+        };
+        if (supports_getrusage and builtin.mode == .Debug) { // and debug mode) {
+            const rusage = std.os.getrusage(std.os.rusage.SELF);
+            log.debug(
+                "Request complete, max RSS of process: {d}M. Incremental: {d}K, User: {d}μs, System: {d}μs",
+                .{
+                    @divTrunc(rusage.maxrss, 1024),
+                    rusage.maxrss - rss.maxrss,
+                    (rusage.utime.tv_sec - rss.utime.tv_sec) * std.time.us_per_s +
+                        rusage.utime.tv_usec - rss.utime.tv_usec,
+                    (rusage.stime.tv_sec - rss.stime.tv_sec) * std.time.us_per_s +
+                        rusage.stime.tv_usec - rss.stime.tv_usec,
+                },
+            );
+        }
+    }
+    return 0;
+}
+
+fn processRequest(aa: std.mem.Allocator, server: *std.http.Server, event_handler: interface.HandlerFn) !void {
+    var res = try server.accept(.{ .allocator = aa });
+    defer {
+        _ = res.reset();
+        if (res.headers.owned and res.headers.list.items.len > 0) res.headers.deinit();
+        res.deinit();
+    }
+    try res.wait(); // wait for client to send a complete request head
+
+    const errstr = "Internal Server Error\n";
+    var errbuf: [errstr.len]u8 = undefined;
+    @memcpy(&errbuf, errstr);
+    var response_bytes: []const u8 = errbuf[0..];
+
+    var body = if (res.request.content_length) |l|
+        try res.reader().readAllAlloc(aa, @as(usize, @intCast(l)))
+    else
+        try aa.dupe(u8, "");
+    // no need to free - will be handled by arena
+
+    var response = interface.Response.init(aa);
+    defer response.deinit();
+    response.request.headers = res.request.headers;
+    response.request.headers_owned = false;
+    response.request.target = res.request.target;
+    response.request.method = res.request.method;
+    response.headers = res.headers;
+    response.headers_owned = false;
+
+    response_bytes = event_handler(aa, body, &response) catch |e| brk: {
+        res.status = response.status;
+        res.reason = response.reason;
+        if (res.status.class() == .success) {
+            res.status = .internal_server_error;
+            res.reason = null;
+        }
+        // TODO: stream body to client? or keep internal?
+        // TODO: more about this particular request
+        log.err("Unexpected error from executor processing request: {any}", .{e});
+        if (@errorReturnTrace()) |trace| {
+            std.debug.dumpStackTrace(trace.*);
+        }
+        break :brk "Unexpected error generating request to lambda";
+    };
+    res.transfer_encoding = .{ .content_length = response_bytes.len };
+
+    try res.do();
+    _ = try res.writer().writeAll(response.body.items);
+    _ = try res.writer().writeAll(response_bytes);
+    try res.finish();
+}
 test {
     std.testing.refAllDecls(@This());
+    // if (builtin.os.tag == .wasi) return error.SkipZigTest;
+    if (@import("builtin").os.tag != .wasi) {
+        // these use http
+        std.testing.refAllDecls(@import("lambda.zig"));
+        std.testing.refAllDecls(@import("cloudflaredeploy.zig"));
+        std.testing.refAllDecls(@import("CloudflareDeployStep.zig"));
+    }
     std.testing.refAllDecls(@import("console.zig"));
-    std.testing.refAllDecls(@import("standalone_server.zig"));
-    std.testing.refAllDecls(@import("awslambda.zig"));
     // By importing flexilib.zig, this breaks downstream any time someone
     // tries to build flexilib, because flexilib.zig becomes the root module,
     // then gets imported here again. It shouldn't be done unless doing
@@ -48,3 +201,58 @@ test {

     // TODO: Do we want build files here too?
 }
+
+fn testRequest(request_bytes: []const u8, event_handler: interface.HandlerFn) !void {
+    const allocator = std.testing.allocator;
+    var arena = std.heap.ArenaAllocator.init(allocator);
+    defer arena.deinit();
+
+    var server = std.http.Server.init(allocator, .{ .reuse_address = true });
+    defer server.deinit();
+
+    const address = try std.net.Address.parseIp("127.0.0.1", 0);
+    try server.listen(address);
+    const server_port = server.socket.listen_address.in.getPort();
+
+    var al = std.ArrayList(u8).init(allocator);
+    defer al.deinit();
+    var writer = al.writer();
+    _ = writer;
+    var aa = arena.allocator();
+    var bytes_allocated: usize = 0;
+    // pre-warm
+    const server_thread = try std.Thread.spawn(
+        .{},
+        processRequest,
+        .{ aa, &server, event_handler },
+    );
+
+    const stream = try std.net.tcpConnectToHost(allocator, "127.0.0.1", server_port);
+    defer stream.close();
+    _ = try stream.writeAll(request_bytes[0..]);
+
+    server_thread.join();
+    log.debug("Bytes allocated during request: {d}", .{arena.queryCapacity() - bytes_allocated});
+    log.debug("Stdout: {s}", .{al.items});
+}
+
+fn testGet(comptime path: []const u8, event_handler: interface.HandlerFn) !void {
+    try testRequest("GET " ++ path ++ " HTTP/1.1\r\n" ++
+        "Accept: */*\r\n" ++
+        "\r\n", event_handler);
+}
+
+test "can make a request" {
+    if (@import("builtin").os.tag == .wasi) return error.SkipZigTest;
+    const HandlerClosure = struct {
+        var data_received: []const u8 = undefined;
+        var context_received: interface.Context = undefined;
+        const Self = @This();
+        pub fn handler(allocator: std.mem.Allocator, event_data: []const u8, context: interface.Context) ![]const u8 {
+            _ = allocator;
+            data_received = event_data;
+            context_received = context;
+            return "success";
+        }
+    };
+    try testGet("/", HandlerClosure.handler);
+}
@@ -14,15 +14,15 @@ pub const BuildType = enum {

 pub var module_root: ?[]const u8 = null;

-pub fn configureBuild(b: *std.Build, cs: *std.Build.Step.Compile, universal_lambda_zig_dep: *std.Build.Dependency) !void {
+pub fn configureBuild(b: *std.Build, cs: *std.Build.Step.Compile) !void {
     const function_name = b.option([]const u8, "function-name", "Function name for Lambda [zig-fn]") orelse "zig-fn";

-    // const file_location = try addModules(b, cs);
+    const file_location = try addModules(b, cs);

     // Add steps
-    try @import("lambda-zig").configureBuild(b, cs, function_name);
-    try @import("cloudflare-worker-deploy").configureBuild(b, cs, function_name);
-    try @import("flexilib_build.zig").configureBuild(b, cs, universal_lambda_zig_dep);
+    try @import("lambda_build.zig").configureBuild(b, cs, function_name);
+    try @import("cloudflare_build.zig").configureBuild(b, cs, function_name);
+    try @import("flexilib_build.zig").configureBuild(b, cs, file_location);
     try @import("standalone_server_build.zig").configureBuild(b, cs);
 }

@ -33,72 +33,127 @@ pub fn configureBuild(b: *std.Build, cs: *std.Build.Step.Compile, universal_lamb
|
||||||
/// * universal_lambda_build_options
|
/// * universal_lambda_build_options
|
||||||
/// * flexilib-interface
|
/// * flexilib-interface
|
||||||
/// * universal_lambda_handler
|
/// * universal_lambda_handler
|
||||||
pub fn addImports(b: *std.Build, cs: *std.Build.Step.Compile, universal_lambda_zig_dep: ?*std.Build.Dependency) void {
|
pub fn addModules(b: *std.Build, cs: *std.Build.Step.Compile) ![]const u8 {
|
||||||
const Modules = struct {
|
const file_location = try findFileLocation(b);
|
||||||
aws_lambda_runtime: *std.Build.Module,
|
const options_module = try createOptionsModule(b, cs);
|
||||||
flexilib_interface: *std.Build.Module,
|
|
||||||
universal_lambda_interface: *std.Build.Module,
|
|
||||||
universal_lambda_handler: *std.Build.Module,
|
|
||||||
universal_lambda_build_options: *std.Build.Module,
|
|
||||||
};
|
|
||||||
const modules =
|
|
||||||
if (universal_lambda_zig_dep) |d|
|
|
||||||
Modules{
|
|
||||||
.aws_lambda_runtime = d.module("aws_lambda_runtime"),
|
|
||||||
.flexilib_interface = d.module("flexilib-interface"),
|
|
||||||
.universal_lambda_interface = d.module("universal_lambda_interface"),
|
|
||||||
.universal_lambda_handler = d.module("universal_lambda_handler"),
|
|
||||||
.universal_lambda_build_options = createOptionsModule(d.builder, cs),
|
|
||||||
}
|
|
||||||
else
|
|
||||||
Modules{
|
|
||||||
.aws_lambda_runtime = b.modules.get("aws_lambda_runtime").?,
|
|
||||||
.flexilib_interface = b.modules.get("flexilib-interface").?,
|
|
||||||
.universal_lambda_interface = b.modules.get("universal_lambda_interface").?,
|
|
||||||
.universal_lambda_handler = b.modules.get("universal_lambda_handler").?,
|
|
||||||
.universal_lambda_build_options = createOptionsModule(b, cs),
|
|
||||||
};
|
|
||||||
|
|
||||||
cs.root_module.addImport("universal_lambda_build_options", modules.universal_lambda_build_options);
|
// We need to add the interface module here as well, so universal_lambda.zig
|
||||||
cs.root_module.addImport("flexilib-interface", modules.flexilib_interface);
|
// can reference it. Unfortunately, this creates an issue that the consuming
|
||||||
cs.root_module.addImport("universal_lambda_interface", modules.universal_lambda_interface);
|
// build.zig.zon must have flexilib included, even if they're not building
|
||||||
cs.root_module.addImport("universal_lambda_handler", modules.universal_lambda_handler);
|
// flexilib. TODO: Accept for now, but we need to think through this situation
|
||||||
cs.root_module.addImport("aws_lambda_runtime", modules.aws_lambda_runtime);
|
// This might be fixed in 0.12.0 (see https://github.com/ziglang/zig/issues/16172).
|
||||||
|
// We can also possibly use the findFileLocation hack above in concert with
|
||||||
// universal lambda handler also needs these imports
|
```diff
-    // addAnonymousModule
-    modules.universal_lambda_handler.addImport("universal_lambda_interface", modules.universal_lambda_interface);
-    modules.universal_lambda_handler.addImport("flexilib-interface", modules.flexilib_interface);
-    modules.universal_lambda_handler.addImport("universal_lambda_build_options", modules.universal_lambda_build_options);
-    modules.universal_lambda_handler.addImport("aws_lambda_runtime", modules.aws_lambda_runtime);
-    return;
+    const flexilib_dep = b.dependency("flexilib", .{
+        .target = cs.target,
+        .optimize = cs.optimize,
+    });
+    const flexilib_module = flexilib_dep.module("flexilib-interface");
+    // Make the interface available for consumption
+    cs.addModule("flexilib-interface", flexilib_module);
+    cs.addAnonymousModule("universal_lambda_interface", .{
+        .source_file = .{ .path = b.pathJoin(&[_][]const u8{ file_location, "interface.zig" }) },
+        // We also need the interface module available here
+        .dependencies = &[_]std.Build.ModuleDependency{},
+    });
+    // Add module
+    cs.addAnonymousModule("universal_lambda_handler", .{
+        // Source file can be anywhere on disk, does not need to be a subdirectory
+        .source_file = .{ .path = b.pathJoin(&[_][]const u8{ file_location, "universal_lambda.zig" }) },
+        // We also need the interface module available here
+        .dependencies = &[_]std.Build.ModuleDependency{
+            // Add options module so we can let our universal_lambda know what
+            // type of interface is necessary
+            .{
+                .name = "universal_lambda_build_options",
+                .module = options_module,
+            },
+            .{
+                .name = "flexilib-interface",
+                .module = flexilib_module,
+            },
+            .{
+                .name = "universal_lambda_interface",
+                .module = cs.modules.get("universal_lambda_interface").?,
+            },
+        },
+    });
+    return file_location;
 }
```
```diff
+/// This function relies on internal implementation of the build runner
+///
+/// When a developer launches "zig build", a program is compiled, with the
+/// main entrypoint existing in build_runner.zig (this can be overridden
+/// by command line).
+///
+/// The code we see in build.zig is compiled into that program. The program
+/// is named 'build' and stuck in the zig cache, then it is run. There are
+/// two phases to the build.
+///
+/// Build phase, where a graph is established of the steps that need to be run,
+/// then the "make phase", where the steps are actually executed. Steps have
+/// a make function that is called.
+///
+/// This function is reaching into the struct that is the build_runner.zig, and
+/// finding the location of the dependency for ourselves to determine the
+/// location of our own file. This is, of course, brittle, but there does not
+/// seem to be a better way at the moment, and we need this to be able to import
+/// modules appropriately.
+///
+/// For development of this process directly, we'll allow a build option to
+/// override this process completely, because during development it's easier
+/// for the example build.zig to simply import the file directly than it is
+/// to pull from a download location and update hashes every time we change
+fn findFileLocation(b: *std.Build) ![]const u8 {
+    if (module_root) |r| return b.pathJoin(&[_][]const u8{ r, "src" });
+    const build_root = b.option(
+        []const u8,
+        "universal_lambda_build_root",
+        "Build root for universal lambda (development of universal lambda only)",
+    );
+    if (build_root) |br| {
+        return b.pathJoin(&[_][]const u8{ br, "src" });
+    }
+    // This is introduced post 0.11. Once it is available, we can skip the
+    // access check, and instead check the end of the path matches the dependency
+    // hash
+    // for (b.available_deps) |dep| {
+    //     std.debug.print("{}", .{dep});
+    //     // if (std.
+    // }
+    const ulb_root = outer_blk: {
+        // trigger initialization if it hasn't been initialized already
+        _ = b.dependency("universal_lambda_build", .{}); //b.args);
+        var str_iterator = b.initialized_deps.iterator();
+        while (str_iterator.next()) |entry| {
+            const br = entry.key_ptr.*;
+            const marker_found = blk: {
+                // Check for a file that should only exist in our package
+                std.fs.accessAbsolute(b.pathJoin(&[_][]const u8{ br, "src", "flexilib.zig" }), .{}) catch break :blk false;
+                break :blk true;
+            };
+            if (marker_found) break :outer_blk br;
+        }
+        return error.CannotFindUniversalLambdaBuildRoot;
+    };
+    return b.pathJoin(&[_][]const u8{ ulb_root, "src" });
+}
```
```diff
 /// Make our target platform visible to runtime through an import
 /// called "universal_lambda_build_options". This will also be available to the consuming
 /// executable if needed
-pub fn createOptionsModule(b: *std.Build, cs: *std.Build.Step.Compile) *std.Build.Module {
-    if (b.modules.get("universal_lambda_build_options")) |m| return m;
+pub fn createOptionsModule(b: *std.Build, cs: *std.Build.Step.Compile) !*std.Build.Module {
     // We need to go through the command line args, look for argument(s)
     // between "build" and anything prefixed with "-". First take, blow up
     // if there is more than one. That's the step we're rolling with
     // These frameworks I believe are inextricably tied to both build and
     // run behavior.
     //
-    const args = std.process.argsAlloc(b.allocator) catch @panic("OOM");
+    var args = try std.process.argsAlloc(b.allocator);
     defer b.allocator.free(args);
     const options = b.addOptions();
     options.addOption(BuildType, "build_type", findBuildType(args) orelse .exe_run);
-    // The normal way to do this is with module.addOptions, but that actually just does
-    // an import, even though the parameter there is "module_name". addImport takes
-    // a module, but in zig 0.12.0, that's using options.createModule(), which creates
-    // a private module. This is a good default for them, but doesn't work for us
-    const module = b.addModule("universal_lambda_build_options", .{
-        .root_source_file = options.getOutput(),
-    });
-    cs.root_module.addImport("universal_lambda_build_options", module);
-    return module;
+    cs.addOptions("universal_lambda_build_options", options);
+    return cs.modules.get("universal_lambda_build_options").?;
 }

 fn findBuildType(build_args: [][:0]u8) ?BuildType {
```
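For context on `createOptionsModule`: an options module created with `b.addOptions()` becomes importable under the name it is registered with, and each `options.addOption` call surfaces as a `pub const` declaration in that module. A minimal consumer-side sketch, assuming only the names visible in this section (`universal_lambda_build_options`, `build_type`, and the `BuildType` enum) and not any specific API of this package:

```zig
const std = @import("std");
// Import name matches the string the options module is registered under.
const build_options = @import("universal_lambda_build_options");

pub fn main() void {
    // build_type holds the BuildType value baked in at build time
    // (defaulting to .exe_run per the `orelse` in createOptionsModule).
    std.debug.print("build type: {s}\n", .{@tagName(build_options.build_type)});
}
```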