v3.114.0 FileArchive Issue: file not found when using a relative path of more than 2 levels above. Lambda zip: no such file or directory.
What happened?
Our CI process automatically pulls the latest version of Pulumi. It seems that with the latest update we are experiencing an issue where the Lambda code reference cannot find our zip file. We have several Lambda services across a few microservices, but only two of them reference a zip file that is 2 levels above the current directory, and those two are the ones failing. After troubleshooting and ruling out any code changes, I found that reverting to the previous version (3.113.3) works fine.
Example
import { lambda } from "@pulumi/aws";
import { asset } from "@pulumi/pulumi";
// lambdaRole, lambdaPrivateSubnetId, securityGroup and cfg are defined elsewhere in the program.

const lambdaFunction = new lambda.Function("lambda", {
    code: new asset.FileArchive("../../lambda.zip"),
    role: lambdaRole.arn,
    handler: "newrelic-lambda-wrapper.handler",
    layers: [
        "arn:aws:lambda:ca-central-1:451483290750:layer:NewRelicNodeJS20X:10",
    ],
    vpcConfig: {
        subnetIds: [lambdaPrivateSubnetId],
        securityGroupIds: [securityGroup.id],
    },
    runtime: "nodejs18.x",
    timeout: cfg.requireNumber("LAMBDA_TIMEOUT"),
    memorySize: 2048,
    environment: {
        variables: {
            // ...
        },
    },
});
Pulumi Error:
pulumi:pulumi:Stack (my-app):
error: Error: failed to register new resource lambda [aws:lambda/function:Function]: 2 UNKNOWN: failed to compute archive hash for "code": open ../../lambda.zip: no such file or directory
We have other lambda apps that use:
code: new asset.FileArchive("../lambda.zip")
or
code: new asset.FileArchive("lambda.zip")
without issue.
When we revert to Pulumi version 3.113.3, the above works as expected.
To reproduce:
- Update the CLI to v3.114.0.
- Create a lambda function whose code references a zip that is 2 levels above the current directory.
- Run pulumi preview -> it should fail with a "no such file or directory" error.
- Compare the results with CLI version v3.113.3.
Output of pulumi about
CLI
Version 3.114.0
Go Version go1.22.2
Go Compiler gc
Plugins
KIND      NAME    VERSION
language  nodejs  unknown
Host
OS ubuntu
Version 20.04
Arch x86_64
Additional context
We will pin the version to v3.113.3 for now. What's the best approach to making sure our builds don't break due to these very frequent new releases? We need to ensure better consistency, so we may have to keep the version static and update it periodically. What is the standard practice others are using?
Contributing
Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).
Just been replicating this issue for someone over on Slack, and can confirm this is a v3.114.0 regression.
I suspect this PR is related: https://github.com/pulumi/pulumi/pull/15607
It seems the engine runs in the cwd from which pulumi is executed, disregarding the value specified in Pulumi.yaml's main option.
Minimal replication for pulumi preview (using an AWS bucket, as I couldn't find a local provider that utilises Pulumi assets):
# Pulumi.yaml
name: pulumi-chdir
runtime:
  name: python
  options:
    virtualenv: venv
main: stack/
description: A minimal Python Pulumi program
# stack/__main__.py
import pulumi
import pulumi_aws
import os
stub_path = "../stub"
print(os.getcwd()) ## reports in directory of this file for both v3.113.3 and v3.114.0
asset = pulumi.FileArchive(stub_path)
# create a bucket, and upload the asset
bucket = pulumi_aws.s3.Bucket("my-bucket")
bucket_object = pulumi_aws.s3.BucketObject(
    "my-bucket-object",
    bucket=bucket.id,
    # The engine seems to read this from the cwd of the command, instead of
    # the project path specified in the Pulumi.yaml.
    source=asset,
    # make it private
    acl="private",
)
# stub/stub.py
# just a stub file
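Until this is fixed, one possible stopgap (purely a sketch on my part, untested beyond this layout) is to hand the engine an absolute path, so the CLI's working directory no longer matters:
# stack/__main__.py (workaround sketch: absolute archive path)
import os

import pulumi
import pulumi_aws

# Resolve the archive path relative to this file rather than the process
# working directory, which the engine currently derives from where the CLI ran.
HERE = os.path.dirname(os.path.abspath(__file__))
asset = pulumi.FileArchive(os.path.join(HERE, "..", "stub"))

bucket = pulumi_aws.s3.Bucket("my-bucket")
bucket_object = pulumi_aws.s3.BucketObject(
    "my-bucket-object",
    bucket=bucket.id,
    source=asset,
    acl="private",
)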
Yup, can confirm this is the asset/archive hash calculation. I think it's a pretty easy fix of threading the current directory down to the right places. That change is the right fix; we can't just revert it, because we want sub-programs to work one day (soon) and they necessarily run from a different working directory.
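To make "threading the current directory down" concrete, here is an illustrative sketch (Python for readability; the actual engine code is Go, and this is not its implementation): the hash calculation needs to resolve relative asset/archive paths against the project's working directory before touching the filesystem.
import os

# Illustrative only: resolve a relative asset/archive path against the
# project directory (the main value from Pulumi.yaml) instead of the
# process cwd before reading it for hashing.
def resolve_asset_path(path: str, project_dir: str) -> str:
    if os.path.isabs(path):
        return path
    return os.path.normpath(os.path.join(project_dir, path))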
Still facing the same problem with 3.115.0
pulumi about:
CLI
Version 3.115.0
Go Version go1.22.2
Go Compiler gc
Plugins
KIND NAME VERSION
resource aws 6.33.1
language python unknown
Host
OS arch
Version "23.1.4"
Arch x86_64
This project is written in python: executable='/home/anthony/.pyenv/shims/python3' version='3.12.3'
Current Stack: antdking/pulumi-chdir/dev
Found no resources associated with dev
Found no pending operations associated with dev
Backend
Name pulumi.com
URL https://app.pulumi.com/antdking
User antdking
Organizations antdking, antdking-testing
Token type personal
Dependencies:
NAME VERSION
pip 24.0
pulumi_aws 6.33.1
setuptools 69.5.1
wheel 0.43.0
Output of pip show, since the Pulumi SDK version doesn't seem to get detected above:
$ venv/bin/pip show pulumi
Name: pulumi
Version: 3.115.0
Summary: Pulumi's Python SDK
Home-page: https://github.com/pulumi/pulumi
Author:
Author-email:
License: Apache 2.0
Location: /home/anthony/experiments/pulumi-chdir/venv/lib/python3.12/site-packages
Requires: dill, grpcio, protobuf, pyyaml, semver, six
Required-by: pulumi_aws
Presumably this will be fixed once providers are updated to build against 3.115.0?
Still facing the same problem with 3.115.0
Can you post the error message you're getting?
Can you post the error message you're getting?
Previewing update (dev)
View in Browser (Ctrl+O): https://app.pulumi.com/antdking/pulumi-chdir/dev/previews/693c06ac-5d79-4524-b43c-79eb44de87ea
Type Name Plan Info
+ pulumi:pulumi:Stack pulumi-chdir-dev create 1 error
+ └─ aws:s3:Bucket my-bucket create
Diagnostics:
pulumi:pulumi:Stack (pulumi-chdir-dev):
error: Program failed with an unhandled exception:
Traceback (most recent call last):
File "/home/anthony/experiments/pulumi-chdir/venv/lib/python3.12/site-packages/pulumi/runtime/resource.py", line 1009, in do_rpc_call
return monitor.RegisterResource(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/anthony/experiments/pulumi-chdir/venv/lib/python3.12/site-packages/grpc/_channel.py", line 1160, in __call__
return _end_unary_response_blocking(state, call, False, None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/anthony/experiments/pulumi-chdir/venv/lib/python3.12/site-packages/grpc/_channel.py", line 1003, in _end_unary_response_blocking
raise _InactiveRpcError(state) # pytype: disable=not-instantiable
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNKNOWN
details = "failed to compute archive hash for "source": couldn't read archive path '../stub': stat ../stub: no such file or directory"
debug_error_string = "UNKNOWN:Error received from peer {created_time:"2024-05-03T15:38:24.914057726+01:00", grpc_status:2, grpc_message:"failed to compute archive hash for \"source\": couldn\'t read archive path \'../stub\': stat ../stub: no such file or directory"}"
>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/anthony/.pulumi/bin/pulumi-language-python-exec", line 191, in <module>
loop.run_until_complete(coro)
File "/usr/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/home/anthony/experiments/pulumi-chdir/venv/lib/python3.12/site-packages/pulumi/runtime/stack.py", line 138, in run_in_stack
await run_pulumi_func(run)
File "/home/anthony/experiments/pulumi-chdir/venv/lib/python3.12/site-packages/pulumi/runtime/stack.py", line 52, in run_pulumi_func
await wait_for_rpcs()
File "/home/anthony/experiments/pulumi-chdir/venv/lib/python3.12/site-packages/pulumi/runtime/stack.py", line 114, in wait_for_rpcs
await task
File "/home/anthony/experiments/pulumi-chdir/venv/lib/python3.12/site-packages/pulumi/runtime/resource.py", line 1014, in do_register
resp = await asyncio.get_event_loop().run_in_executor(None, do_rpc_call)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/anthony/experiments/pulumi-chdir/venv/lib/python3.12/site-packages/pulumi/runtime/resource.py", line 1011, in do_rpc_call
handle_grpc_error(exn)
File "/home/anthony/experiments/pulumi-chdir/venv/lib/python3.12/site-packages/pulumi/runtime/settings.py", line 307, in handle_grpc_error
raise grpc_error_to_exception(exn)
Exception: failed to compute archive hash for "source": couldn't read archive path '../stub': stat ../stub: no such file or directory
I just tested with FileAsset, and this is fixed in 3.115.0 (it was broken in 3.114.0).
So it's just FileArchive that is still missing the working directory handling.
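For reference, the contrast on v3.115.0 looks roughly like this (a minimal sketch reusing the stack/ layout from the repro above):
# stack/__main__.py (v3.115.0 behaviour)
import pulumi

# A FileAsset with a relative path now resolves against the project directory.
working_asset = pulumi.FileAsset("../stub/stub.py")

# A FileArchive pointing at a directory one level up still fails with
# "no such file or directory", since its path is still resolved from the
# CLI's working directory.
broken_archive = pulumi.FileArchive("../stub")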
Thanks, I'll check that.
Yup, can confirm: we fixed FileArchive to handle files, but the path lookup is still wrong for directories. I'll add that to the tests and get it fixed; we should be able to get a release out with the fix early next week.
confirmed working with v3.115.1, thanks
@Frassle @antdking @justinvp I can confirm that the issue is still present in 3.120.0
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNKNOWN
details = "failed to compute archive hash for "source": failed to expand archive asset 'OceanGeoJSON_lowres.geojson': failed to open asset file '/home/runner/work/cerulean-cloud/cerulean_cloud/cloud_function_scene_relevancy/OceanGeoJSON_lowres.geojson': open /home/runner/work/cerulean-cloud/cerulean_cloud/cloud_function_scene_relevancy/OceanGeoJSON_lowres.geojson: no such file or directory"
debug_error_string = "{"created":"@1718982493.019236457","description":"Error received from peer ipv4:127.0.0.1:41871","file":"src/core/lib/surface/call.cc","file_line":1074,"grpc_message":"failed to compute archive hash for "source": failed to expand archive asset 'OceanGeoJSON_lowres.geojson': failed to open asset file '/home/runner/work/cerulean-cloud/cerulean_cloud/cloud_function_scene_relevancy/OceanGeoJSON_lowres.geojson': open /home/runner/work/cerulean-cloud/cerulean_cloud/cloud_function_scene_relevancy/OceanGeoJSON_lowres.geojson: no such file or directory","grpc_status":2}"
https://github.com/SkyTruth/cerulean-cloud/actions/runs/9615592908/job/26523245717
Pinning to 3.113.3 solves the issue, just like in the original poster's comment.
The following Python code is used here:
import os
import time

import pulumi

# storage, construct_name and cloud_function_scene_relevancy are defined
# elsewhere in the project.

# The Cloud Function source code itself needs to be zipped up into an
# archive, which we create using the pulumi.AssetArchive primitive.
PATH_TO_SOURCE_CODE = "../cerulean_cloud/cloud_function_historical_run"
assets = {}
for file in os.listdir(PATH_TO_SOURCE_CODE):
    location = os.path.join(PATH_TO_SOURCE_CODE, file)
    asset = pulumi.FileAsset(path=location)
    assets[file] = asset
archive = pulumi.AssetArchive(assets=assets)

# Create the single Cloud Storage object, which contains all of the function's
# source code. ("main.py" and "requirements.txt".)
source_archive_object = storage.BucketObject(
    construct_name("source-cloud-function-historical-run"),
    name="handler.py-%f" % time.time(),
    bucket=cloud_function_scene_relevancy.bucket.name,
    source=archive,
)
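The same absolute-path stopgap mentioned earlier in the thread might apply here as well; a sketch (unverified, and assuming this file lives in the directory Pulumi.yaml points at):
# workaround sketch: build the asset paths as absolute paths
import os

import pulumi

HERE = os.path.dirname(os.path.abspath(__file__))
PATH_TO_SOURCE_CODE = os.path.join(
    HERE, "..", "cerulean_cloud", "cloud_function_historical_run"
)

# Hand the engine absolute paths so it no longer resolves them from the
# CLI's working directory.
assets = {
    file: pulumi.FileAsset(path=os.path.join(PATH_TO_SOURCE_CODE, file))
    for file in os.listdir(PATH_TO_SOURCE_CODE)
}
archive = pulumi.AssetArchive(assets=assets)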
Quick look at this: I think it's the Archive.readAssets function that needs the working directory threaded through. I'll see if we can get a repro test for that and fix it up.
This issue has been addressed in PR #16455 and shipped in release v3.122.0.