[DEPLOYMENT] Problem increasing upload size
Describe the issue: Although we increased the upload limit server-wide, we aren't able to upload images or files larger than 2 MB via our OpnForm installation.
Deployment Environment
- OpnForm Version: latest
- Hosting Platform: [Self-hosted server]
- OS: [Linux 6.1.0-30-amd64 Debian 12]
Deployment Method
- Docker
Steps Taken:
- Followed the Docker installation guide
- Ran the .env scripts
- Set up our own domain
Error Messages: This error is seen in the inspector when trying to upload a large file (forms-dev.culturebase.org being the domain we set up):
Bk3RVgqs.js:19 POST https://forms-dev.culturebase.org/api/upload-file 422 (Unprocessable Content)
Configuration Files (client/.env):
NUXT_PUBLIC_APP_URL=https://forms-dev.culturebase.org
NUXT_PUBLIC_API_BASE=https://forms-dev.culturebase.org/api
NUXT_PRIVATE_API_BASE=http://ingress/api
NUXT_PUBLIC_ENV=dev
NUXT_API_SECRET=***
Logs:
- laravel.log in api/storage/logs on the back-end image
- Nuxt logs in the client docker logs
Those are both empty; we cannot find anything in them.
Additional context: We checked the existing issue #570 and followed the steps there, but with no result.
Hey, 422 is a validation error. Any content/payload in the response? By default the limit should be 50 MB, and we don't currently offer a way of changing this in the self-hosted version.
I see this in the response: Bk3RVgqs.js:14 Hydration completed but contains mismatches.
And 50 MB is fine with us, but somehow everything larger than 2 MB seems to get blocked; smaller files/images work fine.
Is this the information you needed? Let me know if you need more.
Hey can you please take a look at this post on our Discord: https://discord.com/channels/1203373199765545000/1203373200218791947/1310762475129671781
Thanks for the reply - we already increased that value and restarted Docker and nginx accordingly, but unfortunately it didn't fix the issue.
Do you have any other suggestions? Any help would be highly appreciated - thanks in advance!
Hey @Kulturserver I just updated the docker images and setup. This may make things easier (there's a php.ini file that can be changed).
Can you please share more details about the request/response etc?
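Something along these lines should be enough (a rough sketch only, with example values; whether you need to rebuild or just restart depends on whether the file is baked into the image or mounted into the container):

```bash
# Sketch: raise the PHP upload limits in the repo's docker/php/php.ini
# (example values; add the directives if they aren't already present),
# then recreate the containers so the API picks them up.
sed -i 's/^upload_max_filesize.*/upload_max_filesize = 64M/' docker/php/php.ini
sed -i 's/^post_max_size.*/post_max_size = 64M/' docker/php/php.ini
docker-compose up -d --build
```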
Thanks for the reply! Unfortunately, we ran into a new problem when trying to install the new image.
We deleted everything and updated the git repo in /opt/opnform. If we call docker-compose up -d, it stops after a short while:
Building ui
Sending build context to Docker daemon  26.82MB
Step 1/12 : FROM node:20-alpine AS javascript-builder
20-alpine: Pulling from library/node
1f3e46996e29: Pull complete
280cf903519d: Pull complete
3e4c58ea8b08: Pull complete
f5c4456c2e24: Pull complete
Digest: sha256:2cd2a6f4cb37cf8a007d5f1e9aef090ade6b62974c7a274098c390599e8c72b4
Status: Downloaded newer image for node:20-alpine
---> f97665f3387c
Step 2/12 : WORKDIR /app
---> Running in 86a2f3e41d44
Removing intermediate container 86a2f3e41d44
---> 533076395bec
Step 3/12 : ADD ./client/package.json ./client/package-lock.json ./
---> 36e20d9d7d7d
Step 4/12 : RUN apk add --no-cache git
---> Running in 7d5cf9b6cca5
fetch https://dl-cdn.alpinelinux.org/alpine/v3.21/main/x86_64/APKINDEX.tar.gz
Here it stops; the fetch never completes. Are there any suggestions? Thanks in advance!
It seems like you're rebuilding the image instead of using the one on docker hub - any reasons why? We included a script to help you get started here: https://docs.opnform.com/deployment/docker
Thank you very much - we will look into this.
We did a new installation following the instructions; unfortunately, the problems remain:
- When trying to upload an image larger than 1 MB we get a 422 (Unprocessable Content) error without further details
- Uploading an image smaller than 1 MB works, but the paths aren't set correctly, as described in #592
So, any additional help would be welcome, thanks a lot in advance!
Any additional ideas? Help would be highly appreciated, thanks!
+1, any update on how to fix this?
I am experiencing the same exact issue in a completely new install using your Docker production install script. This occurs with a plain Docker install of OpnForm, connecting directly to your nginx container to access the frontend.
I also already added the line client_max_body_size 100M; to docker/nginx.conf.
I cannot find a fix, and I cannot find any related errors in the logs.
Has anyone found any information or a fix?
I have the same problem, using release 1.6.6; my limit is 1 MB.
I had the same issue, and it was due to the nginx body size limit.
I set client_max_body_size 0; (unlimited) in the API block and that solved my problem.
@Shaokun-X can you please provide more details on your environment and how you solved the problem? Are you running the current OpnForm release version 1.6.6? Are you running the OpnForm Docker install? Where did you set client_max_body_size 100M;, in docker/nginx.conf? Did you need to change any other files or settings?
Can you please double check your testing and try to upload a file larger than 8MB?
If you are running an OpnForm Docker installation on release version 1.6.6 (and probably previous releases), I do not believe the problem is strictly with nginx or the nginx configuration.
I have tested PHP running in the containers and found that the containers the OpnForm Docker installation creates from jhumanj/opnform-api:latest are missing the php.ini file. Regardless of whatever settings are in docker/php/php.ini (https://github.com/JhumanJ/OpnForm/blob/main/docker/php/php.ini), the containers do not end up with a php.ini.
You can verify this by connecting to any of your containers running jhumanj/opnform-api:latest and running the following:
# php -i | grep php.ini
Configuration File (php.ini) Path => /usr/local/etc/php
The output above does not list any Loaded Configuration File; in other words, PHP cannot find a php.ini and is probably running with default settings.
You can verify this further by running the following:
# php -i |grep post_max_size
post_max_size => 8M => 8M
# php -i |grep upload_max_filesize
upload_max_filesize => 2M => 2M
This shows that the jhumanj/opnform-api:latest containers are running PHP with the default post_max_size and upload_max_filesize limits.
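For convenience, the same checks can be run from the host in one shot (the container name below is an assumption; use whatever docker ps shows for the jhumanj/opnform-api:latest image):

```bash
# Print the loaded php.ini path and the two upload-related limits from outside
# the container. Adjust the container name to match the output of `docker ps`.
docker exec -it opnform-api sh -c 'php -i | grep -E "php\.ini|post_max_size|upload_max_filesize"'
```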
This is further evidenced in the browser developer tools: uploads that fail show a status code 200 OK with no key and no uuid; instead, the response is the following:
<b>Warning</b>: POST Content-Length of 8474846 bytes exceeds the limit of 8388608 bytes in <b>Unknown</b> on line <b>0</b><br />
{
"message": "The POST data is too large.",
"exception": "Illuminate\\Http\\Exceptions\\PostTooLargeException",
"file": "/usr/share/nginx/html/vendor/laravel/framework/src/Illuminate/Http/Middleware/ValidatePostSize.php",
"line": 24,
"trace": [
{
"file": "/usr/share/nginx/html/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 209,
"function": "handle",
"class": "Illuminate\\Http\\Middleware\\ValidatePostSize",
"type": "->"
},
{
"file": "/usr/share/nginx/html/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/PreventRequestsDuringMaintenance.php",
"line": 110,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/usr/share/nginx/html/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 209,
"function": "handle",
"class": "Illuminate\\Foundation\\Http\\Middleware\\PreventRequestsDuringMaintenance",
"type": "->"
},
{
"file": "/usr/share/nginx/html/vendor/laravel/framework/src/Illuminate/Http/Middleware/HandleCors.php",
"line": 62,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/usr/share/nginx/html/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 209,
"function": "handle",
"class": "Illuminate\\Http\\Middleware\\HandleCors",
"type": "->"
},
{
"file": "/usr/share/nginx/html/app/Http/Middleware/DevCorsMiddleware.php",
"line": 14,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/usr/share/nginx/html/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 209,
"function": "handle",
"class": "App\\Http\\Middleware\\DevCorsMiddleware",
"type": "->"
},
{
"file": "/usr/share/nginx/html/vendor/laravel/framework/src/Illuminate/Http/Middleware/TrustProxies.php",
"line": 58,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/usr/share/nginx/html/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 209,
"function": "handle",
"class": "Illuminate\\Http\\Middleware\\TrustProxies",
"type": "->"
},
{
"file": "/usr/share/nginx/html/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php",
"line": 127,
"function": "Illuminate\\Pipeline\\{closure}",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/usr/share/nginx/html/vendor/laravel/framework/src/Illuminate/Foundation/Http/Kernel.php",
"line": 176,
"function": "then",
"class": "Illuminate\\Pipeline\\Pipeline",
"type": "->"
},
{
"file": "/usr/share/nginx/html/vendor/laravel/framework/src/Illuminate/Foundation/Http/Kernel.php",
"line": 145,
"function": "sendRequestThroughRouter",
"class": "Illuminate\\Foundation\\Http\\Kernel",
"type": "->"
},
{
"file": "/usr/share/nginx/html/public/index.php",
"line": 51,
"function": "handle",
"class": "Illuminate\\Foundation\\Http\\Kernel",
"type": "->"
}
]
}
Inside the jhumanj/opnform-api:latest containers, in the directory /usr/local/etc/php/, there is no php.ini file.
# ls -la /usr/local/etc/php/
drwxr-xr-x 1 root root 4096 Feb 14 15:39 conf.d
-rw-r--r-- 1 root root 73253 Feb 14 15:39 php.ini-development
-rw-r--r-- 1 root root 73399 Feb 14 15:39 php.ini-production
You need to cp /usr/local/etc/php/php.ini-production /usr/local/etc/php/php.ini, and then modify /usr/local/etc/php/php.ini to increase the values of the post_max_size = 8M and upload_max_filesize = 2M lines. After restarting the containers you can verify that PHP is loading your new php.ini configuration file:
# php -i | grep php.ini
Configuration File (php.ini) Path => /usr/local/etc/php
Loaded Configuration File => /usr/local/etc/php/php.ini
Now, you will finally be able to upload larger files.
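Put together as commands, the workaround looks roughly like this (a sketch only; the container name and the 64M values are examples, and the change is lost whenever the container is recreated):

```bash
# Copy the stock production php.ini into place and raise the two limits,
# then restart the container so PHP-FPM reloads the configuration.
docker exec opnform-api sh -c '
  cp /usr/local/etc/php/php.ini-production /usr/local/etc/php/php.ini &&
  sed -i "s/^post_max_size = .*/post_max_size = 64M/" /usr/local/etc/php/php.ini &&
  sed -i "s/^upload_max_filesize = .*/upload_max_filesize = 64M/" /usr/local/etc/php/php.ini
'
docker restart opnform-api
```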
@JhumanJ at your convenience can you fix the build process of these containers to correct for the missing php.ini and missing settings?
Thanks
Hello,
I forked from 8cb6e1238fea168bbff18a32687853867db031f3 and am running it in a Docker environment.
I basically added this line in docker/nginx.conf:
location ~/(api|open|local\/temp|forms\/assets)/ {
set $original_uri $uri;
try_files $uri $uri/ /index.php$is_args$args;
+ client_max_body_size 0;
}
In the end I implemented a new controller to support multipart upload to S3, so this was not used.
Nice of you to respond that quickly.
I now tried to follow your lead and use that commit (https://github.com/JhumanJ/OpnForm/commit/8cb6e1238fea168bbff18a32687853867db031f3). I created a new directory and ran the following commands:
git init
git remote add origin https://github.com/JhumanJ/OpnForm.git
git fetch origin 8cb6e1238fea168bbff18a32687853867db031f3
git checkout FETCH_HEAD
Is that wrong, or how should I have performed the operation to check out the commit you referred to?
Next I edited docker/nginx.conf exactly as you showed, and then followed the normal instructions at https://docs.opnform.com/deployment/docker.
I logged into the new OpnForm web environment, created a new form with a file upload field, and tried to upload a 10 MB file; it fails the same way as identified in the previous posts. Can you double-check that, using that commit, the only thing you added was the line in docker/nginx.conf and you can upload 10 MB+ files? Or have you made more changes (i.e. your new controller), so that your ability to upload larger files may be because you are not using the PHP in the jhumanj/opnform-api:latest containers?
I have this forked-commit (8cb6e12) environment running now, and when I connect to the jhumanj/opnform-api:latest containers they are still missing the php.ini file:
# php -i | grep php.ini
Configuration File (php.ini) Path => /usr/local/etc/php
# ls -la /usr/local/etc/php/
total 156
drwxr-xr-x 1 root root 4096 Feb 14 03:30 .
drwxr-xr-x 1 root root 4096 Feb 14 03:30 ..
drwxr-xr-x 1 root root 4096 Feb 14 15:39 conf.d
-rw-r--r-- 1 root root 73251 Feb 14 03:30 php.ini-development
-rw-r--r-- 1 root root 73397 Feb 14 03:30 php.ini-production
Am I checking out the commit incorrectly? If not, it appears that the containers are not properly set up, as I described in my post above.
By the way, can you share how you setup a "controller to support multipart upload to S3"? I may be interested in giving it a try and maybe other people would too.
Thanks
Hi,
Yes, that is the correct commit. In fact, I did not test uploading files larger than 10 MB, because in my use case the default upload experience is not good enough. So you might be right; the missing php.ini issue might still exist.
Instead, I tried two solutions: one using tus, and one using S3 multipart upload + Uppy.
For the tus solution, I tried both local FS and S3 backends. I made the following changes for it to work with local FS:
--- a/client/components/forms/FileInput.vue
+++ b/client/components/forms/FileInput.vue
@@ -47,9 +47,25 @@
<div class="flex w-full items-center justify-center">
<div
v-if="loading"
- class="text-gray-600 dark:text-gray-400"
+ class="text-gray-600 dark:text-gray-400 flex flex-col items-center"
>
- <Loader class="mx-auto h-6 w-6" />
+ <div class="relative size-10">
+ <svg
+ class="size-full -rotate-90"
+ viewBox="0 0 36 36"
+ xmlns="http://www.w3.org/2000/svg"
+ >
+ <!-- Background Circle -->
+ <circle cx="18" cy="18" r="16" fill="none" class="stroke-current text-gray-200 dark:text-neutral-700" stroke-width="2"></circle>
+ <!-- Progress Circle -->
+ <circle cx="18" cy="18" r="16" fill="none" class="stroke-current text-blue-600 dark:text-blue-500" stroke-width="2" stroke-dasharray="100" :stroke-dashoffset="100 - progress" stroke-linecap="round"></circle>
+ </svg>
+ <!-- Percentage Text -->
+ <div class="absolute top-1/2 start-1/2 transform -translate-y-1/2 -translate-x-1/2">
+ <span class="text-center text-xs font-bold flex text-blue-600 dark:text-blue-500">{{ progress }}%</span>
+ </div>
+ </div>
<p class="mt-2 text-center text-sm text-gray-500">
{{ $t('forms.fileInput.uploadingFile') }}
</p>
@@ -131,7 +147,7 @@ import {inputProps, useFormInput} from './useFormInput.js'
import InputWrapper from './components/InputWrapper.vue'
import UploadedFile from './components/UploadedFile.vue'
import CameraUpload from './components/CameraUpload.vue'
-import {storeFile} from "~/lib/file-uploads.js"
+import {storeFileTus} from "~/lib/file-uploads.js"
export default {
name: 'FileInput',
@@ -156,7 +172,8 @@ export default {
files: [],
uploadDragoverEvent: false,
loading: false,
- isInWebcam: false
+ isInWebcam: false,
+ progress: 0,
}),
computed: {
@@ -265,11 +282,13 @@ export default {
uploadFileToServer(file) {
if (this.disabled) return
this.loading = true
- storeFile(file)
+ this.progress = 0
+ storeFileTus(file, (bytesUploaded, bytesTotal) => {
+ this.progress = Math.round((bytesUploaded / bytesTotal) * 100)
+ })
.then((response) => {
if (!this.multiple) {
this.files = []
}
+ // NOTE don't think this is ever true
if (this.moveToFormAssets) {
// Move file to permanent storage for form assets
opnFetch('/open/forms/assets/upload', {
--- a/client/lib/file-uploads.js
+++ b/client/lib/file-uploads.js
@@ -1,3 +1,50 @@
+import * as tus from 'tus-js-client'
+
+export function storeFileTus(file, onProgress) {
+ return new Promise((resolve, reject) => {
+ // Create a new tus upload
+ let upload = new tus.Upload(file, {
+ endpoint: "/tus/upload-file",
+ retryDelays: [0, 3000, 5000, 10000, 20000],
+ metadata: {
+ filename: file.name,
+ filetype: file.type,
+ },
+ onError: reject,
+ onProgress,
+ onSuccess: () => {
+ const response = {
+ // this is not a real UUID, but we have no control of the file name created by tus
+ uuid: upload.file.name,
+ extension: file.name.split('.').pop().toLowerCase(),
+ }
+ resolve(response)
+ },
+ })
+
+ // Check if there are any previous uploads to continue.
+ upload.findPreviousUploads().then(function (previousUploads) {
+ // Found previous uploads so we select the first one.
+ if (previousUploads.length) {
+ upload.resumeFromPreviousUpload(previousUploads[0])
+ }
+
+ // Start the upload
+ upload.start()
+ })
+ })
+}
+
async function storeLocalFile(file) {
let formData = new FormData()
formData.append("file", file)
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -71,6 +71,21 @@ services:
env_file:
- ./client/.env
+ opnform-tus-init-folder:
+ image: busybox
+ container_name: opnform-tus-init-folder
+ command: ["sh", "-c", "mkdir -p /data/app/tmp"]
+ volumes:
+ - ./api/storage:/data:rw
+ restart: "no"
+
+ tus:
+ image: tusproject/tusd:latest
+ container_name: opnform-tus
+ command: ["-upload-dir", "/srv/tusd-data/data/app/tmp", "-base-path", "/", "-port", "4000"]
+ volumes:
+ - ./api/storage:/srv/tusd-data/data:rw
+
redis:
image: redis:7
container_name: opnform-redis
--- a/docker/nginx.conf
+++ b/docker/nginx.conf
@@ -28,6 +28,15 @@ server {
try_files $uri $uri/ /index.php$is_args$args;
}
+ location ~ /tus/upload {
+ proxy_pass http://tus:4000;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header X-Forwarded-Host $host;
+ proxy_set_header X-Forwarded-Port $server_port;
+ proxy_set_header Upgrade $http_upgrade;
+ proxy_set_header Connection "Upgrade";
+ }
+
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass opnform-api:9000;
--- a/api/app/Service/Storage/StorageFileNameParser.php
+++ b/api/app/Service/Storage/StorageFileNameParser.php
@@ -53,8 +53,8 @@ class StorageFileNameParser
$candidateString = substr($fileName, strrpos($fileName, '_') + 1);
if (
+ // do not check UUID format here because tus filename is not configurable
!str_contains($candidateString, '.')
- || !Str::isUuid(substr($candidateString, 0, strpos($candidateString, '.')))
) {
return;
}
For tus to work with S3 you just need to pass the same S3 credentials to both opnform-api and tus, and then override the route of SignedStorageUrlController to point to the tus service. I did not take this solution in the end because the S3-compatible service I had was actually not fully compatible, and it did not work well with tus even after some customization of tusd.
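For reference, pointing tusd at the same bucket looks roughly like this (a sketch only; the flags and placeholder values are assumptions to adapt to your provider):

```bash
# Run tusd against the same S3 bucket/credentials that opnform-api uses.
# -s3-endpoint is only needed for non-AWS, S3-compatible services.
AWS_ACCESS_KEY_ID=xxx AWS_SECRET_ACCESS_KEY=xxx AWS_REGION=eu-west-1 \
  tusd -s3-bucket my-opnform-bucket -s3-endpoint https://s3.example.com -base-path / -port 4000
```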
In the end I just used multipart upload directly to the S3 service with the following diffs:
--- /dev/null
+++ b/api/app/Http/Controllers/Content/S3UploadController.php
@@ -0,0 +1,122 @@
+<?php
+
+namespace App\Http\Controllers\Content;
+
+use Illuminate\Http\Request;
+use Illuminate\Support\Str;
+use Aws\S3\S3Client;
+use App\Http\Controllers\Controller;
+
+class S3UploadController extends Controller
+{
+ protected $s3Client;
+ protected $bucket;
+
+ public function __construct()
+ {
+ $this->bucket = env('AWS_BUCKET');
+
+ $this->s3Client = new S3Client([
+ 'region' => env('AWS_DEFAULT_REGION'),
+ 'version' => 'latest',
+ 'credentials' => [
+ 'key' => env('AWS_ACCESS_KEY_ID'),
+ 'secret' => env('AWS_SECRET_ACCESS_KEY'),
+ ],
+ ]);
+ }
+
+ public function create(Request $request)
+ {
+ $uuid = (string) Str::uuid();
+ $key = 'tmp/' . $uuid;
+
+ $result = $this->s3Client->createMultipartUpload([
+ 'Bucket' => $this->bucket,
+ 'Key' => $key,
+ ]);
+
+ return response()->json([
+ 'uploadId' => $result['UploadId'],
+ 'key' => $key,
+ ]);
+ }
+
+ public function getPresignedPartUrl(Request $request, $uploadId, $partNumber)
+ {
+ $key = $request->query('key');
+
+ $cmd = $this->s3Client->getCommand('UploadPart', [
+ 'Bucket' => $this->bucket,
+ 'Key' => $key,
+ 'UploadId' => $uploadId,
+ 'PartNumber' => (int)$partNumber,
+ ]);
+
+ $signedUrl = $this->s3Client->createPresignedRequest($cmd, '+6 hours')->getUri()->__toString();
+
+ return response()->json([
+ 'url' => $signedUrl,
+ 'method' => 'PUT',
+ ]);
+ }
+
+ public function complete(Request $request, $uploadId)
+ {
+ $result = $this->s3Client->completeMultipartUpload([
+ 'Bucket' => $this->bucket,
+ 'Key' => $request->input('key'),
+ 'UploadId' => $uploadId,
+ 'MultipartUpload' => [
+ 'Parts' => $request->input('parts'),
+ ],
+ ]);
+
+ $uuid = substr($request->input('key'), strlen('tmp/'));
+
+ return response()->json([
+ 'location' => $result['Location'],
+ 'bucket' => $result['Bucket'],
+ 'uuid' => $uuid,
+ ]);
+ }
+
+ public function abort(Request $request, $uploadId)
+ {
+ $key = $request->query('key');
+
+ try {
+ $this->s3Client->abortMultipartUpload([
+ 'Bucket' => $this->bucket,
+ 'Key' => $key,
+ 'UploadId' => $uploadId,
+ ]);
+
+ return response()->json([]);
+ } catch (\Exception $e) {
+ return response()->json([
+ 'error' => 'Failed to abort multipart upload',
+ 'details' => $e->getMessage(),
+ ], 500);
+ }
+ }
+
+ public function sign(Request $request)
+ {
+ $uuid = (string) Str::uuid();
+ $key = 'tmp/' . $uuid;
+ $expiresAfter = config('vapor.signed_storage_url_expires_after', 5);
+
+ $cmd = $this->s3Client->getCommand('PutObject', [
+ 'Bucket' => $this->bucket,
+ 'Key' => $key,
+ ]);
+
+ $signedUrl = $this->s3Client->createPresignedRequest($cmd, sprintf('+%s minutes', $expiresAfter))->getUri()->__toString();
+
+ return response()->json([
+ 'method' => 'PUT',
+ 'url' => $signedUrl,
+ ]);
+ }
+}
--- a/api/routes/api.php
+++ b/api/routes/api.php
@@ -354,12 +354,17 @@ Route::post(
'/upload-file',
[\App\Http\Controllers\Content\FileUploadController::class, 'upload']
)->name('upload-file');
+Route::prefix('s3')->group(function () {
+ Route::prefix('multipart')->group(function () {
+ Route::post('/', [S3UploadController::class, 'create']);
+ Route::get('{uploadId}/{partNumber}', [S3UploadController::class, 'getPresignedPartUrl']);
+ Route::post('{uploadId}/complete', [S3UploadController::class, 'complete']);
+ Route::delete('{uploadId}', [S3UploadController::class, 'abort']);
+ });
+
+ Route::get('params', [S3UploadController::class, 'sign']);
+ Route::post('sign', [S3UploadController::class, 'sign']);
+});
Route::get('local/temp/{path}', function (Request $request, string $path) {
if (!$request->hasValidSignature()) {
--- a/api/app/Models/Workspace.php
+++ b/api/app/Models/Workspace.php
@@ -16,7 +16,7 @@ class Workspace extends Model implements CachableAttributes
public const MAX_FILE_SIZE_FREE = 5000000; // 5 MB
- public const MAX_FILE_SIZE_PRO = 50000000; // 50 MB
+ public const MAX_FILE_SIZE_PRO = 4_000_000_000; // 4 GB
public const MAX_DOMAIN_PRO = 1;
--- a/client/components/forms/FileInput.vue
+++ b/client/components/forms/FileInput.vue
@@ -47,9 +47,25 @@
<div class="flex w-full items-center justify-center">
<div
v-if="loading"
- class="text-gray-600 dark:text-gray-400"
+ class="text-gray-600 dark:text-gray-400 flex flex-col items-center"
>
- <Loader class="mx-auto h-6 w-6" />
+ <div class="relative size-10">
+ <svg
+ class="size-full -rotate-90"
+ viewBox="0 0 36 36"
+ xmlns="http://www.w3.org/2000/svg"
+ >
+ <!-- Background Circle -->
+ <circle cx="18" cy="18" r="16" fill="none" class="stroke-current text-gray-200 dark:text-neutral-700" stroke-width="2"></circle>
+ <!-- Progress Circle -->
+ <circle cx="18" cy="18" r="16" fill="none" class="stroke-current text-blue-600 dark:text-blue-500" stroke-width="2" stroke-dasharray="100" :stroke-dashoffset="100 - progress" stroke-linecap="round"></circle>
+ </svg>
+ <!-- Percentage Text -->
+ <div class="absolute top-1/2 start-1/2 transform -translate-y-1/2 -translate-x-1/2">
+ <span class="text-center text-xs font-bold flex text-blue-600 dark:text-blue-500">{{ progress }}%</span>
+ </div>
+ </div>
<p class="mt-2 text-center text-sm text-gray-500">
{{ $t('forms.fileInput.uploadingFile') }}
</p>
@@ -131,7 +147,7 @@ import {inputProps, useFormInput} from './useFormInput.js'
import InputWrapper from './components/InputWrapper.vue'
import UploadedFile from './components/UploadedFile.vue'
import CameraUpload from './components/CameraUpload.vue'
-import {storeFile} from "~/lib/file-uploads.js"
+import {storeFileS3} from "~/lib/file-uploads.js"
export default {
name: 'FileInput',
@@ -156,7 +172,8 @@ export default {
files: [],
uploadDragoverEvent: false,
loading: false,
- isInWebcam: false
+ isInWebcam: false,
+ progress: 0,
}),
computed: {
@@ -265,11 +282,13 @@ export default {
uploadFileToServer(file) {
if (this.disabled) return
this.loading = true
- storeFile(file)
+ this.progress = 0
+ storeFileS3(file, (progress) => {
+ this.progress = progress
+ })
.then((response) => {
if (!this.multiple) {
this.files = []
}
+ // NOTE don't think this is ever true
if (this.moveToFormAssets) {
// Move file to permanent storage for form assets
opnFetch('/open/forms/assets/upload', {
--- a/client/lib/file-uploads.js
+++ b/client/lib/file-uploads.js
@@ -1,3 +1,50 @@
+import Uppy from '@uppy/core'
+import AwsS3 from '@uppy/aws-s3'
+
+export function storeFileS3(file, onProgress) {
+ return new Promise((resolve, reject) => {
+ const uppy = new Uppy()
+ .use(AwsS3, {
+ shouldUseMultipart(file) {
+ // Use multipart only for files larger than 10MiB.
+ return file.size > 10 * 2 ** 20
+ },
+ id: 'AWSPlugin',
+ endpoint: '/api',
+ })
+ uppy.on('progress', onProgress)
+ uppy.on('error', reject)
+ uppy.on('complete', (result) => {
+ const response = {
+ uuid: result.successful[0]?.uploadURL.split('/').pop(),
+ extension: file.name.split('.').pop().toLowerCase(),
+ }
+ resolve(response)
+ })
+
+ uppy.addFile(file)
+
+ uppy.upload()
+ })
+}
+
async function storeLocalFile(file) {
let formData = new FormData()
formData.append("file", file)
Note that all the code above is just a proof of concept; if used in production, you may want to, for example, add more headers when signing S3 URLs for better security.
I hope this helps!
Thanks for these suggestions. We tried these as well, but unfortunately larger images still don't get uploaded; the process just seems to die without an error message in the inspector.
Just updated the docker images - all previously mentioned issues (including the file upload size limit) should be fixed! Make sure to get the latest version of the docker-compose file when updating 🙂
Also worth checking the .env.docker of both api and client for available options.
We installed the latest version, but unfortunately we still get the same error when connecting with our own URL and trying to upload a larger image (13 MB):
POST https://forms-xxxx.org/api/open/forms/assets/upload 422 (Unprocessable Content)
In the assets/upload endpoint itself we get:
"message": "The GET method is not supported for route open/forms/assets/upload. Supported methods: POST."
Any idea what still might be causing trouble?
Hey @Kulturserver we've updated the docker-compose file which now includes PHP-related variables. I've just updated the docs to reflect this here: https://docs.opnform.com/configuration/environment-variables#php-configuration-environment-variables.
I suggest you try with these values:
PHP_MEMORY_LIMIT: "1G"
PHP_MAX_EXECUTION_TIME: "600"
PHP_UPLOAD_MAX_FILESIZE: "64M"
PHP_POST_MAX_SIZE: "64M"
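For example, assuming these are picked up from the .env file next to docker-compose.yml (check the docs page linked above for the exact place in your setup; if they go in the compose file's environment block, keep the KEY: "value" form shown above):

```bash
# Example values only; place them wherever your compose setup reads PHP_* variables from.
PHP_MEMORY_LIMIT=1G
PHP_MAX_EXECUTION_TIME=600
PHP_UPLOAD_MAX_FILESIZE=64M
PHP_POST_MAX_SIZE=64M
```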
Still, I'm a bit surprised that you have a 422 error - please let me know once you've added those!
Hi @JhumanJ thanks - we modified the values and I tried with a different image (size 5 MB) to rule this out as a problem, but we still get the error:
/api/open/forms/assets/upload:1 Failed to load resource: the server responded with a status of 422 (Unprocessable Content)
Can you please share the response of this 422? Also the logs from the api and nginx containers?
The response we get is the following:
"message": "The GET method is not supported for route open/forms/assets/upload. Supported methods: POST."
The logs are attached. Thanks!
@Kulturserver are you on the discord server? Can we check together over a call? Please make sure that you have the latest images!
@JhumanJ unfortunately, we don't have access to Discord yet, but we can arrange access so that we could get on a call on Monday, if that suits you?