masonite api

Open ssestantech opened this issue 2 years ago • 7 comments

Describe the bug

I am using the Masonite API. It works perfectly on localhost, but when I deploy to production the API routes return 404 Not Found.

Expected behaviour

The API routes should resolve in production the same way they do on localhost.

Steps to reproduce the bug

No response

Screenshots

No response

OS

Linux

OS version

Linux

Browser

No response

Masonite Version

4.6.1

Anything else ?

No response

ssestantech avatar Nov 17 '22 10:11 ssestantech

Can you check whether your routes are correctly registered in production with the python craft routes:list command? It should display a list of the configured routes for the current environment. If they show up there, then maybe it's not an issue with Masonite itself?

Are you using a proxy like Nginx?

girardinsamuel avatar Nov 18 '22 16:11 girardinsamuel

masonite.exceptions.exceptions.MissingContainerBindingNotFound: routes.api.location key was not found in the container

ssestantech avatar Nov 18 '22 18:11 ssestantech

[screenshot attached]

ssestantech avatar Nov 18 '22 18:11 ssestantech

But api/users gives a 404.

ssestantech avatar Nov 18 '22 18:11 ssestantech

The Masonite API is not working on a subdomain.

ssestantech avatar Nov 18 '22 20:11 ssestantech

Any solution?

ssestantech avatar Nov 20 '22 14:11 ssestantech

@ssestantech

masonite.exceptions.exceptions.MissingContainerBindingNotFound: routes.api.location key was not found in the container

As per the error shown in your screenshot, the routes.api.location binding is missing. As per the documentation (https://docs.masoniteproject.com/features/api), you will have to add a binding to the container for the location of this file. You can do so in your Kernel.py file, inside the register_routes method:
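A minimal sketch of what that might look like, based on the linked docs and assuming your API routes live in routes/api.py under an "api" middleware group (adjust the path and group to match your project):

```python
# Kernel.py (sketch; assumes API routes are defined in routes/api.py)
from masonite.routes import Route
from masonite.utils.structures import load


class Kernel:
    # ... existing kernel configuration ...

    def register_routes(self):
        # ... existing web route registration ...

        # Tell the container where the API routes file lives so the
        # routes.api.location key from the error above can be resolved.
        self.application.bind("routes.api.location", "routes/api")

        # Register those routes with the router under the api middleware group.
        self.application.make("router").add(
            Route.group(
                load(self.application.make("routes.api.location"), "ROUTES"),
                middleware=["api"],
            )
        )
```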

Additionally, check your APP_URL in config.application on your production system, as this needs to be the full base endpoint as I understand it, i.e. https://sub.domain.acme.com
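For reference, a sketch of what that looks like with the stock Masonite 4 config layout (the URL shown is just a placeholder for your real production endpoint):

```python
# config/application.py (sketch; the stock layout reads APP_URL from the environment)
from masonite.environment import env

# On production this should be the full base endpoint the app is served from,
# e.g. the subdomain, not just localhost.
APP_URL = env("APP_URL", "https://sub.domain.acme.com")
```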

Personally, I found this quite limiting when it comes to deploying to multiple environments (i.e. local, dev, staging, prod), as I had to manually modify each env config for its specific usage. This is entirely due to the limited visibility of env vars available to AWS Lambda. So I extended the config mechanism into a fully inherited config stack that could be tracked in version control and built dynamically according to the deployed env's requirements, as in the rough sketch below. But that's just how I decided to do it for repeatability and ease of deployment.
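As a rough illustration of that idea (purely hypothetical code, not part of Masonite): a base config with per-environment overrides layered on top at build time.

```python
# Hypothetical layered-config sketch (not a Masonite API): a base config is
# merged with per-environment overrides so each deployment can be built from
# files tracked in version control.
BASE = {
    "APP_DEBUG": True,
    "APP_URL": "http://localhost:8000",
}

OVERRIDES = {
    "staging": {"APP_DEBUG": False, "APP_URL": "https://staging.sub.domain.acme.com"},
    "prod": {"APP_DEBUG": False, "APP_URL": "https://sub.domain.acme.com"},
}


def build_config(env_name: str) -> dict:
    """Return the base config with the chosen environment's values layered on top."""
    return {**BASE, **OVERRIDES.get(env_name, {})}
```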

circulon avatar Jan 05 '23 02:01 circulon

Is this still an issue or was this resolved?

josephmancuso avatar Aug 13 '24 17:08 josephmancuso