Google Cloud Build failure
I have been trying to build and deploy the HAPI project using Google Cloud Build and Cloud Run. The build process seems to complete successfully, but the container does not start. I get this error:
The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable.
Where and how do I modify the hapi-project code to remove this error? I have done everything I can in the docker-compose.yml, Dockerfile, and application properties files. Please, I need help. Thanks in advance.
Can you please provide some more information detailing the steps you are taking, and also provide your configuration files (YAML, properties, etc.)?
Thank you for the response. The build process on Google Cloud uses the Dockerfile. The datasource section of the application.yaml file is modified because I plan to use a cloud-hosted PostgreSQL database. I added the `ENV port 8080` and `EXPOSE ${port}` commands before the last line in the Dockerfile. Copies of the files are below.
Please don't mind the formatting.
application.yaml:

```yaml
#Adds the option to go to eg. http://localhost:8080/actuator/health for seeing the running configuration
#see https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.endpoints
management:
  endpoints:
    web:
      exposure:
        include: "health,prometheus"
spring:
  main:
    allow-circular-references: true
    #allow-bean-definition-overriding: true
  flyway:
    enabled: false
    check-location: false
    baselineOnMigrate: true
  datasource:
    url: jdbc:postgresql:***************/hapi_dstu3
    username: **************
    password: ***************
    driverClassName: org.postgresql.Driver
    max-active: 20

    # database connection pool size
    hikari:
      maximum-pool-size: 10
  jpa:
    properties:
      hibernate.format_sql: false
      hibernate.show_sql: false
      #Hibernate dialect is automatically detected except Postgres and H2.
      #If using H2, then supply the value of ca.uhn.fhir.jpa.model.dialect.HapiFhirH2Dialect
      #If using postgres, then supply the value of ca.uhn.fhir.jpa.model.dialect.HapiFhirPostgres94Dialect
      hibernate.dialect: ca.uhn.fhir.jpa.model.dialect.HapiFhirPostgres94Dialect
      hibernate.hbm2ddl.auto: update
      hibernate.jdbc.batch_size: 20
      hibernate.cache.use_query_cache: false
      hibernate.cache.use_second_level_cache: false
      hibernate.cache.use_structured_entries: false
      hibernate.cache.use_minimal_puts: false
      #These settings will enable fulltext search with lucene
      hibernate.search.enabled: false
      hibernate.search.backend.type: lucene
      hibernate.search.backend.analysis.configurer: ca.uhn.fhir.jpa.search.HapiHSearchAnalysisConfigurers$HapiLuceneAnalysisConfigurer
      hibernate.search.backend.directory.type: local-filesystem
      hibernate.search.backend.directory.root: target/lucenefiles
      hibernate.search.backend.lucene_version: lucene_current
  batch:
    job:
      enabled: false
hapi:
  fhir:
    ### This enables the swagger-ui at /fhir/swagger-ui/index.html as well as the /fhir/api-docs (see https://hapifhir.io/hapi-fhir/docs/server_plain/openapi.html)
    openapi_enabled: true
    ### This is the FHIR version. Choose between, DSTU2, DSTU3, R4 or R5
    fhir_version: R4
    ### enable to use the ApacheProxyAddressStrategy which uses X-Forwarded-* headers
    ### to determine the FHIR server address
    # use_apache_address_strategy: false
    ### forces the use of the https:// protocol for the returned server address.
    ### alternatively, it may be set using the X-Forwarded-Proto header.
    # use_apache_address_strategy_https: false
    ### enable to set the Server URL
    # server_address: http://hapi.fhir.org/baseR4
    # defer_indexing_for_codesystems_of_size: 101
    # install_transitive_ig_dependencies: true
    # implementationguides:
    ### example from registry (packages.fhir.org)
    #   swiss:
    #     name: swiss.mednet.fhir
    #     version: 0.8.0
    # example not from registry
    #   ips_1_0_0:
    #     url: https://build.fhir.org/ig/HL7/fhir-ips/package.tgz
    #     name: hl7.fhir.uv.ips
    #     version: 1.0.0
    # supported_resource_types:
    #   - Patient
    #   - Observation
    ##################################################
    # Allowed Bundle Types for persistence (defaults are: COLLECTION,DOCUMENT,MESSAGE)
    ##################################################
    # allowed_bundle_types: COLLECTION,DOCUMENT,MESSAGE,TRANSACTION,TRANSACTIONRESPONSE,BATCH,BATCHRESPONSE,HISTORY,SEARCHSET
    # allow_cascading_deletes: true
    # allow_contains_searches: true
    # allow_external_references: true
    # allow_multiple_delete: true
    # allow_override_default_search_params: true
    # auto_create_placeholder_reference_targets: false
    # cql_enabled: true
    # default_encoding: JSON
    # default_pretty_print: true
    # default_page_size: 20
    # delete_expunge_enabled: true
    # enable_repository_validating_interceptor: true
    # enable_index_missing_fields: false
    # enable_index_of_type: true
    # enable_index_contained_resource: false
    ### !!Extended Lucene/Elasticsearch Indexing is still an experimental feature, expect some features (e.g. _total=accurate) to not work as expected!!
    ### more information here: https://hapifhir.io/hapi-fhir/docs/server_jpa/elastic.html
    advanced_lucene_indexing: false
    bulk_export_enabled: false
    bulk_import_enabled: false
    # enforce_referential_integrity_on_delete: false
    # This is an experimental feature, and does not fully support _total and other FHIR features.
    # enforce_referential_integrity_on_delete: false
    # enforce_referential_integrity_on_write: false
    # etag_support_enabled: true
    # expunge_enabled: true
    # client_id_strategy: ALPHANUMERIC
    # fhirpath_interceptor_enabled: false
    # filter_search_enabled: true
    # graphql_enabled: true
    # narrative_enabled: true
    # mdm_enabled: true
    # local_base_urls:
    #   - https://hapi.fhir.org/baseR4
    mdm_enabled: false
    # partitioning:
    #   allow_references_across_partitions: false
    #   partitioning_include_in_search_hashes: false
    cors:
      allow_Credentials: true
      # These are allowed_origin patterns, see: https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/web/cors/CorsConfiguration.html#setAllowedOriginPatterns-java.util.List-
      allowed_origin:
        - '*'

    # Search coordinator thread pool sizes
    search-coord-core-pool-size: 20
    search-coord-max-pool-size: 100
    search-coord-queue-capacity: 200

    # Threadpool size for BATCH'ed GETs in a bundle.
    # bundle_batch_pool_size: 10
    # bundle_batch_pool_max_size: 50

    # logger:
    #   error_format: 'ERROR - ${requestVerb} ${requestUrl}'
    #   format: >-
    #     Path[${servletPath}] Source[${requestHeader.x-forwarded-for}]
    #     Operation[${operationType} ${operationName} ${idOrResourceName}]
    #     UA[${requestHeader.user-agent}] Params[${requestParameters}]
    #     ResponseEncoding[${responseEncodingNoDefault}]
    #   log_exceptions: true
    #   name: fhirtest.access
    # max_binary_size: 104857600
    # max_page_size: 200
    # retain_cached_searches_mins: 60
    # reuse_cached_search_results_millis: 60000
    tester:
      home:
        name: Local Tester
        server_address: 'http://localhost:8080/fhir'
        refuse_to_fetch_third_party_urls: false
        fhir_version: R4
      global:
        name: Global Tester
        server_address: "http://hapi.fhir.org/baseR4"
        refuse_to_fetch_third_party_urls: false
        fhir_version: R4
    # validation:
    #   requests_enabled: true
    #   responses_enabled: true
    # binary_storage_enabled: true
    inline_resource_storage_below_size: 4000
    bulk_export_enabled: true
    subscription:
      resthook_enabled: true
      websocket_enabled: false
      email:
        from: [email protected]
        host: google.com
        port:
        username:
        password:
        auth:
        startTlsEnable:
        startTlsRequired:
        quitWait:
    lastn_enabled: true
    store_resource_in_lucene_index_enabled: true
    # This is configuration for normalized quantity search level default is 0
    #   0: NORMALIZED_QUANTITY_SEARCH_NOT_SUPPORTED - default
    #   1: NORMALIZED_QUANTITY_STORAGE_SUPPORTED
    #   2: NORMALIZED_QUANTITY_SEARCH_SUPPORTED
    normalized_quantity_search_level: 2
#elasticsearch:
  debug:
    pretty_print_json_log: false
    refresh_after_write: false
  enabled: false
  password: SomePassword
  required_index_status: YELLOW
  rest_url: 'localhost:9200'
  protocol: 'http'
  schema_management_strategy: CREATE
  username: SomeUsername
```
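For reference, Cloud Run injects the listening port at runtime through the `PORT` environment variable, but Spring Boot does not read `PORT` on its own: it binds to `server.port`, which defaults to 8080. A minimal sketch of a `server` block that could be added to `application.yaml` so the app listens wherever Cloud Run tells it to, falling back to 8080 when run locally:

```yaml
# Bind Spring Boot to the port Cloud Run provides at runtime;
# fall back to 8080 when PORT is not set (e.g. local runs).
server:
  port: ${PORT:8080}
```

This uses Spring Boot's standard property-placeholder syntax; no Dockerfile change is strictly required for it to work.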
Dockerfile:

```dockerfile
FROM maven:3.8-openjdk-17-slim as build-hapi
WORKDIR /tmp/hapi-fhir-jpaserver-starter

ARG OPENTELEMETRY_JAVA_AGENT_VERSION=1.17.0
RUN curl -LSsO https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/download/v${OPENTELEMETRY_JAVA_AGENT_VERSION}/opentelemetry-javaagent.jar

COPY pom.xml .
COPY server.xml .
RUN mvn -ntp dependency:go-offline

COPY src/ /tmp/hapi-fhir-jpaserver-starter/src/
RUN mvn clean install -DskipTests -Djdk.lang.Process.launchMechanism=vfork

FROM build-hapi AS build-distroless
RUN mvn package spring-boot:repackage -Pboot
RUN mkdir /app && cp /tmp/hapi-fhir-jpaserver-starter/target/ROOT.war /app/main.war

########### bitnami tomcat version is suitable for debugging and comes with a shell
########### it can be built using eg. docker build --target tomcat .
FROM bitnami/tomcat:9.0 as tomcat
RUN rm -rf /opt/bitnami/tomcat/webapps/ROOT && \
    mkdir -p /opt/bitnami/hapi/data/hapi/lucenefiles && \
    chmod 775 /opt/bitnami/hapi/data/hapi/lucenefiles

USER root
RUN mkdir -p /target && chown -R 1001:1001 target
USER 1001

COPY --chown=1001:1001 catalina.properties /opt/bitnami/tomcat/conf/catalina.properties
COPY --chown=1001:1001 server.xml /opt/bitnami/tomcat/conf/server.xml
COPY --from=build-hapi --chown=1001:1001 /tmp/hapi-fhir-jpaserver-starter/target/ROOT.war /opt/bitnami/tomcat/webapps/ROOT.war
COPY --from=build-hapi --chown=1001:1001 /tmp/hapi-fhir-jpaserver-starter/opentelemetry-javaagent.jar /app

ENV ALLOW_EMPTY_PASSWORD=yes

########### distroless brings focus on security and runs on plain spring boot - this is the default image
FROM gcr.io/distroless/java17-debian11:nonroot as default
# 65532 is the nonroot user's uid
# used here instead of the name to allow Kubernetes to easily detect that the container
# is running as a non-root (uid != 0) user.
USER 65532:65532
WORKDIR /app

COPY --chown=nonroot:nonroot --from=build-distroless /app /app
COPY --chown=nonroot:nonroot --from=build-hapi /tmp/hapi-fhir-jpaserver-starter/opentelemetry-javaagent.jar /app

ENV port 8080
EXPOSE ${port}
CMD ["/app/main.war"]
```
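A note on the last lines of the distroless stage: `ENV port 8080` only defines a lowercase variable that nothing in the application reads, and `EXPOSE` is purely informational; Cloud Run ignores it and always injects its own `PORT` value at runtime. A hedged sketch of how that tail could look instead, assuming the Spring Boot side resolves the port itself (for example via `server.port: ${PORT:8080}` in `application.yaml`):

```dockerfile
# Cloud Run sets PORT in the container's environment at runtime; this ENV
# just provides a default for local runs. EXPOSE is documentation only and
# does not make the app listen on that port.
ENV PORT=8080
EXPOSE 8080
# The distroless java image's entrypoint runs "java -jar", so the
# executable war is passed as CMD.
CMD ["/app/main.war"]
```

In short, the Dockerfile alone cannot fix the error; the application must be configured to bind to the value of `PORT`.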
Closing as stale